Up to this point, the cache has been a magical place that automatically stores data when we need it, perhaps fetching new data as the CPU requires it. However, a good question is "how exactly does the cache do this?" Another might be "what happens when the cache is full and the CPU is requesting additional data not in the cache?" In this section, we'll take a look at the internal cache organization and try to answer these questions, along with a few others.

The basic idea behind a cache is that a program only accesses a small amount of data at any given time. If the cache is the same size as the amount of data the program typically accesses at any one time, then we can put that data into the cache and access most of it at very high speed. Unfortunately, the data rarely sits in contiguous memory locations; usually there are a few bytes here, a few bytes there, and some bytes somewhere else. In general, the data is spread out all over the address space. Therefore, the cache design has to accommodate the fact that it must map data objects at widely varying addresses in memory.

As noted in the previous section, cache memory is not organized as a group of bytes. Instead, a cache is usually organized in blocks of cache lines, with each line containing some number of bytes (typically a small power of two like 16, 32, or 64); see Figure 6.2.

Figure 6.2 Possible Organization of an 8 Kilobyte Cache

The idea of a cache system is that we can attach a different (non-contiguous) address to each of the cache lines. So cache line #0 might correspond to addresses $10000..$1000F, and cache line #1 might correspond to addresses $21400..$2140F. Generally, if a cache line is n bytes long (n is usually some power of two), then that cache line will hold n bytes from main memory that fall on an n-byte boundary. In this example, the cache lines are 16 bytes long, so a cache line holds blocks of 16 bytes whose addresses fall on 16-byte boundaries in main memory (i.e., the L.O. four bits of the address of the first byte in the cache line are always zero).

When the cache controller reads a cache line from a lower level in the memory hierarchy, a good question is "where does the data go in the cache?" The most flexible cache system is the fully associative cache. In a fully associative cache subsystem, the caching controller can place a block of bytes in any one of the cache lines present in the cache memory. While this is a very flexible system, the flexibility is not without cost. The extra circuitry to achieve full associativity is expensive and, worse, can slow down the memory subsystem. Most L1 and L2 caches are not fully associative for this reason.

At the other extreme is the direct-mapped cache (also known as the one-way set associative cache). In a direct-mapped cache, a block of main memory is always loaded into the same cache line. Generally, some number of bits in the main memory address select the cache line. For example, Figure 6.3 shows how the cache controller could select a cache line for an 8 Kilobyte cache with 16-byte cache lines and a 32-bit main memory address. Since there are 512 cache lines, this example uses bits four through twelve to select one of the cache lines (bits zero through three select a particular byte within the 16-byte cache line).

Figure 6.3 Selecting a Cache Line in a Direct-mapped Cache

The direct-mapped cache scheme is very easy to implement. Extracting nine (or some other number of) bits from the address and using them as an index into the array of cache lines is trivial and fast. However, direct-mapped caches do suffer from some other problems. Perhaps the biggest problem with a direct-mapped cache is that it may not make effective use of all the cache memory. For example, the cache scheme in Figure 6.3 maps address zero to cache line #0.
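The direct-mapped line selection this section describes can be sketched in a few lines of Python. This is an illustrative sketch, not hardware: the constants match the running example (an 8 Kilobyte cache with 16-byte lines, giving 512 lines), and the function names are made up for this example.

```python
LINE_SIZE = 16   # bytes per cache line (low 4 address bits select a byte)
NUM_LINES = 512  # 8 KB cache / 16-byte lines; needs 9 index bits

def cache_line_index(address):
    """Direct mapping: bits 4..12 of the address pick one of 512 lines."""
    return (address >> 4) & (NUM_LINES - 1)

def line_base_address(address):
    """A 16-byte line always starts on a 16-byte boundary (low 4 bits zero)."""
    return address & ~(LINE_SIZE - 1)
```

Note that any two addresses exactly 8 KB apart (e.g., 0 and 0x2000) index the same cache line, which is precisely why a direct-mapped cache may use its lines unevenly.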