
Wednesday, 5 July 2017

L2 vs. L3 cache: What’s the Difference?


The cache is a special buffer memory that sits between main memory and the processor.

So that the processor does not have to fetch every instruction individually from the slow main memory, a whole block of instructions or data is loaded into the cache at once. The probability that the following instructions are already in the cache is relatively high. Only when all cached instructions have been executed, or when a jump leads to an address outside the cache, does the processor have to access main memory again. The cache should therefore be as large as possible, so that the processor can execute the instructions one after the other without waiting.
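The effect of this block-wise loading is easy to observe in software. The following C sketch is a hypothetical micro-benchmark (array size, stride and timing method are assumptions, not measured values): it sums the same array once sequentially, reusing every cache line that has been fetched, and once in large strides, so that almost every access pulls a new cache line from memory.

/* Minimal sketch: sequential vs. strided traversal of a large array.
 * Both loops perform the same number of additions; the strided pass
 * wastes most of each cache line that is loaded. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)   /* 64 Mi ints, far larger than any cache */
#define STRIDE 16              /* 16 * sizeof(int) = 64 bytes, one cache line */

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < N; i++) a[i] = 1;

    long sum = 0;
    double t0 = seconds();
    for (long i = 0; i < N; i++)            /* sequential: cache-friendly */
        sum += a[i];
    double t1 = seconds();
    for (long s = 0; s < STRIDE; s++)       /* strided: a new cache line on almost every access */
        for (long i = s; i < N; i += STRIDE)
            sum += a[i];
    double t2 = seconds();

    printf("sequential: %.3f s, strided: %.3f s (sum=%ld)\n", t1 - t0, t2 - t1, sum);
    free(a);
    return 0;
}

On typical hardware the strided pass takes noticeably longer, even though both loops do exactly the same amount of arithmetic.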

Typically, processors work with multi-level caches that differ in size and speed. The closer the cache is to the computing core, the faster it works.
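How large the individual levels are on a given machine can be read out directly on Linux. This small sketch relies on glibc-specific sysconf() constants (a non-standard extension; other systems may report 0 or -1):

/* Minimal sketch: print the cache sizes reported by glibc. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* sysconf() may return 0 or -1 if a value is not available */
    printf("L1 data cache : %ld bytes\n", sysconf(_SC_LEVEL1_DCACHE_SIZE));
    printf("L2 cache      : %ld bytes\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
    printf("L3 cache      : %ld bytes\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
    printf("cache line    : %ld bytes\n", sysconf(_SC_LEVEL1_DCACHE_LINESIZE));
    return 0;
}

The output usually shows the typical size gradient: a small L1, a mid-sized L2 and a comparatively large L3.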

Inclusive cache and exclusive cache

With multicore processors, the terms inclusive and exclusive cache came up. Inclusive cache means that data held in the L1 cache is also present in the L2 and L3 cache. This makes it easier to keep the data of the individual cores consistent. Compared to an exclusive cache, some storage capacity is given away because the same data is held redundantly in the caches of several CPU cores.

Exclusive cache means that a piece of data is held in only one cache level at a time, so no capacity is wasted on duplicates. A disadvantage of this is that data shared by several processor cores cannot simply be served from a common copy in the L3 cache; the cores can then exchange it with one another only by way of a detour.

L1 cache / first-level cache

As a rule, the L1 cache is not particularly large. For reasons of space it is in the order of 16 to 64 KByte. Usually, separate areas are provided for instructions and for data. The importance of the L1 cache grows with higher CPU clock speeds.

In the L1 cache, the most frequently used instructions and data are buffered so that as few accesses as possible to the slow main memory are required. This cache avoids delays in data transfer and helps to utilize the CPU optimally.
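Programs can exploit this by arranging their work so that the data they touch repeatedly fits into the L1 cache. The sketch below is a simplified illustration (matrix size and block size are assumptions, not tuned values): it transposes a matrix in small tiles so that each tile stays cache-resident while it is processed, instead of streaming through whole rows and columns.

/* Minimal sketch: naive vs. cache-blocked matrix transpose. */
#include <stdio.h>
#include <stdlib.h>

#define N 2048
#define BLOCK 32   /* 32*32 doubles = 8 KB per tile, well within a typical 32 KB L1 */

/* naive transpose: the column-wise writes miss the cache on almost every access */
static void transpose_naive(const double *src, double *dst) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            dst[j * N + i] = src[i * N + j];
}

/* blocked transpose: works on BLOCK x BLOCK tiles that stay resident in the L1 cache */
static void transpose_blocked(const double *src, double *dst) {
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t jj = 0; jj < N; jj += BLOCK)
            for (size_t i = ii; i < ii + BLOCK; i++)
                for (size_t j = jj; j < jj + BLOCK; j++)
                    dst[j * N + i] = src[i * N + j];
}

int main(void) {
    double *src = malloc((size_t)N * N * sizeof *src);
    double *dst = malloc((size_t)N * N * sizeof *dst);
    if (!src || !dst) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++) src[i] = (double)i;
    transpose_naive(src, dst);     /* time the two calls separately to see the difference */
    transpose_blocked(src, dst);
    printf("dst[1] = %f\n", dst[1]);
    free(src); free(dst);
    return 0;
}

For matrices that do not fit into the cache, the blocked version typically finishes considerably faster, because each loaded cache line is fully reused before it is evicted.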

L2 cache / second-level cache

In the L2 cache, data from the working memory (RAM) is buffered.

Processor manufacturers use the size of the L2 cache to tailor processors to different market segments. The choice between a processor with a higher clock speed and one with a larger L2 cache can be answered, in simplified terms, as follows: with a higher clock speed, individual programs, especially those with high arithmetic demands, run faster. As soon as several programs run at the same time, a larger cache is an advantage. A normal desktop computer is therefore typically better served by a processor with a large cache than by one with a high clock rate.

When the memory controller was moved from the chipset into the processor and the processor could therefore access memory much faster, the importance of the L2 cache decreased. While the L2 cache has shrunk, the L3 cache has grown considerably.

L3 cache / third-level cache

As a rule, multicore processors use an integrated L3 cache. With the L3 cache, the cache coherence protocol of multicore processors can work much faster. This protocol compares the caches of all cores to keep their data consistent. The L3 cache thus performs fewer of the functions of a classic cache and is instead intended to simplify and speed up the cache coherence protocol and the data exchange between the cores.
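The cost of this coherence traffic becomes visible when two cores keep writing to data that happens to live in the same cache line. The following sketch is a hypothetical demonstration (the 64-byte line size and the iteration count are assumptions): each thread's counter is padded onto its own cache line; removing the padding packs both counters into one line and makes the coherence protocol bounce that line between the cores, slowing both threads down ("false sharing").

/* Minimal sketch: two threads incrementing separate counters.
 * Compile with: cc -O2 -pthread falsesharing.c */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000000L
#define CACHE_LINE 64   /* assumed line size; 64 bytes on most current x86 CPUs */

struct counter {
    volatile long value;
    char pad[CACHE_LINE - sizeof(long)];   /* remove this padding to provoke false sharing */
};

static struct counter counters[2];

static void *worker(void *arg) {
    struct counter *c = arg;
    for (long i = 0; i < ITERATIONS; i++)
        c->value++;   /* without the padding, each write invalidates the other core's copy of the line */
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, &counters[0]);
    pthread_create(&t1, NULL, worker, &counters[1]);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counters: %ld %ld\n", counters[0].value, counters[1].value);
    return 0;
}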

As modern processors now contain several computing cores (so-called cores), the manufacturers have added a third cache level, the L3 cache, to these multi-core processors. All processor cores can work with it together, which is particularly beneficial for parallel processing. Data shared by different CPU cores can then be retrieved from the fast L3 cache; without it, this data would always have to come from the slow main memory. In addition, the L3 cache also simplifies data management with multiple CPU cores and caches (data coherency).