MIT researchers have developed a new cache system that could boost processor performance by 20 to 30 per cent. The system, aptly named Jenga, can create new cache hierarchies on the fly, optimized for the needs of different programs.
Processors today rely on caches to store frequently used data and reduce the time and energy needed to access this data. Modern chips generally have three or sometimes four different levels of caches, where each successive level is larger and slower than the previous.
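The benefit of a hierarchy can be pictured with a back-of-the-envelope average-access-time model. The latencies and hit rates below are illustrative assumptions, not figures from the article:

```python
# Illustrative average memory access time (AMAT) for a three-level cache.
# Latencies (in cycles) and hit rates are invented but plausible values.
levels = [
    ("L1", 4, 0.90),    # small, fast, high hit rate
    ("L2", 12, 0.70),   # larger, slower
    ("L3", 40, 0.50),   # shared last level, slower still
]
DRAM_LATENCY = 200      # cycles paid when every cache level misses

def amat(levels, dram_latency):
    """Expected cycles per access: levels are tried in order, and each
    level's latency is paid only by accesses that reach it."""
    total, reach_prob = 0.0, 1.0
    for _, latency, hit_rate in levels:
        total += reach_prob * latency        # cost of probing this level
        reach_prob *= (1.0 - hit_rate)       # fraction that falls through
    return total + reach_prob * dram_latency

print(round(amat(levels, DRAM_LATENCY), 1))  # → 9.4
```

With these made-up numbers, the hierarchy brings the expected cost of an access down from 200 cycles to under 10, which is why fixed latencies at each level matter so much to overall performance.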
However, because these caches are essentially fixed, they’re a compromise between the needs of various programs and are rarely suited to any particular application.
To solve this, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory made the new Jenga cache system dynamic, which means it can reallocate cache access on an ad hoc basis and also create new cache structures that fit better with the needs of different programs.
Different applications have different requirements to run optimally: the size of the data they access and whether they benefit from a hierarchy of progressively larger caches both matter.
As a result, a cache hierarchy tailored to a specific application can deliver noticeably better performance than a fixed, one-size-fits-all design.
Most multi-core chips comprise two levels of private cache in each core and a third level of shared cache that is divided into discrete memory banks spread around the chip.
Jenga knows the physical location of each memory bank and how long it takes to retrieve information from each bank, so it can calculate the path of least resistance.
In practice, that means deciding where to store data on a case-by-case basis, even if doing so changes the cache hierarchy itself.
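One way to picture the "path of least resistance" idea is a greedy allocator that fills the lowest-latency banks first. The bank names, latencies, capacities, and the greedy policy here are illustrative assumptions, not Jenga's published placement algorithm:

```python
# Hypothetical sketch: place an application's working set in shared-cache
# banks, preferring the banks with the lowest access latency from its core.
# Jenga's real system solves a placement optimization over measured bank
# latencies; this simple greedy pass only illustrates the intuition.

def allocate(working_set_kb, banks):
    """banks: list of (bank_id, latency_cycles, free_kb).
    Returns [(bank_id, kb_assigned)], filling nearest banks first."""
    placement = []
    remaining = working_set_kb
    for bank_id, latency, free_kb in sorted(banks, key=lambda b: b[1]):
        if remaining <= 0:
            break
        take = min(remaining, free_kb)
        if take > 0:
            placement.append((bank_id, take))
            remaining -= take
    return placement

banks = [("B0", 10, 256), ("B1", 25, 512), ("B2", 40, 512)]
print(allocate(600, banks))  # → [('B0', 256), ('B1', 344)]
```

Because the nearest bank B0 cannot hold the whole 600 KB working set, the remainder spills into the next-closest bank B1, and the distant B2 is left untouched; a latency-aware allocator like this keeps hot data physically close to the core that uses it.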
The researchers tested Jenga on a simulated 36-core chip, which ran up to 30 per cent faster and consumed 85 per cent less power. This might ease the power penalty of huge multi-core chips, especially in mobile devices where power consumption must be kept low.
Jenga is only a simulation for now, so it will be a while before we can see proof of its benefits in the real world.
But chipmakers have shown an increasing willingness to explore new ideas as it becomes more difficult to squeeze out tangible performance gains by shrinking process nodes, so we might one day see a CPU with this dynamic cache system.
Source: MIT News