Any primer on the basics of computer architecture would place the CPU at the center of the hierarchy. The processor is, however, serviced by many supporting units, and one of the most important is memory. In the modern computer, there are several levels of it, from the registers on the processor die, through the levels of cache, to the separate memory modules installed on the motherboard. After all, a CPU is nothing but a giant calculator, and memory serves as temporary storage from which the processor fetches raw data and to which it writes the results of the operations performed on that data.
The modern computer uses various forms of memory, though they are almost all of the random access type, meaning that the data stored in these memory locations can be retrieved in any order. Excluding the memory embedded within the processor die, the main form of memory found in most PCs today is the dynamic random access memory module, or DRAM module for short.
These modules have undergone quite a few changes in the past ten years, but they have remained recognizable over the years as integrated circuits mounted on a rectangular PCB. The number of pins has varied as the different formats came and went, from the 168-pin SDRAM module to the 240-pin connectors on the current DDR2 and DDR3 SDRAM modules. While memory standards are determined by an industry standards body, JEDEC, they evolve in step with the micro-architectures created by CPU firms like AMD and Intel.
By 1998, SDRAM, which had only been introduced in 1996, was beginning to dominate the industry. Yet by 1999 there was a new player: RAMBUS, whose RDRAM had a large backer in the form of Intel, which licensed the use of RDRAM for its processors. Various issues associated with RDRAM, like its high cost and increased latency, dulled its bandwidth advantage, and the format proved unpopular with consumers.
While RAMBUS was to fade into a bad memory (pun intended) for consumers who had bought into the technology via Intel's platform, the company would haunt memory manufacturers for years with litigation, asserting that it owned patents on DDR technology. This led to an epic, near decade-long sequence of trials and appeals in American courts involving major players like Samsung, Micron and Infineon, among others. Anti-trust and price-fixing claims were some of the related issues that came out of the litigation, though RAMBUS emerged as the eventual winner in most of the cases after many rounds, the most recent concluding in 2008.
This failure of RDRAM set the stage for the uninterrupted progress of SDRAM, beginning with the introduction of DDR SDRAM in 2000. By transferring data on both the rising and falling edges of the clock, DDR doubled the minimum read or write unit of the memory module to two words of data and hence doubled the peak memory bandwidth, especially when coupled with increased clock speeds ranging from 133 to 200MHz.
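The bandwidth arithmetic behind those figures can be sketched in a few lines. This is an illustrative helper (the function name is our own), assuming the standard 64-bit module data bus:

```python
# Illustrative sketch: peak theoretical bandwidth of a DDR SDRAM module.
# DDR moves data on both clock edges over a 64-bit (8-byte) bus, so
# peak bandwidth = memory clock x 2 transfers/clock x 8 bytes.

def ddr_peak_bandwidth_mb_s(memory_clock_mhz: int) -> int:
    """Peak theoretical bandwidth of a DDR module in MB/s."""
    transfers_per_clock = 2   # double data rate: rising and falling edges
    bus_width_bytes = 8       # 64-bit module data bus
    return memory_clock_mhz * transfers_per_clock * bus_width_bytes

# DDR-400 (200MHz memory clock) was marketed as PC-3200: 3200 MB/s peak.
print(ddr_peak_bandwidth_mb_s(200))  # 3200
# DDR-266 (133MHz) works out to 2128 MB/s, rounded down to "PC-2100".
print(ddr_peak_bandwidth_mb_s(133))  # 2128
```

The marketing names (PC-3200 and so on) simply encode this peak figure in MB/s.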
2003 saw the debut of DDR2 SDRAM. As we mentioned: "The major difference between DDR2 and DDR was the doubling of its bus speeds to the memory clock, thus allowing higher data transfer per memory clock cycle." In other words, the I/O bus runs at twice the internal memory clock, with bus clock speeds of 200 to 400MHz (or DDR2-400 to DDR2-800).
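That clock relationship can be made concrete with a small sketch (the function name is illustrative): the I/O bus is clocked at twice the internal memory clock, and data still moves on both bus edges.

```python
# Illustrative sketch of the DDR2 clock relationship described above:
# bus clock = internal memory clock x 2, and data moves on both bus
# edges, so effective data rate = internal clock x 2 x 2.

def ddr2_rates(internal_clock_mhz: int) -> tuple:
    """Return (I/O bus clock in MHz, effective data rate in MT/s)."""
    bus_clock = internal_clock_mhz * 2   # doubled I/O bus clock
    data_rate = bus_clock * 2            # double data rate on that bus
    return (bus_clock, data_rate)

# DDR2-800: 200MHz internal clock -> 400MHz bus -> 800 MT/s
print(ddr2_rates(200))  # (400, 800)
# DDR2-400 sits at the bottom of the range: 100MHz internal clock.
print(ddr2_rates(100))  # (200, 400)
```

This is why a DDR2-800 module carries a "400MHz" bus clock while its memory cells run at only 200MHz internally.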
DDR3 SDRAM entered the picture in 2007, and with all the major chipsets supporting the new format, it should slowly gain traction in 2008 before more mainstream adoption in 2009. The same concept is taken a step further for a higher prefetch of 8 bits and a higher data rate, though the latencies on DDR3 are also significantly higher. On the other hand, the new memory modules use less power than DDR2, running at 1.5V compared to 1.8V, while the density of the memory chips is also increased. And that is the present state of memory on the desktop.
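The progression across the three generations comes down to the prefetch depth: 2 bits for DDR, 4 for DDR2, 8 for DDR3, all relative to the internal memory-array clock. A minimal sketch of that relationship (names are our own):

```python
# Illustrative sketch: how prefetch depth relates the internal
# memory-array clock to the effective transfer rate per generation.
PREFETCH = {"DDR": 2, "DDR2": 4, "DDR3": 8}

def effective_data_rate_mt_s(generation: str, core_clock_mhz: int) -> int:
    """Effective transfer rate in MT/s for a given internal array clock."""
    return core_clock_mhz * PREFETCH[generation]

# With the same 200MHz internal array clock:
for gen in ("DDR", "DDR2", "DDR3"):
    print(gen, effective_data_rate_mt_s(gen, 200))
# DDR 400, DDR2 800, DDR3 1600 -- i.e. DDR-400, DDR2-800, DDR3-1600
```

Each generation doubles the data rate without the memory cells themselves running any faster, which is also why access latencies in clock cycles keep climbing.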
Ever since Moore's Law was observed, the computing industry has largely kept its promise of exponential growth in performance. Yet in one area the outlook has been bleak for the better part of twenty years: computer scientists have long bemoaned the inability of memory bandwidth to keep pace with the increase in CPU speed. Termed the memory wall, this is a problem without an easy answer, and even though newer memory standards will keep arriving, it seems that memory bandwidth will eventually be the limiting factor in computing.