If you recall, some two years back AMD debuted its much-publicized open-architecture systems initiative, Torrenza, aimed at enabling specialized co-processors on the AMD platform. However, that has largely fizzled out of the spotlight, while there have been several recent references to AMD's headway with Accelerated Processing Units (APUs). So how do these two developments differ when both are supposed to assist in accelerated processing? We touched on these subjects with John as well, to get a better understanding of where AMD is heading.
First off, the Torrenza initiative was about opening up AMD's architecture, whether one wanted to expand via the PCI slot or PCI Express, to enable much greater acceleration for specific tasks. For such workloads, a dedicated co-processor can deliver far greater gains than simply adding another general-purpose core or raising clock frequencies.
The whole trick with all these various forms of accelerated processors and co-processors is getting the software to recognize their hardware capabilities. AMD is putting a lot of resources (and energy) into working with Microsoft, the Linux community and software developers to unlock the future of accelerated computing on platforms with Torrenza co-processors and Stream processor cards plugged in.
The first example of an accelerated processing unit (APU) will come from the Fusion initiative itself, which would take AMD's as-yet-unannounced next-generation graphics core and integrate it alongside two Phenom-architecture cores. AMD predicts that this will consume much less power and enable a much simpler notebook hardware platform; a separate chipset may not even exist in such a platform, since everything is integrated into the CPU, allowing AMD to be much more competitive in the thin-and-light notebook segment. The real promise, however, lies in unlocking the GPU to perform more general-purpose computing tasks, which is what the Torrenza, Stream and APU initiatives are all about.
AMD isn't the only one hoping to unlock the GPU: while AMD has its CTM (Close To Metal) interface, NVIDIA is also hard at work on its CUDA initiative to unleash the GPU for general-purpose computing. Intel has no comment on these developments other than to ask why developers would budge to another platform and take on the extra work of recompiling their code, but we believe a slight shift is occurring, slowly and steadily. The big guns of the software world may not be the first movers, but second- or third-tier software developers will surely give it a shot; after all, GP-GPU computing has, in selective areas such as stream processing, proven far more efficient than a traditional CPU. Still, there are several data types and math-precision requirements that the GPU currently cannot handle, so the CPU is far from being rivaled. Here are some expectations and scenarios from John Taylor:
"So the workloads and where it happens is going to change in the future, but we've got to work with the development community to take advantage of the new computing architectures that will have two, three, four or more general-purpose serial computing capable cores sitting beside hundreds of shader cores (or GP-GPU cores) that can have amazing floating-point performance and can perform heavily parallelized tasks such as transcoding and encoding.
The things that drive us crazy today: let's say you use Nero and you take a digital file, transcode it and burn it on a DVD for whatever reason. That takes hours, especially if it's a 1GB file or larger. The promise of accelerated computing is to get the plug-in done with Nero, have a high-end discrete graphics card, or, once Fusion is adopted, everybody will have those GPU cores sitting on their CPU too, and what took hours will just take minutes. And that is perhaps one of the more computationally intensive tasks that consumers do today."
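The stream-processing pattern behind these expectations can be made concrete with a minimal sketch. The idea is that the same small "kernel" is applied independently to every element of a data stream, so a GPU's hundreds of shader cores can each handle different elements at the same time. The Python below runs serially and the function names (`saxpy_kernel`, `run_stream`) are our own illustration, not part of CTM or CUDA:

```python
# Stream processing in miniature: one small "kernel" applied
# independently to every element of a data stream. On a CPU this
# loop runs serially; on a GPU, each element could be computed by
# a separate shader core, because no element depends on another.

def saxpy_kernel(a, x, y):
    """Per-element work: a classic scaled add (a*x + y)."""
    return a * x + y

def run_stream(a, xs, ys):
    """Serial stand-in for what a GPU does across many cores at once."""
    return [saxpy_kernel(a, x, y) for x, y in zip(xs, ys)]

xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 20.0, 30.0, 40.0]
print(run_stream(2.0, xs, ys))  # [12.0, 24.0, 36.0, 48.0]
```

Video transcoding fits the same mold at a vastly larger scale: each block of pixels receives the same transformation independently of the others, which is why a job that takes a CPU hours could, in principle, shrink to minutes on hundreds of parallel cores.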