No longer a rumor or a project for the distant future, Intel's once-questioned intent to enter the 3D graphics space hotly contested by AMD and NVIDIA has become a real threat to the established leaders in visual computing.
We've all known and accepted for several years that Intel offers basic graphics capabilities for the general day-to-day productivity tasks most users require, while those who need more graphics prowess for professional work or gaming head for one of the add-on options from NVIDIA or AMD. In today's context, however, where CPUs pack ever more computing cores and GPUs boast ever higher counts of shader processing units, a large amount of potential on both types of processors goes untapped: CPU cores often sit idle, while GPUs are not optimized for general tasks. The two processors are designed and built for differing needs and have also used differing application programming interfaces (APIs). As a result, one usually cannot harness the potential of both processors at once, and that is the problem Intel hopes to tackle with its Larrabee project.
CPUs are high-precision processing cores with massive caches, tuned to handle all sorts of general-purpose computing tasks. GPUs, by contrast, are designed for the specialized purpose of crunching 3D graphics, and given the nature of that work, they are heavily engineered for floating-point workloads. With GPUs becoming highly programmable since DirectX 9.0c, and especially since the DirectX 10 API, everyone from researchers to end-users has been trying to exploit these properties to accelerate specialized tasks such as video processing, which likewise depend heavily on floating-point performance. A dedicated, purpose-built processor is almost always much faster than a general-purpose CPU at its specialty. This is why we're currently seeing GP-GPU computing initiatives that offload video transcoding (and similar tasks) to the GPU, completing the job orders of magnitude faster than a general-purpose CPU can.
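The contrast above, between a CPU working through one element at a time and a GPU applying the same floating-point operation across many data elements in parallel, can be sketched in a few lines. This is only an illustration, not Intel's or NVIDIA's actual code: NumPy's vectorized arithmetic stands in for a GPU's many shader units, the SAXPY-style operation and array sizes are arbitrary, and the function names are hypothetical.

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# General-purpose, serial style: the processor walks the data one element
# at a time, the way a single CPU core executes a plain loop.
def saxpy_scalar(alpha, x, y):
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = alpha * x[i] + y[i]
    return out

# Data-parallel style: one operation expressed over the whole array at once,
# the shape of workload that GP-GPU frameworks map onto many shader units.
# Here NumPy's vectorized kernel plays the role of the specialized processor.
def saxpy_parallel(alpha, x, y):
    return alpha * x + y

t0 = time.perf_counter()
r1 = saxpy_scalar(2.0, a, b)
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
r2 = saxpy_parallel(2.0, a, b)
t_parallel = time.perf_counter() - t0

assert np.allclose(r1, r2)
print(f"scalar loop: {t_scalar:.3f}s  data-parallel: {t_parallel:.3f}s")
```

Running this shows the data-parallel form finishing far faster than the scalar loop, which is the same shape of win GP-GPU initiatives chase when moving floating-point-heavy jobs like transcoding off the CPU.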