MSI NX8800GTS-T2D640E-HD (GeForce 8800 GTS 640MB)


Introduction

Eagerly anticipated by enthusiasts (and probably by Microsoft too), the new GeForce 8 series of graphics cards has hit the streets. Hyped as the first generation of graphics cards to support DirectX 10 and, to the surprise of pundits, NVIDIA's first foray into a unified shader architecture, the new core, codenamed G80, is larger than ever at 681 million transistors and, naturally, faster than ever. Two models are available at launch: the flagship GeForce 8800 GTX and the relatively more affordable GeForce 8800 GTS, which is the focus of this review.

For a comprehensive analysis of all the features and highlights of NVIDIA's new architecture, you should refer to our in-depth article, but in case you find it too technical, here's the CliffsNotes version:

First, if you can remember only one catchphrase, make it this: unified shader architecture. To use an analogy, the new shader processors (now known as stream processors) are the stem cells of graphics. Just as stem cells have the potential to develop into virtually any kind of cell in the body, all the shader processors in the GeForce 8 series are general-purpose processors instead of the inflexible, dedicated pixel or vertex processors found in previous graphics cards. They are capable of handling either type of work with equal ease, so any free processor can be roped in to deal with the workload at hand instead of sitting idle because it is not equipped for it, thereby increasing overall efficiency.

Of course, now that all the processors are equally proficient, something needs to direct traffic to keep things from turning chaotic. That job falls to a thread manager dubbed GigaThread Technology, which dynamically allocates the processors according to the current workload. On the GeForce 8800 GTX, the stream processors are clocked at a very fast 1350MHz and organized in groups of sixteen.
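To see why this matters, consider a toy sketch of the idea. This is not NVIDIA's actual GigaThread scheduler, just a back-of-the-envelope Python illustration of why a unified pool of processors wastes fewer cycles than a fixed vertex/pixel split when a frame is heavily skewed toward one kind of work (all unit counts and workload numbers below are made up for the example):

```python
# Toy comparison of dedicated vs. unified shader scheduling.
# NOT NVIDIA's implementation -- purely illustrative.

def cycles_dedicated(vertex_work, pixel_work, vertex_units, pixel_units):
    """Dedicated units: each pool can only drain its own queue."""
    vertex_cycles = -(-vertex_work // vertex_units)  # ceiling division
    pixel_cycles = -(-pixel_work // pixel_units)
    # The frame isn't done until the slower pool finishes.
    return max(vertex_cycles, pixel_cycles)

def cycles_unified(vertex_work, pixel_work, total_units):
    """Unified units: any free processor takes any pending job."""
    return -(-(vertex_work + pixel_work) // total_units)

# A pixel-heavy frame on 16 units: an 8/8 split vs. one pool of 16.
dedicated = cycles_dedicated(vertex_work=8, pixel_work=120,
                             vertex_units=8, pixel_units=8)
unified = cycles_unified(vertex_work=8, pixel_work=120, total_units=16)
print(dedicated, unified)  # prints "15 8" -- the unified pool wins
```

In the dedicated case the eight vertex units sit idle after the first cycle while the pixel units grind on; in the unified case every processor stays busy until the whole frame is done.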

The other key phrase you'll probably find on NVIDIA's packaging for the GeForce 8 is the Lumenex engine, which is mainly responsible for getting rid of the ugly jaggies in your applications. Anti-aliasing (AA), anisotropic filtering (AF) and high dynamic range (HDR) rendering are the usual jobs handled by this new engine, which is actually built from some old-fashioned components, mainly raster operators (ROPs). These are enhanced versions of the ones on older graphics cards, however, and a new algorithm known as Coverage Sampling AA (CSAA) allows 16x AA, which was previously not possible on a single GPU.

More importantly for some enthusiasts, the GeForce 8 can now do HDR together with AA. This was a glaring shortcoming of the GeForce 7 architecture, and NVIDIA has addressed it in some style. The GeForce 8 series supports 128-bit HDR, twice the precision of ATI's current implementation on its Radeon X1000 series, and together with a full 10-bit display pipeline offering up to 1 billion unique colors, it seems that NVIDIA has taken criticisms of its image quality to heart.
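The arithmetic behind those headline numbers is straightforward: 128-bit HDR works out to a 32-bit value for each of the four color channels, and a 10-bit-per-channel display pipeline yields just over a billion displayable colors:

```python
# 128-bit HDR: 32 bits of precision for each of the four
# channels (red, green, blue, alpha).
hdr_bits = 32 * 4
print(hdr_bits)  # 128

# 10-bit display pipeline: 10 bits for each of R, G and B
# gives 2**30 distinct colors -- roughly 1 billion.
colors = 2 ** (10 * 3)
print(colors)  # 1073741824
```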

Finally, the GeForce 8 is DirectX 10 compliant, meaning it supports the new Shader Model 4.0. Remember that DirectX 10 is only available on Windows Vista, so it's not yet relevant to most users. As part of the DirectX 10 specification, the GeForce 8 has a geometry shader that can manipulate 3D models without involving the CPU, reducing the processor's workload and, because the shader sits closer to the data, improving execution time as well. Other changes in DirectX 10 likewise hand more responsibilities to the GPU and take the CPU out of the equation, so you can expect reduced CPU utilization while the GPU is given free rein to do what it does best.
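What a geometry shader actually does is take one input primitive and emit zero or more new primitives. The Python sketch below mimics that with a classic example, splitting one triangle into four by inserting edge midpoints; on the GeForce 8 this kind of geometry amplification runs on the GPU itself rather than the CPU, but this toy version only illustrates the concept:

```python
# Toy model of geometry-shader-style amplification: one triangle
# in, four triangles out. Illustrative only -- real geometry
# shaders are written in HLSL/GLSL and run on the GPU.

def midpoint(a, b):
    """Midpoint of two 3D vertices."""
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangle):
    """Emit four smaller triangles from one input triangle."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(len(subdivide(tri)))  # 4 primitives emitted from 1
```

Because the extra vertices are generated on the GPU, the CPU never has to touch (or re-upload) the amplified geometry, which is exactly where the workload reduction comes from.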

As mentioned earlier, you can refer to our complete article on the GeForce 8 for the details but for now, let's take a look at MSI's GeForce 8800 GTS, which should be sitting pretty on a retail shelf as we speak.