NVIDIA GeForce GTX 750 Ti - Good to the Last Watt (Updated)
*Updated as of 12th March 2014 - Originally published as a preliminary review on 18 Feb, we've since added more comparison results with previous generation GeForce GTX 650 Ti and GeForce GTX 660 graphics cards, a more comprehensive conclusion and our ratings for the new GeForce GTX 750 Ti.
Introducing Maxwell
Late last year, AMD introduced its new "Volcanic Islands" GPUs and many wondered when NVIDIA would respond with its eagerly anticipated "Maxwell" graphics cards. Wonder no more, they are finally here.
With an updated architecture, NVIDIA is also adopting a new introduction strategy. Previously, with every brand new architecture, NVIDIA would typically begin with a high-end card and then work its way downwards. With Maxwell, however, NVIDIA is taking the opposite approach - working bottom up instead. This makes sense when you consider that it was only recently that NVIDIA introduced its flagship GeForce GTX 780 Ti. But more importantly, NVIDIA is eager to make the new Maxwell architecture and technologies available to the masses.
Hence, the two new cards to feature the Maxwell architecture are positioned as mainstream parts and are powered by the new GM107 GPU. They are named the GeForce GTX 750 Ti and GeForce GTX 750, which is somewhat confusing because the existing GeForce GTX 700 series also includes graphics cards powered by the older Kepler architecture. Really, a move to a GeForce GTX 800 series would have made more sense, but perhaps NVIDIA has something special up its sleeve reserved for that.
Also interesting to note is that in NVIDIA's literature, the two new cards here are featuring what they call the "first generation" Maxwell architecture. Does this mean there would be an improved second generation architecture? Or is there already a second generation architecture lying in wait and reserved for the high-end parts? We shall see.
Anyway, much like Kepler, NVIDIA is pushing for greater power efficiency and performance with its new Maxwell architecture. However, the focus is undeniably on the former, especially now that it is trying to standardize its GPU architecture across all platforms - the Tegra K1 utilizes a 192-core GPU that is based on its desktop Kepler architecture. It therefore makes sense for NVIDIA to strive for greater power efficiency for its new architecture, which will eventually make its way to its mobile offerings.
Now let’s take a look at the new improvements and features that Maxwell brings.
SMM: Enabling Greater Power Efficiency
Key to Kepler’s efficiency was NVIDIA’s then-new Streaming Multiprocessor (SM) design known as the SMX. Maxwell features a redesigned Streaming Multiprocessor, now known as the SMM (Maxwell Streaming Multiprocessor). This redesign enables Maxwell to deliver 35% more performance per CUDA core and up to twice the performance per watt.
This is particularly impressive when you consider also that unlike Kepler, which marked a transition from Fermi’s 40nm manufacturing process to a 28nm one, Maxwell will continue to use a 28nm manufacturing process. What this means is that Maxwell’s increased efficiency and performance will have to come from architectural improvements as opposed to the associated power and performance improvements that usually come from transitioning to a newer manufacturing process.
How Maxwell achieves its power efficiency and performance improvements is due to a number of changes. First and foremost, the scheduler architecture and algorithms have been rewritten to be more intelligent and more adept at avoiding stalls. Also, each SMM is now partitioned into four processing blocks, each with its own control logic block (instruction buffer, scheduler) and 32 CUDA cores with which to carry out the operations. Taken as a whole, this means each SMM has a grand total of 128 CUDA cores. Such an implementation simplifies the scheduling logic which translates to less idle time and less time spent waiting for instructions, thus putting the cores to better use.
Each pair of the processing blocks also share four texture mapping units and a texture cache. This gives each SMM a grand total of eight texture mapping units and two texture caches, compared with the SMX which has 16 texture mapping units and a single texture cache.
Besides boasting more texture mapping units, some of you might also have realized that if we were to compare each SMM to an SMX, the latter would boast more CUDA cores per Streaming Multiprocessor. Strictly speaking, yes, a single SMX is more powerful, but that would be missing the big picture. NVIDIA has designed each SMM such that it delivers 90% of the performance of an SMX but with a smaller footprint. This means that more can be crammed into a single GPC and therefore GPU. The table below shows how the new GM107 GPU (used in the new GTX 750 Ti) compares against the GK107 (used on the GTX 650) - note that both are made up of one GPC unit.
As you can see, the new GM107 boasts over 66% more CUDA cores, which also translates to a roughly 60% improvement in pure compute performance - 1305 GFLOPs vs. 812 GFLOPs. Impressively, 1305 GFLOPs is nearly equivalent to that of the GeForce GTX 480, which was NVIDIA's flagship card just four years ago. Also amazing is that the GM107's rated TDP is lower and that, in terms of die size, it is just 25% larger.
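Those throughput figures can be sanity-checked with simple arithmetic: peak single-precision throughput is CUDA cores × core clock × 2, since each core can retire one fused multiply-add (two FLOPs) per cycle. A minimal sketch, using the published core counts and base clocks (1020MHz for the GM107 in the GTX 750 Ti, 1058MHz for the GK107 in the GTX 650):

```python
def peak_gflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak single-precision throughput: each CUDA core retires one
    fused multiply-add (2 FLOPs) per clock cycle."""
    return cuda_cores * clock_mhz * 2 / 1000.0

# GM107 (GTX 750 Ti): 5 SMMs x 128 cores = 640 CUDA cores at 1020MHz
gm107 = peak_gflops(cuda_cores=5 * 128, clock_mhz=1020)
# GK107 (GTX 650): 2 SMXs x 192 cores = 384 CUDA cores at 1058MHz
gk107 = peak_gflops(cuda_cores=2 * 192, clock_mhz=1058)

print(f"GM107: {gm107:.0f} GFLOPs")   # ~1306 GFLOPs
print(f"GK107: {gk107:.0f} GFLOPs")   # ~813 GFLOPs
print(f"Gain:  {(gm107 / gk107 - 1) * 100:.0f}%")
```

The result lands within a rounding error of NVIDIA's quoted 1305 and 812 GFLOPs figures, which confirms those numbers are derived from base clocks rather than boost clocks.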
Featuring All of NVIDIA’s Latest Technologies
In the past year, NVIDIA has introduced a couple of new exciting technologies and all of these will be supported by the new GM107 GPU and consequently the two new cards - GeForce GTX 750 Ti and GeForce GTX 750.
One of the most exciting technologies to be introduced by NVIDIA is GPU Boost 2.0, which debuted with the GeForce GTX Titan. Very briefly, this technology improves on the first-generation GPU Boost by including operating temperature as a monitoring parameter to determine how far the card can be overclocked to maximize performance.
In addition, the two new cards will also support recently announced technologies such as GameStream, ShadowPlay and G-Sync. GameStream lets users stream games from their PC to their NVIDIA Shield device, while ShadowPlay is a video recording and streaming technology that lets players record and stream their gameplay footage without taking a significant hit in performance. Finally, G-Sync is a revolutionary technology that synchronizes the monitor’s refresh rate to the GPU’s draw rate; we have covered G-Sync in greater detail here.
The GeForce GTX 750 Ti
As we have mentioned, the GeForce GTX 750 Ti is powered by the full GM107 GPU - the GeForce GTX 750 features the same GPU but is missing an SMM - and it is positioned as a mainstream card. Both SKUs effectively replace the GeForce GTX 650 Ti, which will be discontinued. For this review, we'll be focusing on how the GeForce GTX 750 Ti fares against other recent GPUs.
With the GeForce GTX 750 Ti, NVIDIA’s main goal was to keep its TDP to a minimum while ensuring it could still play the latest games at 1080p, albeit with modest graphics settings. Hence, the GeForce GTX 750 Ti boasts a TDP of just 60W, which also means it does not require a PCIe power connector. NVIDIA therefore recommends a minimum PSU rating of just 300W, which makes the GeForce GTX 750 Ti an excellent choice for HTPC and mini-PC enthusiasts, allowing these users to, for the first time, really enjoy casual gaming without the usual performance restrictions.
The GeForce GTX 750 Ti has a launch price of US$149 and goes up against cards like the GeForce GTX 650 Ti, GeForce GTX 660 and Radeon R7 260X. However, since these are mostly older cards, we're also interested in how it stacks up against the existing GeForce 700 series and the Radeon R9 270 graphics cards.
Update: Since our earlier published version of this article, we have updated our competing lineup of GPUs to include the GeForce GTX 650 Ti and the GTX 660. This is because the two new SKUs from the GTX 750 series are the direct replacements for the older GeForce GTX 650 Ti models. We also threw in the GeForce GTX 660 just to see how it stacks up against the new Maxwell-based GTX 750 Ti.
Here’s a table showing how it measures up against other recent comparable graphics cards.
| | NVIDIA GeForce GTX 750 Ti | NVIDIA GeForce GTX 660 | NVIDIA GeForce GTX 650 Ti | NVIDIA GeForce GTX 760 | AMD Radeon R9 270X | AMD Radeon R9 270 |
|---|---|---|---|---|---|---|
| Core Code | | | | | | |
| GPU Transistor Count | | | | | | |
| Manufacturing Process | | | | | | |
| Core Clock | | | | | | |
| Stream Processors | | | | | | |
| Stream Processor Clock | | | | | | |
| Texture Mapping Units (TMUs) | | | | | | |
| Raster Operator Units (ROPs) | | | | | | |
| Memory Clock (DDR) | | | | | | |
| Memory Bus Width | | | | | | |
| Memory Bandwidth | | | | | | |
| PCI Express Interface | | | | | | |
| Power Connectors | | | | | | |
| Multi-GPU Technology | | | | | | |
| DVI Outputs | | | | | | |
| HDMI Outputs | | | | | | |
| HDCP Output Support | | | | | | |
| DisplayPort Outputs | — | | — | | | |
Test Setup
These are the specifications of our graphics testbed:
- Intel Core i7-3960X (3.3GHz)
- ASUS P9X79 Pro (Intel X79 chipset) Motherboard
- 4 x 2GB DDR3-1600 G.Skill Ripjaws Memory
- Seagate 7200.10 200GB SATA hard drive (OS)
- Western Digital Caviar Black 7200 RPM 1TB SATA hard drive (Benchmarks + Games)
- Windows 7 Ultimate 64-bit
As mentioned above, instead of comparing only older generation cards, we decided to check the newcomer's standing against other recent cards of a grade just above it; the full list of cards tested is below. For the reference GTX 760 results, we clocked the Palit GeForce GTX 760 JetStream OC 2GB GDDR5 down to the default operating values of the intended reference card. The same applied to our GeForce GTX 660 and GeForce GTX 650 Ti comparison cards: we throttled down the Gigabyte GeForce GTX 650 Ti OC 1GB GDDR5 and the MSI GeForce GTX 660 Twin Frozr III OC 2GB GDDR5. In place of a reference AMD Radeon R9 270 card, we used the PowerColor R9 270 2GB GDDR5 OC and operated it right out of the box (since the Radeon R9 270 SKU comes in many custom clock speeds and price points from various vendors).
- NVIDIA GeForce GTX 750 Ti 2GB GDDR5 (ForceWare 334.69)
- NVIDIA GeForce GTX 660 2GB GDDR5 (ForceWare 334.69)
- NVIDIA GeForce GTX 650 Ti 2GB GDDR5 (ForceWare 334.69)
- NVIDIA GeForce GTX 760 2GB GDDR5 (ForceWare 332.21)
- AMD Radeon R9 270X 4GB GDDR5 (AMD Catalyst 13.11 Beta 9.2)
- PowerColor R9 270 2GB GDDR5 OC (AMD Catalyst 13.11 Beta 9.2)
In addition, we also introduced a new gaming benchmark, Call of Duty: Ghosts. Since this was a very recent addition, we don't yet have results for all the comparison cards above. However, we did manage a quick preview to see how the add-in card partners' GeForce GTX 750 Ti cards stack up against the reference counterpart, as well as against each other. The following cards were fielded for the Call of Duty: Ghosts benchmark:
- NVIDIA GeForce GTX 750 Ti 2GB GDDR5 (ForceWare 334.69)
- NVIDIA GeForce GTX 660 2GB GDDR5 (ForceWare 334.69)
- NVIDIA GeForce GTX 650 Ti 2GB GDDR5 (ForceWare 334.69)
- NVIDIA GeForce GTX 760 2GB GDDR5 (ForceWare 334.69)
- ASUS GeForce GTX 750 Ti OC 2GB GDDR5 (ForceWare 334.69)
- Palit GeForce GTX 750 Ti Storm Dual 2GB GDDR5 (ForceWare 334.69)
Note 1: In temperature and power consumption comparisons, the results used were from the data gathered from the actual reference cards. Please refer to our reviews for the NVIDIA GeForce GTX 760 and AMD Radeon R9 series.
Benchmarks
Here's the full list of benchmarks that we'll be using for our assessment. We would have included Crysis 3, but due to a technical glitch, this particular Steam title refused to run on the NVIDIA GeForce GTX 750 Ti reference graphics card. The following benchmarks were utilized:
- Futuremark 3DMark 2013
- Unigine 4.0 "Heaven"
- Hitman: Absolution
- Far Cry 3
- Call of Duty: Ghosts
For our temperature and power consumption tests, 3DMark 2011 was used.
Note 1: For the new gaming benchmark, we measured the average frame rate of the fielded graphics cards. We used Fraps to capture data over a pre-determined game scene, varying the resolution to gauge how each card's performance scales. The other video settings were set to high levels, and we fixed the anti-aliasing level at 4x to keep the rendered graphics vivid with sufficient visual detail.
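As an aside on methodology, Fraps records per-frame render times, and the average frame rate over a capture is the total frame count divided by total elapsed time (not the mean of instantaneous FPS readings). A hypothetical sketch of that calculation, with a made-up frametime list standing in for a real capture log:

```python
def average_fps(frametimes_ms: list[float]) -> float:
    """Average frame rate over a capture: frames / total elapsed seconds."""
    total_seconds = sum(frametimes_ms) / 1000.0
    return len(frametimes_ms) / total_seconds

# Hypothetical capture: five frames paced at roughly 16.7ms each (~60 FPS)
sample = [16.7, 16.6, 16.8, 16.7, 16.7]
print(f"{average_fps(sample):.1f} FPS")  # → 59.9 FPS
```

Dividing frames by elapsed time weights slow frames correctly; averaging per-frame FPS values instead would overstate performance whenever frametimes are uneven.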