
NVIDIA GeForce GTX 950 shootout: Which card should you get for Dota 2?

By Koh Wanzi - 27 Sep 2015

Introduction

We pit custom versions of the NVIDIA GeForce GTX 950 from ASUS, Gigabyte and MSI against each other to find out which one comes out on top.

Going after MOBA gamers

NVIDIA certainly has many cards (pun intended) up its sleeve. While AMD just trotted out the Radeon R9 Fury X, Fury, and its 300 series of rebranded graphics cards, NVIDIA isn’t even done with its GeForce GTX 900 series cards, which first made an appearance as the GeForce GTX 980 back in September 2014.

Now almost a year later, NVIDIA has a new, more affordable card for us. And coming so soon after the super high-end GeForce GTX Titan X and 980 Ti wowed us with their performance, it’s somewhat refreshing to see NVIDIA release a card for the average gamer.

With that said, NVIDIA still has a rather specific target audience in mind – multiplayer online battle arena (MOBA) gamers. Players of competitive MOBA titles like Dota 2, League of Legends and Heroes of the Storm demand high frame rates, low input latency and great visual quality, and NVIDIA says the GeForce GTX 950 is designed to deliver exactly that.

But as is typical of more affordable cards, NVIDIA isn’t releasing a reference version, so we’ve rounded up custom versions of the GeForce GTX 950 from ASUS, Gigabyte and MSI to give you a better idea of which one you should get.

Before we look at the individual cards in greater detail, let’s first examine the GPU’s architecture and how it differs from the GeForce GTX 960.


Hello again, Maxwell

The GeForce GTX 950 is based on the same GM206 GPU as the GeForce GTX 960, which means it supports DirectX 12 at feature level 12_1, including more efficient rendering techniques such as volume tiled resources, conservative raster, and rasterizer ordered views.

The GM206 GPU on the GeForce GTX 950 is a pared-down version of the chip found on the GeForce GTX 960. Compared to the 8 SMMs and 1,024 CUDA cores on the latter card, the GeForce GTX 950 is equipped with 6 SMMs and just 768 CUDA cores (128 per SMM, 32 in each processing block), a 25% reduction. Like the GeForce GTX 960, each SMM houses 8 texture units, so disabling two SMMs also cuts the texture unit count by 25% to 48. The number of Raster Operator Units (ROPs) remains at 32, the same as on the GeForce GTX 960.

The GM206 GPU on the GeForce GTX 950 sports just 6 SMMs and 768 CUDA cores, a 25% reduction from the GeForce GTX 960. (Image Source: AnandTech)
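If you want to sanity-check those numbers, the counts fall straight out of the per-SMM figures quoted above. Here’s a minimal back-of-the-envelope sketch in C++ (the per-SMM values are the Maxwell ones already mentioned in this article, nothing new):

```cpp
#include <cstdio>

int main() {
    // Per-SMM resources on Maxwell's GM206, as quoted above
    const int cudaCoresPerSMM    = 128;  // 4 processing blocks x 32 cores
    const int textureUnitsPerSMM = 8;

    const int smmGTX950 = 6;
    const int smmGTX960 = 8;

    printf("GTX 950: %d CUDA cores, %d texture units\n",
           smmGTX950 * cudaCoresPerSMM, smmGTX950 * textureUnitsPerSMM);  // 768, 48
    printf("GTX 960: %d CUDA cores, %d texture units\n",
           smmGTX960 * cudaCoresPerSMM, smmGTX960 * textureUnitsPerSMM);  // 1024, 64
    return 0;
}
```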

Most workloads are heavily dependent on the number of shaders or SMMs, so it goes without saying that the GeForce GTX 950 will suffer when compared against the GeForce GTX 960 in this respect.

The GeForce GTX 950 retains two Graphics Processing Clusters (GPCs) and a 128-bit memory bus, with its GDDR5 memory running at an effective 6,600MHz (down from 7,010MHz on the GeForce GTX 960). The reference specifications list a base clock of 1,024MHz and a boost clock of 1,188MHz, and the card comes outfitted with a modest 2GB of GDDR5 video memory.
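Those clocks also explain the memory bandwidth figures in the table below: peak bandwidth is simply the effective data rate multiplied by the bus width in bytes. A quick sketch of that arithmetic in C++:

```cpp
#include <cstdio>

// Peak bandwidth (GB/s) = effective data rate (MT/s) x bus width (bytes) / 1000
double bandwidthGBs(double effectiveClockMHz, int busWidthBits) {
    return effectiveClockMHz * (busWidthBits / 8.0) / 1000.0;
}

int main() {
    printf("GTX 950: %.2f GB/s\n", bandwidthGBs(6600.0, 128));  // 105.60 GB/s
    printf("GTX 960: %.2f GB/s\n", bandwidthGBs(7010.0, 128));  // 112.16 GB/s
    return 0;
}
```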

GeForce GTX 950 and GeForce GTX 960 compared
|  | NVIDIA GeForce GTX 950 | NVIDIA GeForce GTX 960 |
|---|---|---|
| Core Code | GM206 | GM206 |
| GPU Transistor Count | 2.94 billion | 2.94 billion |
| Manufacturing Process | 28nm | 28nm |
| Core Clock | 1024MHz (Boost: 1188MHz) | 1126MHz (Boost: 1178MHz) |
| Stream Processors | 768 | 1024 |
| Stream Processor Clock | 1024MHz | 1126MHz |
| Texture Mapping Units (TMUs) | 48 | 64 |
| Raster Operator Units (ROPs) | 32 | 32 |
| Memory Clock (DDR) | 6600MHz | 7010MHz |
| Memory Bus Width | 128-bit | 128-bit |
| Memory Bandwidth | 105.60 GB/s | 112.16 GB/s |
| PCI Express Interface | PCI Express 3.0 | PCI Express 3.0 |
| Power Connectors | 1 x 6-pin | 1 x 6-pin |
| Multi GPU Technology | SLI | SLI |
| DVI Outputs | 1 | 1 |
| HDMI Outputs | 1 | 1 |
| DisplayPort Outputs | 3 | 3 |
| HDCP Output Support | Yes | Yes |

Like its bigger brother the GeForce GTX 960, the card is powered by a single 6-pin PCIe connector. However, it features a lower Thermal Design Power (TDP) of 90 watts, compared to 120 watts on the GeForce GTX 960. This is likely due to its cut-down architecture, with two SMMs disabled and 25% fewer CUDA cores, as well as its lower base clock speed.

But that's not all. The GeForce GTX 950 is more than just a cut-down GM206 chip; it is also optimized to deliver a smooth and responsive gameplay experience through faster rendering and latency optimizations that improve response time. There are software optimizations too, with an improved GeForce Experience that helps you get more out of the GeForce GTX 950 in a fuss-free manner for your respective games. We've covered these aspects and more in a dedicated GeForce GTX 950 feature article.


GeForce GTX 950 vs. the competition?

At US$159, there’s a lot of competition in the sub-US$200 range for the GeForce GTX 950, particularly from the GeForce GTX 960 and AMD Radeon R9 380, both of which are only slightly pricier at US$199. However, NVIDIA has singled out the US$149 AMD Radeon R7 370 as its nearest competitor. We’re not surprised to see it highlight advantages in power consumption, given that the Radeon R7 370 is based on the aging Pitcairn GPU that first debuted in 2012. In addition, while the Radeon R7 370 supports DirectX 12, it only does so at feature level 11_1, whereas the GeForce GTX 950 supports feature level 12_1.

For the sake of clarity, we’d like to point out that a DirectX version number (e.g. DirectX 11 or 11.1) is not the same as a feature level, which is typically signified with an underscore (instead of a decimal point), as in feature level 12_1. A DirectX version update adds a new set of standardized capabilities that give developers more tools to do their jobs better. In the case of DirectX 12, that means lower API overheads, improved utilization of multi-core CPUs, and the ability to combine the graphics processing capabilities of non-identical GPUs.

On the other hand, a DirectX feature level is more like a subset of a DirectX version update. It defines the exact level of support a particular GPU offers while still supporting the underlying DirectX specification. As a result, the Radeon R7 370 will support the key benefits of DirectX 12 like lower API overheads, but it won’t be able to take advantage of aspects of feature level 12_1 like the aforementioned volume tiled resources and conservative raster.
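For the technically curious, applications don’t read a feature level off a spec sheet – they query the GPU’s capabilities at runtime through the DirectX 12 API. The sketch below (C++; it assumes a Windows machine with the DirectX 12 headers installed, and is purely an illustration rather than anything from NVIDIA or AMD) creates a device and checks the optional features that feature level 12_1 guarantees:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a DirectX 12 device on the default adapter at the baseline feature level.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        printf("No DirectX 12 capable GPU found.\n");
        return 1;
    }

    // Query the optional capabilities that feature level 12_1 bundles together.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));

    printf("Conservative rasterization tier: %d\n", (int)options.ConservativeRasterizationTier);
    printf("Rasterizer ordered views:        %s\n", options.ROVsSupported ? "yes" : "no");
    printf("Tiled resources tier:            %d\n", (int)options.TiledResourcesTier); // tier 3 adds volume tiled resources

    device->Release();
    return 0;
}
```

On a GeForce GTX 950 those checks should come back positive, while a Radeon R7 370 should report them as unsupported, even though it still benefits from DirectX 12’s lower API overheads.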

So does the scoreboard read NVIDIA 1: AMD 0? Perhaps on paper, but the actual feature sets that define feature level 12_1 are probably not going to have any real impact on how the average gamer experiences their games.

Having said that, it's time to check out the contenders in this shootout and scrutinize their performance over the next few pages, so read on!
