Launching Today: GTX 980 & GTX 970

Now that we’ve had a chance to take a look at the architectural and feature additions found in Maxwell 2 and GM204, let’s talk about the products themselves.

Today NVIDIA will be launching two products: the GeForce GTX 980 and the GeForce GTX 970. As with past x80/x70 parts this is a two-tier launch, with GTX 980 serving as NVIDIA’s new flagship and first-tier GM204 card, while GTX 970 offers second-tier performance at much lower pricing.

NVIDIA GPU Specification Comparison

|                        | GTX 980    | GTX 970 (Corrected) | GTX 780 Ti | GTX 770    |
|------------------------|------------|---------------------|------------|------------|
| CUDA Cores             | 2048       | 1664                | 2880       | 1536       |
| Texture Units          | 128        | 104                 | 240        | 128        |
| ROPs                   | 64         | 56                  | 48         | 32         |
| Core Clock             | 1126MHz    | 1050MHz             | 875MHz     | 1046MHz    |
| Boost Clock            | 1216MHz    | 1178MHz             | 928MHz     | 1085MHz    |
| Memory Clock           | 7GHz GDDR5 | 7GHz GDDR5          | 7GHz GDDR5 | 7GHz GDDR5 |
| Memory Bus Width       | 256-bit    | 256-bit             | 384-bit    | 256-bit    |
| VRAM                   | 4GB        | 4GB                 | 3GB        | 2GB        |
| FP64                   | 1/32 FP32  | 1/32 FP32           | 1/24 FP32  | 1/24 FP32  |
| TDP                    | 165W       | 145W                | 250W       | 230W       |
| GPU                    | GM204      | GM204               | GK110      | GK104      |
| Transistor Count       | 5.2B       | 5.2B                | 7.1B       | 3.5B       |
| Manufacturing Process  | TSMC 28nm  | TSMC 28nm           | TSMC 28nm  | TSMC 28nm  |
| Launch Date            | 09/18/14   | 09/18/14            | 11/07/13   | 05/30/13   |
| Launch Price           | $549       | $329                | $699       | $399       |

Starting with the GeForce GTX 980, this is a fully enabled GM204 part. This means all 16 SMMs are enabled (2048 CUDA cores), as are all 64 ROPs and the full 256-bit memory bus. It is, in other words, GM204 at its best.
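Since the headline numbers follow directly from the SMM count, a minimal sketch (in Python, using the per-SMM resource counts for Maxwell of 128 CUDA cores and 8 texture units) shows how the totals are derived:

```python
# Per-SMM resources for Maxwell 2 (GM204).
CORES_PER_SMM = 128   # CUDA cores in each SMM
TEX_PER_SMM = 8       # texture units in each SMM

def gm204_shader_config(smms: int) -> dict:
    """Derive shader/texture totals from the number of enabled SMMs."""
    return {"CUDA cores": smms * CORES_PER_SMM,
            "Texture units": smms * TEX_PER_SMM}

print(gm204_shader_config(16))  # GTX 980: 2048 CUDA cores, 128 texture units
print(gm204_shader_config(13))  # GTX 970: 1664 CUDA cores, 104 texture units
```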

For clockspeeds NVIDIA is shipping GTX 980 with a base clockspeed of 1126MHz and a boost clockspeed of 1216MHz, and in our samples we have found the maximum clockspeed (highest stock boost bin) to be 1252MHz. This is a higher set of clockspeeds than any NVIDIA consumer GPU thus far, surpassing GTX 770, GTX Titan Black, and GTX 750 Ti. Curiously, NVIDIA’s self-defined (and otherwise arbitrary) boost clock sits much further above the base clock than on past parts; normally it would be only 50MHz or so above the base clock. This indicates that NVIDIA is getting more aggressive with their boost clock labeling and is picking values much closer to the card’s maximum clockspeed. This is a subject we will be revisiting later.

Meanwhile the memory clock stands at 7GHz, the same as with NVIDIA’s past generation of high-end cards. With GDDR5 clockspeeds all but tapped out, NVIDIA appears to have reached the limits of GDDR5 as a technology, hence their long-term interest in HBM for future architectures and improved color compression for current architectures. In any case this 7GHz of GDDR5 is attached to a 256-bit memory bus, and is populated with 4GB of VRAM. NVIDIA for the longest time has held to 2GB/3GB of memory for their cards, so it is a welcome sight to see that they are now making 4GB their standard, especially if they are going to target 4K gaming.
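As a back-of-the-envelope illustration (our own arithmetic, not an NVIDIA figure), the theoretical peak bandwidth works out to the effective data rate times the bus width in bytes:

```python
def peak_bandwidth_gbs(effective_clock_hz: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return effective_clock_hz * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbs(7e9, 256))  # GTX 980/970: 224.0 GB/s
print(peak_bandwidth_gbs(7e9, 384))  # GTX 780 Ti:  336.0 GB/s
```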

For power delivery GTX 980 has a rated TDP of 165W. This is significantly lower than the 250W TDPs of the GTX 780/780 Ti/Titan and even the 230W TDP of the GTX 770, and heavily contributes to NVIDIA’s overall power efficiency advantage. NVIDIA does not specify an idle TDP; however, in our testing idle power usage is lower than ever for a high-end NVIDIA card, indicating that NVIDIA should have it down to the single-watt range.

Moving on, we have the GTX 980’s lower price, lower performance counterpart, the GTX 970. Compared to GTX 980, GTX 970 drops 3 of the SMMs, reducing its final count to 13 SMMs or 1664 CUDA cores. It also sheds part of a ROP/L2 cache partition while retaining the 256-bit memory bus of its bigger sibling, bringing the ROP count down to 56 ROPs and the L2 cache down to 1.75MB, a configuration option new to Maxwell.

As expected, along with the reduction in SMMs, clockspeeds are also slightly reduced for GTX 970. It ships at a base clockspeed of 1050MHz with a boost clockspeed of 1178MHz. This puts the theoretical performance of the GTX 970 at about 85% of the GTX 980’s ROP performance and about 79% of its shading/texturing/geometry performance. Given that the GTX 970 is unlikely to be ROP bound with so many ROPs, the real world performance difference should track the 79% value much more closely, meaning there is a significant performance delta between the GTX 980 and GTX 970. Elsewhere the memory configuration is unchanged from GTX 980, which means we’re looking at 4GB of GDDR5 clocked at 7GHz, all on a 256-bit bus.
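The 85% and 79% figures can be reproduced directly from the specification table above; a quick sketch of the arithmetic:

```python
# Unit counts and boost clocks (MHz) from the specification table above.
gtx980 = {"rops": 64, "smms": 16, "boost": 1216}
gtx970 = {"rops": 56, "smms": 13, "boost": 1178}

def ratio(units_a: int, clock_a: int, units_b: int, clock_b: int) -> float:
    """Ratio of theoretical throughput (units x clock) between two cards."""
    return (units_a * clock_a) / (units_b * clock_b)

rop = ratio(gtx970["rops"], gtx970["boost"], gtx980["rops"], gtx980["boost"])
shader = ratio(gtx970["smms"], gtx970["boost"], gtx980["smms"], gtx980["boost"])

print(f"ROP throughput: {rop:.0%}")              # ~85%
print(f"Shader/texture/geometry: {shader:.0%}")  # ~79%
```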

GTX 970’s TDP meanwhile is lower than GTX 980’s thanks to the reduced clockspeeds and SMM count. The stock GTX 970 will be shipping with a TDP of just 145W, some 85W less than GTX 770. NVIDIA’s official designs still include two 6-pin PCIe power sockets despite the fact that the card should technically be able to operate on just one; it is not clear at this time whether this is for overclocking purposes (a single 6-pin plus the slot would supply only 150W, leaving almost no power headroom) or for safety purposes, since with one socket NVIDIA would be running right up against the PCIe specification.
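To put numbers on that headroom question, a small sketch using the PCIe specification’s nominal power limits (75W from the x16 slot, 75W per 6-pin connector):

```python
PCIE_SLOT_W = 75  # nominal power available from the PCIe x16 slot
SIX_PIN_W = 75    # nominal power per 6-pin PCIe connector

def power_budget(six_pin_count: int) -> int:
    """Maximum in-spec board power for a given connector configuration."""
    return PCIE_SLOT_W + six_pin_count * SIX_PIN_W

GTX970_TDP = 145
print(power_budget(1) - GTX970_TDP)  # one 6-pin:  5W of headroom
print(power_budget(2) - GTX970_TDP)  # two 6-pins: 80W of headroom
```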

Due to the launch of the GTX 980 and GTX 970, NVIDIA’s product lineup will be changing to accommodate these cards. GTX 780 Ti, GTX 780, and GTX 770 are all being discontinued; their replacements offer better performance at better prices and lower power consumption. GTX 980 will be launching at $549, while GTX 970 will be launching at the surprisingly low price of $329, some 40% cheaper than GTX 980. On a historical basis GTX 980 is priced a bit higher than most past GTX x80 cards, which often launched at $500, while GTX 970 immediately slots into GTX 770’s old price.

NVIDIA’s target market for the GTX 900 series will be owners of GTX 600/500/400 series cards and their AMD equivalents. GTX 980 and GTX 970 are faster than their 700 series predecessors but not immensely so, and as a result NVIDIA does not expect 700 series owners to want to upgrade so soon. Owners of 600 series and older cards, meanwhile, are looking at 70%+ performance improvements at the same tier, along with some degree of reduction in power consumption.

For today’s launch NVIDIA will be doing a reference launch of the GTX 980, so reference cards will be well represented while production of customized cards ramps up. Meanwhile GTX 970 is a pure virtual launch, meaning there will not be any reference cards at all. NVIDIA’s partners will be launching with customized designs right away, many of which will be carried over from their GTX 600/700 card designs. This will be a hard launch and cards should be readily available, and while NVIDIA should have no problem producing GM204 GPUs on the very mature TSMC 28nm process, it is difficult to predict just how well supplies will hold out.

On a competitive basis, NVIDIA’s direct competition for the GTX 980 and GTX 970 will be split. GTX 980 is an immediate challenger for the Radeon R9 290X, AMD’s flagship single-GPU card, which outside of a couple of sales continues to be priced around $499. GTX 970’s competition meanwhile will be split between the Radeon R9 290 and Radeon R9 280X. From a performance perspective the R9 290 is going to be the closer competitor, though it’s priced around $399. The R9 280X meanwhile will undercut the GTX 970 at around $279, but with much weaker performance.

NVIDIA for their part will not be running any promotions or bundles for the GTX 900 series, so what you see is what you get. AMD meanwhile will have their continuing Never Settle Forever bundle in play, which offers up to 3 free games to add value to the overall product.

Finally, there will be price cuts for the GTX 700 series. Officially GTX 760 stays in production with a new MSRP of $219. Meanwhile GTX 770, GTX 780, and GTX 780 Ti will go on clearance sale at whatever prices retailers can manage, and are still part of NVIDIA’s Borderlands bundle offer. That said, from a performance and power efficiency angle, the GTX 900 series is going to be a much more desirable product line.

Fall 2014 GPU Pricing Comparison

| AMD             | Price | NVIDIA          |
|-----------------|-------|-----------------|
| Radeon R9 295X2 | $1000 |                 |
|                 | $550  | GeForce GTX 980 |
| Radeon R9 290X  | $500  |                 |
| Radeon R9 290   | $400  |                 |
|                 | $330  | GeForce GTX 970 |
| Radeon R9 280X  | $280  |                 |
| Radeon R9 285   | $250  |                 |
| Radeon R9 280   | $220  | GeForce GTX 760 |

 

Comments

  • squngy - Wednesday, November 19, 2014

    It is explained in the article.

    Because the GTX 980 renders so many more frames, the CPU is worked a lot harder. The wattage in those charts is for the whole system, so when the CPU uses more power it makes it harder to directly compare GPUs.
  • galta - Friday, September 19, 2014

    The simple fact is that a GPU more powerful than a GTX 980 does not make sense right now, no matter how much we would love to see it.
    See, most folks are still gaming @ 1080, and some of us are moving up to 1440. Under these scenarios a GTX 980 is more than enough, even if quality settings are maxed out. Early reviews show that it can even handle 4K with moderate settings, and we should expect further performance gains as drivers improve.
    Maybe in a year or two, when 4K monitors become more relevant, a more powerful GPU will make sense. Right now it simply doesn't.
    For the moment, nVidia's move is smart and commendable: power efficiency!
    I mean, such a powerful card at only 165W! If you are crazy/wealthy enough to have two of them in SLI, you can cut your power demand by 170W, with corresponding gains in temps and/or noise, and a less expensive PSU if you're building from scratch.
    In the end, are these new cards great? Of course they are!
    Does it make sense to upgrade right now? Only if you're running a 5xx or 6xx series card, or if your demands have increased dramatically (multi-monitor setup, higher res, etc.).
  • Margalus - Friday, September 19, 2014

    A more powerful GPU does make sense. Some people like to play their games with triple monitors, or more. A single GPU that could play at 7680x1440 with all settings maxed out would be nice.
  • galta - Saturday, September 20, 2014

    How many of us demand such power? The ones who really do can go SLI and OC the cards.
    nVidia would be spending billions for a card that would sell thousands. As I said: we would love the card, but it still makes no sense.
    Again, I would love to see it, but in the foreseeable future I won't need it. I'm happier with noise, power, and heat efficiency.
  • Da W - Monday, September 22, 2014

    Here's one who demands such power. I play at 3600x1920 using 3 screens: almost 4K, at 1/3 the budget, and still useful for, you know, working.
    Don't want SLI/CrossFire. Don't want a space heater either.
  • bebimbap - Saturday, September 20, 2014

    Gaming at 1080p@144Hz, or at 1080p with a minimum fps of 120 for ULMB, is no joke when it comes to GPU requirements. Most modern games max out at 80-90fps on an OC'd GTX 670; you need at least an OC'd GTX 770-780, and I'd recommend a 780 Ti. And though a 24" 1080p monitor might seem "small", you only have so much focus. You can't focus on your peripheral vision; you'd have to move your eyes to focus on another piece of the screen. The 24"-27" size seems perfect, so you don't have to move your eyes/head much or at all.

    The next step is 1440p@144Hz, or a minimum fps of 120, which requires more GPU power than 4K@60 (a quick throughput check below bears this out). As 1440p has about 2x the pixels of 1080p, you'd need a GPU 2x as powerful. So you can see why nvidia must put out a powerful card at a moderate price point. They need it for their 144Hz G-Sync tech and 3D Vision.

    IMO the PPI race isn't as beneficial as higher refresh rates. For TVs, manufacturers are playing this game of misinformation so consumers get the short end of the stick, but having a monitor running at 144Hz is a world of difference compared to 60Hz for me. You can tell just from the mouse cursor moving across the screen. As I age I realize every day that my eyes will never be as good as yesterday, and knowing that, I'd take a 27" 1440p @ 144Hz any day over a 28" 5K @ 60Hz.
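    A rough sanity check on that claim, treating raw pixel throughput as a (crude) proxy for GPU load:

    ```python
    def pixels_per_second(width: int, height: int, refresh_hz: int) -> int:
        """Raw pixel throughput for a resolution/refresh combination."""
        return width * height * refresh_hz

    for name, (w, h, hz) in {
        "1080p @ 144Hz": (1920, 1080, 144),
        "1440p @ 144Hz": (2560, 1440, 144),
        "4K @ 60Hz":     (3840, 2160, 60),
    }.items():
        print(f"{name}: {pixels_per_second(w, h, hz) / 1e6:.0f} Mpix/s")

    # 1080p @ 144Hz: 299 Mpix/s
    # 1440p @ 144Hz: 531 Mpix/s -- indeed more than 4K @ 60Hz
    # 4K @ 60Hz:     498 Mpix/s
    ```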
  • Laststop311 - Sunday, September 21, 2014

    Well, it all depends on viewing distance. I use a 30" 2560x1600 Dell U3014 to game on currently; since it's larger I can sit further away and still have just as good of an experience as a 24" or 27" that's closer. So you can't just say larger monitors mean you can't focus on it all, because you can, just at a further distance.
  • theuglyman0war - Monday, September 22, 2014

    The power of the newest technology is and has always been an illusion, because the creation of games will always be an exercise in "compromise". Even a game like WoW that isn't crippled by console considerations is created for the lowest common denominator demographic in the PC hardware population. In other words: if you buy it they will make it, vs. if they make it I will upgrade. And that's besides the unlimited reach of an open world's "possible" textures and vertex counts.
    "Some" artists are of the opinion that more hardware power would result in a less aggressive graphics budget! (When the time spent wrangling a synced normal-mapped representation of a high-resolution sculpt, or tracking seam problems in lightmapped approximations of complex illumination with long bake times, can take longer than simply using that original complexity.) The compromise can take more time than if we had hardware that could keep up with an artist's imagination.
    In which case I gotta wonder about the imagination of the end user who really believes his hardware is the end of any graphics progress.
  • ppi - Friday, September 19, 2014

    On desktop, all AMD needs to do is lower prices and perhaps release an OC'd 290X to match 980 performance. It will reduce their margins, but they won't be irrelevant in the market, like in CPUs vs. Intel (where AMD's most powerful beasts barely touch Intel's low end, apart from some specific multi-threaded cases).

    Why so simple? On desktop:
    - Performance is still the #1 factor: if you offer more per dollar, you win
    - Noise can be easily resolved via open-air coolers
    - Power consumption is not such a big deal

    So... if an AMD card at a given price is merely as fast as Maxwell, then it is clearly the worse choice. But what if it is faster?

    In mobile, however, they are screwed in a big way, unless they have something REALLY good up their sleeve. Looking at Tonga, I do not think they do; I am convinced AMD intended to pull off another HD 5870 (i.e. be on the new process node first), but it apparently did not work out this time around.
  • Friendly0Fire - Friday, September 19, 2014

    The 290X is already effectively an overclocked 290, though. I'm not sure they'd be able to crank it up further reliably without running into heat dissipation or power draw limits.

    Also, they'd have to invest in making a good reference cooler.
