Overclocking GTX 980

One of the GTX 750 Ti's more remarkable features was its overclocking headroom. GM107 could overclock so well that upon its initial release, NVIDIA had not programmed enough overclocking headroom into their drivers to allow many GTX 750 Ti cards to be overclocked to their true limits. That is a legacy we would be glad to see repeated with the GTX 980, and one we are going to put to the test.

As with its Kepler cards, NVIDIA's Maxwell cards are subject to stringent power and voltage limitations. Overvolting is limited to NVIDIA's built-in overvoltage function, which isn't so much a voltage control as the ability to unlock 1-2 more boost bins and their associated voltages. Meanwhile, TDP controls are limited to whatever value NVIDIA believes is safe for that model of card, which can vary depending on its GPU and its power delivery design.

For the GTX 980 we have a 125% TDP limit; meanwhile we are able to overvolt by one boost bin to 1265MHz, which uses a voltage of 1.25v.

GeForce GTX 980 Overclocking

                   Stock      Overclocked
Core Clock         1126MHz    1377MHz
Boost Clock        1216MHz    1466MHz
Max Boost Clock    1265MHz    1515MHz
Memory Clock       7GHz       7.8GHz
Max Voltage        1.25v      1.25v

The GTX 980 does not let us down; like its lower-end Maxwell 1 counterpart, it turns in an overclocking performance just short of absurd. Even without real voltage controls we were able to push another 250MHz (22%) out of our GM204 GPU, resulting in an overclocked base clock of 1377MHz and, more amazingly, an overclocked maximum boost clock of 1515MHz. That makes this the first NVIDIA card we have tested to surpass both 1.4GHz and 1.5GHz, all in one fell swoop.
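As a quick sanity check on the numbers above, the overclock deltas can be computed directly from the results table (a minimal sketch; note the base clock delta is technically 251MHz, which rounds to the 250MHz quoted in the text):

```python
# GTX 980 clocks from the overclocking table above (MHz).
stock = {"base": 1126, "boost": 1216, "max boost": 1265}
overclocked = {"base": 1377, "boost": 1466, "max boost": 1515}

for clock in stock:
    delta = overclocked[clock] - stock[clock]
    pct = 100 * delta / stock[clock]
    print(f"{clock}: +{delta}MHz ({pct:.0f}%)")
# base: +251MHz (22%), boost: +250MHz (21%), max boost: +250MHz (20%)
```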

This also leaves us wondering just how much farther GM204 could overclock if we were able to truly overvolt it. At 1.25v I’m not sure too much more voltage is good for the GPU in the long term – that’s already quite a bit of voltage for a TSMC 28nm process – but I suspect there is some untapped headroom left in the GPU at higher voltages.

Memory overclocking, on the other hand, doesn't end up being quite as extreme, but we've known from the start that at a stock memory clock of 7GHz we were already pushing the limits of GDDR5 and NVIDIA's memory controllers. Still, we were able to wring another 800MHz (11%) out of the memory subsystem, for a final memory clock of 7.8GHz.
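In bandwidth terms, the memory overclock works out as follows. This is a minimal sketch assuming the GTX 980's 256-bit memory bus (a spec not restated in this section); GDDR5 bandwidth is simply the effective data rate times the bus width in bytes:

```python
BUS_WIDTH_BITS = 256  # GTX 980 memory bus width (assumed; not restated in this section)

def memory_bandwidth_gbps(effective_rate_ghz: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Effective GDDR5 data rate (GHz) times bus width in bytes gives GB/s."""
    return effective_rate_ghz * (bus_bits / 8)

print(memory_bandwidth_gbps(7.0))  # stock: 224.0 GB/s
print(memory_bandwidth_gbps(7.8))  # overclocked: 249.6 GB/s
```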

Before we get to our full results, in light of the GTX 980's relatively narrow memory bus and NVIDIA's color compression improvements, we briefly split our overclock testing in order to test the core and memory overclocks separately. This is to see which overclock has the greater effect. One would presume the memory overclock is the more important given the narrow memory bus, but as it turns out that is not necessarily the case.

GeForce GTX 980 Overclocking Performance

                 Core (+22%)    Memory (+11%)    Combined
Metro: LL        +15%           +4%              +20%
CoH2             +19%           +5%              +20%
Bioshock         +9%            +4%              +15%
Battlefield 4    +10%           +6%              +17%
Crysis 3         +12%           +5%              +15%
TW: Rome 2       +16%           +7%              +20%
Thief            +12%           +6%              +16%
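One way to make the core-versus-memory comparison concrete is scaling efficiency: the performance gain divided by the size of the overclock that produced it. A minimal sketch using the gains from the table above:

```python
# Performance gains (%) from the table above, as (core gain, memory gain) pairs.
gains = {
    "Metro: LL": (15, 4),
    "CoH2": (19, 5),
    "Bioshock": (9, 4),
    "Battlefield 4": (10, 6),
    "Crysis 3": (12, 5),
    "TW: Rome 2": (16, 7),
    "Thief": (12, 6),
}
CORE_OC, MEM_OC = 22, 11  # size of each overclock, in percent

for game, (core_gain, mem_gain) in gains.items():
    # Fraction of each overclock that shows up as real performance.
    print(f"{game}: core {core_gain / CORE_OC:.0%}, memory {mem_gain / MEM_OC:.0%}")
```

Core scaling efficiency ranges from roughly 41% (Bioshock) to 86% (CoH2), while memory scaling efficiency sits between about 36% and 64%, consistent with the core overclock being the more productive of the two in most (though not all) of these titles.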

While the core overclock is greater overall to begin with, what we're also seeing is that the performance gains relative to the size of the overclock consistently favor the core overclock over the memory overclock. With a handful of exceptions, our 11% memory overclock nets us less than a 6% increase in performance, while our 22% core overclock nets us a 12% increase or more. This is despite the fact that when it comes to core overclocking, the GTX 980 is TDP limited; in many of these games it could clock higher if the TDP budget were large enough to accommodate higher sustained clockspeeds.

Memory overclocking is still effective, and it’s clear that GTX 980 spends some of its time memory bandwidth bottlenecked (otherwise we wouldn’t be seeing even these performance gains), but it’s simply not as effective as core overclocking. And since we have more core headroom than memory headroom in the first place, it’s a double win for core overclocking.

To put it simply, the GTX 980 was already topping the charts. Now with overclocking it’s another 15-20% faster yet. With this overclock factored in the GTX 980 is routinely 2x faster than the GTX 680, if not slightly more.

OC: Load Power Consumption - Crysis 3

OC: Load Power Consumption - FurMark

But you do pay for the overclock when it comes to power consumption. NVIDIA allows you to increase the TDP by 25%, and to hit these performance numbers you are going to need every bit of that. So what was once a 165W card is now a 205W card.
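The arithmetic behind that figure is simple (a trivial sketch; 165W is the GTX 980's stock TDP and 125% is NVIDIA's maximum power target for this card):

```python
BASE_TDP_W = 165     # GTX 980 stock TDP
POWER_TARGET = 1.25  # NVIDIA's 125% maximum power target for this card

max_board_power = BASE_TDP_W * POWER_TARGET
print(f"{max_board_power:.0f}W")  # 206W, in line with the ~205W figure in the text
```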

OC: Load GPU Temperature - Crysis 3

OC: Load GPU Temperature - FurMark

Even though overclocking involves raising the temperature limit to 91C, NVIDIA's fan curve naturally tops out at 84C. So even in the case of overclocking the GTX 980 isn't going to reach temperatures higher than the mid-80s.

OC: Load Noise Levels - Crysis 3

OC: Load Noise Levels - FurMark

The noise penalty for overclocking is also pretty stiff. Since we're otherwise TDP limited, all of our workloads top out at 53.6dB, some 6.6dB higher than stock. In the big picture this means the overclocked GTX 980 is still in the middle of the pack, but it is noticeably louder than before and louder than a few of NVIDIA's other cards. Interestingly enough, however, it's no worse than the original stock GTX 680 at Crysis 3, and still better than said GTX 680 under FurMark. It's also still quieter than the stock Radeon R9 290X, not to mention that card's even louder uber mode.

274 Comments

  • Dribble - Friday, September 19, 2014 - link

    Looks at prominently placed "AMD CENTRE" link at top of page.
  • wolfman3k5 - Friday, September 19, 2014 - link

    Don't forget, this is a prominent pro Intel/NVIDIA site. What did you expect?!
  • Samus - Friday, September 19, 2014 - link

    That's because everything AMD touches is a joke. ATI was at the top of their game, far ahead of NVidia on price:performance, then AMD bought them. Look where ATI's been since. GCN is a crappy architecture. The Netburst of GPUs. Ridiculously high power consumption. The only corner market it has is FP64. If you actually game, GCN is a dud. The only reason Sony/Microsoft went with AMD GPUs was because it's the best (only) integrated CPU/GPU solution (something NVidia has no IP for), other than Intel, which is way too expensive for consoles and still not as good.
  • bwat47 - Friday, September 19, 2014 - link

    Until Maxwell, AMD's GPUs were handily outperforming NVIDIA's when it came to price/performance. GCN isn't a bad architecture, it's just outdated compared to NVIDIA's brand new one. I'm sure AMD has architecture improvements coming down the road too. This happens all the time with NVIDIA and AMD, where one of them leapfrogs the other in architecture; the cycle will continue. Right now NVIDIA is ahead with Maxwell, but acting like AMD is doomed or saying that their architecture is the 'Netburst of GPUs' is silly.
  • Laststop311 - Sunday, September 21, 2014 - link

    AMD is going to have to perform a miracle to get their performance and energy use close to NVIDIA's. AMD may keep up in performance, but they are going to do it with a 300+ watt card requiring dual 8-pin power, liquid cooling, or a triple-slot cooler, because it's going to hog power to keep up with and surpass NVIDIA. NVIDIA's solution is much more elegant, and there is little hope for AMD to match it.
  • Laststop311 - Sunday, September 21, 2014 - link

    This isn't even counting GM210. When big Maxwell drops at 20nm I just don't see how AMD will be able to match it. Like I said, it will take an engineering miracle from AMD.
  • Yojimbo - Friday, September 19, 2014 - link

    There's a difference between bias and impression. At what point does what you call "tempering bias" become a source of bias in and of itself, because one cannot give one's impression? For instance, the removal of branding the R290X reference card "loud" (which it is). I encourage Mr. Smith neither to "temper his bias" nor to get involved in the "ongoing battle between Nvidia and AMD GPUs", and instead to report the facts and relay his honest impression of these facts. If he does this well, I think the majority of readers of this site will support him for attempting to give a real representation of the state of the products available, which is what they are presumably looking for.
  • wolfman3k5 - Friday, September 19, 2014 - link

    Looks like these GTX 970 and 980 cards are shit when it comes to compute, especially double precision floating point operations. I don't game, so I don't care about FPS. I do more productive things with video cards though.
  • Laststop311 - Friday, September 19, 2014 - link

    Then maybe you should buy a workstation card and not a gaming card, or at the very least a Titan if you can't afford the super outrageously priced Quadros and Teslas.
  • Nfarce - Sunday, September 21, 2014 - link

    Then you simply aren't the target market for these types of cards. Like Laststop311 said... go bend over for a professional level workstation GPU that most people here care nothing about. :-/
