Although GTX 770 is already a very highly clocked part for GK104, we still wanted to put it through its paces when it comes to overclocking. Of particular interest here is actually memory overclocking, as this is the first video card shipping with 7GHz GDDR5 as standard. This will let us poke at things to see just how far both the RAM itself and NVIDIA’s memory controller can go.

Meanwhile the switch to GPU Boost 2.0 for GTX 770 is going to change the overclocking process somewhat compared to GTX 680 and GTX 670. Overvolting opens up marginally higher voltages and extra boost bins to play with, while on the other hand the removal of power targets in favor of TDP means that we only get 106% – an extra 14W – to play with in TDP-limited scenarios. Thankfully, as we’ve seen, we’re generally not TDP limited on GTX 770 at stock, which means our effective headroom should be greater than that.
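That 14W figure follows directly from the card’s TDP. As a quick sanity check (assuming the GTX 770’s official 230W board TDP; the 106% is the highest TDP target GPU Boost 2.0 exposes):

```python
tdp_w = 230              # GTX 770 board TDP (assumed from the spec sheet)
max_target = 1.06        # highest TDP target allowed by GPU Boost 2.0 (106%)

extra_w = tdp_w * (max_target - 1)
print(f"+{extra_w:.0f}W of TDP headroom")  # ~14W, matching the slider's cap
```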

GeForce GTX 770 Overclocking
                   Stock      Overclocked
Core Clock         1046MHz    1146MHz
Boost Clock        1085MHz    1185MHz
Max Boost Clock    1136MHz    1241MHz
Memory Clock       7GHz       8GHz
Max Voltage        1.2v       1.212v

We’re actually a bit surprised we were able to get another 100MHz out of the GPU itself. Even without the extra overvoltage boost bin, we’re still pushing 1200MHz+ on 1.2v, which is doing rather well for GK104. Of course this is only a 9% increase in GPU clockspeed, which pales in comparison to parts like GTX 670 and GTX 780, each of which can do 20%+ thanks to their lower stock clockspeeds. So there’s some overclocking headroom in GTX 770, but as is to be expected it’s not a lot.
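For reference, the clockspeed gains work out like so (clocks taken from the table above):

```python
stock_core, oc_core = 1046, 1146     # core clock, MHz
stock_boost, oc_boost = 1085, 1185   # boost clock, MHz

core_gain = (oc_core / stock_core - 1) * 100
boost_gain = (oc_boost / stock_boost - 1) * 100
print(f"core: +{core_gain:.1f}%, boost: +{boost_gain:.1f}%")  # both right around 9%
```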

More interesting however is the memory overclock. We’ve been able to put another 1GHz on 6GHz GTX 680 cards in the past, and with the 7GHz base GTX 770 we’ve been able to pull off a similar overclock, pushing our GTX 770 to an 8GHz memory clock. The fact that NVIDIA’s memory controller can pull this off is nothing short of impressive; we had expected there to be some headroom, but another 14% is beyond our expectations. At this clockspeed the GTX 770 has a full 256GB/sec of memory bandwidth, 33% more than both a stock GTX 680 and the 384-bit GTX 580. Of course we’ll see if GTX 770 can put that bandwidth to good use.
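The bandwidth math behind those numbers is straightforward: GDDR5’s effective data rate per pin times the bus width, divided by 8 bits per byte. A quick sketch (the 256-bit bus width is from the cards’ spec sheets):

```python
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Memory bandwidth in GB/s from effective data rate (Gb/s per pin) and bus width."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(8, 256))  # 256.0 GB/s - overclocked GTX 770
print(bandwidth_gb_s(7, 256))  # 224.0 GB/s - stock GTX 770
print(bandwidth_gb_s(6, 256))  # 192.0 GB/s - stock GTX 680
# 256 / 192 - 1 = 0.333..., the 33% advantage over a stock GTX 680
```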

The end result of our overclocking efforts is a very consistent 9%-12% increase in performance across our games. 9% is the upper bound for improvements from the GPU overclock alone, so anything past that means we’re also benefitting from the extra memory bandwidth. As far as we can tell we aren’t picking up a ton of performance from memory bandwidth, but it does pay off and is worth pursuing, even with the GTX 770’s base memory clock of 7GHz.

Overall overclocking can help close the gap between the GTX 770 and 7970GE in some games, and extend it in others. But 10% won’t completely close the gap on the GTX 780; at best it can halve it. GTX 780’s stock performance is simply not attainable without the much more powerful GK110 GPU.

Moving on to power consumption, we can see that the 106% TDP limit keeps power usage from jumping up by too much. In Battlefield 3 this is a further 12W at the wall, and 21W at the wall with FurMark. In games this means our power usage at the wall is still below GTX 780, though we’ve equaled it under FurMark.

The fan curve for GTX 770 appears to be identical to that of GTX 780, which is to say the fan ramps up significantly around 84C, keeping temperatures in the low-to-mid 80s even though GPU Boost 2.0 is allowed to go up to 95C.

Finally for fan noise, we see a small increase under Battlefield 3, and no change under FurMark. 1.5dB louder under Battlefield 3 puts noise levels on par with the GTX 780, sacrificing some of GTX 770’s abnormally quiet acoustics, but still keeping noise below the 50dB level. Or to put this another way, the performance gains for overclocking aren’t particularly high, but then again neither is the cost of overclocking in terms of noise.

  • pandemonium - Friday, May 31, 2013 - link

    Wait, you want to compare FPS/dollar and then turn around and say you choose which one has PhysX? Well, the marketing team certainly succeeded with you, lol.

    Apparently you don't know that PhysX is a software code path that is supported and available regardless of what hardware you run. There isn't an abundant pile of evidence, through benchmarking or otherwise, that having an Nvidia card while running a PhysX-supported engine will yield superior results compared to a similarly priced AMD card. Example? Take Metro 2033, probably one of the most demanding DX11, PhysX-supported games available: http://www.anandtech.com/bench/GPU12/377
  • inighthawki - Saturday, June 1, 2013 - link

    PhysX is CUDA accelerated with an nvidia card present, and thus will have hardware accelerated physics (of course at the cost of some GPU processing that could otherwise be spent on rendering). There is a tradeoff. Personally I would prefer to just run it in software. When you buy something like a GTX 770 or 780, the GPU is typically the FPS bottleneck in your games :)
  • SirGCal - Monday, June 10, 2013 - link

    I really couldn't care less about PhysX... And 60fps caps also give me migraines... I specifically build machines with 120fps+ caps... My current rig has a 144Hz cap. So smooth... Sure, many people can't tell the difference. Good for them. I get migraine headaches at 60 FPS with digital content. Including crappy movies, which are even worse (~25 frames...). I have a very sensitive visual function of my body/mind. Actually a LOT of people do. That's why 3D movies really didn't take off so well. Something like one in 10 cannot actually see stereoscopic vision at all, and only about one in three really enjoy our fake 3D effects... Something like that.

    But for extreme cases like myself, not only do I not enjoy it, it causes actual physical pain. I buy the best to get 144fps, smooth as glass, all the time. And even doing so, I've never ever spent $4k on my gaming rig... heck, never spent more than $2k. So you're being a bit sarcastic there. I guess if you don't build your own, sure, but... The really crappy part is avoiding the developers who refuse to open their crappy ports beyond 60Hz. There are some that leave the console locks in place on the PC. Those just never get purchased...
  • Mondozai - Monday, August 12, 2013 - link

    Mention your migraines one more time, I'm sure we all missed it.
  • firewall597 - Thursday, June 13, 2013 - link

    Your one reason makes me lololol
  • Gastec - Sunday, July 7, 2013 - link

    I for one chose Nvidia over AMD because of the Radeon frame times problem. I would agree that not so many people buy a $4000 computer in a shop, unless it's an Apple I guess :-P Though many so-called computer enthusiasts do end up paying quite a hefty sum over time on hardware components and software. It's not because they are snobs wanting the best, but because the best costs so much money :)
  • jonjonjonj - Tuesday, June 4, 2013 - link

    It's their 2nd best single-GPU card, not the 3rd. A multi-GPU 690 is not comparable, and AMD also has a 7990, so that would make the 7970 the 2nd best card by your logic. I personally see it as a complete failure that their 2nd best card is equal to AMD's best card that came out 18 months ago. Think about that: the 7970 came out a year and a half ago. It's nothing to brag about. Before you call me an AMD fan, I'm not; I look for the best price/performance and will go with whichever company currently has it.

    Go look at the bench for the 7970, AMD's top card, compared to the 570, Nvidia's 2nd best card when it was released. Not even close. I realize the 7000 series was a new architecture and they had a die shrink, but you can see real gains.
    http://www.anandtech.com/bench/Product/508?vs=518
  • sweenish - Thursday, June 6, 2013 - link

    3rd. Titan, 780, then 770.
  • Gigaplex - Sunday, June 16, 2013 - link

    I'm assuming you're only considering cards from the Geforce line and not Quadro/Tesla...
  • iEATu - Thursday, May 30, 2013 - link

    Plus a better memory VRM.
