Alongside today’s launch of the GK106-based GeForce GTX 660, NVIDIA is also launching one other card: the GeForce GTX 650. In a nutshell, the GTX 650 is the long-awaited GDDR5 version of the GeForce GT 640, sporting higher clockspeeds and far more memory bandwidth than its DDR3-crippled predecessor.

                         GTX 660 Ti       GTX 660          GTX 650       GT 640
Stream Processors        1344             960              384           384
Texture Units            112              80               32            32
ROPs                     24               24               16            16
Core Clock               915MHz           980MHz           1058MHz       900MHz
Shader Clock             N/A              N/A              N/A           N/A
Boost Clock              980MHz           1033MHz          N/A           N/A
Memory Clock             6.008GHz GDDR5   6.008GHz GDDR5   5GHz GDDR5    1.782GHz DDR3
Memory Bus Width         192-bit          192-bit          128-bit       128-bit
VRAM                     2GB              2GB              1GB/2GB       2GB
FP64                     1/24 FP32        1/24 FP32        1/24 FP32     1/24 FP32
TDP                      150W             140W             64W           65W
GPU                      GK104            GK106            GK107         GK107
Transistor Count         3.5B             2.54B            1.3B          1.3B
Manufacturing Process    TSMC 28nm        TSMC 28nm        TSMC 28nm     TSMC 28nm
Launch Price             $299             $229             $109          $99

The fact that NVIDIA launched a performance-uncompetitive part in the GT 640 back in June was unfortunate, but not unexpected given that NVIDIA was facing an even more serious supply shortage than they are now. By our reckoning NVIDIA didn’t want to introduce a desktop GK107 part that would be in high demand when they needed those GK107 GPUs for their more profitable mobile lineup, hence the need to hold off on a GDDR5 GK107 desktop card until now.

But as the saying goes, it’s better late than never. The GTX 650 should handily outperform the GT 640 in virtually all scenarios, and based on what we’ve seen with past DDR3/GDDR5 cards we wouldn’t be surprised to see gaming performance at least double. Furthermore, the GTX 650 is clocked some 158MHz (18%) higher than the GT 640, which should only widen those performance gains.
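To see where those gains come from, here's a back-of-the-envelope sketch using only the numbers from the spec table above: peak memory bandwidth is the effective memory clock multiplied by the bus width, divided by 8 bits per byte.

```python
# Rough comparison of GT 640 (DDR3) vs GTX 650 (GDDR5), from the spec table.

def mem_bandwidth_gbps(effective_clock_ghz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: effective clock x bus width / 8 bits-per-byte."""
    return effective_clock_ghz * bus_width_bits / 8

gt640  = mem_bandwidth_gbps(1.782, 128)  # ~28.5 GB/s
gtx650 = mem_bandwidth_gbps(5.0,   128)  # 80.0 GB/s

print(f"GT 640 bandwidth:  {gt640:.1f} GB/s")
print(f"GTX 650 bandwidth: {gtx650:.1f} GB/s")
print(f"Bandwidth increase: {gtx650 / gt640:.2f}x")

# The core clock uplift quoted in the text: 1058MHz vs 900MHz.
core_uplift = (1058 - 900) / 900
print(f"Core clock uplift: {core_uplift:.1%}")  # ~17.6%, the "158MHz (18%)" figure
```

With nearly 2.8x the memory bandwidth plus an 18% core clock bump, a doubling of gaming performance in bandwidth-bound titles is a plausible expectation.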

Speaking of performance, one thing that caught our eye was that NVIDIA has set the TDP of the GTX 650 at 64W, 1 watt lower than the TDP of the GT 640. Typically we see GDDR5 cards sport a higher TDP thanks to the memory’s higher power consumption, and this would be further driven up by the fact that the GTX 650 is clocked higher than the GT 640. Given the fuzzy nature of NVIDIA’s TDP ratings it’s almost certain that the GTX 650 will draw more power than the GT 640, but it may end up being a very small difference if these TDP values are anywhere near accurate.

With that said, NVIDIA is equipping the GTX 650 with a 6-pin PCIe power connector – strictly speaking it shouldn’t be necessary for stock operation, but since this is a GTX card NVIDIA has professed a concern over power consumption when overclocking. Of course, with the already high core clock of the GTX 650 it remains to be seen just how far the card can be overclocked. At the very least the lack of a boost clock means that overclocking will be a much more straightforward endeavor.

Of course the $64,000 question right now is whether all of this will be enough to make GTX 650 competitive with AMD’s Radeon HD 7770 and 7750, and unfortunately that’s an answer we’re going to have to wait to get. Although both the GTX 650 and GTX 660 are launching today, NVIDIA didn’t want to have the GTX 650 steal the GTX 660’s time in the limelight, so the press was not sampled ahead of time. We’ll be looking at the GTX 650 in the coming week, at which point we should have an answer to that question.

Summer 2012 GPU Pricing Comparison

AMD               Price   NVIDIA
Radeon HD 7950    $329
                  $299    GeForce GTX 660 Ti
Radeon HD 7870    $249
                  $239    GeForce GTX 660
Radeon HD 7850    $199
Radeon HD 7770    $109    GeForce GTX 650
Radeon HD 7750    $99     GeForce GT 640
21 Comments

  • flyck - Friday, September 14, 2012 - link

    the 650 is 20% slower than the 7770... actually it's even a tad slower than the 7750, though this fluctuates depending on the game. There is absolutely no reason to buy a 650 at the price they've set.
  • lmcd - Saturday, September 15, 2012 - link

    Yeah when I first saw the guy above you's post I was trying to post from my phone desperately "DON'T LISTEN DDD:"

    Thanks for making me feel better that not all hope was lost.

    NVidia dropped the ball on midrange, here. The 650 seems about in line with the 5770 I got nearly two years ago, which is absolutely ridiculous, given that I got it for a nearly-comparable price. Two years ago.

    I would go with a 7750 or 7770 if you think you might XF (CF?) later. It seems like most 7750s can OC the difference to a 7770 anyways, if you find you need the performance.
  • jtenorj - Wednesday, September 19, 2012 - link

    I am very late, but the HD 7770 is the best choice. The shaders on the GTX 560 SE run at twice the speed of the rest of the chip, but the HD 7770 is faster now than at launch thanks to new drivers, and it overclocks like crazy. The GTX 650 has lots of OCing headroom too, but it performs worse than the HD 7750 at stock speeds (the GTX 650 needs that PCIe 6-pin to stretch its legs). The performance of a stock HD 7770 at launch and a stock GTX 560 SE would be about the same (again, the HD 7770 is faster now than then and has a lot more overclocking headroom). Oh, yeah: of course the GTX 560 SE is faster than the GTX 650 (when they are at stock speeds), but the HD 7770 is faster than both (at stock, much faster when overclocked).

    Don't bother with CrossFire, though (support is not universal and you risk experiencing micro stuttering). Just get the HD 7770 and overclock the snot out of it. PhysX is very niche, and you can pretty much make a Radeon do anything else a GeForce can do.
  • KineticHummus - Friday, September 14, 2012 - link

    In my opinion, it seems like performance for a ~$100 card really has not increased all that much since a few years ago. Power consumption and noise have seen tremendous progress, don't get me wrong, but pure performance progress has been slow. I bought a GTS 250 in, I believe, March of 2010, possibly earlier, for 90 bucks. I'd love to upgrade, for a max of around 130 bucks, but based on testing of the 7770/50 and the GTX 650 (only seen Tom's Hardware's review so far on the 650), I am left out in the cold with no real "upgrade" option. Only sidegrades to less power/noise. Really hope a generation soon will change this.
  • CaedenV - Friday, September 14, 2012 - link

    Welcome to the future.
    I think a lot of companies are realizing that they have reached the satisfy-able 'minimum' for customers at the low end. Proceeding generations will see smaller size, smaller heat sinks and fanless systems, less overall power usage, and the inclusion of newer standards and formats as they come out, but raw performance may not see much increase on the low end until new consoles come out. And even then we will see a boost in the minimum for a few years, and then it will level off.
    Meanwhile, the high end of things will continue to see growth with minimal price increases, which means that you will get and ever increasing !/$ on mid-range cards.

    The only thing that will really push the low end cards out of this quagmire will be whenever (if ever) onboard GPUs catch up to these performance levels.
    Reply
  • lmcd - Saturday, September 15, 2012 - link

    5770 actually seems to match this card entirely, including the price I got it at nearly two years ago.
  • MySchizoBuddy - Saturday, September 15, 2012 - link

    For CUDA, is it better to buy two 650s instead of one 660? Would love to see benchmarks with dual 650s.
  • rscoot - Saturday, September 15, 2012 - link

    I can't see why it would be; you'd still be down 192 SPs compared to a 660.
  • Lerianis - Sunday, September 16, 2012 - link

    I'm assuming that is meant to keep competitors from easily dissecting leaks of design information.
  • jtenorj - Wednesday, September 19, 2012 - link

    The 100 and 300 series were OEM parts based on G9x silicon.
