Nice! I heard that the 770 was going to perform much better than this, but I'm glad to see an improvement as well as lower prices. This might prompt a price cut by AMD, which could benefit everybody.
When the GTX 770 is so far behind even ancient cards in GPU compute and Folding... You know it is time to recall the overheating GTX 770 back to Nvidia and design something with real improvements.
80c load is quite common and safe for GPUs that have stock coolers. If those temps concern you, wait until these are released with aftermarket coolers installed.
Does "first place" matter, or do price points? If the 7970 was AMDs twentieth best card it still wouldn't change that it's competing with the 770s price point.
I'm not an "AMD dude", but I fail to see why that's right on. Price points matter, where the products rank within an individual companies line don't. If the 770 was Nvidias 100th best graphics card, at the same price/performance what would that change? Nothing.
Wait, you want to compare FPS/dollar and then turn around and say you choose which one has PhysX? Well, the marketing team certainly succeeded with you, lol.
Apparently you don't know that PhysX is a software code path that is supported and available regardless of what hardware you run. There isn't an abundant pile of evidence, through benchmarking or otherwise, that having an Nvidia card while running a PhysX-supported engine will yield superior results compared to a similarly priced AMD card. Example? Take Metro 2033, probably the most demanding DX11, PhysX-supported game available: http://www.anandtech.com/bench/GPU12/377
PhysX is CUDA accelerated with an nvidia card present, and thus will have hardware accelerated physics (of course at the cost of some GPU processing that could otherwise be spent on rendering). There is a tradeoff. Personally I would prefer to just run it in software. When you buy something like a GTX 770 or 780, the GPU is typically the FPS bottleneck in your games :)
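To put that tradeoff in rough numbers, here is a tiny back-of-envelope model; every figure in it is hypothetical and only meant to show how GPU-accelerated physics eats into the same frame budget as rendering, while software physics shifts the cost to the CPU.

# Back-of-envelope model of the GPU PhysX tradeoff (all numbers hypothetical)
render_ms = 12.0        # GPU time per frame spent on rendering
physx_gpu_ms = 3.0      # extra GPU time if PhysX effects run on the GPU
cpu_stall_ms = 1.0      # frame time lost waiting on CPU (software) physics instead
fps_gpu_physx = 1000.0 / (render_ms + physx_gpu_ms)   # ~66.7 FPS
fps_cpu_physx = 1000.0 / (render_ms + cpu_stall_ms)   # ~76.9 FPS
print(round(fps_gpu_physx, 1), round(fps_cpu_physx, 1))

Which side wins depends entirely on how heavy the effects are and how loaded the CPU already is, which is why there is no universal answer.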
I really couldn't care less about PhysX... And 60fps caps also give me migraines... I specifically build machines with 120fps+ caps... My current rig has a 144Hz cap. So smooth... Sure, many people can't tell the difference. Good for them. I get migraine headaches at 60 FPS with digital content, including crappy movies, which are even worse (~25 frames...). I have a very sensitive visual function of my body/mind. Actually a LOT of people do. That's why 3D movies really don't take off so well. Something like one in 10 cannot actually see stereoscopic vision at all, and only about one in three really enjoy our fake 3D effects... Something like that.
But for the extreme cases like myself, not only do I not enjoy it, it causes actual physical pain. I buy the best to get 144fps, smooth as glass, all the time. And even doing so, I've never spent $4k on my gaming rig... heck, never spent more than $2k. So you're a bit sarcastic there. I guess if you don't build your own, sure, but... The real crappy part is avoiding the developers who refuse to open their crappy ports beyond 60Hz. There are some that leave the console locks in place on the PC. Those just never get purchased...
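For anyone wondering why the jump from 60Hz to 144Hz is so noticeable to sensitive viewers, the per-frame interval shrinks quickly as the rate climbs; a quick calculation:

# Frame interval at common frame rates
for fps in (24, 60, 120, 144):
    print(fps, "fps ->", round(1000.0 / fps, 1), "ms per frame")
# 24 -> 41.7 ms, 60 -> 16.7 ms, 120 -> 8.3 ms, 144 -> 6.9 ms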
I for one chose Nvidia over AMD because of the Radeon frame-time problem. I would agree that not many people buy a $4000 computer in a shop, unless it's an Apple I guess :-P Though many so-called computer enthusiasts do end up paying quite a hefty sum over time on hardware components and software. It's not because they are snobs wanting the best, but because the best costs so much money :)
It's their 2nd-best single-GPU card, not the 3rd. A multi-GPU 690 is not comparable, and AMD also has a 7990, so that would make the 7970 the 2nd-best card by your logic. I personally see it as a complete failure that their 2nd-best card is equal to AMD's best card that came out 18 months ago. Think about that: the 7970 came out a year and a half ago. It's nothing to brag about. Before you call me an AMD fan, I'm not; I look for the best price/performance and will go with whatever company currently has it.
Go look at the bench for the 7970, AMD's top card, compared to the 570, Nvidia's 2nd-best card, when it was released. Not even close. I realize the 7000 series was a new architecture and they had a die shrink, but you can see real gains. http://www.anandtech.com/bench/Product/508?vs=518
Sorry dude, that wasn't aimed at you. Anand, your comments system has a mind of its own. If I reply to xyz, I sort of expect my reply to be below xyz's comment and not inserted randomly into the comments list.
Once again, a year late, but still a nice card. The updated cooler and higher memory clocks are impressive, but the max Boost clock was achievable on "FTW" type binned GTX 680s in the past.
I guess this is Nvidia's "Gigahertz Edition", basically an overclocked SKU to bring parity in the performance midrange market.
How in the world is this card a year late? Nvidia was still winning at this time, one year ago. Now they have not one, not two, but three single GPU cards that are on parity or are faster than the 7970 GE. Nvidia is in a far better position than they were with their GTX 500 series.
Full GK104 should've been GTX 670 and below from the outset, as Nvidia initially planned. That's why it's a year late, at this price point anyways.
Also, AMD reached parity with Nvidia's GTX 680 last year with the 7970GE launch in June/July, which then distanced itself by 5-10% with the Never Settle Drivers in Sept/Oct last year.
Now that the GTX 770 has launched and is ~10% faster than the 680, it again, reaches parity with the 7970GE.
I thought the 104/114 series was historically reserved for the x60, while the 100/110 series was meant for the x70/x80 chips. Thus this new high-end GK104 model should have been a 760 Ti. GK110 should have maxed out at the 780, and the 770 should have been the pared-down model. If they really had to have a Titan, it should have been a DPFP-uncapped 780 (so they got that almost right).
Of course the prices should have been the usual high-end price points and not the massive price jumps they are currently pushing. Sure, you can justify the price with the current performance relative to the previous generation, but if we always did that, the high-end cards would get perpetually more expensive, as the performance of each new generation would justify a price hike over the previous one. In reality, these prices are the unfortunate result of a lack of competition. Of course, not all companies handle a lack of competition the same way. nVidia has shown that, when uncontested, they will jack introductory prices into the stratosphere (8800 Ultra - $800-1000, GTX 280 - $650, Titan/GTX 780 - $1000/$650). Under normal competitive conditions, the top single-GPU card from either nVidia or AMD/ATi of each generation comes in at $500. In similarly uncontested situations AMD/ATi has proven to be much less abusive to their customers (7970 - $550, 5870 - $400). Granted, the relatively low price of the dual-GPU GTX 295 probably kept the 5870's price in check until the GTX 400 series launched, but at that point there was a significant difference in stability between single- and dual-GPU cards. Now I must mention, lest anyone gets the wrong idea, that AMD/ATi was probably only taking this route because marketshare/mindshare was more important to them than profit margins. Nonetheless, the facts remain.
I agree with virtually everything you said, although I never really had a problem with Nvidia jumping GK104 up a SKU to the x70 range. The performance was certainly there especially relative to last-gen performance and full GK104 also beat AMD's best offering at the time.
The problem I had was Nvidia's decision to turn this 2nd tier ASIC into their flagship and subsequently, hold off on launching their true flagship ASIC a full year AND charge $1000 (and later, $650) for it.
All events predicated on the fact AMD launched 7970 at flagship prices when it really didn't deserve the asking price. Tahiti launch set the stage for Nvidia to not only undercut AMD pricing but to beat them in performance as well with only their 2nd tier chip.
True, the 7970 could definitely be considered overpriced when it launched, but it was the undisputed performance champ until nVidia finally launched the GTX 680 to bring back competition. Though this raises the question: was the 7970 really underperforming, or was GK104 simply larger and faster (relatively speaking) than midrange chips of the past? Given that the GK104 die is smaller than the GTS 250, GTX 460, and GTX 555 dies, I'd say larger is out. That said, they removed a lot of compute resources to get the gaming performance they were targeting, so faster might hold some weight.
The 7000 series' sudden proficiency in compute, combined with the equally sudden removal of compute focus in the GTX 600 series, meant the 7970 would need to be far larger to maintain equivalent performance. Given that Tahiti XT (352mm²) is much closer in size to GK104 (294mm²) than to GK110 (561mm²), the 7970 should probably be considered a mid-weight. That is to say, I conclude that Tahiti XT was underperforming (in games) AND GK104 was an overachiever. So the question becomes: are compute capabilities important enough to sacrifice gaming performance that a year ago would likely have clocked in closer to the GTX 780 (GTX 775 class?), in exchange for compute performance that in many cases exceeds Titan but gaming performance roughly on par with a GTX 680?
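Lining up the die areas quoted above makes the "mid-weight" argument concrete (same figures as in the comment, simple ratios only):

# Die areas quoted in the thread, in mm^2
gk104, tahiti_xt, gk110 = 294.0, 352.0, 561.0
print("Tahiti XT vs GK104:", round(tahiti_xt / gk104, 2))   # ~1.20x
print("GK110 vs Tahiti XT:", round(gk110 / tahiti_xt, 2))   # ~1.59x
print("GK110 vs GK104:", round(gk110 / gk104, 2))           # ~1.91x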
IMO AMD's initial, higher price on the 7970 was justified. People forget that it was a much bigger chip than the 6970, with a 384-bit bus instead of 256. Any 384-bit part is effectively big, IMO. Same size as the 580, and now the Titan and 780.
The fault here IMO goes right back to AMD's marketing division. If they hadn't stupidly gone from 5870 to 6970, then people might have noticed that Tahiti was in fact a bigger part than its two immediate predecessors, and properly deserving of the 7900-series naming.
True, bro ;-) AMD is more user friendly, but when the next gen arrives on ATI/AMD hardware in the PS4 and M$ Xbox One, then all optimization will be AMD-friendly and DX11.1 (DX11.x or full DX11), and we win!
Well looks like the 7970 ghz has been tied. Lower price is nice but almost no OC headroom (wouldn't be surprised if AT got a cherry picked sample) and no game bundle. Similar power consumption too. Performance increase is virtually 0 but the price decrease is nice.
Worth pointing out that you can pretty easily get the AMD bundle codes for ~$30 on eBay. It wipes out the price advantage, but it does let you weigh the cards on the merits of their hardware.
"Of course this is only a 9% increase in the GPU clockspeed, which is going to pale in comparison to parts like GTX 670 and GTX 780, each of which can do 20%+ due to their lower clockspeeds. So there’s some overclocking headroom in GTX 780, but as to be expected not a ton."
Last sentence should read GTX 770 yea?
Great article; good to see Nvidia's progress with GPU Boost 2.0 and taking on board the TDP/power issue that the 600 series seems to have had. It will be interesting to see what they make of the 760 and what it will contain.
Wow... at worst it is equal with the 680 for $100 less. At best, it is tied with the 780 for $250 less. I think NVIDIA needs to reexamine their pricing model - I'm sure the market will fix it for them. Either the 770 is too cheap or the 780 is way too expensive. (signs point to the latter)
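One way to make that pricing tension concrete is raw FPS per dollar at the prices implied above ($399/$499/$649); the frame rates in this sketch are placeholders rather than benchmark results, so substitute real numbers from the review:

# Price/performance sketch; fps values are hypothetical placeholders
cards = {"GTX 770": (399, 60.0), "GTX 680": (499, 58.0), "GTX 780": (649, 72.0)}
for name, (price, fps) in cards.items():
    print(name, round(fps / price * 100, 2), "FPS per $100")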
If the 780 performs better than the 680 by 30-35% out the gate w/ crappy day1 drivers (future updates will only increase that advantage), how exactly is it tied with the 770?
It's easy to stand by your comment if you completely ignore what I wrote and ignore the facts. For instance, the 680 was not tested with "crappy day1 drivers", it was tested with very recent 320.14 drivers while the 770/780 used 320.18. In addition, I never said that the 770 was always tied with the 780, I said "at its best", which means that at the high end it ties the 780 in some benchmarks. At its worst, on the low end, the 770 is tied with the 680 in some benchmarks. I hope that clarifies things for you.
They are both overpriced relative to their historical cost/pricing, as a result you see Nvidia has posted record margins last quarter, and will probably do similarly well again.
I'm not happy that NVIDIA threw power efficiency to the wind this generation. What is it with these GPU manufacturers that they can't seem to CONSISTENTLY focus on power efficiency? It's always... "Oh don't worry, next gen will be better, we promise," then it finally does get better, then the next gen sucks, then again it's "don't worry, next gen we'll get power consumption down, we mean it this time." How about CONTINUING to focus on it? Imagine any other product segment where a 35% power increase would be considered acceptable; there is none. That makes a 10-or-whatever FPS jump not impressive in the slightest. I have a 660 Ti, which I feel has an amazing speed-to-power-efficiency ratio, so it looks like this generation definitely needs to be sat out.
It's going to be hard to get a performance increase without sacrificing some power while using the same architecture. You pretty much need a new architecture to get both.
There isn't much you can really do when you're working with the same process node and the same architecture; the best you can hope for is a slight bump in efficiency at the same performance level, but if you increase performance past the sweet spot, you sacrifice efficiency.
In past generations you had half-node shrinks: GTX 280 -> GTX 285 went from 65nm to 55nm and hence reduced power consumption.
Now we don't; we have jumped straight from 55nm -> 40nm -> 28nm, with the next 20nm node still a ways out. There just isn't very much you can do right now for performance.
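The gap between a half-node and a full-node step is easy to quantify: ideal density scales with the square of the feature-size ratio, so 65nm to 55nm buys far less than the full-node jumps do. A quick sketch (ideal scaling only; real shrinks never hit these numbers):

# Ideal (best-case) area/density scaling between process nodes
steps = [(65, 55), (55, 40), (40, 28), (28, 20)]
for old, new in steps:
    print(f"{old}nm -> {new}nm: ~{(old / new) ** 2:.2f}x ideal density gain")
# The half-node step is ~1.4x, while the full-node jumps are ~1.9-2.0x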
Looks like an interesting part, if for no other reason than to put pressure on AMD's 7970 GHz card. I imagine that card will be dropping to 400ish very soon.
I am not sure what card to pick up this summer. I want to buy my first 2560x1440 monitor (leaning towards the Dell U2713HM) this summer, but that means I need a new video card too, as my AMD 6950 is not going to have the muscle for 1440p. It looks like both the Nvidia 770 and AMD 7970 GHz are borderline for 1440p depending on the game, but there is a big price jump to go to the Nvidia 780.
I am also not a huge fan of CrossFire/SLI, although I do have a compatible motherboard. Also, to preempt the 2560x1440 vs 2560x1600 debate: yes, I would of course prefer more pixels, but most of the 2560x1600 monitors I have seen are wide gamut, which I don't need, and cost $300-400 more. 160 vertical pixels are not worth $300-400 plus dealing with the wide-gamut issues for programs that aren't compatible.
I'd say that's a good choice for the 1440p monitor. As far as the actual card, I agree with you that your decision should be between those 2 cards. Just as performance of the 7970 GHz has increased due to driver updates, I think we can expect some performance increase from the 770 over time as well. Not sure why that was not mentioned at all in the article. If I were in your situation it would probably come down to the bundled games that come with the AMD card: do you want those games?
No, we cannot expect performance increases from the 770 over time due to driver updates. The 770 is GK104, same architecture as the GTX680. It's just a higher-clocked 680. 680 drivers have been rolling since its release in March, it's been 14 months.
The regular game-specific performance improvements will be there for both cards, for games that are coming, but we can't expect the 770 to improve in performance due to drivers as it is already a 14-month mature product (refresh), driver-wise.
That makes no sense. If the 7970 has been benefiting greatly from each driver update, and it's almost 6 months older than GK104, why can the 7970 improve but the 770 can't?
Wow, that's a real lack of information to come up with an answer like this. The 7970 was a COMPLETE remake of what had been done before, so it's totally justifiable that drivers improved performance, same as for the GTX 600 series, which was new. The 770 is TOTALLY a 680, physically; it's simply a GK104 die pushed to the extreme, so there's almost nothing new on the driver side from 680 to 770. Not that they can't improve anything, but most of the job on the driver side of things is already done.
I'd say forget the Dell, and go with an Overlord Tempest OC. It's the same price & panel as you'd get from Dell, except it can be overclocked to 120hz. They're a California based company, and their sales/IT departments are awesome. If you're not interested in overclocking, they sell a 60hz model for about $400. You should seriously check them out.
Pretty happy to see Nvidia FINALLY realize the potential of this architecture. Gives me hope for the next generation; combined with a process node drop we should see pretty impressive performance. This refresh will keep the market feeling fresh until then. I just hope they're a lot more aggressive with pricing. I know you say in the article that you were surprised by this price, but really it's still too high. It's not absurd or anything, and assuming the price drops 50+ bucks by fall it's in line with GPU market trends. But I'd like to see some price pressure FROM Nvidia, instead of AMD always being the one to kick off a price war.
Most of the reviews I've seen have had the 770 beating the 7970 GE pretty well. It seems this site really bends over to make AMD look as good as possible, even though the 7970 uses more power and is generally slower and more expensive.
It seems to match up with other reviews I have seen. Maybe you are looking at ones that are not using the reference card? The non-reference reviews show it doing a bit better.
Still, even with the better results of the non-reference cards, it is a bit of a disappointing release from Nvidia IMO. While it is good that it will likely cause AMD to drop the price of the 7970 GE, it won't set a fire under AMD to make an impressive jump with their next lineup refresh.
And if you look at any AMD review, you'll see fanbois jumping out of the woodwork to accuse Anand and crew of being Nvidia homers. You can't win for losing, I guess.
I'm glad you mentioned the 2GB VRAM issue, Ryan, because it WILL be a problem soon.
In the comments for the 780 review I was saying that even 3GB of VRAM will probably not be enough for the next 18 months to 2 years, at least for people who game at 2560x1600 and higher (maybe even 1080p with enough AA). As usual many short-sighted idiots didn't agree, when it should be amazingly obvious there's going to be a big VRAM usage jump when these new consoles arrive and their games start getting ported to PC. They will easily be going over 2GB.
I definitely wouldn't buy the 770 with 2GB. It's not enough, and I've had problems with high-end cards running out of VRAM in the past when the 360/PS3 launched. It will happen again with 2GB cards. And it's really not a nice experience when it happens (single-digit FPS), and totally unacceptable for hardware this expensive.
People have been saying that for a long time. I heard the same thing when I bought my 550 Tis. And, 2 years later... only Battlefield 3 pushed past the 1GB frame buffer at 1080p, and that was on unplayable settings (everything maxed out). Now, if I lower the settings to maintain at least 30fps, no problems: 700MB usage max, maybe 750 on a huge map. Now, at 1440p, I can see this being a problem for 2GB, but I think 3GB will be just fine for a long time.
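For a rough sense of where VRAM actually goes, the render targets themselves are only a modest slice of a 2GB card even at 1440p/1600p with MSAA; it is the texture and geometry assets on top that push past the limit. A simplified estimate (ignores compression, driver overhead, and extra G-buffer targets):

def render_target_mb(width, height, bytes_per_pixel=4, msaa=1, targets=3):
    # rough: colour + depth + one extra buffer, each multiplied by the MSAA factor
    return width * height * bytes_per_pixel * msaa * targets / 1024 ** 2

for res in ((1920, 1080), (2560, 1440), (2560, 1600)):
    print(res, round(render_target_mb(*res, msaa=4), 1), "MB at 4xMSAA")
# ~95 MB, ~169 MB and ~188 MB respectively; the rest of the budget is assets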
I don't quite understand why Nvidia's partners wouldn't go with the reference design of the 770. I've been keenly interested in those nice high quality coolers and hoping they'd make their way into the $400 parts. It's a great selling point (I think) and disappointing to know that they won't be using them.
TechPowerUp ran tests of three GTX 770s with third-party coolers (Asus DirectCU, Gigabyte WindForce, and Palit JetStream). All three beat the GTX 770 reference on thermals for both idle and load. Noise levels varied, but the DirectCU seemed to be the winner since it was quieter than the reference cooler on both idle and load. That card also was a bit faster in benchmarks than the reference.
That said, I agree the build quality of the reference cooler is better than the aftermarket substitutes - but Asus is probably a close second. Their DirectCU series has always been very good.
I asked this on the 780 review, and it seems like it might be even more interesting for the 770 considering Nvidia basically threw more power at a 680, but a performance-per-watt comparison would be great. Something that clearly showed the efficiency of each card (maybe using a fixed workload) would be interesting to see, especially when compared to similar architectures or when comparing AMD's efforts with the GHz editions.
Since when did 70-80C temperatures become acceptable? I had been looking to upgrade my MSI Cyclone GTX 460, which would never hit higher than 62C, and I got a great deal on two 560 Tis for less than half the cost of them new. I have run them in single-card and SLI; I see 80C+ when I run an overclock program like MSI Afterburner. I use a custom fan profile to bring the temps down to 75C or less at a higher fan speed, but still at reasonable noise levels. It's still not quite enough.
All these video cards may be fine at these temperatures, but when you are sitting next to the case and the heat from an 80C card is being pumped out, you really feel it, especially now with summer heat finally hitting where I live. My $25 Hyper 212+ keeps my OC'ed i7 2600K at a good 45-50C when playing games. I would buy aftermarket coolers if they were not going to take up 3 slots each (I have a card that I need, but it would have to be removed) and didn't cost nearly as much as I paid for the cards.
AMD, NVIDIA and card partners need to work on bringing temperatures down.
The architecture of these video cards was obviously made for performance first. That does not mean they can't also work on lowering power consumption to lower the heat produced. One thing that I've found to help my situation is to set all games to run at 60fps without vsync if possible, which thankfully covers most of the games I play. Some games become unplayable or wonky with vsync and with other ways of limiting fps without vsync, so I just deal with the heat from no fps limits.
I hope that the developers of console ports from PS4 and Xbox One put in an fps limit option like Borderlands 2 if they don't allow dev console access.
Although it's not currently accessible from the driver control panel, Nvidia drivers have a built-in fps limiter that I use in every game I play (never had any issues with it). You can access it with Nvidia Inspector.
Since 70-80C has always been the best a blower style cooler can do on a high power GPU without getting obscenely loud, and blowers have proven to be the best option to avoid frying the GPU in a case with horrible ventilation. IOW about when both nVidia and ATI adopted blowers for their reference designs.
70-80C temperatures became acceptable after nVidia decided to release Fermi-based cards that regularly hit the mid 90s C. Since then, the temperatures have in fact come down. Of course, they are still higher than I'd like, and I pay extra for cards with better coolers (e.g. MSI Twin Frozr, Asus DirectCU). That said, there is only so much you can do when pushing 3 times the TDP of an Intel Core i7-3770K while cooling it with a cooler that is both lighter and less ideally shaped for the task (comparing some of the best GPU coolers to any number of heatsinks from Noctua, Thermalright, etc.). Water-cooling loops work wonders, but not everyone wants the expense or hassle.
The higher the temperature, the less fan speed you need, because you have a higher delta-T between the air entering the cooler and the cooling fins, which results in more energy transfer at less volume throughput. Obviously the temperature under load is purely a function of the fan curve, and has very little to do with the actual chip (unless you go so far down in energy output that you can rely on passive convection).
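The delta-T point follows from the basic heat-transport relation Q = m_dot * c_p * delta_T: for a fixed heat load, the airflow (and thus fan speed and noise) you need drops as the exhaust runs hotter than the intake. A simplified calculation using the GTX 770's 230W figure:

# Airflow needed to carry away a fixed heat load (very simplified model)
cp_air = 1005.0      # J/(kg*K), specific heat of air
rho_air = 1.2        # kg/m^3, air density
heat_w = 230.0       # W to remove, roughly a GTX 770 under load
for delta_t in (20, 40, 60):                 # exhaust minus intake, in K
    m_dot = heat_w / (cp_air * delta_t)      # kg/s of air required
    print(delta_t, "K ->", round(m_dot / rho_air * 1000, 1), "L/s of airflow")
# 20 K needs ~9.5 L/s, 60 K needs only ~3.2 L/s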
It's a nice card, but not nice enough for me to upgrade my 670. If it had been a slightly more pared-down GK110, I would have considered it... but the performance gains are just not enough to justify replacing my 670 (which still has little trouble with most games).
I'll spend my computing dollar on going from Sandy Bridge -> Haswell instead, and wait for the eventual 800 series sometime next year (which should be a new microarchitecture).
Right, I could pull the trigger on two of these (when they come out with 4GB versions), as they are 85%+ above my current 2x GTX 570s. Thanks for having the 570 in the results, by the way.
My BIG question is: are the proper next-gen cards still due early next year, or is this all we've got for the next 18 months? The rumour mill is fine here.
As I run 3 displays at a 6000x1080 resolution, I literally can never have too much performance. So if waiting until next year means I get a better upgrade, I'm happy to do it. This system can just about keep me going right now.
I may abandon the 3 screen setup for a consumer version Oculus Rift but how long away is that? I want to hedge my bets.
I think Maxwell is going to be more like mid-2014. It seems aggressive though... a new architecture, a shrink to 20nm and they're shoe-horning a 64-bit ARM chip in there. Lots of opportunities for delays IMO. But it might be a wise idea to wait... NVIDIA is promising (read: take with a grain of salt) 3x "GFLOPS per watt" over Kepler, and about 7-8x over Fermi. It's hard to predict how that will scale into performance though.
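Taking that perf-per-watt claim at face value (and with the suggested grain of salt), the implied numbers look roughly like this; the Kepler baseline here is a ballpark single-precision-throughput-over-TDP figure for a GTX 770-class card, not an official spec:

# Implied perf/W if the "3x GFLOPS per watt over Kepler" claim held (rough)
kepler_gflops_per_w = 3200.0 / 230.0    # ~14 GFLOPS/W ballpark for GTX 770-class
maxwell_claimed = 3 * kepler_gflops_per_w
print(round(kepler_gflops_per_w, 1), "->", round(maxwell_claimed, 1), "GFLOPS per watt")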
It would be historically accurate to state that NVIDIA typically releases new architectures on new nodes, and that both Maxwell and TSMC 20nm are scheduled for 2014. When in 2014 is currently something only NVIDIA could tell you.
But I would say this: consider this the half-time show. This is the mid-generation refresh, so we're roughly half-way between the launch of Kepler and Maxwell.
Hopefully by the time I replace my aging E8500/6970 system (which still plays everything I care about pretty well at 1080p) with a Haswell build, this thing will have a 4GB option so I can make another long-lasting rig.
They won't be priced the same for long. Unfortunately, I just can't see a GTX670FTW beating a GTX770 (especially with a factory overclock). I wonder if there is enough overclocking headroom for a GTX770FTW.
"The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. The worst case scenario is only that these highest quality assets may not be usable at playable performance, but considering the high performance of every other aspect of GTX 770 that would be a distinct and unfortunate bottleneck."
NONE of the release offerings (May 30) of the GTX 770 on Newegg have the Titan cooler!!!! Regardless of the pictures in this article and on the GTX 7xx main page on Newegg. And no bundled software to "ease the pain" and perhaps help mentally deaden the fan noise... this product takes more power than the GTX 680. Early buyers beware...!!
"Having 2GB of RAM doesn’t impose any real problems today, but I’m left to wonder for how much longer that’s going to be true. The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. "
Last week a noob posted something like that on the 780 review. It was decimated by a slew of tech geeks' comments afterward. I am surprised to see the same kind of reasoning in a text written by an AT expert.
All AT reviewers by now know that the next consoles will be using an APU from AMD that will have graphics muscle (almost) comparable to a 6670 (a 5670 in the PS4's case, thanks to GDDR5). So what Mr. Ryan Smith is stating is that an "8GB" 6670 can perform better than a 2GB 770 in video operations?
I am well aware that Mr. Ryan Smith is over-qualified to help AT readers revisit this old legend of graphics memory: how little is too little?
And please let us not start flaming about memory usage: most modern OSs and game engines use available RAM dynamically, so seeing a game use 90%+ of available graphics memory does not imply, at all, that such a game would run faster if we doubled the graphics memory. The opposite is often true.
As soon as 4GB versions of the 770 launch, AT should pit them against the 2GB 770 and the 3GB 7970. Or we could go back months and re-read the tests done when the 4GB versions of the 680 came out: only at triple-screen resolutions and insane levels of AA would we see any theoretical advantage of 3-4GB over 2GB, which is largely impractical since most games can't run at those resolutions and AA levels with a single card anyway.
I think NVIDIA did it right (again): 2GB is enough for today, and we won't see next-gen consoles running triple-screen resolutions at 16xAA+. 2GB means a lower BOM, which is good for profit and price competition, and less energy consumption, which is good for card temps and max OC results.
Just to start the flame war: the next consoles will not run monolithic GPUs, but twin Jaguar cores. So when you see those 768/1152 GPU core numbers, remember these are "crossfired" cores. And in both consoles the GPU is running at a mere 800MHz, hence the comparison with the 5670/6670, 480-shader cards @ 800MHz. It is widely accepted that console games are developed using the lowest common denominator, in this case the Xbox One's DDR3 memory. Even if we take the huge assumption that dual Jaguar cores running in tandem can work similarly to a 7850 (1024 cores at 860MHz) in a PS4 (which is a huge leap of faith, looking back at how badly AMD fared in previous CrossFire attempts using integrated GPUs like these), the question turns out to be the same:
Does an 8GB 7850 give us better graphical results than a 2GB 770 for any gaming application in the foreseeable future?
Don't 4K me please: both consoles will be using HDMI, not DisplayPort, and no, they won't be able to drive games across 3 screens. This "next-gen consoles will have more video RAM than high-end GPUs in PCs, so their games will be better" line is reminiscent of the old "1GB DDR2 cards are better than 256MB DDR3 cards for future games" scam.
We're aware of the difference. A good chunk of that unified memory is going to be consumed by the OS, the application, and other things that typically reside on the CPU in a PC. But we're still expecting games to be able to load 3GB+ in assets, which would be a problem for 2GB cards.
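A rough budget shows how that 3GB+ figure can fall out of an 8GB console even after the system takes its share; the reservation numbers below are assumptions for illustration, not confirmed platform specs:

# Hypothetical next-gen console unified memory budget, in GB
total_gb = 8.0
os_reserved_gb = 3.0      # assumed OS/system reservation
game_cpu_side_gb = 1.5    # assumed game code, audio, AI, world state
assets_gb = total_gb - os_reserved_gb - game_cpu_side_gb
print(assets_gb, "GB potentially available for GPU-visible assets")   # 3.5 GB
# Even with conservative assumptions, that comfortably exceeds a 2GB card.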
Why are you guys using FXAA in benchmarks as high end as these? Especially for games like BF3 where you have FPS over 100. 4x AA for 1080p and 2x for 1440p. No question those look better than FXAA...
Folding@Home: big-time discrepancy in reviews. Can anyone explain the material differences between this review's compute results for Folding@Home and the same FAHbench run at Tom's Hardware? http://www.tomshardware.com/reviews/geforce-gtx-77...
Since FAHbench is self-contained -- load and go -- it's hard to figure how the results could be so different.
People should be looking at other websites. This review is showing scores that don't even make sense. The 7970, in BF3, has the same score as Tom's Hardware's review of the 680, which was done over a year ago.
Nobody cares. If this were ~$300 I'd seriously consider it. But getting a 7950 for ~$300 along with 4 quality games just makes me not care about a $400 card that's already out of my budget anyway. Nvidia always keeps their best priced just too high. At $400 it's competing in value against a card with more to offer. They don't have the concept of winning by pricing right, but I guess they've never had to go there like AMD did.
Hmm, I recently built a system with a Gigabyte 670 and was looking to go SLI in the near future. Upon reading this review I'm second-guessing. Should I get a second 670 in a few months and go SLI, or a 770 and go SLI sometime around Christmas? Very happy with the temps and noise on the Gigabyte WindForce 670.
That is not correct. The HD 7970 has a bigger bus, but being 28nm instead of 40nm like the HD 6970 means that the HD 7970 was able to achieve great performance gains while being 354mm², compared to the HD 6970, which is around 389mm².
Is anyone else as disappointed as I am about pricing all the way across the board with this new generation? As an owner of a GTX580 I was thinking it's about time for an upgrade, but all these high end cards look 100 $/£/€ overpriced to me. I wasn't happy about paying £450 for my 580 but there's no way in hell I'm prepared to pay £550 for the 780, and the 770 isn't a big enough upgrade to interest me. I'm more than a little suspicious that AMD and NVidia are agreeing on price points in order to make larger profits. Having just 2 companies in a market sector makes it pretty easy for them to do this.
I've just bought two 3GB 580s for 450 UKP total, will be benching them next week single & SLI. If you're interested in the results, let me know by PM/email (or Google "SGI Ian" to find my contact page) and I'll send you the info links once the tests are done. I'll be testing with 3DMark06, Vantage, 3DMark11, Firestrike, Call of Juarez, Stalker COP, X3TC, PT Boats, Far Cry 3, and all 4 Unigines (Sanctuary, Tropics, Heaven and Valley). If I'm reading reviews correctly, two 580s SLI should beat a 780.
If your existing 580 is a 1.5GB card, note you can get a 2nd one off eBay these days for typically less than 150 UKP. I've won two this month, a Zotac card for 123 UKP (using it to type this post) and an EVGA card for 142 UKP.
And yes, I agree, the new gen card pricing is kinda nuts. Anyone would think we haven't had a recession. I doubt peoples' budgets have suddenly become 30% higher (only reason I could buy some is I sold some of my other stuff to cover the cost). The two 3GB 580s I bought were in total 100 UKP less than the cheapest 780 from the same seller (I probably could have eventually obtained two 3GB 580s off eBay, but decided the chance to get them from a proper retailer right now with warranty, etc. was too good to pass up).
Thanks for the reply, Ian. For some reason I'd totally forgotten about SLI. I'd be very interested to see the results of a 580 SLI vs. 780, and I suspect a few other readers here would too. One thing, how closely matched do the two cards have to be? Exactly the same model? Just the same clock speeds? Or is it more forgiving? I can't imagine it would be very easy to track down a 2nd identical 580, which is the reason I ask. If any Anand readers with knowledge of SLIing can share their experiences that would be great.
Memory... How many games are 64-bit? In other words, how many games need more than 4GB of memory? Today, zero. It was even more fun when most PCs had a BIOS; it can't even address that much graphics memory. Maybe 64-bit gaming will be mainstream in a year? I don't know. It sure seems to take its time. I have used 64-bit in my work since 1995 (real computers) and 64-bit on my Mac since 2002 (OS X 10.2.7, Smeagol). Gaming/Windows takes its time since Windows doesn't have smart software packaging. (Real computers have the same binary for 32/64-bit, different languages, and even different architectures. I loved when Apple had "fat binaries" and everything could run on both PPC and x86. Gives the customer a choice of architecture.)
Looks more like a GTX 680 Plus. The gains over the year-old GTX 680 and HD 7970 GHz Edition are measly, and power efficiency was treated as an afterthought.
I'll wait for the HD 8950 and HD 8970 or whatever they'll call it before I make a decision. I suppose it's worth the wait considering what implications the new PS4 and Xbox One will have on game development and their impact on PC ports performance-wise.
I just ordered the Gigabyte GTX 770, sold one of my 660 SCs for 160 dollars and am returning the one I just bought. Will I see a big performance drop from 660 SLI to the 770 at 2560x1440? I hope I made the right choice :/
karasaj - Thursday, May 30, 2013 - link
Nice! I heard that the 770 was going to perform much better than this, but I'm glad to see an improvement as well as lower prices. This might prompt a price cut by AMD, which could benefit everybody.axien86 - Thursday, May 30, 2013 - link
When the GTX 770 is so far behind even ancient cards in GPU compute and Folding... You know it is time to recall the overheating GTX 770 back to Nvidia and design something with real improvements.freespace303 - Wednesday, June 5, 2013 - link
80c load is quite common and safe for GPUs that have stock coolers. If those temps concern you, wait until these are released with aftermarket coolers installed.tipoo - Thursday, May 30, 2013 - link
This really could have been called "680 gets bios update, price drop".BeauCharles - Thursday, May 30, 2013 - link
Its not their top single GPU card, its their third place. The fact its tying with AMD's first place pretty much speaks for itself.tipoo - Thursday, May 30, 2013 - link
Does "first place" matter, or do price points? If the 7970 was AMDs twentieth best card it still wouldn't change that it's competing with the 770s price point.EJS1980 - Thursday, May 30, 2013 - link
Even though a lot of AMD dudes will surely get butthurt with you, your point is right on. Heavy is the head...tipoo - Thursday, May 30, 2013 - link
I'm not an "AMD dude", but I fail to see why that's right on. Price points matter, where the products rank within an individual companies line don't. If the 770 was Nvidias 100th best graphics card, at the same price/performance what would that change? Nothing.EJS1980 - Thursday, May 30, 2013 - link
I guess I should clarify that I was making a generalization, and wasn't referring to anyone in particular.sna1970 - Thursday, May 30, 2013 - link
what matters is how many FPS you get per dollar.who cares about getting flagships when you reach 60fps ? and how many people pay 4000$ for high end gaming machine ?
I choose nvida over AMD for one reason , PhysX.
pandemonium - Friday, May 31, 2013 - link
Wait, you want to compare FPS/dollar and then turn around and say you choose which one has PhysX? Well, the marketing team certainly succeeded with you, lol.Apparently you don't know that PhysX is a software code path that is supported and available regardless of what hardware you run. There isn't an abundant pile of evidence, through benchmarking or otherwise, that having a Nvidia card while running a PhysX supported engine will yield superior results compared to a similarly priced AMD card. Example? Take Metro 2033; probably the more demanding DX11, PhysX supported engine games available: http://www.anandtech.com/bench/GPU12/377
inighthawki - Saturday, June 1, 2013 - link
PhysX is CUDA accelerated with an nvidia card present, and thus will have hardware accelerated physics (of course at the cost of some GPU processing that could otherwise be spent on rendering). There is a tradeoff. Personally I would prefer to just run it in software. When you buy something like a GTX 770 or 780, the GPU is typically the FPS bottleneck in your games :)SirGCal - Monday, June 10, 2013 - link
I really could care less about PhysX... And 60fps caps also give me migraines... I specifically build machines with 120fps+ caps... My current rig has a 144Hz cap. So smooth... Sure many people can't tell the different. Good for them. I get migraine headaches at 60 FPS with digital content. Including crappy movies which are even worse (~25 frames...). I have a very sensitive visual function of my body/mind. Actually a LOT of people do. That's why 3D movies really don't take off so well. Something like one in 10 can not actually see stereoscopic vision at all and only like one in three really enjoy our fake 3D effects... Something like that.But the extreme cases like myself, not only do I not enjoy it, but it causes actual physical pain. I buy the best to get 144fps, smooth as glass, all the time. And even doing so, I've never ever spent $4k on my gaming rig... heck, never spent more then $2k. So you're a bit sarcastic there. I guess if you don't build your own sure but... The real crappy part is avoiding the developers who refuse to open their crappy ports beyond 60Hz. There are some that leave the console locks in place on the PC. Those just never get purchased...
Mondozai - Monday, August 12, 2013 - link
Mention your migraines one more time, I'm sure we all missed it.firewall597 - Thursday, June 13, 2013 - link
Your one reason makes me lolololGastec - Sunday, July 7, 2013 - link
I for one choosed Nvidia over AMD because of the Radeon frame times problem. I would agree that not so many people buy a 4000$ computer in a shop, unless it's an Apple I guess :-P Though many so called computer enthusiasts do end up paying over time quite a hefty sum on hardware components and software. It's not because they are snobs wanting the best, but because the best costs so much money :)jonjonjonj - Tuesday, June 4, 2013 - link
its their 2nd best single gpu card not the 3rd. a multi gpu 690 is not comparable and amd also has a 7990 so that would make the 7970 the 2nd best card by your logic. i personally see it as a complete failure that their 2nd best card is equal to amds best card that came out 18 months ago. think about that the 7970 came out a year and a half ago. its nothing to brag about. before you call me a amd fan. i'm not i look for the best price/performance and will go with whatever company currently has it.go look at the bench for a 7970 amds top card compared to a 570 nvidia's 2nd best card when it was released. not even close. i realize the 7000 series was new architecture and they had a die shrink but you can see real gains.
http://www.anandtech.com/bench/Product/508?vs=518
sweenish - Thursday, June 6, 2013 - link
3rd. Titan, 780, then 770.Gigaplex - Sunday, June 16, 2013 - link
I'm assuming you're only considering cards from the Geforce line and not Quadro/Tesla...iEATu - Thursday, May 30, 2013 - link
Plus a better memory VRM.khanov - Friday, May 31, 2013 - link
*sigh*You failed again.
khanov - Friday, May 31, 2013 - link
Sorry dude, that wasn't aimed at you. Anand your comments system has a mind of its own.If I reply to xyz I sort of expect my reply to be below xyz's comment and not inserted randomly in to the comments list.
chizow - Thursday, May 30, 2013 - link
Once again, a year late, but still a nice card. The updated cooler and higher memory clocks are impressive, but the max Boost clock was achievable on "FTW" type binned GTX 680s in the past.I guess this is Nvidia's "Gigahertz Edition", basically an overclocked SKU to bring parity in the performance midrange market.
Homeles - Thursday, May 30, 2013 - link
How in the world is this card a year late? Nvidia was still winning at this time, one year ago. Now they have not one, not two, but three single GPU cards that are on parity or are faster than the 7970 GE. Nvidia is in a far better position than they were with their GTX 500 series.chizow - Thursday, May 30, 2013 - link
Full GK104 should've been GTX 670 and below from the outset, as Nvidia initially planned. That's why it's a year late, at this price point anyways.Also, AMD reached parity with Nvidia's GTX 680 last year with the 7970GE launch in June/July, which then distanced itself by 5-10% with the Never Settle Drivers in Sept/Oct last year.
Now that the GTX 770 has launched and is ~10% faster than the 680, it again, reaches parity with the 7970GE.
JPForums - Thursday, May 30, 2013 - link
I thought the 104/114 series was historically reserved for the x60, while the 100/110 series was meant for the x70/x80 chips. Thus this new highend GK104 model should have been a 760Ti. GK110 should have maxed out at the 780 and the 770 should have been the paired down model. If they really had to have a Titan, it should have been a DPFP uncapped 780 (so they got that almost right).Of course the prices should have been the usual highend price points and not the massive price jumps they are currently pushing. Sure you can justify the price with the current performance relative to the previous generation, but if we always did that, the high end cards would get perpetually more expensive as the performance of each new generation of cards would justify a price hike over the previous generation. In reality, these prices are the unfortunate result of a lack of competition. Of course not all companies handle lack of competition the same way. nVidia has shown that, when uncontested, they will jack introductory prices into the stratosphere (8800 Ultra - $800-1000, $650 - GTX280, Titan/GTX780 - $1000/$650). Under normal competitive conditions, the top single GPU card from either nVidia or AMD/ATi of each generation comes in at $500. In similarly uncontested situations AMD/ATi has proven to be much less abusive to their customers (7970 - $550, 5870 - $400). Granted the relatively low price of the Dual GPU GTX295 probably kept the 5870s price in check until the GTX400 series launched, but at that point there was a significant difference in stability between single and dual GPU cards. Now I must mention, lest anyone gets the wrong idea, that AMD/ATi was probably only taking this route because marketshare/mindshare was more important to them than profit margins. Nonetheless, the facts remain.
chizow - Thursday, May 30, 2013 - link
I agree with virtually everything you said, although I never really had a problem with Nvidia jumping GK104 up a SKU to the x70 range. The performance was certainly there especially relative to last-gen performance and full GK104 also beat AMD's best offering at the time.The problem I had was Nvidia's decision to turn this 2nd tier ASIC into their flagship and subsequently, hold off on launching their true flagship ASIC a full year AND charge $1000 (and later, $650) for it.
All events predicated on the fact AMD launched 7970 at flagship prices when it really didn't deserve the asking price. Tahiti launch set the stage for Nvidia to not only undercut AMD pricing but to beat them in performance as well with only their 2nd tier chip.
JPForums - Thursday, May 30, 2013 - link
True, the 7970 could definitely be considered overpriced when it launched, but it was the undisputed performance champ until nVidia finally launched the GTX680 to bring back competition. Though, this begs the question, was the 7970 really this underperforming, or was the GK104 simply larger and faster (relatively speaking) than midrange chips in the past. Given that the GK104 die size is smaller than the GTS250, GTX460, GTX555 die sizes, I'd say larger is out. That said, they removed a lot of compute resources to get the gaming performance they were targeting, so faster might hold some weight.The 7000 series sudden proficiency in compute combined with the equally sudden removal of compute focus in the GTX600 series meant the 7970 would need to be far larger to maintain equivalent performance. Given the fact that Tahiti XT (352mm) was much closer to the size of GK104 (294mm) than GK110 (561mm), the 7970 should probably be considered a mid-weight. That is to say I can conclude that Tahiti XT was under performing (in games) AND GK104 was an overachiever. So the question becomes, is compute capabilities important enough to sacrifice gaming performance that a year ago likely would have clocked in closer to the GTX780 (GTX775 class?) for compute performance that in many cases exceeds Titan, but gaming performance roughly on par with a GTX680?
JlHADJOE - Friday, May 31, 2013 - link
IMO AMD's initial, higher price on the 7970 was justified. People forget that it was a much bigger chip than the 6970, with a 384-bit bus instead of 256. Any 384-bit part is effectively big, IMO. Same size as the 580, and now the Titan and 780.The fault here IMO goes right back to AMD's marketing division. If they hadn't stupidly went from 5870 to 6970, then people might have noticed that Tahiti was in fact a bigger part than its two immediate predecessors, and properly deserving of the 7900-series naming.
EJS1980 - Thursday, May 30, 2013 - link
Pretty much this /I\I
I
LoccOtHaN - Friday, May 31, 2013 - link
True Bro ;-) AMD is more user friendly, but when Next-Gen on ATI/AMD comes PS4 and M$ XBx1 then all optimalisation will be AMD friendly and DX11.1 (DX11.x or Full DX11) and we won !whyso - Thursday, May 30, 2013 - link
Well looks like the 7970 ghz has been tied. Lower price is nice but almost no OC headroom (wouldn't be surprised if AT got a cherry picked sample) and no game bundle. Similar power consumption too. Performance increase is virtually 0 but the price decrease is nice.A5 - Thursday, May 30, 2013 - link
Worth pointing out that you can pretty easily get the AMD bundle codes for ~$30 on eBay. It wipes out the price advantage, but it does let you weigh the cards on the merits of their hardware.steve_rogers42 - Thursday, May 30, 2013 - link
Think there is an error on page 7,"Of course this is only a 9% increase in the GPU clockspeed, which is going to pale in comparison to parts like GTX 670 and GTX 780, each of which can do 20%+ due to their lower clockspeeds. So there’s some overclocking headroom in GTX 780, but as to be expected not a ton."
Last sentence should read GTX 770 yea?
Great article, good to see nvidia's progress with the GPU boost 2 and taking on-board the tdp/power issue that the 600 series seems to have had. It will be interesting to see what they make of the 760 and what it will contain.
Cheers,
nathanddrews - Thursday, May 30, 2013 - link
Wow... at worst it is equal with the 680 for $100 less. At best, it is tied with the 780 for $250 less. I think NVIDIA needs to reexamine their pricing model - I'm sure the market will fix it for them. Either the 770 is too cheap or the 780 is way too expensive. (signs point to the latter)shompa - Thursday, May 30, 2013 - link
GTX 780 die is huge. It costs Nvidia almost double to manufacture a 780GTX then a 770/680 die.Only if Nvidia have enough harvested defect dies from Tesla chips they could/would lower the price on GTX 780.
EJS1980 - Thursday, May 30, 2013 - link
If the 780 performs better than the 680 by 30-35% out the gate w/ crappy day1 drivers (future updates will only increase that advantage), how exactly is it tied with the 770?nathanddrews - Thursday, May 30, 2013 - link
How exactly is it tied? Like I said, at its best.http://www.anandtech.com/bench/Product/829?vs=827
770 and 780 tied (within +/-5%):
DiRT: Showdown - 1920x1080 - Ultra Quality + 4xMSAA
Sleeping Dogs - 1920x1080 - Ultra Quality + Normal AA
Battlefield 3 - 1920x1080 - Ultra Quality + FXAA-High
Compute: Civilization V
Compute: Sony Vegas Pro 12 Video Render
Synthetic: 3DMark Vantage Pixel Fill
EJS1980 - Thursday, May 30, 2013 - link
Your link shows the 780 with a pretty substantial performance advantage, save for a couple instances. I stand by my comment.nathanddrews - Saturday, June 1, 2013 - link
It's easy to stand by your comment if you completely ignore what I wrote and ignore the facts. For instance, the 680 was not tested with "crappy day1 drivers", it was tested with very recent 320.14 drivers while the 770/780 used 320.18. In addition, I never said that the 770 was always tied with the 780, I said "at its best", which means that at the high end it ties the 780 in some benchmarks. At its worst, on the low end, the 770 is tied with the 680 in some benchmarks. I hope that clarifies things for you.chizow - Thursday, May 30, 2013 - link
They are both overpriced relative to their historical cost/pricing, as a result you see Nvidia has posted record margins last quarter, and will probably do similarly well again.Razorbak86 - Thursday, May 30, 2013 - link
Cool! I'm both a customer and a shareholder, but my shares are worth a hell of a lot more than my SLi cards. :)antef - Thursday, May 30, 2013 - link
I'm not happy that NVIDIA threw power efficiency to the wind this generation. What is with these GPU manufacturers that they can't seem to CONSISTENTLY focus on power efficiency? It's always...."Oh don't worry, next gen will be better we promise," then it finally does get better, then next gen sucks, then again it's "don't worry, next gen we'll get power consumption down, we mean it this time." How about CONTINUING to focus on it? Imagine any other product segment where a 35%! power increase would be considered acceptable, there is none. That makes a 10 or whatever FPS jump not impressive in the slightest. I have a 660 Ti which I feel has an amazing speed to power efficiency ratio, looks like this generation definitely needs to be sat out.jwcalla - Thursday, May 30, 2013 - link
It's going to be hard to get a performance increase without sacrificing some power while using the same architecture. You pretty much need a new architecture to get both.jasonelmore - Thursday, May 30, 2013 - link
or a die shrinkBlibbax - Thursday, May 30, 2013 - link
As these cards have configurable TDP, you get to choose your own priorities.coldpower27 - Thursday, May 30, 2013 - link
There isn't much you can really do when your working with the same process node and same architecture, the best you can hope for is a slight bump in efficiency at the same performance level but if you increase performance past the sweet spot, you sacrifice efficiency.In past generation you had half node shrinks. GTX 280 -> GTX 285 65nm to 55nm and hence reduced power consumption.
Now we don't, we have jumped straight from 55nm -> 40nm -> 28nm, with the next 20nm node still aways out. There just isn't very much you can do right now for performance.
JDG1980 - Thursday, May 30, 2013 - link
Yes, this is really TSMC's fault. They've been sitting on their ass for too long.tynopik - Thursday, May 30, 2013 - link
maybe a shade of NVIDIA green for the 770 in the charts instead of AMD red?joel4565 - Thursday, May 30, 2013 - link
Looks like an interesting part. If for no other reason that to put pressure on AMD's 7950 Ghz card. I imagine that card will be dropping to 400ish very soon.I am not sure what card to pick up this summer. I want to buy my first 2560x1440 monitor (leaning towards Dell 2713hm) this summer, but that means I need a new video card too as my AMD 6950 is not going to have the muscle for 1440p. It looks like both the Nvida 770 and AMD 7950 Ghz are borderline for 1440p depending on the game, but there is a big price jump to go to the Nvidia 780.
I am also not a huge fan of crossfire/sli although I do have a compatible motherboard. Also to preempt the 2560/1440 vs 2560/1600 debate, yes i would of course prefer more pixels, but most of the 2560x1600 monitors I have seen are wide gamut which I don't need and cost 300-400 more. 160 vertical pixels are not worth 300-400 bucks and dealing with the Wide gamut issues for programs that aren't compatible.
djboxbaba - Thursday, May 30, 2013 - link
I'd say thats a good choice for the 1440p monitor. As far as the actual card, I agree with you that your decision should be between those 2 cards. Just as performance of the 7970GHz has increased due to driver updates, I think we can expect some performance increase from the 770 over time as well. Not sure why that was not mentioned at all in the article. If i were in your situation it would probably come down to the bundled games that come with the AMD card, do you want those games?yasamoka - Thursday, May 30, 2013 - link
No, we cannot expect performance increases from the 770 over time due to driver updates. The 770 is GK104, same architecture as the GTX680. It's just a higher-clocked 680. 680 drivers have been rolling since its release in March, it's been 14 months.The regular game-specific performance improvements will be there for both cards, for games that are coming, but we can't expect the 770 to improve in performance due to drivers as it is already a 14-month mature product (refresh), driver-wise.
EJS1980 - Thursday, May 30, 2013 - link
That makes no sense. If the 7970 has been benefiting greatly from each driver update, and its almost 6 months older than GK104, why can the 7970 improve but the 770 can't?Galidou - Saturday, June 1, 2013 - link
Wow, that's a real lack of information to come up with an answer like this. The 7970 was a COMPLETE remake of what had been done before; it's totally justifiable that drivers improved its performance, just like the GTX 600 series that was new. The 770 is TOTALLY a 680, physically; it's simply a GK104 die pushed to the extreme, with almost nothing new on the driver side from 680 to 770. Not that they can't improve anything, but most of the job has already been done on the driver side of things.
Stuka87 - Thursday, May 30, 2013 - link
Did you mean to say 7970Ghz? The 7950 is already at $300, has been for some time.
joel4565 - Thursday, May 30, 2013 - link
Yeah, I did mean 7970Ghz. Sorry for the confusion.
EJS1980 - Thursday, May 30, 2013 - link
I'd say forget the Dell and go with an Overlord Tempest OC. It's the same price and panel as you'd get from Dell, except it can be overclocked to 120Hz. They're a California-based company, and their sales/IT departments are awesome. If you're not interested in overclocking, they sell a 60Hz model for about $400. You should seriously check them out.
cbrownx88 - Friday, May 31, 2013 - link
I went to their page and everything is sold out... :(
Hrel - Thursday, May 30, 2013 - link
Pretty happy to see Nvidia FINALLY realize the potential of this architecture. Gives me hope for the next generation; combined with a process node drop, we should see pretty impressive performance. This refresh will keep the market feeling fresh until then. I just hope they're a lot more aggressive with pricing. I know you say in the article that you were surprised by this price, but really it's still too high. It's not absurd or anything, and assuming the price drops 50+ bucks by fall it's in line with GPU market trends. But I'd like to see some price pressure FROM Nvidia, instead of AMD always being the one to kick off a price war.
trajan2448 - Thursday, May 30, 2013 - link
Most of the reviews I've seen have had the 770 beating the 7970 GE pretty handily. It seems this site really bends over to make AMD look as good as possible, even though the 7970 uses more power and is generally slower and more expensive.
raghu78 - Thursday, May 30, 2013 - link
What "most of the reviews"? Look across a wide range of games and you will see these two cards are tied:
http://www.hardwarecanucks.com/forum/hardware-canu...
http://www.computerbase.de/artikel/grafikkarten/20...
http://www.pcgameshardware.de/Geforce-GTX-770-Graf...
http://www.hardware.fr/articles/896-22/recapitulat...
bitstorm - Thursday, May 30, 2013 - link
It seems to match up with other reviews I have seen. Maybe you are looking at ones that are not using the reference card? The non-reference reviews show it doing a bit better.
Still, even with the better results of the non-reference cards, it is a bit of a disappointing release from Nvidia IMO. It is good that it will likely cause AMD to drop the price of the 7970 GE, but it won't light a fire under AMD to make an impressive jump on their next lineup refresh.
Brainling - Thursday, May 30, 2013 - link
And if you look at any AMD review, you'll see fanbois jumping out of the woodwork to accuse Anand and crew of being Nvidia homers. You can't win for losing, I guess.
kallogan - Thursday, May 30, 2013 - link
Barely beats the 680 at higher power consumption. Turbo boost is useless. Useless GPU. Next.
gobaers - Thursday, May 30, 2013 - link
There are no bad products, only bad prices. If you want to think of this as a 680 with a price cut and modest bump, where is the harm in that?
EJS1980 - Thursday, May 30, 2013 - link
Exactly!
B3an - Thursday, May 30, 2013 - link
I'm glad you mentioned the 2GB VRAM issue, Ryan, because it WILL be a problem soon.
In the comments on the 780 review I was saying that even 3GB of VRAM will probably not be enough for the next 18 months to 2 years, at least for people who game at 2560x1600 and higher (maybe even 1080p with enough AA). As usual many short-sighted idiots didn't agree, when it should be amazingly obvious there's going to be a big VRAM usage jump when these new consoles arrive and their games start getting ported to PC. They will easily be going over 2GB.
I definitely wouldn't buy the 770 with 2GB. It's not enough, and I've had problems with high-end cards running out of VRAM in the past when the 360/PS3 launched. It will happen again with 2GB cards. And it's really not a nice experience when it happens (single-digit FPS), and totally unacceptable for hardware this expensive.
TheinsanegamerN - Monday, July 29, 2013 - link
People have been saying that for a long time. I heard the same thing when I bought my 550 Tis, and, 2 years later... only Battlefield 3 pushed past the 1GB frame buffer at 1080p, and that was on unplayable settings (everything maxed out). Now, if I lower the settings to maintain at least 30fps, no problems: 700MB usage max, maybe 750 on a huge map. Now, at 1440p, I can see this being a problem for 2GB, but I think 3GB will be just fine for a long time.
just4U - Thursday, May 30, 2013 - link
I don't quite understand why Nvidia's partners wouldn't go with the reference design of the 770. I've been keenly interested in those nice high-quality coolers and was hoping they'd make their way into the $400 parts. It's a great selling point (I think), and it's disappointing to know that they won't be using them.
chizow - Thursday, May 30, 2013 - link
I agree; it feels like false advertising or bait-and-switch, given that GPU Boost 2.0 relies greatly on operating temps and throttles once you hit 80C.
Seems a bit irresponsible for Nvidia to send out cards like this, and for reviewers to subsequently review and publish the results.
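To illustrate why the cooler matters so much under a temperature target, here is a toy sketch of a temperature-target boost loop. It is purely illustrative and not Nvidia's actual control algorithm; the only value borrowed from Boost 2.0 is the default 80C target, the base clock is the GTX 770's, and the boost ceiling and clock step are made-up numbers.

    # Toy model of a temperature-target boost scheme (illustration only).
    # The real GPU Boost 2.0 algorithm is Nvidia's and isn't public; the only
    # value borrowed from it here is the default 80C temperature target.

    TEMP_TARGET_C = 80
    BASE_CLOCK_MHZ = 1046    # GTX 770 base clock
    MAX_BOOST_MHZ = 1150     # hypothetical top boost bin, not an official spec
    STEP_MHZ = 13            # hypothetical clock step per control tick

    def next_clock(current_mhz, gpu_temp_c):
        """Raise clocks while under the temperature target, back off above it."""
        if gpu_temp_c < TEMP_TARGET_C:
            return min(current_mhz + STEP_MHZ, MAX_BOOST_MHZ)
        return max(current_mhz - STEP_MHZ, BASE_CLOCK_MHZ)

    # A reference blower sitting at 82C slowly falls back toward base clock,
    # while a card held at 70C by a better cooler keeps climbing to full boost.
    print(next_clock(1110, 82))  # 1097 -> throttling
    print(next_clock(1110, 70))  # 1123 -> still boosting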
JDG1980 - Thursday, May 30, 2013 - link
TechPowerUp ran tests of three GTX 770s with third-party coolers (Asus DirectCU, Gigabyte WindForce, and Palit JetStream). All three beat the GTX 770 reference on thermals for both idle and load. Noise levels varied, but the DirectCU seemed to be the winner since it was quieter than the reference cooler on both idle and load. That card also was a bit faster in benchmarks than the reference.
That said, I agree the build quality of the reference cooler is better than the aftermarket substitutes - but Asus is probably a close second. Their DirectCU series has always been very good.
ArmedandDangerous - Thursday, May 30, 2013 - link
This article is in desperate need of some editing work. Spelling and comprehension errors throughout.
Nighyal - Thursday, May 30, 2013 - link
I asked this on the 780 review, and it seems like it might be even more interesting for the 770 considering Nvidia basically threw more power at a 680: a performance-per-watt comparison would be great. If there were something that clearly showed the efficiency of each card (maybe using a fixed workload) it would be interesting to see, especially when compared to similar architectures or when comparing AMD's efforts with the GHz editions.
ThIrD-EyE - Thursday, May 30, 2013 - link
Since when did 70-80C temperatures become acceptable? I had been looking to upgrade my MSI Cyclone GTX 460, which would never hit higher than 62C, and I got a great deal on 2 560 Tis for less than half the cost of them new. I have run them in single-card and SLI; I see 80C+ when I run an overclocking program like MSI Afterburner. I use a custom fan profile to bring the temps down to 75C or less at higher fan speed, but still at reasonable noise levels (a rough sketch of the kind of curve I mean is below). It's still not quite enough.
All these video cards may be fine at these temperatures, but when you are sitting next to the case and there is 80C being pumped out, you really feel it, especially now with summer heat finally hitting where I live. My $25 Hyper 212+ keeps my OC'ed i7 2600K at a good 45-50C when playing games. I would buy aftermarket coolers if they weren't going to take up 3 slots each (I have a card that I need, but it would have to be removed) and didn't cost nearly as much as I paid for the cards.
AMD, NVIDIA and card partners need to work on bringing temperatures down.
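A minimal sketch of the kind of custom fan curve described above, linearly interpolating between a few temperature/fan-speed points. The points themselves are just an example, not MSI Afterburner's defaults.

    # Example custom fan curve: linear interpolation between a few
    # (temperature C, fan %) points. The points are illustrative only.

    CURVE = [(40, 30), (60, 45), (70, 60), (75, 75), (85, 100)]

    def fan_percent(temp_c):
        """Interpolate the fan speed for a given GPU temperature."""
        if temp_c <= CURVE[0][0]:
            return CURVE[0][1]
        for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
            if temp_c <= t1:
                return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
        return CURVE[-1][1]

    print(fan_percent(72))  # 66.0 -> ramps the fan up before the card reaches 75C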
quorm - Thursday, May 30, 2013 - link
Lower temperature readings do not mean less heat produced. Better cooling just moves the heat from the GPU to your room more efficiently.
ThIrD-EyE - Thursday, May 30, 2013 - link
The architecture of these video cards was obviously made for performance first. That does not mean they can't also work on lowering power consumption to lower the heat produced. One thing that I've found to help my situation is to set all games to run at 60fps without vsync if possible, which thankfully is possible in most of the games I play. Some games become unplayable or wonky with vsync, and other ways of limiting fps without vsync have their own issues, so in those cases I just deal with the heat from no fps limit.
I hope that the developers of console ports from PS4 and Xbox One put in an fps limit option like Borderlands 2 does, if they don't allow dev console access.
MattM_Super - Friday, May 31, 2013 - link
Although it's not currently accessible from the driver control panel, Nvidia drivers have a built-in fps limiter that I use in every game I play (never had any issues with it). You can access it with NvidiaInspector.
DanNeely - Thursday, May 30, 2013 - link
Since 70-80C has always been the best a blower-style cooler can do on a high-power GPU without getting obscenely loud, and blowers have proven to be the best option to avoid frying the GPU in a case with horrible ventilation. IOW, about when both nVidia and ATI adopted blowers for their reference designs.
JPForums - Thursday, May 30, 2013 - link
70C-80C temperatures became acceptable after nVidia decided to release Fermi-based cards that regularly hit the mid-90s C. Since then, the temperatures have in fact come down. Of course, they are still higher than I'd like, and I pay extra for cards with better coolers (e.g. MSI Twin Frozr, Asus DirectCU). That said, there is only so much you can do when pushing 3 times the TDP of an Intel Core i7-3770K while cooling it with a cooler that is both lighter and less ideally shaped for the task (comparing some of the best GPU coolers to any number of heatsinks from Noctua, Thermalright, etc.). Water cooling loops work wonders, but not everyone wants the expense or hassle.
Rick83 - Friday, May 31, 2013 - link
The higher the temperature, the less fan speed you need, because you have a higher delta-T between the air entering the cooler and the cooling fins, which results in more energy transfer at lower volume throughput (a rough back-of-the-envelope is below).
Obviously the temperature is purely a function of the fan curve under load, and has very little to do with the actual chip (unless you go so far down in energy output that you can rely on passive convection).
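Here is that back-of-the-envelope as a minimal sketch, assuming (purely for illustration) that all of the card's board power leaves in the exhaust air and ignoring radiation and case effects.

    # Airflow needed to remove a heat load: m_dot = P / (c_p * dT).
    # Rough illustration only; assumes all board power is carried away by
    # the exhaust air.

    CP_AIR = 1005.0   # J/(kg*K), specific heat of air
    RHO_AIR = 1.2     # kg/m^3, approximate density of air

    def airflow_cfm(power_w, delta_t_c):
        """Volume flow in CFM needed to carry power_w away at a given air delta-T."""
        mass_flow_kgs = power_w / (CP_AIR * delta_t_c)
        volume_flow_m3s = mass_flow_kgs / RHO_AIR
        return volume_flow_m3s * 2118.88  # m^3/s to CFM

    # For a ~230 W card, letting the exhaust run 50C over intake needs only
    # about half the airflow (and fan noise) of holding a 25C delta.
    print(round(airflow_cfm(230, 50), 1))  # ~8.1 CFM
    print(round(airflow_cfm(230, 25), 1))  # ~16.2 CFM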
ninjaquick - Thursday, May 30, 2013 - link
Delta percentages: the 7990 just needs to be removed; it skews the whole chart way too much.
Brainling - Thursday, May 30, 2013 - link
It's a nice card, but not nice enough for me to upgrade my 670. If it had been a slightly more pared-down GK110, I would have considered it... but the performance gains are just not enough to justify replacing my 670 (which still has little trouble with most games).
I'll spend my computing dollar on going from Sandy Bridge -> Haswell instead, and wait for the eventual 800 series sometime next year (which should be a new microarchitecture).
The0ne - Thursday, May 30, 2013 - link
"... the rest of the year will be a battle of prices and bundles."Can't wait.
Runadumb - Thursday, May 30, 2013 - link
Firstly, thank you for the detailed review.
Right, I could pull the trigger on two of these (when they come out with 4GB versions) as they are 85%+ above my current 2x GTX 570s. Thanks for having 570s in the results, by the way.
My BIG question is: are the proper next-gen cards still due early next year, or is this all we've got for the next 18 months? The rumour mill is fine here.
As I run 3 displays and a 6000x1080p resolution, I literally can never have too much performance. So if waiting until next year meant I get a better upgrade, I'm happy to do it. This system can just about keep me going right now.
I may abandon the 3-screen setup for a consumer-version Oculus Rift, but how far away is that? I want to hedge my bets.
jwcalla - Thursday, May 30, 2013 - link
I think Maxwell is going to be more like mid-2014. It seems aggressive though... a new architecture, a shrink to 20nm and they're shoe-horning a 64-bit ARM chip in there. Lots of opportunities for delays IMO. But it might be a wise idea to wait... NVIDIA is promising (read: take with a grain of salt) 3x "GFLOPS per watt" over Kepler, and about 7-8x over Fermi. It's hard to predict how that will scale into performance though.
Ryan Smith - Thursday, May 30, 2013 - link
It would be historically accurate to state that NVIDIA typically releases new architectures on new nodes, and that both Maxwell and TSMC 20nm are scheduled for 2014. When in 2014 is currently something only NVIDIA could tell you.
But I would say this: consider this the half-time show. This is the mid-generation refresh, so we're roughly half-way between the launch of Kepler and Maxwell.
araczynski - Thursday, May 30, 2013 - link
Hopefully by the time I replace my aging E8500/6970 system (which still plays everything I care about pretty well at 1080p) with a Haswell build, this thing will have a 4GB option so I can make another long-lasting rig.
xdesire - Thursday, May 30, 2013 - link
OK, I see that this is an overclocked/tweaked GTX 680, but what in the hell is that TDP?!
HEADHUNTERZ! - Thursday, May 30, 2013 - link
Soooo... how does the GTX 670 FTW compare to the GTX 770!? They're both the same price! So which one would be the better decision?
JPForums - Thursday, May 30, 2013 - link
They won't be priced the same for long. Unfortunately, I just can't see a GTX670FTW beating a GTX770 (especially with a factory overclock). I wonder if there is enough overclocking headroom for a GTX770FTW.
Enkur - Thursday, May 30, 2013 - link
Why is there a picture of the Xbox One in the article when it's mentioned nowhere?
Razorbak86 - Thursday, May 30, 2013 - link
The 2GB Question & The Test"The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. The worst case scenario is only that these highest quality assets may not be usable at playable performance, but considering the high performance of every other aspect of GTX 770 that would be a distinct and unfortunate bottleneck."
kilkennycat - Thursday, May 30, 2013 - link
NONE of the release offerings (May 30) of the GTX 770 on Newegg have the Titan cooler!!!! Regardless of the pictures in this article and on the GTX 7xx main page on Newegg. And no bundled software to "ease the pain" and perhaps help mentally deaden the fan noise..... This product takes more power than the GTX 680. Early buyers beware...!!
geok1ng - Thursday, May 30, 2013 - link
"Having 2GB of RAM doesn’t impose any real problems today, but I’m left to wonder for how much longer that’s going to be true. The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, which is quite a lot of RAM for video operations even after everything else is accounted for. With most PC games being ports of console games, there’s a decent risk of 2GB cards being undersized when used with high resolutions and the highest quality art assets. "Last week a noob posted something like that on the 780 review. It was decimated by a slew of tech geeks comments afterward. I am surprised to see the same kind of reasoning on a text written by an AT expert.
All AT reviewers by now know that next console will be using an APU from AMD that will have the graphic muscle (almost) comparable to a 6670 ( 5670 in PS4 case thanks to GDDR5) . So what Mr. Ryan Smith is stating is that a "8GB" 6670 can perform better than a 2GB 770 in video operations?
I am well aware that Mr Ryan Smith is over-qualified to help AT readers revisit this old legend of graphics memory :
How little is too little?
And please let us not start flaming about memory usage: most modern OSs and gaming engines use available RAM dynamically, so seeing a game use 90%+ of available graphics memory does not imply, at all, that such a game would run faster if we doubled the graphics memory. The opposite is often true.
As soon as 4GB versions of the 770 launch, AT should pit those versions against the 2GB 770 and the 3GB 7970. Or we could go back a few months and re-read the tests done when the 4GB versions of the 680 came out: only at triple-screen resolutions and insane levels of AA would we see any theoretical advantage of 3-4GB over 2GB, which is largely impractical since most games can't run at those resolutions and AA levels on a single card anyway.
I think NVIDIA did it right (again): 2GB is enough for today, and we won't see next-gen consoles running triple-screen resolutions at 16xAA+. 2GB means a lower BoM, which is good for profit and price competition, and less energy consumption, which is good for card temps and max OC results.
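As a rough back-of-the-envelope on how resolution and MSAA scale the framebuffer portion of VRAM (illustration only; real engines keep many more buffers, and multi-gigabyte totals come mostly from textures and other assets rather than the framebuffer itself):

    # Back-of-the-envelope size of the color render targets alone, in MB.
    # Real games also store textures, geometry, shadow maps, etc.; this only
    # shows how resolution and MSAA scale the framebuffer portion.

    def render_target_mb(width, height, msaa, bytes_per_px=4):
        """Approximate one MSAA color target plus a resolved back buffer."""
        multisampled = width * height * msaa * bytes_per_px
        resolved = width * height * bytes_per_px
        return (multisampled + resolved) / (1024 ** 2)

    print(round(render_target_mb(1920, 1080, 4)))   # ~40 MB at 1080p with 4x MSAA
    print(round(render_target_mb(5760, 1080, 8)))   # ~214 MB at triple-screen with 8x MSAA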
Enkur - Thursday, May 30, 2013 - link
I can't believe AT is mixing up unified graphics and system memory on consoles with the dedicated RAM of a graphics card. Doesn't make sense.
Egg - Thursday, May 30, 2013 - link
PS4 has 8GB of GDDR5 and a GPU somewhat close to a 7850. I don't know where you got your facts from.
geok1ng - Thursday, May 30, 2013 - link
Just to start the flaming war: the next consoles will not run on monolithic GPUs, but on twin Jaguar cores. So when you see those 768/1152 GPU core numbers, remember these are "crossfired" cores. And in both consoles the GPU is running at a mere 800MHz, hence the comparison with the 5670/6670, 480-shader cards at 800MHz.
It is widely accepted that console games are developed for the lowest common denominator, in this case the Xbox One's DDR3 memory. Even if we take the huge assumption that dual Jaguar cores running in tandem can work similarly to a 7850 (1024 cores at 860MHz) in a PS4 (which is a huge leap of faith looking back at how badly AMD fared in previous CrossFire attempts using integrated GPUs like these Jaguar cores), the question turns out to be the same:
Does an 8GB 7850 give us better graphical results than a 2GB 770 for any gaming application in the foreseeable future?
Don't go 4K on me, please: both consoles will be using HDMI, not DisplayPort. And no, they won't be able to drive games across 3 screens. This "next-gen consoles will have more video RAM than high-end GPUs in PCs, so their games will be better" line is reminiscent of the old "1GB DDR2 cards are better than 256MB DDR3 cards for future games" scam.
Ryan Smith - Thursday, May 30, 2013 - link
We're aware of the difference. A good chunk of that unified memory is going to be consumed by the OS, the application, and other things that typically reside on the CPU in a PC. But we're still expecting games to be able to load 3GB+ in assets, which would be a problem for 2GB cards.
iEATu - Thursday, May 30, 2013 - link
Why are you guys using FXAA in benchmarks as high-end as these? Especially for games like BF3 where you have FPS over 100. 4x AA for 1080p and 2x for 1440p; no question those look better than FXAA...
Ryan Smith - Thursday, May 30, 2013 - link
In BF3 we're testing both FXAA and MSAA. Otherwise most of our other tests are MSAA, except for Crysis 3 which is FXAA only for performance reasons.
Catalina588 - Thursday, May 30, 2013 - link
Folding@Home Big-Time Discrepancy in reviews
Can anyone explain the material differences between this review's Compute Results for Folding@Home and the same FAHbench run at Tom's Hardware?
http://www.tomshardware.com/reviews/geforce-gtx-77...
Since FAHbench is self-contained -- load and go -- it's hard to figure how the results could be so different.
Ryan Smith - Thursday, May 30, 2013 - link
We're using a newer version of the benchmark, 1.2. FAHBench 1.2 has some very big performance optimizations that aren't in 1.1x.
kyuu - Thursday, May 30, 2013 - link
Not bad, but I think I'd still just find a 7970 with a good cooler on sale and overclock the crap out of it if I was looking to buy a high-end GPU.
Lt_dan - Thursday, May 30, 2013 - link
People should be looking at other websites. This review is showing scores that don't even make sense. The 7970, in BF3, has the same score as Tom's Hardware's review of the 680, which was done over a year ago.
azixtgo - Thursday, May 30, 2013 - link
Nobody cares. If this were ~$300 I'd seriously consider it. But getting a 7950 for ~$300 along with 4 quality games just makes me not care about a $400 card that's already out of my budget anyway. Nvidia always keeps their best priced just too high. At $400 it's competing on value against a card with more to offer. They don't have the concept of winning by pricing right, but I guess they've never had to go there like AMD did.
Razorbak86 - Thursday, May 30, 2013 - link
Just because YOU don't care, doesn't mean that NOBODY cares. Please don't attempt to speak for the rest of us.
agentwax - Thursday, May 30, 2013 - link
Hmm, I recently built a system with a Gigabyte 670 and was looking to go SLI in the near future. Upon reading this review I'm second-guessing. Should I get a second 670 in a few months and go SLI, or a 770 and go SLI sometime around Christmas? Very happy with the temps and noise on the Gigabyte WindForce 670.
thunderising - Friday, May 31, 2013 - link
Now it's time for an HD 7970 GHZ GHZ MEGA GHZ edition with faster clocks and a new driver release for improved performance. Hehehe.
At least that would be better than an HD 8950 = HD 7970 with faster clock speeds.
evolucion8 - Friday, May 31, 2013 - link
That is not correct. The HD 7970 has a bigger bus, but being 28nm instead of 40nm like the HD 6970 means that the HD 7970 was able to achieve great performance gains by being 354mm2, compared to the HD 6970 which is around 389mm2.
colonelclaw - Friday, May 31, 2013 - link
Is anyone else as disappointed as I am about pricing all the way across the board with this new generation? As an owner of a GTX 580 I was thinking it's about time for an upgrade, but all these high-end cards look 100 $/£/€ overpriced to me. I wasn't happy about paying £450 for my 580, but there's no way in hell I'm prepared to pay £550 for the 780, and the 770 isn't a big enough upgrade to interest me.
I'm more than a little suspicious that AMD and NVidia are agreeing on price points in order to make larger profits. Having just 2 companies in a market sector makes it pretty easy for them to do this.
mapesdhs - Friday, May 31, 2013 - link
I've just bought two 3GB 580s for 450 UKP total; will be benching them next week, single & SLI. If you're interested in the results, let me know by PM/email (or Google "SGI Ian" to find my contact page) and I'll send you the info links once the tests are done. I'll be testing with 3DMark06, Vantage, 3DMark11, Firestrike, Call of Juarez, Stalker COP, X3TC, PT Boats, Far Cry 3, and all 4 Unigines (Sanctuary, Tropics, Heaven and Valley). If I'm reading reviews correctly, two 580s SLI should beat a 780.
If your existing 580 is a 1.5GB card, note you can get a 2nd one off eBay these days for typically less than 150 UKP. I've won two this month, a Zotac card for 123 UKP (using it to type this post) and an EVGA card for 142 UKP.
And yes, I agree, the new gen card pricing is kinda nuts. Anyone would think we haven't had a recession. I doubt peoples' budgets have suddenly become 30% higher (only reason I could buy some is I sold some of my other stuff to cover the cost). The two 3GB 580s I bought were in total 100 UKP less than the cheapest 780 from the same seller (I probably could have eventually obtained two 3GB 580s off eBay, but decided the chance to get them from a proper retailer right now with warranty, etc. was too good to pass up).
Ian.
colonelclaw - Saturday, June 1, 2013 - link
Thanks for the reply, Ian. For some reason I'd totally forgotten about SLI. I'd be very interested to see the results of a 580 SLI vs. 780, and I suspect a few other readers here would be too. One thing: how closely matched do the two cards have to be? Exactly the same model? Just the same clock speeds? Or is it more forgiving? I can't imagine it would be very easy to track down a 2nd identical 580, which is the reason I ask. If any Anand readers with knowledge of SLIing can share their experiences, that would be great.
shompa - Friday, May 31, 2013 - link
Memory... How many games are 64-bit? In other words, how many games need more than 4GB of memory? Today, zero. It was even more fun when most PCs had a BIOS; it can't even address that much graphics memory.
Maybe 64-bit gaming will be mainstream in a year? I don't know. It sure seems to take its time. I have used 64-bit in my work since 1995 (real computers) and 64-bit on my Mac since 2002 (OS X 10.2.7 Smeagol). Gaming/Windows takes its time since Windows doesn't have smart software packaging. (Real computers have the same binary for 32/64-bit, different languages and even different architectures. I loved when Apple had "fat binaries" and everything could run on both PPC and x86. Gives the customer a choice of architecture.)
SymphonyX7 - Friday, May 31, 2013 - link
Looks more like a GTX 680 Plus. The gains over the year-old GTX 680 and HD 7970 GHz edition are measly, and power efficiency was thrown in as an afterthought.
I'll wait for the HD 8950 and HD 8970 or whatever they'll call it before I make a decision. I suppose it's worth the wait considering what implications the new PS4 and Xbox One will have on game development and their impact on PC ports performance-wise.
kwrzesien - Monday, June 24, 2013 - link
4GB models have appeared at Newegg: http://www.newegg.com/Product/Product.aspx?Item=N8...
Colin.B - Tuesday, July 9, 2013 - link
I just ordered the Gigabyte GTX 770, sold one of my 660 SCs for 160 dollars and am returning the one I just bought. Will I see a big performance drop from 660 SLI to the 770 at 2560x1440? I hope I made the right choice :/