Final Words

To wrap things up, let’s start with the obvious: NVIDIA has reclaimed their crown – they have the fastest single-GPU card. The GTX 480 is between 10 and 15% faster than the Radeon 5870 depending on the resolution, giving it a comfortable lead over AMD’s best single-GPU card.

With that said, we have to pause for a wildcard: AMD’s 2GB Radeon 5870, which will be launching soon. We know the 1GB 5870 is RAM-limited at times, and while it’s unlikely that more RAM on its own will be enough to make up the performance difference, we can’t fully rule that out until we have the benchmarks we need. We’ll be surprised if the GTX 480 doesn’t remain the fastest single-GPU card out there.

The best news in this respect is that you’ll have time to soak in the information. With the GTX 480 not reaching retail until April 12th, if AMD launches their card within the next couple of weeks, you’ll have a chance to look at the performance of both cards and decide which to get without being blindsided.

On a longer term note, we’re left wondering just how long NVIDIA can maintain this lead. If a 2GB Radeon isn’t enough to break the GTX 480, how about a higher clocked 5800 series part? AMD has had 6 months to refine and respin as necessary; with their partners already producing factory overclocked cards up to 900MHz, it’s too early to count AMD out if they really want to do some binning in order to come up with a faster Radeon 5800.

Meanwhile, let’s talk about the other factors: price, power, and noise. At $500 the GTX 480 is the world’s fastest single-GPU card, but it’s not a value proposition; the price gap between it and the Radeon 5870 is well above the current performance gap, though this has always been true of the high-end. Bigger than price, though, is the tradeoff for going with the GTX 480 and its much bigger GPU: it’s hotter, it’s noisier, and it’s more power hungry, all for 10-15% more performance. If you need the fastest thing you can get, then the choice is clear; otherwise you’ll have some thinking to do about what you want and what you’re willing to live with in return.

Moving on, we have the GTX 470 to discuss. It’s not NVIDIA’s headliner, so it’s easy for it to get lost in the shuffle. With a price right between the 5850 and 5870, it delivers performance right where you’d expect it to be. At 5-10% slower than the 5870 on average, it’s actually a straightforward value proposition: you get 90-95% of the performance for around 87% of the price. It’s not a huge bargain, but it’s competitively priced against the 5870. Against the 5850 this is less true, as the GTX 470 is a mere 2-8% faster, but this isn’t unusual for cards above $300; the best values are rarely found there. The 5850 is the bargain hunter’s card; otherwise, if you can spend more, pick a price and you’ll find your card. Just keep in mind that the GTX 470 is still going to be louder and hotter than any 5800 series card, so there are tradeoffs to make, and we imagine most people will err toward the side of the cooler Radeon cards.
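To put that value math in concrete terms, here is a quick sketch. The prices used ($500, $400, $350, $300) are approximate street prices assumed for illustration rather than figures from our tables, and the performance numbers are normalized from the averages cited above, so treat the output as a rough guide.

```python
# Rough performance-per-dollar comparison using the relative performance
# figures cited in the text (Radeon 5870 = 1.00) and assumed street prices.
# The prices are illustrative assumptions, not quotes from the review.
cards = {
    "GTX 480":     (500, 1.125),  # ~10-15% faster than the 5870
    "Radeon 5870": (400, 1.000),
    "GTX 470":     (350, 0.925),  # ~5-10% slower than the 5870
    "Radeon 5850": (300, 0.875),  # GTX 470 is ~2-8% faster than this
}

for name, (price, perf) in cards.items():
    print(f"{name:>12}: {perf:.3f}x perf at ${price}  "
          f"-> {100 * perf / price:.4f} perf per dollar")
```

Run this way, the 5850 comes out ahead on raw performance per dollar while the GTX 480 comes in last, which is exactly why we call the 5850 the bargain hunter’s card.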

With that out of the way, let’s take a moment to discuss Fermi’s future prospects. Fermi’s compute-heavy and tessellation-heavy design continues to interest us, but home users won’t find an advantage in that design today. This is a card that bets on the future, and we don’t have a crystal ball. With some good consumer-oriented GPGPU programs and developers taking up variable tessellation, NVIDIA could get a lot out of this card; if that fails to happen, they could get less than they hoped for. All we can do is sit and watch; it’s much too early to place our bets.

As for NVIDIA’s ecosystem, the situation hasn’t changed much from 2009. NVIDIA continues to offer interesting technologies like PhysX, 3D Vision, and CUDA’s wider GPGPU application library, but none of these is compelling enough on its own; they’re merely the icing on the cake. If you’re already in NVIDIA’s ecosystem, though, the choice seems clear: NVIDIA has a DX11 card ready to go that lets you have your cake and eat it too.

Finally, as we asked in the title, was it worth the wait? No, probably not. A 15% faster single-GPU card is appreciated and we’re excited to see both AMD and NVIDIA once again on competitive footing with each other, but otherwise with much of Fermi’s enhanced abilities still untapped, we’re going to be waiting far longer for a proper resolution anyhow. For now we’re just happy to finally have Fermi, so that we can move on to the next step.

Comments

  • Ryan Smith - Wednesday, March 31, 2010 - link

    My master copies are labeled the same, but after looking at the pictures I agree with you; something must have gotten switched. I'll go flip things. Thanks.
  • Wesgoood - Wednesday, March 31, 2010 - link

    Correction: Nvidia retained their crown on AnandTech. Even here, some resolutions favored ATI (mostly the higher ones). On Tom's Hardware the 5870 pretty much beat the GTX 480 from 1920x1200 to 2560x1600: not every time at 1920, but pretty much every single time at 2560.

    That is where the crown is, in the best of the best situations, not "OMG it beat it at 1680, THAT HAS TO BE THE BEST!"

    Plus the power-hungry state of this card is just appalling. Nvidia has shown they can't compete with proper technology, instead having to cram everything they can onto a chip and pray it works right.

    Whereas ATI's GPU is designed well enough that they have plenty of room to almost double the size of the 5870.
  • efeman - Wednesday, March 31, 2010 - link

    I copied this over from a comment I made on a blog post.

    I've been with nVidia for the past decade. My brother built his desktop way back when with the Ti 4200, I bought a prefab with a 5950 Ultra, my last budget build had an 8600 GTS in it, and I upgraded to the GTX 275 last year. I am in no way a fanboy; nVidia has just treated me very well. If I had made that last decision a few months later, after the price hike, it would've definitely been the HD 4890: almost identical performance for ballpark $100 less.

    I recently built a new high-end rig (Core i7 and all), but I waited out on dropping the money on a 5800 series card. I knew nVidia's new cards were on the way, and I was excited and willing to wait it out; I expected a lot out of them.

    Now that they're out in the open, I have to say I'm a little shaken. In many cases, the performance of the cards is not where I would've hoped it would be (the general consensus seems to be a 5-10% increase in performance over their ATI counterparts; I see that failing in many cases, however). It seems like the effort that nVidia put into the cards gave them lots of potential, but most of it is wasted.

    "The future of PC gaming" is right in the title of this post, and that's what these cards have been built for. Nvidia has a strong lead over ATI in compute and tessellation performance now, that's obvious; however, that will only provide useful if and when developers decide to put the extra effort into taking advantage of those technologies. Nvidia is gambling right now; it has already given ATI a half-year lead on the DX11 market, and it's pushing cards that won't be fully utilized until who-knows-when (there's no telling when these technologies will be more widely integrated into the gaming market). What will it do in the meantime? ATI is already on it's way to producing its 5000-series refresh; and this time it knows the competition's performance.

    I was hoping for the GTX 400s to do the same thing the GTX 200s did: give nVidia back the high-end performance throne. ATI is not only competitive with its counterparts, but it still has the 5970 for the enthusiast performance crown (don't forget Eyefinity!). I think nVidia made a mistake in putting so much focus into compute and tessellation performance; it would've been smarter to produce cards with similar die sizes (crappy wafer yields, anyone?), faster raw performance with tessellation/compute as a secondary objective, and more competitive pricing. It wouldn't have been a bad option to create a separate chip for the Tesla cards, one that focused on compute performance while the GeForce cards focused on the rest.

    I still have faith. Maybe nVidia will work wonders with the drivers and produce the performance we were waiting for. Maybe it has something awesome brewing deep within its labs. Or maybe my fears will embody themselves, and nVidia is crossing its fingers and hoping for its tessellation/compute performance to give it the market share later on. If so, ATI will provide me with my pair of cards.

    That was quite the rant; I wasn't planning on writing that much when I decided to comment on Drew Henry's (nVidia GM) blog post. I suppose I'm passionate about this sort of thing, and I really hope nVidia doesn't lose me after all this time.
  • Kevinmbaron - Wednesday, March 31, 2010 - link

    The fact that this card comes out a year and a half after the GTX 295 makes me sick. Add to that the fact that the GTX 295 is actually faster than the GTX 480 in a few benchmarks and very close in others, and it's like a bad dream for Nvidia. Forget whether they can beat AMD; they can't even beat themselves. They could have done a die shrink on the GTX 295, added some more shaders, doubled the memory, and had that card out a year ago, and it would have crushed anything on the market. Instead they risked it all on a harebrained new card. I am a GTX 295 owner. Apparently my card is an all-around better card, since it doesn't lag in some games like the 480 does. I guess I will stick with my old GTX 295 for another year. Maybe then there might be a card worth buying. Even the ATI 5970 doesn't have enough juice to justify a new purchase from me. This should be considered horrible news for Nvidia. They should be ashamed of themselves, and the CEO should be asked to step down.
  • ol1bit - Thursday, April 1, 2010 - link

    I just snagged a 5870 gen 2 I think (XFX) from NewEgg.

    They have been hard to find in stock, and they are out again.

    I think many were waiting to see if the GF100 was a cruel joke or not. I am sorry for Nvidia, but I love the competition. I hope Nvidia will survive.

    I'll bet they are burning the midnight oil for gen 2 of the GF100.
  • bala_gamer - Friday, April 2, 2010 - link

    Did you guys receive the GTX 480 earlier than other reviewers? There were 17 cards tested on 3 drivers, and I am assuming tests were done multiple times per game to get an average, plus installing and reinstalling drivers, etc. The 10.3 Catalyst drivers came out the week of March 18.

    Do you guys have multiple computers benchmarking at the same time? I just cannot imagine how the tests were all done within the time frame.
  • Ryan Smith - Sunday, April 4, 2010 - link

    Our cards arrived on Friday the 19th, and in reality we didn't start real benchmarking until Saturday. So all of that was done in roughly a 5 day span. In true AnandTech tradition, there wasn't much sleep to be had that week. ;-)
  • mrbig1225 - Tuesday, April 6, 2010 - link

    I felt compelled to say a few things about nvidia's Fermi (480/470 GTX). I like to always start out by saying: let's take the fanboyism out of the equation and look at the facts. I am a huge nvidia fan, but they dropped the ball big time. They are selling people on ONE aspect of DX11 (tessellation), and that's really the only thing their cards do well, but it's not an efficient design. What people aren't looking at is that their tessellation is done by the PolyMorph engine, which ties directly into the CUDA cores, meaning the more CUDA cores occupied by shader processing, the less tessellation performance, and vice versa = fewer frames per second (see the sketch below).

    As you noticed, we see tons of tessellation benchmarks showing the GTX 480 is substantially faster at tessellation, and I agree, when the conditions suit that type of architecture (and there isn't a lot of other work going on). We know the GF100 (480/470 GTX) is a computing beast, but I don't believe that will equate to overall gaming performance. The facts are: this GPU is huge (3 billion+ transistors), creates a boatload of heat, sucks up more power than any of the latest dual-GPU cards (GTX 295, 5970), came to market 6 months late, and is only faster than its single-GPU competition by 10-15%. And some of us are happy?

    Oh, that's right, it will be faster in the future when DX11 is relevant. I don't think so, for a few reasons, but I'll name two. First, the benchmarks of the current crop of actual DX11 games (shaders and tessellation etc.) show something completely different. If tessellation were nvidia's trump card in games, then the 5800 series would be beaten substantially in any DX11 title with tessellation turned on. We aren't seeing that (we are seeing the opposite in some circumstances), and I don't think we will. I am fully aware that tessellation is scalable, but that brings me to my second point.

    I know many of you will say that it is only in extreme tessellation environments that nvidia's cards really start to take off. If you agree with that statement, then nvidia has another issue: the way they implement tessellation is not very scalable, imo, and video card industry sales are not made up of high-end GPUs but of the cheaper mainstream ones. Since nvidia's PolyMorph engine is tied directly to its shaders, you kind of see where this is going: less powerful cards will be bottlenecked by their lack of shaders for tessellation, and vice versa. Developers want to make money, and the way they make money is selling lots of games. Crysis was a big game, but it didn't break any sales records; truth of the matter is, most people's systems couldn't run Crysis. Meanwhile, a lot of Valve's titles sell well in part because of how friendly they are to mainstream GPUs (not the only reason, but it does help). The hardware has to be there to support a large number of game sales, meaning that if the majority of parts cannot do extreme levels of tessellation, you will find few games implementing it.

    Food for thought: can anyone show me a DX11 title where the GTX 480 handily beats the 5870 by the same amount it does in the Heaven benchmark, or even close to that? I think, as a few of you have said, it will come down to which games work better with which architecture; some will benefit nvidia (Far Cry 2 is a good example), others ATI (Stalker). I think that is what we are seeing now. IMO.
    P.S. I think people are also pissed because this card was stated to be 60% faster than the 5870. As you can see, it's not!!
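    A toy model of the contention argument above. Every number here is invented for illustration, and the linear work-sharing assumption is a deliberate simplification, not measured GF100 or Cypress behavior: the point is only that if tessellation shares the shader pool, its cost adds directly to frame time, whereas a dedicated tessellator can overlap with shading until it saturates.

    ```python
    # Toy model of shared vs. dedicated tessellation hardware. All work and
    # throughput numbers are invented for illustration; real GPUs schedule
    # work far more dynamically than this.

    def shared_frame_time(units, shading, tess):
        # Tessellation competes with shading for the same pool of shader
        # units, so its work adds directly to the frame's total.
        return (shading + tess) / units

    def dedicated_frame_time(units, tess_rate, shading, tess):
        # A separate fixed-function tessellator overlaps with shading; the
        # slower of the two pipelines gates the frame.
        return max(shading / units, tess / tess_rate)

    UNITS, TESS_RATE, SHADING = 100, 50, 1000  # arbitrary units of work

    for tess in (0, 100, 400):
        s = shared_frame_time(UNITS, SHADING, tess)
        d = dedicated_frame_time(UNITS, TESS_RATE, SHADING, tess)
        print(f"tessellation work={tess:>3}: shared={s:5.1f}  dedicated={d:5.1f}")
    ```

    In this sketch the shared design pays for every unit of tessellation work, while the dedicated design hides the cost until the tessellator itself becomes the bottleneck.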
  • houkouonchi - Thursday, April 8, 2010 - link

    Why the hell are the screenshots showing off the AA results in a lossy JPEG format instead of PNG like pretty much anything else?
  • dzmcm - Monday, April 12, 2010 - link

    I'm not familiar with Battleforge firsthand, but I understood it uses HD Ambient Occlusion, which is a variation of Screen Space Ambient Occlusion that includes normal maps. And since its inception in Crysis, SSAO has stood for Screen Space AO. So why is it called Self Shadow AO in this article?

    Bit-tech refers to Stalker:CoP's SSAO as "Soft Shadow." That I'm willing to dismiss. But I think they're wrong.

    Am I falling behind on my jargon, or are you guys not bothering to keep up?
