Prologue

Because designing GPUs this big is "[redacted] hard"

-NVIDIA’s VP of Product Marketing Ujesh Desai on GF100

Fermi/GF100/GTX400 has been a long time coming. For an industry where the major players are usually in lockstep with each other in some fashion, this latest cycle has been distinctly out of sync. In September of 2009 we saw AMD launch their Radeon 5000 series, then spend the next 6 months as the performance leader in the discrete graphics market at virtually every price point. NVIDIA on the other hand missed the latest cycle by a mile, and while AMD was delivering 5000 series cards NVIDIA could only talk about the High Performance Computing applications of their next GPU, Fermi. It would be the next big thing, both figuratively and literally.

In January we got the next piece of the Fermi story at CES, when NVIDIA was willing to talk about the gaming-oriented hardware and applications of the first Fermi GPU: GF100. We found out that it would be significantly different from the GT200 GPU powering NVIDIA’s GTX 200 series, and that NVIDIA was going to break up the traditional fixed-function pipeline while taking a particularly keen interest in tessellation. What we didn’t find out was when it would ship.

Much of this has been in NVIDIA’s hands – some of it has not. What’s indisputable is that TSMC, the chip foundry used by both AMD and NVIDIA, was not delivering the kind of yields on their 40nm process that AMD and NVIDIA were expecting, and both companies suffered for it. AMD could not get enough viable chips to meet demand for their 5000 series parts, leaving a wake of disappointed gamers who could not get AMD’s latest wonder, limiting AMD’s ability to profit from one of the few times in AMD/ATI’s history when the company had a clear lead over NVIDIA, and, if you ask AMD, also limiting their ability to control prices. NVIDIA meanwhile had to deal with the fact that they were trying to produce a very large chip on a low-yielding process, a recipe for disaster given that die size is the enemy of high yields.

What’s also indisputable is that this 6 month wait has benefited few people. In an industry where competition is normally cut-throat, we instead saw GPU prices rise, whereas in the last generation AMD and NVIDIA knocked hundreds of dollars off of each other’s prices practically overnight. It may have been good times overall for AMD, but buyers have sorely missed a competitive marketplace.

That brings us to today. We can see the light at the end of the tunnel; the end of NVIDIA’s 6 month journey is near. We’ve told you about the compute applications of Fermi, we’ve told you about the gaming applications of Fermi, and now at long last we can tell you about the first products. It’s time to say hello to the GTX 400 series.

Meet the GTX 480 and GTX 470

 

                              GTX 480        GTX 470        GTX 295        GTX 285         9800 GTX+
Stream Processors             480            448            2 x 240        240             128
Texture Address / Filtering   60 / 60        56 / 56        2 x 80 / 80    80 / 80         64 / 64
ROPs                          48             40             2 x 28         32              16
Core Clock                    700MHz         607MHz         576MHz         648MHz          738MHz
Shader Clock                  1401MHz        1215MHz        1242MHz        1476MHz         1836MHz
Memory Clock                  924MHz GDDR5   837MHz GDDR5   999MHz GDDR3   1242MHz GDDR3   1100MHz GDDR3
Memory Data Rate              3696MHz        3348MHz        1998MHz        2484MHz         2200MHz
Memory Bus Width              384-bit        320-bit        2 x 448-bit    512-bit         256-bit
Frame Buffer                  1.5GB          1.25GB         2 x 896MB      1GB             512MB
Transistor Count              3B             3B             2 x 1.4B       1.4B            754M
Manufacturing Process         TSMC 40nm      TSMC 40nm      TSMC 55nm      TSMC 55nm       TSMC 55nm
Price Point                   $499           $349           $500           $400            $150 - 200

Today NVIDIA is launching two cards: the GeForce GTX 480 and the GeForce GTX 470. Both of them are based on GF100, the first and largest member of the Fermi family. Right off the bat, we can tell you that neither card is a complete GF100 chip. We know from NVIDIA’s earlier announcements that a complete GF100 is a 512 SP/core part, organized as 4 GPCs containing a total of 16 SMs of 32 cores each, but these first parts will not have all of GF100’s functional units activated. Instead we’ll be getting a 480 core part for the GTX 480, and a 448 core part for the GTX 470. Ultimately we will not be seeing the full power of GF100 right away, but you can be sure that somewhere down the line we’ll see a GTX 485 or GTX 490 with all of GF100’s functional units enabled.
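For reference, the core counts fall straight out of that organization. Here’s a quick back-of-the-envelope sketch (an illustrative Python snippet of our own, assuming the 32 cores per SM figure above):

```python
# GF100 CUDA core counts as a function of enabled SMs.
# Assumes a full die of 16 SMs with 32 cores each, per NVIDIA's disclosures.
CORES_PER_SM = 32
FULL_DIE_SMS = 16

def gf100_cores(active_sms):
    """Total CUDA cores for a GF100 part with the given number of active SMs."""
    return active_sms * CORES_PER_SM

print(gf100_cores(FULL_DIE_SMS))  # 512 - a complete GF100
print(gf100_cores(15))            # 480 - GTX 480
print(gf100_cores(14))            # 448 - GTX 470
```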


NVIDIA's GeForce GTX 480

What we’re starting out with today at the high end is the GTX 480, a card based on a GF100 with 1 of the 16 SMs disabled, which will sell for $499, making it the primary competitor for the Radeon 5870. The disabled SM has no effect on the ROPs, which are part of a separate functional block, but it does cut down on the shading, texturing, and tessellation capabilities of the card compared to a full GF100 card. This gives the GTX 480 the full 48 ROPs and 768KB of L2 cache of GF100, along with 60 texture units, 15 PolyMorph engines, and 480 cores. Although the architectural overhaul means we can’t compare the GTX 480 to the GTX 285 quite as easily as we could the Radeon 5000 series to the Radeon 4000 series, the GTX 480 is still in some ways a doubled-up GTX 285 from a shader standpoint.

The GTX 470 on the other hand is a further cut-down GF100-based product that will sell for $349. As we stated earlier it has only 448 cores, a product of NVIDIA disabling 2 of the 16 SMs. Along with disabling the SMs, NVIDIA also disabled 1 of the 6 ROP clusters, which removes 8 ROPs, 128KB of L2 cache, and 2 of the 32-bit memory channels. This leaves us with 448 cores running alongside 40 ROPs and 640KB of L2 cache, all on a 320-bit GDDR5 memory bus.
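Since both cards are defined by which functional units NVIDIA turns off, the specs above can be sanity-checked from GF100’s per-SM and per-ROP-partition building blocks. The sketch below is our own illustration of that arithmetic, not an NVIDIA tool:

```python
# Deriving GTX 480/470 resources from GF100's building blocks: each SM carries
# 32 cores, 4 texture units and 1 PolyMorph engine; each ROP partition carries
# 8 ROPs, 128KB of L2 cache and a 64-bit slice of the memory bus.

def gf100_config(active_sms, active_rop_partitions):
    return {
        "cuda_cores":        active_sms * 32,
        "texture_units":     active_sms * 4,
        "polymorph_engines": active_sms,
        "rops":              active_rop_partitions * 8,
        "l2_cache_kb":       active_rop_partitions * 128,
        "memory_bus_bits":   active_rop_partitions * 64,
    }

print(gf100_config(15, 6))  # GTX 480: 480 cores, 60 TMUs, 48 ROPs, 768KB L2, 384-bit bus
print(gf100_config(14, 5))  # GTX 470: 448 cores, 56 TMUs, 40 ROPs, 640KB L2, 320-bit bus
```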


NVIDIA's GeForce GTX 470

As is usually the case for a harvested part, the GTX 470 takes a clockspeed hit compared to the GTX 480. The core clock falls 13% to 607MHz, and the shader clock falls by the same 13% to 1215MHz. The memory clock on the other hand only drops by 10% to 837MHz (3348MHz effective). Overall this gives the GTX 470 around 80% of the shading, texturing, and tessellation capabilities of the GTX 480, 72% of the ROP throughput, and roughly 75% of the memory bandwidth.
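Those percentages are simply unit counts scaled by clockspeed; a minimal sketch using the numbers from the spec table above:

```python
# GTX 470 throughput as a fraction of GTX 480: (unit count x clock) per block.

def relative(units_470, clock_470_mhz, units_480, clock_480_mhz):
    return (units_470 * clock_470_mhz) / (units_480 * clock_480_mhz)

print(relative(448, 1215, 480, 1401))  # ~0.81 - shading (cores x shader clock)
print(relative(56, 607, 60, 700))      # ~0.81 - texturing (texture units x core clock)
print(relative(14, 607, 15, 700))      # ~0.81 - tessellation (PolyMorph engines x core clock)
print(relative(40, 607, 48, 700))      # ~0.72 - ROPs (ROP count x core clock)
```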

One thing that caught our eye with all of this was that NVIDIA’s memory clocks are lower than we had initially been expecting. GDDR5 is readily available at up to 5GHz while NVIDIA doesn’t go any higher than 3.7GHz; in fact between a narrower memory bus than the GTX 285’s and the lower than expected memory clocks, the GTX 400 series doesn’t have all that much more bandwidth than the GTX 285 did. As it stands the GTX 480 only has 11% more memory bandwidth than the GTX 285, while the GTX 470 has 15% less than the GTX 285.
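For anyone who wants to check the math, the bandwidth figures come straight from bus width and data rate; a quick sketch using the spec-table numbers:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate), in GB/s.

def bandwidth_gb_s(bus_width_bits, data_rate_mhz):
    return (bus_width_bits / 8) * data_rate_mhz * 1e6 / 1e9

gtx_480 = bandwidth_gb_s(384, 3696)  # ~177.4 GB/s
gtx_470 = bandwidth_gb_s(320, 3348)  # ~133.9 GB/s
gtx_285 = bandwidth_gb_s(512, 2484)  # ~159.0 GB/s

print(gtx_480 / gtx_285 - 1)  # ~+0.12: the GTX 480's ~11-12% advantage over the GTX 285
print(gtx_470 / gtx_285 - 1)  # ~-0.16: the GTX 470's ~15-16% deficit versus the GTX 285
```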

Given the 384-bit bus, we initially assumed NVIDIA was running into even greater memory bus issues than AMD ran into for the 5000 series, but as it turns out that’s not the case. When we asked NVIDIA about working with GDDR5, they told us that their biggest limitation wasn’t the bus, as it was for AMD, but rather deficiencies in their own I/O controller, which in turn caused them to miss their targeted memory speeds. Unlike AMD, which has been using GDDR5 for nearly 2 years, NVIDIA is still relatively new at using GDDR5 (their first such product was the GT 240 late last year), so we can’t say we’re completely surprised here. If nothing else, this gives NVIDIA ample room to grow in the future if they can get their 384-bit memory bus up to the same speeds as AMD has gotten their 256-bit bus.

This leaves us with the other elephant in the room: yield on GF100. NVIDIA hasn’t commented specifically on the yields coming out of TSMC but we know even 6 months later that AMD still isn’t satisfied with things, so that should offer some guidance on the situation given NVIDIA’s larger die. As it stands NVIDIA chose to launch their highest end GF100 part with only 15 of 16 SMs in order to reach the “broadest availability”, which is a clear sign that NVIDIA isn’t getting enough full-yielding and high-clocking dies at this time to offer a proper unharvested part.

The power/heat situation also bears mentioning, since it often goes hand-in-hand with yield issues. With a 500mm²+ die on the 40nm process, it should come as no surprise that both the GTX 480 and GTX 470 are hot cards. NVIDIA has to pay the piper for having such a large die, and this is one of the places where they do so. The TDP for the GTX 480 is 250W while it’s 215W for the GTX 470; meanwhile the cards idle at 47W and 33W respectively. NVIDIA’s large-die strategy usually leads to power-hungry parts, but from a historical perspective the GTX 480 is the hungriest single-GPU card yet; even the GTX 280 wasn’t quite as high. We’ll get into this more when we take a look at measured power consumption.

Comments

  • Ryan Smith - Wednesday, March 31, 2010 - link

    My master copies are labeled the same, but after looking at the pictures I agree with you; something must have gotten switched. I'll go flip things. Thanks.
  • Wesgoood - Wednesday, March 31, 2010 - link

    Correction: Nvidia retained their crown on AnandTech, even though some resolutions here favored ATI (mostly the higher ones). On Tom's Hardware the 5870 pretty much beat the GTX 480 from 1920x1200 to 2560x1600; not every time at 1920, but pretty much every single time at 2560.

    That ...is where the crown is, in the best of the best situations, not ....OMG it beat it in 1680 ...THAT HAS TO BE THE BEST!

    Plus the power-hungry state of this card is just appalling. Nvidia have shown they can't compete with proper technology, instead having to just cram everything they can onto a chip and pray it works right.

    Whereas ATI's GPU is designed well enough that they have plenty of room to almost double the size of the 5870.
  • efeman - Wednesday, March 31, 2010 - link

    I copied this over from a comment I made on a blog post.

    I've been with nVidia for the past decade. My brother built his desktop way back when with the Ti 4200, I bought a prefab with a 5950 Ultra, my last budget build had an 8600 GTS in it, and I upgraded to the GTX 275 last year. I am in no way a fanboy; nVidia has just treated me very well. If I had made that last decision a few months later, after the price hike, it would've definitely been the HD 4890; almost identical performance for a ballpark $100 less.

    I recently built a new high-end rig (Core i7 and all), but I waited out on dropping the money on a 5800 series card. I knew nVidia's new cards were on the way, and I was excited and willing to wait it out; I expected a lot out of them.

    Now that they're out in the open, I have to say I'm a little shaken. In many cases, the performance of the cards is not where I would've hoped it would be (the general consensus seems to be a 5-10% increase in performance over their ATI counterparts; I see that failing in many cases, however). It seems like the effort that nVidia put into the cards gave them lots of potential, but most of it is wasted.

    "The future of PC gaming" is right in the title of this post, and that's what these cards have been built for. Nvidia has a strong lead over ATI in compute and tessellation performance now, that's obvious; however, that will only provide useful if and when developers decide to put the extra effort into taking advantage of those technologies. Nvidia is gambling right now; it has already given ATI a half-year lead on the DX11 market, and it's pushing cards that won't be fully utilized until who-knows-when (there's no telling when these technologies will be more widely integrated into the gaming market). What will it do in the meantime? ATI is already on it's way to producing its 5000-series refresh; and this time it knows the competition's performance.

    I was hoping for the GTX 400s to do the same thing that the GTX 200s did: give nVidia back the high-end performance throne. ATI is not only competitive with its counterparts, but it still has the 5970 for the enthusiast performance crown (don't forget Eyefinity!). I think nVidia made a mistake in putting so much focus into compute and tessellation performance; it would've been smarter to produce cards with similar die sizes (crappy wafer yields, anyone?), faster raw performance with tessellation/compute as a secondary objective, and more competitive pricing. It wouldn't have been a bad option to create a separate chip for the Tesla cards, one that focused on compute performance while the GeForce cards focused on the rest.

    I still have faith. Maybe nVidia will work wonders with the drivers and produce the performance we were waiting for. Maybe it has something awesome brewing deep within its labs. Or maybe my fears will embody themselves, and nVidia is crossing its fingers and hoping for its tessellation/compute performance to give it the market share later on. If so, ATI will provide me with my pair of cards.

    That was quite the rant; I wasn't planning on writing that much when I decided to comment on Drew Henry's (nVidia GM) blog post. I suppose I'm passionate about this sort of thing, and I really hope nVidia doesn't lose me after all this time.
  • Kevinmbaron - Wednesday, March 31, 2010 - link

    The fact that this card comes out a year and a half after the GTX 295 makes me sick. Add to that the fact that the GTX 295 is actually faster than the GTX 480 in a few benchmarks and very close in others, and it's like a bad dream for Nvidia. Forget whether they can beat AMD; they can't even beat themselves. They could have done a die shrink on the GTX 295, added some more shaders, doubled the memory, and had that card out a year ago, and it would have crushed anything on the market. Instead they risked it all on a hare-brained new card. I am a GTX 295 owner. Apparently my card is an all-around better card, since it doesn't lag in some games like the 480 does. I guess I will stick with my old GTX 295 for another year. Maybe then there might be a card worth buying. Even the ATI 5970 doesn't have enough juice to justify a new purchase from me. This should be considered horrible news for Nvidia. They should be ashamed of themselves and the CEO should be asked to step down.
  • ol1bit - Thursday, April 1, 2010 - link

    I just snagged a 5870 gen 2 I think (XFX) from NewEgg.

    They have been hard to find in stock, and they are out again.

    I think many were waiting to see if the GF100 was a cruel joke or not. I am sorry for Nvidia, but I love the competition. I hope Nvidia will survive.

    I'll bet they are burning the midnight oil for gen 2 of the GF100.
  • bala_gamer - Friday, April 2, 2010 - link

    Did you guys receive the GTX 480 earlier than other reviewers? There were 17 cards tested on 3 drivers, and I am assuming tests were done multiple times per game to get an average, plus installing and reinstalling drivers, etc. The 10.3 Catalyst drivers came out the week of March 18.

    Do you guys have multiple computers benchmarking at the same time? I just cannot imagine how the tests were all done within the time frame.
  • Ryan Smith - Sunday, April 4, 2010 - link

    Our cards arrived on Friday the 19th, and in reality we didn't start real benchmarking until Saturday. So all of that was done in roughly a 5 day span. In true AnandTech tradition, there wasn't much sleep to be had that week. ;-)
  • mrbig1225 - Tuesday, April 6, 2010 - link

    I felt compelled to say a few things about nvidia's Fermi (480/470 GTX). I like to always start out by saying... let's take the fanboyism out of the equation and look at the facts. I am a huge nvidia fan, however they dropped the ball big time. They are selling people on ONE aspect of DX11 (tessellation), and that's really the only thing their cards do well, but it's not an efficient design. What people aren't looking at is that their tessellation is done by the PolyMorph engine, which ties directly into the CUDA cores, meaning the more CUDA cores occupied by shader processing etc., the less tessellation performance and vice versa = fewer frames per second.

    As you noticed, we see tons of tessellation benchmarks that show the GTX 480 is substantially faster at tessellation; I agree, when the conditions suit that type of architecture (and there isn't a lot of other things going on). We know that the GF100 (480/470 GTX) is a computing beast, but I don't believe that will equate to overall gaming performance. The facts are: this GPU is huge (3 billion+ transistors), creates a boatload of heat, sucks up more power than any of the latest dual-GPU cards (295 GTX, 5970), came to market 6 months late, and is only faster than its single-GPU competition by 10-15%, and some of us are happy?

    Oh, that's right, it will be faster in the future when DX11 is relevant... I don't think so, for a few reasons, but I'll name two. If you look at the current crop of DX11 games, the benchmarks and actual DX11 game benchmarks (shaders and tessellation etc.) show something completely different. I think if tessellation were nvidia's trump card in games, then the 5800 series would basically be beaten substantially in any DX11 title with tessellation turned on. We aren't seeing that (we are seeing the opposite in some circumstances), and I don't think we will. I am also fully aware that tessellation is scalable, but that brings me to another point. I know many of you will say that it is only in extreme tessellation environments that we really start to see nvidia's cards take off. Well, if you agree with that statement then you will see that nvidia has another issue. The first is the way they implement tessellation in their cards (not very scalable imo). The second is that video card industry sales are not comprised of high-end GPUs, but the cheaper mainstream ones. Since nvidia's PolyMorph engine is tied directly to its shaders, you kinda see where this is going: basically, less powerful cards will be bottlenecked by their lack of shaders for tessellation, and vice versa.

    Developers want to make money, and the way they make money is selling lots of games. Crysis, for example, was a big game, however it didn't break any sales records; the truth of the matter is most people's systems couldn't run Crysis. Now you look at Valve Software, and a lot of their titles sell well because of how friendly they are to mainstream GPUs (not the only reason, but it does help). The hardware has to be there to support a large number of game sales, meaning that if the majority of parts cannot do extreme levels of tessellation, then you will find few games that implement it. Food for thought... can anyone show me a DX11 title where the GTX 480 handily beats the 5870 by the same amount that it does in the Heaven benchmark, or even close to that? I think, as a few of you have said, it will come down to which games work better with which architecture; some will benefit nvidia (Far Cry 2 is a good example), others ATI (Stalker). I think that is what we are seeing now. IMO.

    P.S. I think also why people are pissed is because this card was stated to be 60% faster than the 5870. As you can see, it's not!!
  • houkouonchi - Thursday, April 8, 2010 - link

    Why the hell are the screenshots showing off the AA results in a lossy JPEG format instead of PNG like pretty much anything else?
  • dzmcm - Monday, April 12, 2010 - link

    I'm not familiar with Battleforge firsthand, but I understood it uses HD Ambient Occlusion, which is a variation of Screen Space Ambient Occlusion that includes normal maps. And since its inception in Crysis, SSAO has stood for Screen Space AO. So why is it called Self Shadow AO in this article?

    Bit-tech refers to Stalker:CoP's SSAO as "Soft Shadow." That I'm willing to dismiss. But I think they're wrong.

    Am I falling behind with my jargon, or are you guys not bothering to keep up?
