Graphics Chip Die Sizes

Finally, below are our rough estimates and calculations for some die sizes. Lines in bold indicate chips for which we have a relatively accurate die size, so those are not pure estimates. (A short sketch of how the estimates are calculated follows the tables.)

NVIDIA Die Sizes
(Columns: card, chip, transistor count in millions, process size in nm, two estimation factors, and estimated die size in mm².)
DirectX 9.0C with PS3.0 and VS3.0 Support
GF 6600 NV43 146 110 9.50 8 159
GF 6600GT NV43 146 110 9.50 8 159
GF 6800LE NV40 222 130 8.75 8 287
GF 6800 NV40 222 130 8.75 8 287
GF 6800GT NV40 222 130 8.75 8 287
GF 6800U NV40 222 130 8.75 8 287
GF 6800UE NV40 222 130 8.75 8 287
DirectX 9 with PS2.0+ and VS2.0+ Support
GFFX 5200LE NV34 45 150 9.50 8 91
GFFX 5200 NV34 45 150 9.50 8 91
GFFX 5200U NV34 45 150 9.50 8 91
GFFX 5500 NV34 45 150 9.50 8 91
GFFX 5600XT NV31 80 130 10.00 8 135
GFFX 5600 NV31 80 130 10.00 8 135
GFFX 5600U NV31 80 130 10.00 8 135
GFFX 5700LE NV36 82 130 9.50 8 125
GFFX 5700 NV36 82 130 9.50 8 125
GFFX 5700U NV36 82 130 9.50 8 125
GFFX 5700UDDR3 NV36 82 130 9.50 8 125
GFFX 5800 NV30 125 130 10.00 8 211
GFFX 5800U NV30 125 130 10.00 8 211
GFFX 5900XT/SE NV35 135 130 9.50 8 206
GFFX 5900 NV35 135 130 9.50 8 206
GFFX 5900U NV35 135 130 9.50 8 206
GFFX 5950U NV38 135 130 9.50 8 206
DirectX 8 with PS1.3 and VS1.1 Support
GF3 Ti200 NV20 57 150 10.00 8 128
GeForce 3 NV20 57 150 10.00 8 128
GF3 Ti500 NV20 57 150 10.00 8 128
GF4 Ti4200 128 NV25 63 150 10.00 8 142
GF4 Ti4200 64 NV25 63 150 10.00 8 142
GF4 Ti4200 8X NV25 63 150 10.00 8 142
GF4 Ti4400 NV25 63 150 10.00 8 142
GF4 Ti4600 NV25 63 150 10.00 8 142
GF4 Ti4800 NV25 63 150 10.00 8 142
GF4 Ti4800 SE NV25 63 150 10.00 8 142
DirectX 7
GeForce 256 SDR NV10 23 220 10.00 8 111
GeForce 256 DDR NV10 23 220 10.00 8 111
GF2 MX200 NV11 20 180 10.00 8 65
GF2 MX NV11 20 180 10.00 8 65
GF2 MX400 NV11 20 180 10.00 8 65
GF2 GTS NV15 25 180 10.00 8 81
GF2 Pro NV15 25 180 10.00 8 81
GF2 Ti NV15 25 150 10.00 8 56
GF2 Ultra NV15 25 180 10.00 8 81
GF4 MX420 NV17 29 150 10.00 8 65
GF4 MX440 SE NV17 29 150 10.00 8 65
GF4 MX440 NV17 29 150 10.00 8 65
GF4 MX440 8X NV18 29 150 10.00 8 65
GF4 MX460 NV17 29 150 10.00 8 65
 
ATI Die Sizes
(Same columns as above.)
DirectX 9 with PS2.0b and VS2.0 Support
X800 SE? R420 160 130 9.75 8 257
X800 Pro R420 160 130 9.75 8 257
X800 GT? R420 160 130 9.75 8 257
X800 XT R420 160 130 9.75 8 257
X800 XT PE R420 160 130 9.75 8 257
DirectX 9 with PS2.0 and VS2.0 Support
9500 R300 107 150 9.00 8 195
9500 Pro R300 107 150 9.00 8 195
9550 SE RV350 75 130 8.50 8 92
9550 RV350 75 130 8.50 8 92
9600 SE RV350 75 130 8.50 8 92
9600 RV350 75 130 8.50 8 92
9600 Pro RV350 75 130 8.50 8 92
9600 XT RV360 75 130 8.50 8 92
9700 R300 107 150 9.00 8 195
9700 Pro R300 107 150 9.00 8 195
9800 SE R350 115 150 9.00 8 210
9800 R350 115 150 9.00 8 210
9800 Pro R350 115 150 9.00 8 210
9800 XT R360 115 150 9.00 8 210
X300 SE RV370 75 110 9.00 8 74
X300 RV370 75 110 9.00 8 74
X600 Pro RV380 75 130 8.50 8 92
X600 XT RV380 75 130 8.50 8 92
DirectX 8.1 with PS1.4 and VS1.1 Support
8500 LE R200 60 150 10.00 8 135
8500 R200 60 150 10.00 8 135
9000 RV250 36 150 10.00 8 81
9000 Pro RV250 36 150 10.00 8 81
9100 R200 60 150 10.00 8 135
9100 Pro R200 60 150 10.00 8 135
9200 SE RV280 36 150 10.00 8 81
9200 RV280 36 150 10.00 8 81
9200 Pro RV280 36 150 10.00 8 81
DirectX 7
Radeon VE RV100 30? 180 10.00 8 97
7000 PCI RV100 30? 180 10.00 8 97
7000 AGP RV100 30? 180 10.00 8 97
Radeon LE R100 30 180 10.00 8 97
Radeon SDR R100 30 180 10.00 8 97
Radeon DDR R100 30 180 10.00 8 97
7200 R100 30 180 10.00 8 97
7500 LE RV200 30 150 10.00 8 68
7500 AIW RV200 30 150 10.00 8 68
7500 RV200 30 150 10.00 8 68
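
For what it's worth, the estimated die sizes in the tables are consistent with a very simple model: treat each transistor as a square whose side is the process size multiplied by the factor listed right after it in each row, then multiply that area by the transistor count. The short Python sketch below reproduces several of the rows; the function name and this reading of the numbers are ours, not an official method from either company.

def estimate_die_size(transistors_millions, process_nm, size_factor):
    # Model each transistor as a square of side (size_factor * process size)
    # and sum the area over all transistors. Result is in mm^2.
    side_um = size_factor * process_nm / 1000.0
    area_um2 = transistors_millions * 1e6 * side_um ** 2
    return area_um2 / 1e6  # 1 mm^2 = 1,000,000 um^2

# A few rows from the tables (transistors in millions, process in nm, factor):
print(estimate_die_size(222, 130, 8.75))  # NV40 -> ~287 mm^2
print(estimate_die_size(146, 110, 9.50))  # NV43 -> ~159 mm^2
print(estimate_die_size(160, 130, 9.75))  # R420 -> ~257 mm^2
print(estimate_die_size(107, 150, 9.00))  # R300 -> ~195 mm^2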

After all that, we finally get to the chart of die sizes. That was a lot of work for what might be considered a small reward, but there is a reason for all this talk of die sizes. Looking at the history of modern GPUs in these charts, one thing stands out: die sizes on the high-end parts are growing exponentially. That is not a good thing at all.

AMD and Intel processors vary in size over time, depending on transistor counts, process technology, and so on. However, both companies try to target a "sweet spot" in die size that maximizes yields and profits. Smaller is almost always better, all other things being equal, with ideal sizes generally falling somewhere between 80 mm² and 120 mm². Larger dies mean fewer chips per wafer, and each individual chip is more likely to contain a defect, which lowers yields. There is also a fixed cost per wafer, so whether you get 50 or 500 chips out of a wafer, the cost remains the same. ATI and NVIDIA do not necessarily incur these costs directly, but their fabrication partners do, and it still affects chip output and availability. Let's look at this a little closer, though.

On 300 mm wafers, you have a total surface area of 70,686 mm² (π × r², with r = 150 mm). With a 130 mm² chip, you could get approximately 500 chips out of a wafer, of which a certain percentage will have flaws. With a 200 mm² chip, you could get about 320 chips, again with a certain percentage having flaws. With a chip of around 280 mm², like the NV40 and R420, we're down to about 230 chips per wafer. So just in terms of the total number of dies to test, we can see why larger die sizes are undesirable. Let's talk about the flaws, though.
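
As a sanity check on those counts, a commonly used approximation for gross dies per wafer takes the wafer area divided by the die area and subtracts a term for the partial dies lost around the edge. The Python sketch below uses that approximation; actual counts also depend on die aspect ratio, scribe lines, and edge exclusion, so treat the output as ballpark figures only.

import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Wafer area / die area, minus a correction for partial dies at the edge.
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d ** 2 / (4 * s) - math.pi * d / math.sqrt(2 * s))

for die_area in (130, 200, 280):
    print(die_area, "mm^2:", gross_dies_per_wafer(300, die_area), "dies per 300 mm wafer")
# Prints roughly 485, 306 and 212 - the same ballpark as the ~500, ~320 and ~230 above.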

The percentage of chips on a wafer that are good is called the yield. There is an average number of flaws on any wafer, distributed more or less evenly. Each flaw will normally ruin one chip, although with large numbers of flaws you could get several defects on a single chip. As an example, let's say there are on average 50 flaws per wafer, which means roughly 50 failed chips on each wafer. Going back to the chip sizes and maximum die counts listed above, we can now get an estimated yield. With 130 mm² dies, we lose about 50 out of 500, so the yield would be 90%, which is very good. With 200 mm² dies, we lose about 50 out of 320, and the yield drops to 84%. On the large 280 mm² dies, we lose 50 out of 230, and the yield drops to 78%. Those are just examples, as we don't know the exact details of the TSMC and IBM fabrication plants, but they should suffice to illustrate why large die sizes are not at all desirable.
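
Those yield numbers follow directly from the simplified "one flaw ruins one die" model described above. A minimal sketch, using the same 50 flaws per wafer and the approximate die counts from the text:

def simple_yield(gross_dies, flaws_per_wafer=50):
    # Simplified model from the text: each flaw ruins exactly one die.
    good = gross_dies - flaws_per_wafer
    return good, good / gross_dies

for die_area, gross in ((130, 500), (200, 320), (280, 230)):
    good, y = simple_yield(gross)
    # Bigger dies get hit twice: fewer candidates per wafer and a lower yield.
    print(f"{die_area} mm^2: {good} good dies out of {gross}, yield ~{y:.0%}")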

Now, look at the die size estimates, and you'll see that we have gone from a typical high-end die size of roughly 100 mm² on the NV10 and R100 in late 1999, to around 200 mm² on the R300 in mid 2002, and now to around 280 mm² in mid 2004. Shrinking to 90 nm process technology would reduce die sizes by about half compared to 130 nm, but AMD is only now getting its 90 nm parts out, and it may be over a year before 90 nm becomes available to fabless companies like ATI and NVIDIA. It's going to be interesting to see how the R5xx and NV5x parts shape up, as simply increasing the number of vertex and pixel pipelines beyond current levels is going to be difficult without a shift to a 90 nm process.
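
The "about half" figure is just area scaling: a straight linear shrink from 130 nm to 90 nm multiplies die area by (90/130)², or roughly 0.48, assuming the design itself does not change. A quick worked example:

shrink = (90 / 130) ** 2   # area scale factor for a straight 130 nm -> 90 nm shrink
print(round(shrink, 2))    # ~0.48
print(round(287 * shrink)) # an NV40-sized 287 mm^2 die would drop to roughly 138 mm^2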

All is not lost, however. Looking at the mid-range market, you can see how these parts manage to be priced lower, allowing them to sell in larger volumes. Most of these parts stay under 150 mm², and quite a few remain under 100 mm². It's no surprise that ATI and NVIDIA sell many more mid-range and low-end parts than high-end parts; few people outside of dedicated gamers want to spend $500 on a graphics card when they could build an entire computer for that price. Really, though, these parts can afford to be mid-range because of their smaller dies, while the high-end parts have to be priced in the high-end segment. Smaller dies bring higher yields and higher supply, resulting in lower prices; larger dies bring lower yields and lower supply, so prices go up. We see this especially early in a product's life: when demand for new cards is high enough, we get situations like the recent 6800 and X800 launches, where parts sell for well over MSRP.

Comments

  • JarredWalton - Thursday, October 28, 2004 - link

    43 - It should be an option somewhere in the ATI Catalyst Control Center. I don't have an X800 of my own to verify this on, not to mention a lack of applications which use this feature. My comment was more tailored towards people that don't read hardware sites. Typical users really don't know much about their hardware or how to adjust advanced settings, so the default options are what they use.
  • Thera - Tuesday, October 19, 2004 - link

    You say SM2.0b is disabled and consumers don't know how to turn it on. Can you tell us how to enable SM2.0b?

    Thank you.

    (cross posted from video forum)
  • endrebjorsvik - Wednesday, September 15, 2004 - link

    WOW!! Very nice article!!

    Does anyone have all this data collected in an Excel file or something?
  • JarredWalton - Sunday, September 12, 2004 - link

    Correction to my last post. KiB and MiB and such are meant to be used for size calculations, and then KB and MB can be used for bandwidth calculations. Now the first paragraph (and my gripe) should be a little more clear if you didn't understand it already. Basically, the *bandwidth* companies (hard drives, and to a lesser extent RAM companies advertising bandwidth) proposed that their incorrect calculations stand and that those who wanted to use the old computer calculations should change.

    There are problems, however. HDD and RAM both continue to use both calculations. RAM uses the simplified KB and MB for bandwidth, but the accepted KB and MB (KiB and MiB now) for size. HDD uses the simplified KB and MB for size, but then they use the other KB and MB for sustained transfer rates. So, the proposed change not only failed to address the problem, but the proposers basically continue in the same way as before.
  • JarredWalton - Saturday, September 11, 2004 - link

    #38 - there are quite a few cards/chips that were only available in very limited quantities.

    39 - Actually, that is only partially true. KibiBytes and MibiBytes are a *proposed* change as far as I am aware, and they basically allow the HDD and RAM people to continue with their simplified calculations. I believe that KiB and MiB are meant for bandwidths, however, and not memory sizes. The problem is that MB and KB were in existence long before KiB and MiB were proposed. Early computers with 8 KB of RAM (over 40 years ago) had 8192 bytes of RAM, not 8000 bytes. When you buy a 512 MB DIMM, it is 512 * 1048576 bytes, not 512 * 1000000 bytes.

    If a new standard is to be adopted for abbreviations, it is my personal opinion that the parties who did not conform to the old standard are the ones that should change. Since I often look at the low-level details of processors and GPUs and such, I do not want to have two different meanings of the same thing, which is what we currently have. Heck, there was even a class action lawsuit against hard drive manufacturers a while back about this "lie". That was the solution: the HDD people basically said, "We're right and in the future 2^10 = KiB, 2^20 = MiB, 2^30 = GiB, etc." Talk about not taking responsibility for your actions....

    It *IS* a minor point for most people, and relative performance is still the same. Basically, this is one of my pet peeves. It would be like saying, "You know what, 5280 feet per mile is inconvenient. Even though it has been this way for ages, let's just call it 5000 feet per mile." I have yet to see any hardware manufacturers actually use KiB or MiB as an abbreviation, and software that has been around for decades still thinks that a KB is 1024 bytes and a MB is 1048576 bytes.
  • Bonta - Saturday, September 11, 2004 - link

    Jarred, you were wrong about the abbreviation MB.
    1 MB is 1 mega Byte is (1000*1000) Bytes is 1000000 Bytes is 1 million Bytes.
    1 MiB is (1024*1024) Bytes is 1048576 Bytes.

    So the vid card makers (and the hard drive makers) actually have it right, and can keep smiling. It is the people that think 1MB is 1048576 Bytes that have it wrong. I can't pronounce or spell 1 MiB correctly, but it is something like 1 mibiBytes.
  • viggen - Friday, September 10, 2004 - link

    Nice article, but what's up with the 9200 Pro running at 300 MHz for core & memory? I don't remember ATI having such a card.
  • JarredWalton - Wednesday, September 8, 2004 - link

    Oops... I forgot the link from Quon. Here it is:

    http://www.appliedmaterials.com/HTMAC/index.html

    It's somewhat basic, but at the same time, it covers several things my article left out.
  • JarredWalton - Wednesday, September 8, 2004 - link

    I received a link from Matthew Quon containing a recent presentation on the whole chip fabrication process. It includes details that I omitted, but in general it supports my abbreviated description of the process.

    #34: Yes, there are errors that are bound to slip through. This is especially true on older parts. However, as you point out, several of the older chips were offered in various speed grades, which only makes it more difficult. Several of the as-yet unreleased parts may vary, but on the X700 and 6800LE, that's the best info we have right now. The vertex pipelines are *not* tied directly to the pixel quads, so disabling 1/4 or 1/2 of the pixel pipelines does not mean they *have* to disable 1/4 or 1/2 of the vertex pipelines. According to T8000, though, the 6800LE is a 4 vertex pipeline card.

    Last, you might want to take note of the fact that I have written precisely 3 articles for Anandtech. I live in Washington, while many of the other AT people are back east. So, don't count on everything being reviewed by every single AT editor - we're only human. :)

    (I'm working on some updates and corrections, which will hopefully be posted in the next 24 hours.)
  • T8000 - Wednesday, September 8, 2004 - link

    I think it is very good to put the facts together in such a review.

    I did notice three things, however:

    1: I have a GF6800LE and it has 4 enabled vertex pipes instead of 5 and comes with a 300/700 gpu/mem clock.

    2: Since gpu clock speeds did not increase much, they had to add more features (like pipelines) to increase performance.

    3: GPU defects are less of an issue than CPU defects, since a lot of large GPUs offered the luxury of disabling parts, so that most defective GPUs can still be sold. As far as I know, this feature has never made it into the CPU market.
