Graphics Chip Die Sizes

Finally, below are our rough estimates and calculations for some die sizes. Each row lists the card, the core it uses, the transistor count in millions, the process size in nanometers, the scaling factors used in our size calculation, and the resulting die size in mm2. Lines in bold indicate chips for which we have a relatively accurate die size, so those are not pure estimates.

Nvidia Die Sizes
DirectX 9.0C with PS3.0 and VS3.0 Support
GF 6600 NV43 146 110 9.50 8 159
GF 6600GT NV43 146 110 9.50 8 159
GF 6800LE NV40 222 130 8.75 8 287
GF 6800 NV40 222 130 8.75 8 287
GF 6800GT NV40 222 130 8.75 8 287
GF 6800U NV40 222 130 8.75 8 287
GF 6800UE NV40 222 130 8.75 8 287
DirectX 9 with PS2.0+ and VS2.0+ Support
GFFX 5200LE NV34 45 150 9.50 8 91
GFFX 5200 NV34 45 150 9.50 8 91
GFFX 5200U NV34 45 150 9.50 8 91
GFFX 5500 NV34 45 150 9.50 8 91
GFFX 5600XT NV31 80 130 10.00 8 135
GFFX 5600 NV31 80 130 10.00 8 135
GFFX 5600U NV31 80 130 10.00 8 135
GFFX 5700LE NV36 82 130 9.50 8 125
GFFX 5700 NV36 82 130 9.50 8 125
GFFX 5700U NV36 82 130 9.50 8 125
GFFX 5700UDDR3 NV36 82 130 9.50 8 125
GFFX 5800 NV30 125 130 10.00 8 211
GFFX 5800U NV30 125 130 10.00 8 211
GFFX 5900XT/SE NV35 135 130 9.50 8 206
GFFX 5900 NV35 135 130 9.50 8 206
GFFX 5900U NV35 135 130 9.50 8 206
GFFX 5950U NV38 135 130 9.50 8 206
DirectX 8 with PS1.3 and VS1.1 Support
GF3 Ti200 NV20 57 150 10.00 8 128
GeForce 3 NV20 57 150 10.00 8 128
GF3 Ti500 NV20 57 150 10.00 8 128
GF4 Ti4200 128 NV25 63 150 10.00 8 142
GF4 Ti4200 64 NV25 63 150 10.00 8 142
GF4 Ti4200 8X NV25 63 150 10.00 8 142
GF4 Ti4400 NV25 63 150 10.00 8 142
GF4 Ti4600 NV25 63 150 10.00 8 142
GF4 Ti4800 NV25 63 150 10.00 8 142
GF4 Ti4800 SE NV25 63 150 10.00 8 142
DirectX 7
GeForce 256 SDR NV10 23 220 10.00 8 111
GeForce 256 DDR NV10 23 220 10.00 8 111
GF2 MX200 NV11 20 180 10.00 8 65
GF2 MX NV11 20 180 10.00 8 65
GF2 MX400 NV11 20 180 10.00 8 65
GF2 GTS NV15 25 180 10.00 8 81
GF2 Pro NV15 25 180 10.00 8 81
GF2 Ti NV15 25 150 10.00 8 56
GF2 Ultra NV15 25 180 10.00 8 81
GF4 MX420 NV17 29 150 10.00 8 65
GF4 MX440 SE NV17 29 150 10.00 8 65
GF4 MX440 NV17 29 150 10.00 8 65
GF4 MX440 8X NV18 29 150 10.00 8 65
GF4 MX460 NV17 29 150 10.00 8 65
 
ATI Die Sizes
DirectX 9 with PS2.0b and VS2.0 Support
X800 SE? R420 160 130 9.75 8 257
X800 Pro R420 160 130 9.75 8 257
X800 GT? R420 160 130 9.75 8 257
X800 XT R420 160 130 9.75 8 257
X800 XT PE R420 160 130 9.75 8 257
DirectX 9 with PS2.0 and VS2.0 Support
9500 R300 107 150 9.00 8 195
9500 Pro R300 107 150 9.00 8 195
9550 SE RV350 75 130 8.50 8 92
9550 RV350 75 130 8.50 8 92
9600 SE RV350 75 130 8.50 8 92
9600 RV350 75 130 8.50 8 92
9600 Pro RV350 75 130 8.50 8 92
9600 XT RV360 75 130 8.50 8 92
9700 R300 107 150 9.00 8 195
9700 Pro R300 107 150 9.00 8 195
9800 SE R350 115 150 9.00 8 210
9800 R350 115 150 9.00 8 210
9800 Pro R350 115 150 9.00 8 210
9800 XT R360 115 150 9.00 8 210
X300 SE RV370 75 110 9.00 8 74
X300 RV370 75 110 9.00 8 74
X600 Pro RV380 75 130 8.50 8 92
X600 XT RV380 75 130 8.50 8 92
DirectX 8.1 with PS1.4 and VS1.1 Support
8500 LE R200 60 150 10.00 8 135
8500 R200 60 150 10.00 8 135
9000 RV250 36 150 10.00 8 81
9000 Pro RV250 36 150 10.00 8 81
9100 R200 60 150 10.00 8 135
9100 Pro R200 60 150 10.00 8 135
9200 SE RV280 36 150 10.00 8 81
9200 RV280 36 150 10.00 8 81
9200 Pro RV280 36 150 10.00 8 81
DirectX 7
Radeon VE RV100 30? 180 10.00 8 97
7000 PCI RV100 30? 180 10.00 8 97
7000 AGP RV100 30? 180 10.00 8 97
Radeon LE R100 30 180 10.00 8 97
Radeon SDR R100 30 180 10.00 8 97
Radeon DDR R100 30 180 10.00 8 97
7200 R100 30 180 10.00 8 97
7500 LE RV200 30 150 10.00 8 68
7500 AIW RV200 30 150 10.00 8 68
7500 RV200 30 150 10.00 8 68

After all that, we finally get to the chart of die sizes. That was a lot of work for what might be considered a small reward, but there is a reason for all this talk of die sizes. Looking at the history of modern GPUs in the charts, one thing should stand out: die sizes are increasing exponentially on the high-end parts. This is not a good thing at all.

AMD and Intel processors vary in size over time, depending on transistor counts, process technology, etc. However, both companies try to target a "sweet spot" in size that maximizes yields and profits. Smaller is almost always better, all other things being equal, with ideal sizes generally falling somewhere between 80 mm2 and 120 mm2. Larger die sizes mean fewer chips per wafer, and each individual chip is more likely to contain a defect, decreasing yields. There is also a set cost per wafer, so whether you get 50 or 500 chips out of the wafer, the cost remains the same. ATI and NVIDIA do not necessarily incur these costs directly, but their fabrication partners do, and it still affects chip output and availability. Let's look at this a little closer, though.

On 300 mm wafers, you have a total surface area of 70,686 mm2 (pi * r^2, with r = 150 mm). If you have a 130 mm2 chip, you could get approximately 500 chips out of a wafer, of which a certain percentage will have flaws. If you have a 200 mm2 chip, you could get about 320 chips, again with a certain percentage having flaws. With a 280 mm2 chip like the NV40 and R420, we're down to about 230 chips per wafer. So just in terms of the total number of dies to test, we can see how larger die sizes are undesirable. Let's talk about the flaws, though.
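Those per-wafer numbers can be reproduced with a short calculation. This is only a sketch of the article's arithmetic, not the fabs' actual math: the `edge_loss` fudge factor for partial dies lost around the wafer edge is an assumption, chosen so the results land near the approximate figures above.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300, edge_loss=0.08):
    """Rough gross die count: wafer area divided by die area, reduced by
    a fudge factor for partial dies lost around the wafer edge.
    The 8% edge-loss figure is an assumption, not a measured value."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2  # ~70,686 mm^2 for a 300 mm wafer
    return int(wafer_area / die_area_mm2 * (1 - edge_loss))

for size_mm2 in (130, 200, 280):
    print(f"{size_mm2} mm^2 die -> ~{dies_per_wafer(size_mm2)} dies per wafer")
```

With these assumptions, the function returns roughly 500, 325, and 232 dies for the three die sizes, close to the ~500, ~320, and ~230 figures quoted above.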

The percentage of chips on a wafer that are good is called the yield. Basically, there is an average number of flaws on any wafer, distributed more or less evenly. That being the case, each flaw will normally ruin one chip, although with large numbers of flaws, you could get several defects in a single chip. As an example, let's say there are on average 50 flaws per wafer, so there will typically be 50 failed chips on each wafer. Going back to the chip sizes and maximum dies listed above, we can now get an estimated yield. With 130 mm2 dies, we lose about 50 out of 500, so the yield would be 90%, which is very good. With 200 mm2 dies, we lose about 50 out of 320, and the yield drops to 84%. On the large 280 mm2 dies, we lose 50 out of 230, and yield drops to 78%. Those are just examples, as we don't know the exact details of the TSMC and IBM fabrication plants, but they should suffice to illustrate how large die sizes are not at all desirable.
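The yield arithmetic above is easy to check. Here is a minimal sketch of the article's simple model, under the stated assumption that each of 50 flaws per wafer kills exactly one die:

```python
def simple_yield(gross_dies, flaws_per_wafer=50):
    """The article's simple model: each flaw ruins one die,
    so yield = (gross dies - flawed dies) / gross dies."""
    return (gross_dies - flaws_per_wafer) / gross_dies

for gross in (500, 320, 230):
    print(f"{gross} dies per wafer -> {simple_yield(gross):.0%} yield")
```

This prints 90%, 84%, and 78% for the three cases, matching the figures in the text. A real fab would model flaws per unit area (larger dies catch proportionally more defects), which makes big dies look even worse than this simple per-wafer count suggests.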

Now, look at the die size estimates, and you'll see that we have gone from a typical high-end die size of around 100 mm2 with the NV10 and R100 in late 1999, to around 200 mm2 with the R300 in mid-2002, to around 280 mm2 in mid-2004. Shrinking to 90 nm process technology would reduce die sizes by about half compared to 130 nm, but AMD is only now getting its 90 nm parts out, and it may be over a year before 90 nm becomes available to fabless companies like ATI and NVIDIA. It will be interesting to see how the R5xx and NV5x parts shape up, as simply increasing the number of vertex and pixel pipelines beyond current levels is going to be difficult without shifting to a 90 nm process.
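The "about half" figure follows from ideal area scaling, where die area shrinks with the square of the linear feature size. Real designs rarely shrink perfectly, so treat this as a best-case bound:

```python
# Ideal area scaling from 130 nm to 90 nm: area scales with the
# square of the linear feature size (a best-case assumption).
scale = (90 / 130) ** 2
print(f"A 130 nm die ported to 90 nm would ideally be ~{scale:.0%} of its original area")
```

The ratio comes out to roughly 0.48, i.e. just under half the original area.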

All is not lost, however. Looking at the mid-range market, you can see how these parts manage to be priced lower, allowing them to sell in larger volumes. Most of them stay under 150 mm2, and quite a few remain under 100 mm2. It's no surprise that ATI and NVIDIA sell many more of their mid-range and low-end parts than high-end parts, since few non-gamers have any desire to spend $500 on a graphics card when they could build an entire computer for that price. Really, though, these parts can afford to be mid-range, while the high-end parts have no choice but to occupy that segment. Smaller sizes bring higher yields and higher supply, resulting in lower prices. Conversely, larger sizes bring lower yields and lower supply, so prices go up. We see this especially early in a product's life: if demand for the new cards is great enough, we get instances like the recent 6800 and X800 cards selling for well over MSRP.

Comments

  • MODEL 3 - Wednesday, September 8, 2004 - link

    A lot of mistakes for a professional hardware review site the size of AnandTech. I will only mention the definite mistakes, since I have doubts about more, and I am actually surprised by the number of mistakes in this article. I mean, since I live in Greece (not the center of the world in 3D technology or the hardware market), I always thought that the editors at the best hardware review sites in the world (like AnandTech) have at least basic knowledge of the technology, and that they research and double-check whether their articles are correct. I mean, they get paid, right? If I can find their mistakes so easily (I have no technology-related degree, although I was a purchasing and product manager at the best Greek IT companies), they must be doing something very, very wrong indeed. Now, on to the mistakes:
    ATI :
    X700 6 vertex pipelines: Actually, this may be no mistake, since I have no information about this new part, but it seems strange that the X700 would have the same number of vertex pipelines (6) as the X800 XT. I would guess half as many (3) (like 6800 Ultra vs. 6600 GT) or twice as many as the X600 (4) would be more logical. We will see.
    Radeon VE 183/183: The actual speed was 166/166 SDR 128-bit for ATI parts, and as low as 143/143 for 3rd-party bulk parts.
    Radeon 7000 PCI 166/333: The actual speed was 166/166 SDR 128-bit for ATI parts, and as low as 143/143 for 3rd-party bulk parts (note that AnandTech suggests 166 DDR, while the correct value is 166 SDR).
    Radeon 7000 AGP 183/366 32/64(MB): The actual speed was 166/166 SDR for ATI parts, and as low as 143/143 for 3rd-party bulk parts (note that AnandTech suggests 166 DDR, while the correct value is 166 SDR). Also, at launch and for a whole year afterward (if ever), a 64MB part didn't exist.
    Radeon 7200 64-bit RAM bus: The 7200 was exactly the same as the Radeon DDR, so the RAM bus width was 128-bit.
    ATI has unofficial DX9 with SM2.0b support: Actually, ATI has official DX 9.0b support, and Microsoft certified this "in between" version of DX9. When they enable their 2.0b features, they don't fail WHQL compliance, since 2.0b is an official Microsoft version (get it?). Features like 3Dc normal map compression are activated only in OpenGL mode, but 3Dc compression is not part of DX 9.0b.
    NVIDIA:
    GF 6800LE with 8 pixel pipelines has, according to AnandTech, 5 vertex pipelines: Actually, this may be no mistake, since I have no information about this part, but since the 6800 GT/Ultra is built with four (4) quads of 4 pixel pipelines each, isn't it more logical for the 6800LE, with half the quads, to have half the pixel pipelines (8) AND half the vertex pipelines (3)?
    GFFX 5700 3 vertex pipelines: The GFFX 5700 has half the number of pixel AND vertex pipelines of the 5900, so if you convert the vertex array of the 5900 into 3 vertex pipes (which is correct), then the 5700 would have 1.5.
    GF4 4600 300/600: The actual speed is 300/325 DDR 128-bit.
    GF2MX 175/333: The actual speed is 175/166 SDR 128-bit.
    GF4MX series 0.5 vertex shader: Actually, the GF4MX series had twice the vertex shader capability of the GF2, so the correct number of vertex shaders is 1.
    According to AnandTech, the GF3 cards only show a slight performance increase over the GF2 Ultra, and only in more recent games: Actually, the GF3 (Q1 '01) was based on 0.18 micron technology, and yields were extremely low. In reality, GF3 parts arrived in acceptable quantities in Q3 '01 with the GF3 Ti series on 0.15 micron technology. If you check performance in OpenGL games from Q3 '01 onward, and in DX8 games from Q3 '02 onward, you will clearly see the GF3 delivering double the performance of the GF2, clock for clock (GF3 Ti500 vs. GF2 Ultra).

    Now, the rest of the article is not bad and I also appreciate the effort.
  • JarredWalton - Wednesday, September 8, 2004 - link

    Sorry, ViRGE - I actually took your suggestion to heart and updated page 3 initially, since you are right about it being more common. However, I forgot to modify the DX7 performance charts. There are probably quite a few other corrections that should be made as well....
  • ViRGE - Tuesday, September 7, 2004 - link

    Jarred, like I said, you're technically right that the GF2 MX could be outfitted with either 128-bit SDR or 64-bit SDR/DDR, but you said it yourself that the cards were mostly 128-bit SDR. Obviously any change won't have an impact, but in my humble opinion, it would be best to change the GF2 MX entry to better represent what historically happened, so that if someone uses this chart as a reference for a GF2 MX, they're more likely to be getting the "right" data.
  • BigLan - Tuesday, September 7, 2004 - link

    Good job with the article

    Love the office reference...

    "Can I put it in my mouth?"
  • darth_beavis - Tuesday, September 7, 2004 - link

    Sorry, now it's suddenly working. I don't know what my problem is (but I'm sure it's hard to pronounce).
  • darth_beavis - Tuesday, September 7, 2004 - link

    Actually it looks like none of them have labels. Is anandtech not mozilla compatible or something. Just use jpgs pleaz.
  • darth_beavis - Tuesday, September 7, 2004 - link

    Why are there no descriptions for the columns in the graph on pg. 2? Are we just supposed to guess what the numbers mean?
  • JarredWalton - Tuesday, September 7, 2004 - link

    Yes, Questar, laden with errors. All over the place. Thanks for pointing them out so that they could be corrected. I'm sure that took you quite some time.

    Seriously, though, point them out (other than omissions, as making a complete list of every single variation of every single card would be difficult at best) and we will be happy to correct them provided that they actually are incorrect. And if you really want a card included, send the details of the card, and we can add that as well.

    Regarding the ATI AIW (All In Wonder, for those that don't know) cards, they often varied from the clock and RAM speeds of the standard chips. Later models may have faster RAM or core speeds, while earlier models often had slower RAM and core speeds.
  • blckgrffn - Tuesday, September 7, 2004 - link

    Questar - if you don't like it, leave. The article clearly stated its bounds and did a great job. My $.02 - the 7500 AIW is 64 meg DDR only, unsure of the speed however. Do you want me to check that out?
  • mikecel79 - Tuesday, September 7, 2004 - link

    #22 The Geforce256 was released in October of 1999, so this covers roughly the last 5 years of chips from ATI and Nvidia. If it were to include all other manufacturers, it would be quite a bit longer.

    How about examples of this article being "laden with errors" instead of just stating it?
