Wrapping it All Up

So, that's an overview of the recent history of graphics processors. For those impressed by the rate of progress in the CPU world, it pales in comparison to recent trends in 3D graphics. Looking only at raw theoretical performance, since the introduction of the "world's first GPU", the GeForce 256, 3D chips have become about 20 times as fast. That doesn't even take into account architectural optimizations that allow chips to come closer to their theoretical performance, or the addition of programmability in DX8 and later chips. Taken together with the raw performance increases, it is probably safe to say that GPUs have become roughly 30 times faster since their introduction. We often hear of "Moore's Law" in regard to CPUs, usually paraphrased as a doubling of performance every 18 to 24 months. (The actual paper from Moore has more to do with optimal transistor counts for maximizing profit than with performance.) In comparison, "Moore's Law" for 3D graphics has been a doubling of performance every 12 months.
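Those growth rates are easy to sanity check: a doubling period turns into a growth factor via 2^(months / doubling period). Here is a minimal sketch in Python, assuming a roughly five-year span from the GeForce 256 to today (the span and doubling periods are the rough figures quoted above, not exact dates):

    # Growth factor implied by a doubling period:
    # growth = 2 ** (elapsed_months / months_per_doubling)
    def growth_factor(elapsed_months: float, months_per_doubling: float) -> float:
        """How many times faster hardware gets if performance doubles
        every months_per_doubling months."""
        return 2 ** (elapsed_months / months_per_doubling)

    months = 5 * 12  # ~GeForce 256 (late 1999) through 2004

    print(f"CPU, 18-month doubling: {growth_factor(months, 18):.1f}x")  # ~10.1x
    print(f"CPU, 24-month doubling: {growth_factor(months, 24):.1f}x")  # ~5.7x
    print(f"GPU, 12-month doubling: {growth_factor(months, 12):.1f}x")  # 32.0x

The 12-month doubling rate lines up nicely with the "roughly 30 times faster" estimate above.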

The amazing thing is that we are still pushing the limits of the current technology. Sure, the 6800 Ultra and X800 XT are fast enough to run all current games with 4xAA and 8xAF turned on, but some programmer out there is just waiting for more power. The Unreal Engine 3 images that have been shown are truly impressive, and even the best cards of today struggle to meet its demands. The goal of real-time Hollywood quality rendering is still a ways off, but only a few years ago Pixar scoffed when NVIDIA claimed to be approaching the ability to do Toy Story 2 visuals in real time. Part of Pixar's rebuttal was that Toy Story 2 used something like 96 GB/s of bandwidth for its textures. We're one third of the way there now!
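That "one third" falls out of the standard peak bandwidth formula: bus width in bytes times effective memory clock. A quick sketch using the 6800 Ultra's published 256-bit bus and 1100MHz effective GDDR3 clock:

    def bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
        """Peak memory bandwidth in GB/s: bytes per transfer * transfers/second."""
        return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

    ultra = bandwidth_gbs(256, 1100)  # GeForce 6800 Ultra
    print(f"6800 Ultra: {ultra:.1f} GB/s")            # 35.2 GB/s
    print(f"Fraction of 96 GB/s: {ultra / 96:.0%}")   # 37%, roughly one third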

What does the future hold? With the large die sizes of the top GPUs, it is probably safe to bet that newer features (e.g. DirectX 10) are at least a year or more in the future. This is probably a good thing, as it will give ATI and NVIDIA (and their fabrication partners) time to shrink the manufacturing process and hopefully start making more cards available. We may not even see DirectX 10 hardware for 18 months, as DirectX 10 is planned as part of the next version of Windows, codenamed Longhorn. Longhorn is currently slated for a 2006 release, so there isn't much point in selling hardware that completely lacks software support at the OS and library level.

Those looking for lower prices may be in for something of a disappointment. Lower prices would always be nice, but the trend with bleeding-edge hardware is that it only gets more expensive with each successive generation. Look at NVIDIA's top-end cards: the GeForce 256 DDR launched at about $300, the GeForce 2 Ultra and GeForce 3 launched at around $350, the GeForce 4 Ti4600 was close to $400, the GeForce FX 5800 Ultra and 5950 Ultra were close to $500 at launch, and recently the 6800 Ultra has launched at over $500. More power is good, but not everyone has the funds to buy FX-53 or P4EE processors and matching system components. However, today's bleeding-edge hardware is tomorrow's mainstream hardware, so while not everyone can afford a 6800 or X800 card right now, the last generation of high-end hardware is now selling for under $200, and even the $100 parts are better than anything from the GeForce 3 era.

Comments

  • Neo_Geo - Tuesday, September 7, 2004 - link

    Nice article.... BUT....
    I was hoping the Quadro and FireGL lines would be included in the comparison.
    As someone who uses BOTH professional (ProE and SolidWorks) AND consumer level (games) software, I am interested in purchasing a Quadro or FireGL, but I want to compare these to their consumer level equivalents (as each pro level card generally has an equivalent consumer level card with some minor, but important, optimizations).

    Thanks
  • mikecel79 - Tuesday, September 7, 2004 - link

    The AIW 9600 Pro has faster memory than the normal 9600 Pro: it runs at 650MHz vs. 600MHz on a normal 9600 Pro.

    Here's the Anandtech article for reference:
    http://www.anandtech.com/video/showdoc.aspx?i=1905...
  • Questar - Tuesday, September 7, 2004 - link

    #20,

    This list is not complete at all; it would be three times the size if it covered the last 5 or 6 years. It covers about the last 3, and is laden with errors.

    Just another example of the half-assed job this site has been doing lately.
  • JarredWalton - Tuesday, September 7, 2004 - link

    #14 - Sorry, I went with desktop cards only. Usually, you're stuck with whatever comes in your laptop anyway. Maybe in the future, I'll look at including something like that.

    #15 - Good God, Jim - I'm a CS graduate, not a graphics artist! (/Star Trek) Heheh. Actually, you would be surprised at how difficult it can be to get everything to fit. Maximum width of the tables is 550 pixels. Slanting the graphics would cause issues with making it all fit. I suppose putting in vertical borders might help keep things straight, but I don't like the look of charts with vertical separators.

    #20 - Welcome to the club. Getting old sucks - after a certain point, at least.
  • Neekotin - Tuesday, September 7, 2004 - link

    great read! wow! i didn't know there were so many GPUs in the past 5-6 years. it's like more than all of them combined before that. guess i'm a bit old.. ;)
  • JarredWalton - Tuesday, September 7, 2004 - link

    12/13: I updated the Radeon LE entry and re-sorted the DX7 page. I'm sure anyone who owns a Radeon LE already knows this, but you could use a registry hack to turn it into essentially a full Radeon DDR. (By default, the Hierarchical Z compression and a few other features were disabled.) Old AnandTech article on the subject:

    http://www.anandtech.com/video/showdoc.aspx?i=1473
  • JarredWalton - Monday, September 6, 2004 - link

    Virge... I could be wrong on this, but I'm pretty sure some of the older chips could actually be configured with either SDR or DDR RAM, and I think the GF2 MX series was one of those. The problem was that you could have either 64-bit DDR or 128-bit SDR, so it really didn't matter which you chose; the bandwidth math works out the same either way (there's a quick sketch of it after the comments below). But yeah, there were definitely 128-bit SDR versions of the cards available, and they were generally more common than the 64-bit DDR parts I listed. The MX200, of course, was 64-bit SDR, so it got the worst of both worlds. Heh.

    I think the early Radeons had some similar options, and I'm positive that such options existed in the mobile arena. Overall, though, it's a minor gripe (I hope).
  • ViRGE - Monday, September 6, 2004 - link

    Jarred, without getting too nit-picky, your data for the GeForce 2 MX is technically wrong; the MX used a 128bit/SDR configuration for the most part, not a 64bit/DDR configuration (http://www.anandtech.com/showdoc.aspx?i=1266&p...). Note that this isn't true for any of the other MXs (both the 200 and 400 widely used 64bit/DDR), and the difference between the two configurations has no effect on the math for memory bandwidth, but it's still worth noting.
  • Cygni - Monday, September 6, 2004 - link

    I've been working with Adrian's Rojak Pot on a very similar chart to this one for a while now. Check it out:

    http://www.rojakpot.com/showarticle.aspx?artno=88&...
  • Denial - Monday, September 6, 2004 - link

    Nice article. In the future, if you could put the text at the top of the tables on an angle, it would make them much easier to read.
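On the GeForce 2 MX memory question raised in the comments: the reason 128-bit SDR and 64-bit DDR were interchangeable is that peak bandwidth depends only on bus width times effective data rate. A minimal sketch, assuming a 166MHz memory clock for illustration (actual boards varied):

    def bandwidth_gbs(bus_width_bits: int, clock_mhz: float, pumps: int) -> float:
        """Peak bandwidth in GB/s: bus bytes * clock * transfers per clock."""
        return (bus_width_bits / 8) * clock_mhz * pumps * 1e6 / 1e9

    clock = 166  # MHz, a typical GF2 MX-era memory clock (illustrative)
    print(f"128-bit SDR: {bandwidth_gbs(128, clock, 1):.1f} GB/s")  # ~2.7 GB/s
    print(f" 64-bit DDR: {bandwidth_gbs(64, clock, 2):.1f} GB/s")   # ~2.7 GB/s, identical
    print(f" 64-bit SDR: {bandwidth_gbs(64, clock, 1):.1f} GB/s")   # ~1.3 GB/s (the MX200 case)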
