It's almost ironic that the one industry we deal with that is directly related to entertainment has been the least exciting for the longest time. The graphics world has lately been mired in controversies over surprisingly petty things; the majority of articles you'll see relating to graphics these days have nothing to do with how fast the latest $500 card will run. Instead, we're left to argue about the definition of the word "cheating". We pick at pixels in hopes of differentiating two of the fiercest competitors the GPU world has ever seen, and we debate over 3DMark.

What's interesting is that all of the things we have occupied ourselves with in recent times have been present throughout history. Graphics companies have always had questionable optimizations in their drivers, they have almost always differed in how they render a scene, and yes, 3DMark has been around for quite some time now (only recently has it become "cool" to take issue with it).

So why is it that, in the age of incredibly fast, absurdly powerful DirectX 9 hardware, we find it necessary to bicker about everything but the hardware? Because, for the most part, we've had absolutely nothing better to do with this hardware. Our last set of GPU reviews focused on two cards - ATI's Radeon 9800 Pro (256MB) and NVIDIA's GeForce FX 5900 Ultra - both of which carried a hefty $499 price tag. What were we able to do with this kind of hardware? Run Unreal Tournament 2003 at 1600x1200 with 4X AA enabled and still have power to spare, or run Quake III Arena at fairytale frame rates. Both ATI and NVIDIA have spent countless millions of transistors and expensive die space, and even sacrificed current-generation game performance, in order to bring us some very powerful pixel shader units with their GPUs. Yet we have been using them while letting their pixel shading muscles atrophy.

Honestly, since the Radeon 9700 Pro, we haven't needed any more performance to satisfy the needs of today's games. Take the most popular game in recent history, the Frozen Throne expansion to Warcraft III: it runs just fine on a GeForce4 MX, so a $500 GeForce FX 5900 Ultra was in no way, shape or form necessary.

The argument we heard from both GPU camps was that you were buying for the future; that a card you bought today could not only run all of your current games extremely well, but would also guarantee you good performance in the next generation of games. The problem with this argument was that there was no telling when that "next generation" of games would arrive, and by the time it did, prices on these wonderfully expensive graphics cards might well have fallen significantly. Then there's the fact that how well cards perform in today's pixel-shaderless games honestly says nothing about how they will handle DirectX 9 games. And this brought us to the joyful issue of using 3DMark as a benchmark.

If you haven't noticed, we've never relied on 3DMark as a performance tool in our 3D graphics benchmark suites. The only times we've included it, we've either used it in the context of a CPU comparison or to make sure fill rates were in line with what we were expecting. With 3DMark 03, the fine folks at Futuremark had a very ambitious goal in mind - to predict the performance of future DirectX 9 titles using their own shader code, designed to mimic what various developers were working on. The goal was admirable; however, if we're going to recommend something to millions of readers, we're not going to base that recommendation solely on one synthetic benchmark that may or may not be indicative of the performance of future games. The difference between the next generation of games and what we've seen in the past is that the performance of one game is much less indicative of the performance of the rest of the market. As you'll see, we're no longer memory bandwidth bound; we're finally going to start dealing with games whose performance is determined by their pixel shader programs and by how the GPU's execution units handle them.
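To make that last distinction concrete, here is a rough back-of-envelope sketch in Python. Every figure in it (resolution, overdraw, bytes of memory traffic per pixel, shader instruction counts and throughput) is an assumption picked purely for illustration, not data from any particular GPU or game; the point is only that once pixel shader programs get long, the shader units rather than memory bandwidth set the frame rate.

```python
# Illustrative model of why DirectX 9 titles shift the bottleneck from memory
# bandwidth to pixel shader execution. All numbers below are assumed, ballpark
# figures for a 2003-era high-end card, not measurements of any real GPU/game.

def frame_time_ms(pixels, bytes_per_pixel, shader_ops_per_pixel,
                  bandwidth_gb_per_s, shader_gops_per_s):
    """Return (bandwidth-limited, shader-limited) frame times in milliseconds."""
    bandwidth_ms = pixels * bytes_per_pixel / (bandwidth_gb_per_s * 1e9) * 1000
    shader_ms = pixels * shader_ops_per_pixel / (shader_gops_per_s * 1e9) * 1000
    return bandwidth_ms, shader_ms

pixels = 1600 * 1200 * 2  # 1600x1200 with roughly 2x overdraw (assumed)

# A DX7/DX8-style game: heavy texture/framebuffer traffic, trivial shading.
bw, sh = frame_time_ms(pixels, bytes_per_pixel=24, shader_ops_per_pixel=2,
                       bandwidth_gb_per_s=20, shader_gops_per_s=2.6)
print(f"DX8-style: bandwidth {bw:.1f} ms vs shaders {sh:.1f} ms "
      f"-> {'bandwidth' if bw > sh else 'shader'} bound")

# A DX9-style game: similar memory traffic, but long pixel shader programs.
bw, sh = frame_time_ms(pixels, bytes_per_pixel=32, shader_ops_per_pixel=30,
                       bandwidth_gb_per_s=20, shader_gops_per_s=2.6)
print(f"DX9-style: bandwidth {bw:.1f} ms vs shaders {sh:.1f} ms "
      f"-> {'bandwidth' if bw > sh else 'shader'} bound")
```

With these assumed numbers, the DX8-style workload is limited by memory bandwidth while the DX9-style workload spends most of its time in the shader units, which is the shift described above.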

All of this discussion isn't for naught, as it brings us to why today is so very important. Not too long ago, we were able to benchmark Doom3 and show you a preview of its performance; but with that game delayed until next year, we have to turn to yet another title to finally take advantage of this hardware - Half-Life 2. With the game almost done and a benchmarkable demo due out on September 30th, it comes as no surprise that we were given the opportunity to benchmark the demos Valve showed off at E3 this year.

Unfortunately, the story here isn't as simple as how fast your card will perform under Half-Life 2; of course, given the history of the 3D graphics industry, would you really expect something like this to be without controversy?

111 Comments

  • Anonymous User - Friday, September 12, 2003 - link

    I think the insinuation is clear from the nVidia email that was posted and from Gabe's comments. Valve believed nVidia was trying to "cheat" with their D50s by intentionally having fog disabled, etc. Rather than toss around accusations, it was simpler for them to just require that the benchmarks at this point be run with released drivers, avoiding the issue of currently bugged drivers with non-working features, whether the reason was accidental or intentional.

    Considering that the FXes fared poorly with 3DMark and again with HL2 - both using DX9 implementations - I think it might be fair to say that the FXes aren't going to do too much better in the future. Especially considering the way they reacted to 3DMark 03: fighting the benchmark rather than releasing drivers to remedy the performance issue.

    I'd like to see how the FXes do running HL2 with pure DX8 rather than DX9 or a hybrid, as I think most people owning current nVidia cards are going to have to go that route to achieve the framerates desired.
  • Anonymous User - Friday, September 12, 2003 - link

    I don't see how machines at the minimum requirements set by Valve are going to play this game. 700MHz and a TNT2? The FX 5200 could barely keep up.
  • Anonymous User - Friday, September 12, 2003 - link

    #68: 33 fps * 1.73 = 57.09 fps (add the one to account for the initial 33 score).

    This doesn't quite work out based on the 57.3 score of the 9800 Pro, so the corrected score on the NVIDIA card was probably closer to this:
    57.3 / 1.73 = 33.12 fps

    #69: I would definitely try to find a 9600 Pro before I bought a 9500 Pro. The 9600 fully supports DX9 whereas the 9500 does not.
  • Anonymous User - Friday, September 12, 2003 - link

    Guess it's time to upgrade...
    Now where's my &*&%%'n wallet!!

    Wonder where I'll be able to find an R9500Pro (Sapphire).
  • Anonymous User - Friday, September 12, 2003 - link

    The performance increase between the FX 5900 and Radeon 9800 Pro is not 73%. Do the math correctly and it turns into a 36.5% lead. The article should be revised.
  • atlr - Friday, September 12, 2003 - link

    If anyone sees benchmarks for 1 GHz computers, please post a URL. Thanks.
  • WooDaddy - Friday, September 12, 2003 - link

    Hmmm... I understand that Nvidia would be upset. But it's not like ATI is using a special setting to run faster; they're using DX9. Nvidia needs to get on the ball. I'm going to have to upgrade my video card since I have a now-obsolete GeForce4 Ti 4200.

    GET IT TOGETHER NVIDIA!!! DON'T MAKE ME BUY ATI!

    I might just sell my Nvidia stock while I'm at it. HL2 is a big mover, and I believe it can make or break a card on the consumer side.
  • Anonymous User - Friday, September 12, 2003 - link

    I had just ordered a 5600 Ultra thinking it would be a great card. It's going back.

    If I can get full DX9 performance with a 9600 Pro for around $180, and that card's performance is better than the 5900 Ultra's, then I'm game.

    I bought a TNT when Nvidia was making a name for itself. I bought a GF2 GTS when Nvidia was destroying 3dfx. Now Nvidia seems to have dropped the ball on DX9. I want to play HL2 on whatever card I buy, and a 5600 Ultra doesn't seem like it will cut it. I know the 50s are out there, but I've seen the Aquamark comparison between the 45s and 50s and I'm not impressed.

    I really wanted to buy Nvidia, but I cannot afford it.

  • Anonymous User - Friday, September 12, 2003 - link

    #62: I do have the money, but I choose to spend it elsewhere. FYI: I spent $164 US on my 2.4C and I'm running speeds faster than the system used for this benchmark.

    "The Dell PCs we used were configured with Pentium 4 3.0C processors on 875P based motherboards with 1GB of memory. We were running Windows XP without any special modifications to the OS or other changes to the system."

    Anand was using a single system to show what HL2 performance would be on video cards available on the market today. If he were to run benchmarks on different CPUs, he would have had to spend a tremendous amount more time doing so. In the interest of getting the info out as soon as possible, he limited himself to a single system.

    I would deduce from the HL2 performance numbers in Anand's benchmarks that unless you have a 9600 Pro or 9800 Pro, your AMD system will not be able to run HL2 effectively.
  • Anonymous User - Friday, September 12, 2003 - link

    Woohoooo!!!
    My ATI 9500@9700 128MB, with 8 pixel pipelines and 256-bit memory access, beats the crap out of any FX.
    And it only cost me 190 euros/190 dollars.

    Back to the drawing board NVidia.
    Muahahahah!!!
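For readers following the arithmetic argued over in the comments above (the 33 fps and 57.3 fps scores, and whether the gap amounts to a 73% or a 36.5% lead), here is a minimal sketch of the two ways the same gap is commonly expressed; the percentage you get depends entirely on which card you measure against. The frame rates are simply the ones quoted in the comments.

```python
# Minimal sketch of the relative-performance arithmetic discussed above,
# using the frame rates quoted in the comments (57.3 fps and 33 fps).

radeon_fps = 57.3   # Radeon 9800 Pro score quoted above
geforce_fps = 33.0  # GeForce FX 5900 Ultra score quoted above

# "The Radeon is X% faster": the gap measured against the slower card.
faster_pct = (radeon_fps / geforce_fps - 1) * 100

# "The GeForce is Y% slower": the same gap measured against the faster card.
slower_pct = (1 - geforce_fps / radeon_fps) * 100

print(f"Radeon lead: {faster_pct:.1f}% faster than the GeForce")    # ~73.6%
print(f"GeForce deficit: {slower_pct:.1f}% slower than the Radeon")  # ~42.4%
```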
