The problem with gaming benchmarks is that we're almost always measuring the performance of today's cards with yesterday's games. For the longest time, Quake III Arena was a very popular benchmark as very few configurations could run it with all of the visual options turned on at high frame rates. Today the situation is much different; it isn't uncommon to see frame rates of 200+ fps under Quake III Arena.

A number of other gaming benchmarks have risen to the occasion in recent history. Games such as Serious Sam introduced a highly configurable engine to the world of benchmarking, effectively giving us a very flexible tool for measuring performance. Other titles didn't fare as well, such as Unreal Tournament, which, in the vast majority of cases, ended up being more CPU limited than graphics limited.

The one thing all of these benchmarks have in common is that they are built on currently available or previously popular games. We can extrapolate from their results how a particular card or family of GPUs will perform in future games, but we never really know until those games become available. It's already widely known that you can't even begin to treat a video card upgrade as an investment; with 6-month product cycles you have to play a guessing game as to whether you're buying adequate power for the future.

A case in point is the release of the GeForce3 almost 12 months ago. The card was more than enough for the games that were out at the time, but many bought it on the premise that it would deliver superior performance in forthcoming DirectX 8 titles. Fast forward to the present day and there are still no major titles that require the DX8 features of the GeForce3, and those who purchased the card early on saw much cheaper, and sometimes higher performing, alternatives arrive just 6 months later.

But we're simply talking about things from the standpoint of the end-user. The situation is even more frustrating from the standpoint of the developer. The developers want to make their games as incredible as possible, but they need the hardware, driver and API support to do so. At the same time, hardware manufacturers such as ATI and NVIDIA aren't going to waste precious die-space implementing features that won't be used for another 2 years. It's the classic chicken and the egg syndrome; luckily, in this case, both ATI and NVIDIA have supplied the eggs with their DX8-compliant GPUs.

Then there's the issue of drivers. Is a graphics vendor going to spend its time optimizing for features that won't be used in games for another 6 to 12 months, or will it focus on the benchmarks and games that are currently being played? The answer is obvious, but where does this leave the developers? These are the people using currently available cards to test and build their next-generation game engines. If currently available drivers won't run their next-generation engines, they are forced either to wait for the hardware manufacturers to fix their drivers (an effort that doesn't provide immediate results for the hardware vendor) or to optimize heavily for a very small subset of cards, or even one particular vendor. With only two major manufacturers left, ATI and NVIDIA, there is an unspoken understanding that developers deserve as much attention as end-users, if not more. Although this hasn't always been the case, we're now hearing that both ATI and NVIDIA are equally responsive to developer driver issues.

We've just outlined a number of problems with the way graphics is handled today, both from a reviewer's standpoint and from a developer-relations standpoint. But what to do about it?

Luckily, one of the most prominent game developers happens to be in AnandTech's backyard, and we've been working with them on addressing some of these very issues.
