For as long as I can remember talking about video cards and GPU performance at AnandTech, there has been debate over the type of benchmarks used to represent that performance. In the old days, the debate was mostly manufacturer-driven. Curiously enough, the discourse usually fired up when one manufacturer was at a significant deficit in GPU performance. NVIDIA made a big deal about moving away from timedemos and average frame rates during the early GeForce FX (NV30) days, when its cards might have delivered a decent gaming experience but were slaughtered in most benchmarks. Even Intel advocated for a shift away from most CPU-bound gaming benchmarks back during the early years of the Pentium 4 - again, for obvious reasons.

It’s a shame that these revolutions in gaming performance testing were always associated with underperforming products (and were dropped once the product stack improved a generation or two later). A shame, because there has always been merit in introducing additional metrics to provide a more complete picture of gaming performance.

The issue lay mostly dormant over the past several years. Every now and then there’d be a new attempt to revolutionize GPU performance testing, but most failed to gain widespread traction for one reason or another. Broad repeatability, one of the basic tenets of the scientific method, was usually cast aside in many of these new approaches to performance testing - which ultimately limited their acceptance.

A year and a half ago, Scott Wasson over at The Tech Report did something no one since Dr. Pabst had been able to do: he actually brought about a revolution in the 3D game benchmarking scene.

The approach seemed ridiculously simple - we’ve all had the tools for so very long. Scott used FRAPS to record frame times, and would calculate how long every frame in a benchmark took to render. By focusing on individual frame latencies, Scott’s method could better characterize the little hiccups and stutters that would get smoothed out in an average frame rate. With the new method came a bunch of nifty graphs, and the world changed.
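
To illustrate the difference, here is a minimal sketch - synthetic numbers, not Scott’s actual tooling - of how a handful of long frames barely dent the average frame rate while standing out immediately in the per-frame latencies:

```python
# Minimal sketch: per-frame latencies vs. average frame rate.
# All numbers below are synthetic, purely for illustration.

# 300 frames at a steady 16.7 ms, with a handful of 70 ms hitches mixed in.
frame_times_ms = [16.7] * 300
for i in (50, 120, 200, 270):
    frame_times_ms[i] = 70.0

total_seconds = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_seconds

# Sorting the frame times lets us read off percentiles; the slowest frames
# are the ones the eye actually notices as stutter.
sorted_times = sorted(frame_times_ms)
p99 = sorted_times[int(0.99 * (len(sorted_times) - 1))]

print(f"Average FPS:          {avg_fps:.1f}")         # ~57 FPS - looks healthy
print(f"99th percentile time: {p99:.1f} ms")          # 70.0 ms - the hitches
print(f"Worst frame time:     {max(frame_times_ms):.1f} ms")
```

An average of roughly 57 FPS looks perfectly healthy, yet the 70 ms hitches are exactly the kind of stutter a frame-time plot makes visible.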

The methodology wasn’t perfect, as FRAPS lacks a holistic view of the 3D rendering pipeline, but it did reveal some surprising issues (in addition to spawning further work that uncovered even more issues on the multi-GPU front). Interestingly enough, many of the issues uncovered by this focus on frame times/latency seemed to primarily impact AMD hardware.

AMD remained curiously quiet as to exactly why its hardware and drivers were so adversely impacted by these new testing methods. While our own foray into evolving GPU testing will come later this week, we had the opportunity to sit down with AMD to understand exactly what’s been going on.

What AMD offered was neither strictly a defense nor merely an explanation of what we’ve been seeing over the past year; rather, AMD wanted to sit down and better explain its position. This covers both why AMD’s products have been impacted in the manner they were, and why at the same time (and not unlike NVIDIA) AMD is worried about FRAPS being given more weight than it should be. Ultimately AMD believes that it’s to the benefit of buyers and journalists alike to better understand just what is happening, why it’s happening, and just what the most common tools can measure and what they are actually measuring.

What follows is based on our meeting with some of AMD's graphics hardware and driver architects, where they went into depth on all of these issues. In the following pages we’ll get into a high-level explanation of how the Windows rendering pipeline works, why this leads to single-GPU issues, why this leads to multi-GPU issues, and what various tools can measure and see in the rendering process.

The Start: The Rendering Pipeline In Detail

Comments

  • Shark321 - Wednesday, March 27, 2013

    Overall a good article, but it has one huge problem. Ryan, you repeat about 10 times that there is no good tool to replace FRAPS measurements, which is inaccurate.

    But there is. PC Perspective introduced a new microstutter measuring method weeks ago: http://www.pcper.com/reviews/Graphics-Cards/Frame-...
  • rickcain2320 - Wednesday, March 27, 2013

    I just bought an AMD/ATI card, and not only do I have stuttering, I have that horrid PowerPlay kicking in all the time with screen tearing. I'm pulling my hair out and wondering why I didn't buy a GeForce. My old 8800 GTS was doing great, but it finally gave up the ghost one day; I should have stuck with something at least consistent in performance.
  • Deo Domuique - Wednesday, March 27, 2013

    This is the main problem on Anand's end: they need to sit down with a manufacturer first in order to give us at least some valid graphs. It's understandable to a point - you don't bite the hand that feeds you - but only to a point. On the other hand, I trust TechReport's graphs... Actually, TR is one of the very few websites I trust.
  • lally - Wednesday, March 27, 2013

    There's actually been a lot of research on frame jitter's effects on people. You measure how well people do a specific task with different amounts of it, and compare their performance on the task to the jitter.

    http://lmgtfy.com/?q=virtual+reality+frame+rate+ji...
  • NerdT - Wednesday, March 27, 2013

    First of all, it's a very good read. Thanks.

    Re the problem with GPUView - "Furthermore it still doesn’t show us when a GPU buffer swap actually takes place and the user sees a new frame, and that remains the basis of any kind of fine-grained look into stuttering":

    It can actually show you a "flip queue" (in yellow) where you can see when the frame starts being flipped to the front buffer, when the flip completes, and how long it waits for the VSync signal - which is when the user actually sees the frame. Not sure why you mentioned this; it's worth revising. I have been using GPUView for about two years and it's really unique - no other tool can yet compete with it.
  • mikato - Wednesday, March 27, 2013

    Nvidia: ok we knew our ride here would end sometime. No more competitive advantage "secret bonus" in performance.

    AMD fanboy: argh, as usual my AMD parts will perform better with time, and not get the respect deserved since all the benchmarks were done already.
  • JeBarr - Thursday, March 28, 2013

    What a long drawn out way of helping AMD in the PR department.

    Unlike most commenters, what I took away from this article is the fact that Ryan Smith is no longer qualified to conduct GPU benchmarks.

    GPUView too complicated? Seriously?

    lol.
  • Death666Angel - Thursday, March 28, 2013

    First of all: Great read! Very technical, but very interesting and still easy to understand. :)

    Concerning V-Sync: I always enable it when I start playing a game for the first time. But 3 times out of 5, the gameplay gets too sluggish (that would probably be the added latency). So I have to turn it off and live with screen tearing and too many frames being rendered. It's a shame.

    And reading about all this and the issues involved, it makes me wonder how Oculus and the parties involved are getting around this problem. They are working on minimizing latency left and right. I would like to see their input on this, and whether they are only optimizing for a few hardware setups. :)
  • LoccOtHaN - Wednesday, April 3, 2013

    Mirillis Action! - that program is an alternative to FRAPS (no stuttering, and it's very light). Recommended by Ne01
  • KilledByAPixel - Thursday, April 4, 2013

    It is great to finally see someone deconstructing the issue of stutter in games - it drives me nuts! I also wrote an article that actually offers a solution to this problem. I developed a simple system that allows games to smooth out their delta by predicting the time when a frame will be rendered, rather than using the measured delta from the update.

    http://frankforce.com/?p=2636
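
For illustration only, here is a rough sketch of the general idea described in the comment above - advancing the game by a delta snapped to the display's refresh interval instead of the raw measured delta. It is a hypothetical sketch, not the code from the linked article; the names and numbers are made up.

```python
# Rough sketch of delta smoothing by targeting the display's refresh interval
# instead of the raw measured frame delta. Hypothetical illustration only -
# not the linked article's implementation.

REFRESH_INTERVAL = 1.0 / 60.0  # assume a 60 Hz display

def smoothed_delta(measured_delta, carry):
    """Snap the measured delta to a whole number of refresh intervals,
    carrying the remainder so no simulation time is lost over the long run."""
    total = measured_delta + carry
    intervals = max(1, round(total / REFRESH_INTERVAL))
    smoothed = intervals * REFRESH_INTERVAL
    return smoothed, total - smoothed

# Jittery measured deltas (in seconds) around 60 Hz, plus one dropped frame;
# the simulation sees clean 16.7 ms / 33.3 ms steps instead.
carry = 0.0
for measured in (0.0151, 0.0182, 0.0169, 0.0341, 0.0158):
    dt, carry = smoothed_delta(measured, carry)
    print(f"measured {measured * 1000:5.1f} ms -> simulated {dt * 1000:5.1f} ms")
```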
