The Widespread Support Fallacy

NVIDIA acquired Ageia, the guys who wanted to sell you another card to put in your system to accelerate game physics - the PPU. That idea didn't go over too well. For starters, no one wanted another *PU in their machine. And secondly, there were no compelling titles that required it. At best we saw mediocre games with mildly interesting physics support, or decent games with uninteresting physics enhancements.

Ageia's true strength wasn't in its PPU chip design; many companies could do that. What Ageia did that was quite smart was acquire an up-and-coming game physics API, polish it up, and give it away for free to developers. The physics engine was called PhysX.

Developers can use PhysX, for free, in their games. There are no strings attached, no licensing fees, nothing. If a developer wants support, there are fees of course, but it's a great way of cutting down development costs. The physics engine in a game is responsible for modeling all Newtonian forces within the game; the engine determines how objects collide, how gravity works, and so on.
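To make that concrete, here is a minimal sketch, in plain C++ and assuming nothing about PhysX internals, of the kind of per-frame work a physics engine does: apply gravity, integrate motion, and resolve a simple collision. It's illustrative only, not PhysX code.

    #include <vector>

    struct RigidBody {
        float x, y, z;     // position in meters
        float vx, vy, vz;  // velocity in meters/second
    };

    // Advance every body by one timestep (semi-implicit Euler integration).
    void StepSimulation(std::vector<RigidBody>& bodies, float dt) {
        const float gravity = -9.81f;  // m/s^2 along the y axis
        for (RigidBody& b : bodies) {
            b.vy += gravity * dt;      // gravity accelerates the body downward
            b.x  += b.vx * dt;         // integrate velocity into position
            b.y  += b.vy * dt;
            b.z  += b.vz * dt;
            if (b.y < 0.0f) {          // crude collision with a ground plane at y = 0
                b.y  = 0.0f;
                b.vy = -b.vy * 0.5f;   // bounce back with half the energy
            }
        }
    }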

If developers wanted to, they could enable PPU accelerated physics in their games and do some cool effects. Very few developers wanted to because there was no real install base of Ageia cards and Ageia wasn’t large enough to convince the major players to do anything.

PhysX, being free, was of course widely adopted. When NVIDIA purchased Ageia what they really bought was the PhysX business.

NVIDIA continued offering PhysX for free, but killed off the PPU business. Instead, NVIDIA worked to port PhysX to CUDA so that it could run on its GPUs. The same catch-22 from before remained: developers don't have to include GPU accelerated physics, and most don't, because they don't want to alienate their non-NVIDIA users. It's all about hitting the largest audience, and not everyone can run GPU accelerated PhysX, so most developers don't use that aspect of the engine.
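To put that catch-22 in code: a developer shipping optional GPU physics ends up gating the extra effects on whether a CUDA-capable (i.e. NVIDIA) GPU is even present, and falling back to the baseline CPU path for everyone else. The sketch below uses the real CUDA runtime call cudaGetDeviceCount for the check; the tier names are hypothetical, and this is not how PhysX itself performs its detection.

    #include <cuda_runtime.h>
    #include <cstdio>

    enum PhysicsTier { CPU_BASELINE, GPU_ENHANCED };

    // Enable the extra effects only when at least one CUDA-capable GPU is found.
    PhysicsTier PickPhysicsTier() {
        int deviceCount = 0;
        cudaError_t err = cudaGetDeviceCount(&deviceCount);
        if (err == cudaSuccess && deviceCount > 0)
            return GPU_ENHANCED;   // NVIDIA GPU present: turn on GPU-accelerated effects
        return CPU_BASELINE;       // ATI or no GPU: stick to effects the CPU can handle
    }

    int main() {
        PhysicsTier tier = PickPhysicsTier();
        std::printf("Physics tier: %s\n",
                    tier == GPU_ENHANCED ? "GPU-enhanced" : "CPU baseline");
        return 0;
    }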

Then we have NVIDIA publishing slides like this:

Indeed, PhysX is one of the world’s most popular physics APIs - but that does not mean that developers choose to accelerate PhysX on the GPU. Most don’t. The next slide paints a clearer picture:

These are the biggest titles NVIDIA has with GPU accelerated PhysX support today. That's 12 titles, three of which are big ones; as for most of the rest, well, I won't go there.

A free physics API is great, and all indicators point to PhysX being liked by developers.

The next several slides in NVIDIA's presentation go into detail about how GPU accelerated PhysX is used in these titles and how poorly ATI performs when GPU accelerated PhysX is enabled (ATI GPUs can't run CUDA code, so the GPU-friendly code must run on the CPU instead).

We normally hold manufacturers accountable for their performance claims; it was about time we did something about these other claims as well. Shall we?

Our goal was simple: we wanted to know if the GPU accelerated PhysX effects in these titles were useful. And if they were, would that be enough to make us pick an NVIDIA GPU over an ATI one if the ATI GPU was faster?

To accomplish this I had to bring in an outsider. Someone who hadn’t been subjected to the same NVIDIA marketing that Derek and I had. I wanted someone impartial.

Meet Ben:


I met Ben in middle school and we’ve been friends ever since. He’s a gamer of the truest form. He generally just wants to come over to my office and game while I work. The relationship is rarely harmful; I have access to lots of hardware (both PC and console) and games, and he likes to play them. He plays while I work and isn't very distracting (except when he's hungry).

These past few weeks I’ve been far too busy for even Ben’s quiet gaming in the office. First there were SSDs, then GDC and then this article. But when I needed someone to play a bunch of games and tell me if he noticed GPU accelerated PhysX, Ben was the right guy for the job.

I grabbed a Dell Studio XPS I'd been working on for a while. It's a good little system, the first sub-$1000 Core i7 machine in fact ($799 gets you a Core i7-920 and 3GB of memory). It performs similarly to my Core i7 testbeds, so if you're looking to jump on the i7 bandwagon but don't feel like building a machine, the Dell is an alternative.

I also set up its bigger brother, the Studio XPS 435. Personally I prefer this machine; it's larger than the regular Studio XPS, albeit more expensive. The larger chassis makes working inside the case and upgrading the graphics card a bit more pleasant.


My machine of choice; I couldn't let Ben have the faster computer.

Both of these systems shipped with ATI graphics; obviously that wasn't going to work. I decided to pick midrange cards to work with: a GeForce GTS 250 and a GeForce GTX 260.

Comments

  • SiliconDoc - Friday, April 24, 2009 - link

    You failed to read his post, and therefore the context of my response, you IDIOT.
    Can you run a second ATI card for PhysX - NO.
    Can you run an ati card and a second NV for PhysX - not without a driver hack - check techpowerup for the how to and files, as I've already mentioned.
    So, THAT'S WHAT WE WERE TALKING ABOUT, DUMMY.
    Now you can take your stupidity along with you, no one can stop it.
  • pizzimp - Friday, April 3, 2009 - link

    From an objective point of view there is not really a clear winner. At the lower resolutions do you really care if you are getting 80 FPS Vs 100 FPS?

    IMO it is the higher resolutions that matter. I would think any real gamer is always looking to upgrade their monitor :).

    I wonder how old you guys are that are posting? Who cares if something is "rebadged" or just an OC version of something? Bottom line is how does the card play the game?

    IMO both cards are good. It comes down to price for me.
  • SiliconDoc - Monday, April 6, 2009 - link

    Ahh, you just have to pretend framerates you can't see or notice, and only the top rate or the average, never the bottom framerate...
    Then you must discount ALL the OTHER NVIDIA advantages, from cuda, to badaboom, to better folding scores, to physx, to game release day evga drivers ready to go, to forced game profiles in nvidia panel none for ati - and on and on and on...
    Now, after 6 months of these red roosters screaming ati wins it all because it had the top resolution of the 30" monitor sewed up and lost in lower resolutions, these red roosters have done a 180 degree about face.... now the top resolution just doesn't matter -
    Dude, the red ragers are lying loons, it's that simple.
    The 2 year old 9800X core is the 4870 without ddr5. Think about that, and how deranged they truly are.
    I bet they have been fervently praying to their red god hoping that change doesn't come in the form of ddr5 on that old g80/g92/g92b core - because then instead of it competing with the 4850 - it would be a 4870 - and THAT would be an embarrassment - a severe embarrassment. The crowing of the red roosters would diminish... and they'd be bent over sucking up barnyard dirt and chickseed - for a long, long time. lol
    Oh well, at least ati might get 2 billion from Obama to cover its losses ... it's sad when a red rooster card could really use a bailout, isn't it?
  • helldrell666 - Friday, April 3, 2009 - link

    Well, you have a point there. But the card is still not operating on a WHQL driver, and the percentage of those who use 30" monitors is negligible compared to the owners of 22" / 24" monitors.

    I think this is probably due to the 256-bit memory interface compared to the 448-bit one the gtx275 has. Even at xbitlabs the 4890 drops significantly in performance compared to the gtx285.



  • 7Enigma - Friday, April 3, 2009 - link

    From a subjective point of view you may feel that way, but from an objective point of view there is a clear winner, and it is the 4890. Left 4 Dead and Call of Duty are the only two 30" display tests where the 275 significantly defeated the 4890. In all of the other tests the 4890 either dominated (G.R.I.D., Fallout 3) or was within 4% of the 275, which I would call a wash. At all other resolutions the 4890 was the undisputed leader. So I find it difficult to say there is no clear winner.

    What Nvidia should not have done was nerf their 22" and 24" resolutions with the latest drivers for the very few people that game at 30". To be honest I wish the article had included all of the results from the 182 drivers (they show just G.R.I.D. but allude to other games also having similarly reduced results except at the highest res). It could very likely be a wash then if the 275 is more competitive at the resolutions 99% of the people buying this level/price of card are going to be playing at.

    Anand, any way you could post, even just in the comments, the numbers for the rest of the games with the 182 Nvidia drivers. I don't mind doing the comparison work to see how much closer the 275 would be to the 4890 if they had kept the earlier drivers.
  • 7Enigma - Friday, April 3, 2009 - link

    Ah, I see now that the 185's are specifically to enable support for the 275 card. So you can't run the 275 with the 182 drivers. Still would be interesting to see all the data for what happened to the 285 using the newest drivers that decrease the performance at lower resolutions.
  • minime - Friday, April 3, 2009 - link

    First, thanks for your review(s). I'm a silent reader and word-of-mouth spreader for years.

    Second, don't you think reviewers should point their fingers a little more aggressively at power consumption? Not because it's trendy nowadays, but because it's just not sane to waste that much energy in idle (2D, anyone remember?) mode. I was thrilled by what you alone (don't take it as disrespect) were able to achieve on the SSD issue.
  • SiliconDoc - Monday, April 6, 2009 - link

    PSST! The ati cards have like 30 watts more power usage in idle - and like 3 watts less in 3D - so the power thing - well they just declare ati the winner... LOL
    They said they were "really surprised" at the 30 watts less in idle for the nvidia - they just couldn't figure it out - and kept rechecking ... but yeah... the 260 was kicking butt.. but... that doesn't matter - ati takes the win using 1-3 watts less in 3D..
    So, you know, the red roosters shall not be impugned!
    capiche?
  • VulgarDisplay - Friday, April 3, 2009 - link

    It appears that you may have had Vsync turned on which caps the game at 60fps in some of the CoD:W@W tests. It's pretty apparent something is up when the nVidia card has the same FPS at 1680x and 1920x. Either way it still seems like the 4890 wins at those resolutions which is different than most sites that pretty much say it's a wash across the board. I'll take nVidia's drivers over ATi's any day.
  • SiliconDoc - Monday, April 6, 2009 - link

    Hey any little trick that smacks nvidia down a notch is not to be pointed out.
