Final Words

Our first few weeks playing with PhysX have been a bit of a mixed bag. On one hand, the technology is genuinely exciting: game developers are promising support for it, two games already benefit from it, and the effects in supported games and demos do look good. On the other hand, only those two games currently support the hardware, the extent of the added content isn't worth the price of the card, promises can be broken, we've observed performance issues, and hardware is only as good as the software that runs on it.

Playing the CellFactor demo for a while, messing around in the Hangar of Doom, and blowing up things in GRAW and City of Villains is a start, but it is only a start. As we said before, we can't recommend buying a PPU unless money is no object and the games which do support it are your absolute favorites. Even then, the advantages of owning the hardware are limited and questionable (due to the performance issues we've observed).

Seeing City of Villains behave in the same manner as GRAW gives us pause about the ability of near-term titles to properly implement hardware physics. The situation is even worse if the issue is not in the software implementation. If spawning lots of effects on the PhysX card makes the system stutter, then it defeats the purpose of having such a card in the first place. And if similar effects are possible on the CPU or GPU with no greater a performance hit, then why spend $300?

Performance is a major issue, and without more tests to really get under the skin of what's going on, it is very hard for us to know whether there is a way to fix it. The solution could be as simple as making better use of the hardware while idle, or as complex as redesigning an entire game/physics engine from the ground up to take advantage of the hardware features AGEIA offers.
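
To put a concrete (if simplified) face on "making better use of the hardware while idle": the PhysX SDK is built around an asynchronous stepping model in which the game kicks off a simulation step, does unrelated CPU work while the PPU churns away, and only collects results at the end of the frame. The sketch below is purely illustrative and based on our reading of the 2.x API; the UpdateAI, UpdateAnimation, and SubmitDrawCalls calls are hypothetical stand-ins for game-side work, not anything taken from GRAW or City of Villains.

    #include "NxPhysics.h"   // Ageia PhysX 2.x SDK header

    // Hypothetical stand-ins for game-side work that does not depend on the
    // results of the physics step currently in flight.
    static void UpdateAI(float /*dt*/) { /* ... */ }
    static void UpdateAnimation(float /*dt*/) { /* ... */ }
    static void SubmitDrawCalls() { /* ... */ }

    // One frame of an asynchronous physics loop (illustrative only).
    void GameFrame(NxScene* scene, float dt)
    {
        scene->simulate(dt);     // kick the step off to the PPU (or software scene)
        scene->flushStream();    // make sure the command stream is actually sent

        UpdateAI(dt);            // CPU work overlaps the in-flight simulation
        UpdateAnimation(dt);
        SubmitDrawCalls();

        // Block only now, at the end of the frame, and pull the new poses.
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);
    }

A game that instead calls simulate() and fetchResults() back to back gets none of that overlap, which is one plausible way to end up with the kind of stutter we saw when large numbers of objects are spawned at once.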

We are still excited about the potential of the PhysX processor, but the practicality issue is not one that can be ignored. The issues are twofold: can developers properly implement PhysX support without impacting gameplay while still making the enhancements compelling, and will end users be willing to wait out the performance problems and the limited variety of titles until better implementations arrive in more games?

From a developer standpoint, PhysX hardware would provide a fixed resource. Developers love fixed resources, as one of the most difficult aspects of PC game design is targeting a wide range of system requirements. While it will be difficult to decide how best to use the hardware, once that decision is made, there is no question about what physics processing resources will be available. Hopefully this fact, combined with the potential for expanded creativity, will keep game developers interested in using the hardware.

As end users, we would like to say that the promise of upcoming titles is enough. Unfortunately, it is not, by a long shot. We still need hard and fast ways to properly compare the same physics algorithm running on a CPU, a GPU, and a PPU -- or at the very least, on a (dual/multi-core) CPU and a PPU. More titles must actually be released with full PhysX hardware support in production code. And the performance issues must go away: stuttering framerates have nothing to do with why people spend thousands of dollars on a gaming rig.
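
For what it's worth, the SDK itself makes the CPU-versus-PPU half of that comparison possible, since the same scene description can be created as either a software or a hardware scene. The sketch below is not the methodology behind any numbers in this article; it is a rough illustration of what an apples-to-apples timing run might look like, assuming the 2.x API's NxSceneDesc::simType switch, and the body count, step count, and simple box pile are arbitrary choices on our part.

    #include "NxPhysics.h"   // Ageia PhysX 2.x SDK header
    #include <chrono>
    #include <cstdio>

    // Time 'steps' simulation steps of an identical box-pile scene, created
    // either as a software (CPU) or hardware (PPU) scene.
    static double TimeScene(NxPhysicsSDK* sdk, NxSimulationType simType,
                            int bodyCount, int steps)
    {
        NxSceneDesc sceneDesc;
        sceneDesc.gravity = NxVec3(0.0f, -9.8f, 0.0f);
        sceneDesc.simType = simType;          // NX_SIMULATION_SW or NX_SIMULATION_HW
        NxScene* scene = sdk->createScene(sceneDesc);
        if (!scene) return -1.0;              // e.g. hardware scene with no PPU present

        NxPlaneShapeDesc planeDesc;           // static ground plane at y = 0
        NxActorDesc planeActorDesc;
        planeActorDesc.shapes.pushBack(&planeDesc);
        scene->createActor(planeActorDesc);

        for (int i = 0; i < bodyCount; ++i)   // identical workload for both runs
        {
            NxBodyDesc bodyDesc;
            NxBoxShapeDesc boxDesc;
            boxDesc.dimensions = NxVec3(0.5f, 0.5f, 0.5f);

            NxActorDesc actorDesc;
            actorDesc.shapes.pushBack(&boxDesc);
            actorDesc.body = &bodyDesc;
            actorDesc.density = 10.0f;
            // Loose 10x10 grid of stacked boxes so they collide and settle.
            actorDesc.globalPose.t = NxVec3((i % 10) * 1.1f,
                                            1.0f + (i / 100) * 1.1f,
                                            ((i / 10) % 10) * 1.1f);
            scene->createActor(actorDesc);
        }

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < steps; ++i)
        {
            scene->simulate(1.0f / 60.0f);
            scene->flushStream();
            scene->fetchResults(NX_RIGID_BODY_FINISHED, true);  // block until done
        }
        auto end = std::chrono::steady_clock::now();

        sdk->releaseScene(*scene);
        return std::chrono::duration<double>(end - start).count();
    }

    int main()
    {
        NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
        if (!sdk) return 1;

        const int bodies = 1000, steps = 600;  // ten seconds of simulated time
        std::printf("software: %.3f s\n", TimeScene(sdk, NX_SIMULATION_SW, bodies, steps));
        std::printf("hardware: %.3f s\n", TimeScene(sdk, NX_SIMULATION_HW, bodies, steps));

        NxReleasePhysicsSDK(sdk);
        return 0;
    }

This still would not settle the GPU question -- for that, the rendering load has to be part of the test -- but it is the kind of controlled, same-algorithm comparison we would like to see more of.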

Here's to hoping everything magically falls into place, and games like CellFactor are much closer than we think. (Hey, even reviewers can dream... right?)

Comments

  • AndreasM - Wednesday, May 17, 2006 - link

    In some cases (http://www.xtremesystems.org/forums/showthread.php...) the PPU does increase performance. The next version of Ageia's SDK (ETA July) is supposed to support all physics effects in software; at the moment liquid and cloth effects are hardware only, which is why some games like Cellfactor can't really run in software mode properly (yet). Hopefully Immersion releases a new version of their demo with official software support after Ageia releases their 2.4 SDK.
  • UberL33tJarad - Wednesday, May 17, 2006 - link

    How come there's never a direct comparison between CPU and PPU using the same physics? Making the PPU do 3x the work and not losing 3x the performance doesn't seem so bad. It puts the card in a bad light because 90% of the people who read this article will skip the text and go straight for the graphs. I know it can't be done in GRAW without different sets of physics (Havok for everything, then Ageia for explosions), but why not use the same Max Physics Debris Count?
  • Genx87 - Wednesday, May 17, 2006 - link

    I still contend it is a GPU limitation from having to render the higher number of objects.

    One way to test this is to set up identical systems, one with SLI and the other with a single GPU.

    1. Test the difference between the two systems without hardware physics applied so we get an idea of how much the game scales.
    2. Then test the same setups with hardware physics enabled and note any difference. My theory is the number of objects that need to be rendered is killing the GPUs.

    There is definitely a bottleneck, and it would be great if an article really tried to get to the bottom of it. Is it the CPU, PPU or GPU? It doesn't appear that the CPU is "that" big an issue, as the difference between the FX57 and Opty 144 isn't that big.

  • UberL33tJarad - Wednesday, May 17, 2006 - link

    Well, that's why I would be very interested if some benchmarks could come out of this demo (http://pp.kpnet.fi/andreasm/physx/). The low res and lack of effects and textures make it a great example for testing CPU vs PPU strain. One guy said he went from <5fps to 20fps (http://www.xtremesystems.org/forums/showpost.php?p...), which is phenomenal.

    You can run the test in software or hardware mode, and it has 3k objects interacting with each other.

    Also, if you want to REALLY strain a system, try this demo (http://www.novodex.com/rocket/NovodexRocket_V1_1.e...). Some guy on XS tried a 3GHz Conroe and got <3fps.
  • DigitalFreak - Wednesday, May 17, 2006 - link

    Good idea.
  • maevinj - Wednesday, May 17, 2006 - link

    "then it is defeating its won purpose"
    should be one

  • JarredWalton - Wednesday, May 17, 2006 - link

    Actually, I think it was supposed to be "own", but I reworded it anyway. Thanks.
  • Nighteye2 - Wednesday, May 17, 2006 - link

    2 things:

    I'd like to see a comparison done with an equal level of physics, even if it's the low level. Such a comparison could be informative about the bottlenecks. In CoV you can set the number of particles - do tests at 200 and 400 without the PhysX card, and tests at 400, 800 and 1500 with the PhysX card. Show how the physics scale with and without the card.

    Secondly, do those slowdowns also occur in Cellfactor and UT2007 when objects are created? It seems to me like the slowdown is caused by suddenly having to route part of the data over the PPU, instead of using the PPU for object locations all the time.
  • DerekWilson - Wednesday, May 17, 2006 - link

    The real issue here is that the type of debris is different. Lowering the number on the PhysX card still gives me things like packing peanuts, while software never does.

    It is still an apples to oranges comparison. But I will play around with this.
  • darkdemyze - Wednesday, May 17, 2006 - link

    Seems there is a lot of "theory" and "ideal advantages" surrounding this card.

    Just as the chicken-and-egg comparison states, it's going to be a tough battle for AGEIA to get this new product going with such a lack of support from developers. I seriously doubt many people, even the ones who have the money, will want a product they don't get anything out of besides a few extra boxes flying through the air or a couple of extra grenade shards coming out of an explosion, when there is such a decrease in performance.

    At any rate, seems like just one more hardware component to buy for gamers. Meh.
