Final Words

Our first few weeks playing with PhysX have been a mixed bag. On one hand, the technology is genuinely exciting: game developers are promising support, two games already benefit from it, and the effects in supported games and demos do look good. On the other hand, only two games currently support the hardware, the added content isn't worth the hardware's price, promises can be broken, we've observed performance issues, and hardware is only as good as the software that runs on it.

Playing the CellFactor demo for a while, messing around in the Hangar of Doom, and blowing up things in GRAW and City of Villains is a start, but it is only a start. As we said before, we can't recommend buying a PPU unless money is no object and the games which do support it are your absolute favorites. Even then, the advantages of owning the hardware are limited and questionable (due to the performance issues we've observed).

Seeing City of Villains behave in the same manner as GRAW gives us pause about the capability of near term titles to properly support and implement hardware physics support. The situation is even worse if the issue is not in the software implementation. If spawning lots of effects on the PhysX card makes the system stutter, then it defeats the purpose of having such a card in the first place. If similar effects could be possible on the CPU or GPU with no less of a performance hit, then why spend $300?

Performance is a large issue, and without more tests to really get under the skin of what's going on, it is very hard for us to know if there is a way to fix it or not. The solution could be as simple as making better use of the hardware while idle, or as complex as redesigning an entire game/physics engine from the ground up to take advantage of the hardware features offered by AGEIA.

We are still excited about the potential of the PhysX processor, but the practicality issue is not one that can be ignored. The issues are twofold: can developers properly implement support for PhysX without impacting gameplay while still making the enhancements compelling, and will end users be willing to wait out the performance problems and the limited variety of titles until better implementations arrive in more games?

From a developer standpoint, PhysX hardware would provide a fixed resource. Developers love fixed resources as one of the most difficult aspects of PC game design is targeting a wide range of system requirements. While it will be difficult to decide how to best use the hardware, once the decision is made, there is no question about what type of physics processing resources will be afforded. Hopefully this fact, combined with the potential for expanded creativity, will keep game developers interested in using the hardware.

As an end user, we would like to say that the promise of upcoming titles is enough. Unfortunately, it is not by a long shot. We still need hard and fast ways to properly compare the same physics algorithm running on a CPU, a GPU, and a PPU -- or at the very least, on a (dual/multi-core) CPU and PPU. More titles must actually be released and fully support PhysX hardware in production code. Performance issues must not exist, as stuttering framerates have nothing to do with why people spend thousands of dollars on a gaming rig.

Here's to hoping everything magically falls into place, and games like CellFactor are much closer than we think. (Hey, even reviewers can dream... right?)


67 Comments


  • phusg - Wednesday, May 17, 2006 - link

    > Performance issues must not exist, as stuttering framerates have nothing to do with why people spend thousands of dollars on a gaming rig.

    What does this sentence mean? No, really. It seems to try to say more than just, "stuttering framerates on a multi-thousand dollar rig is ridiculous", or is that it?
  • nullpointerus - Wednesday, May 17, 2006 - link

    I believe he means that the card can't survive in the market if it dramatically lowers framerates on even high end rigs.
  • DerekWilson - Wednesday, May 17, 2006 - link

    check plus ... sorry if my wording was a little cumbersome.
  • QChronoD - Wednesday, May 17, 2006 - link

    It seems to me like you guys forgot to set a baseline for the system with the PPU card installed. From the picture you posted in the CoV test, the number of physics objects looks like it can be adjusted when AGEIA support is enabled. You should have run a benchmark with the card installed but with the level of physics kept the same. That would eliminate the loading on the GPU as a variable: the GPU load would remain nearly the same, with the only difference being the CPU and PPU spending time sending info back and forth.
  • Brunnis - Wednesday, May 17, 2006 - link

    I bet a game like GRAW actually would run faster if the same physics effects were run directly on the CPU instead of this "decelerator". You could add a lot of physics before the game would start running nearly as bad as with the PhysX card. What a great product...
  • DigitalFreak - Wednesday, May 17, 2006 - link

    I'm wondering the same thing.

    "We still need hard and fast ways to properly compare the same physics algorithm running on a CPU, a GPU, and a PPU -- or at the very least, on a (dual/multi-core) CPU and PPU."

    Maybe it's a requirement that the developers have to intentionally limit (via the sliders, etc.) how many "objects" can be generated without the PPU in order to keep people from finding out that a dual core CPU could provide the same effects more efficiently than their PPU.
  • nullpointerus - Wednesday, May 17, 2006 - link

    Why would ASUS or BFG want to get mixed up in a performance scam?
  • DerekWilson - Wednesday, May 17, 2006 - link

    Or EPIC with UnrealEngine 3?

    Makes you wonder what we aren't seeing here, doesn't it?
  • Visual - Wednesday, May 17, 2006 - link

    So what you're showing in all the graphs is lower performance with the hardware than without it. WTF?
    Yes, I understand that testing without the hardware is only faster because it's running lower detail, but that's not clearly visible from a few glances over the article... and you do know how important the first impression really is.

    Now I just gotta ask, why can't you test both software and hardware with the same level of detail? That's what a real benchmark should show, at least. Can't you request some complete software emulation from AGEIA that can fool the game into thinking the card is present, and turn on all the extra effects? If not from AGEIA, maybe from ATI or nVidia, who seem to have worked on such emulations that even use their GFX cards. In the worst case, if you can't get the software mode to have all the same effects, why not at least turn off those effects when testing the hardware implementation? In City of Villains, for example, why is the software test run with a lower "Max Physics Debris Count"? (Though I assume there are other effects that get automatically enabled with the hardware present and aren't configurable.)

    I just don't get the point of this article... if you're not able to compare apples to apples yet, then don't even bother with an article.
  • Griswold - Wednesday, May 17, 2006 - link

    I think they clearly stated in the first article that GRAW, for example, doesn't allow higher debris settings in software mode.

    But even if it did, a $300 part that is supposed to be lightning fast and what not should be at least as fast as ordinary software calculations - at a higher debris count.

    I really don't care much about apples and oranges here. The message seems clear: right now it isn't performing up to snuff, for whatever reason.
