Final Words
Our first few weeks playing with PhysX have been a bit of a mixed bag. On one hand, the technology is exciting, game developers are promising support for it, two games already benefit from it, and the effects in supported games and demos do look good. On the other hand, only two games currently support the hardware, the extent of the added content isn't worth the price of the card, promises can be broken, we've observed performance issues, and hardware is only as good as the software that runs on it.

Playing the CellFactor demo for a while, messing around in the Hangar of Doom, and blowing things up in GRAW and City of Villains is a start, but it is only a start. As we said before, we can't recommend buying a PPU unless money is no object and the games which do support it are your absolute favorites. Even then, the advantages of owning the hardware are limited and questionable due to the performance issues we've observed.
Seeing City of Villains behave in the same manner as GRAW gives us pause about the ability of near-term titles to properly implement hardware physics support. The situation is even worse if the issue is not in the software implementation. If spawning lots of effects on the PhysX card makes the system stutter, then it defeats the purpose of having such a card in the first place. And if similar effects are possible on the CPU or GPU with a comparable performance hit, then why spend $300?
Performance is a large issue, and without more tests to really get under the skin of what's going on, it is very hard for us to know if there is a way to fix it or not. The solution could be as simple as making better use of the hardware while idle, or as complex as redesigning an entire game/physics engine from the ground up to take advantage of the hardware features offered by AGEIA.
We are still excited about the potential of the PhysX processor, but the practicality issue is not one that can be ignored. The issues are twofold: can developers properly implement support for PhysX without impacting gameplay while still making the enhancements compelling, and will end users be willing to wait out the problems with performance and variety of titles until there are better implementations in more games?
From a developer standpoint, PhysX hardware would provide a fixed resource. Developers love fixed resources as one of the most difficult aspects of PC game design is targeting a wide range of system requirements. While it will be difficult to decide how to best use the hardware, once the decision is made, there is no question about what type of physics processing resources will be afforded. Hopefully this fact, combined with the potential for expanded creativity, will keep game developers interested in using the hardware.
As end users, we would like to say that the promise of upcoming titles is enough. Unfortunately, it is not by a long shot. We still need hard and fast ways to properly compare the same physics algorithm running on a CPU, a GPU, and a PPU -- or at the very least, on a (dual/multi-core) CPU and a PPU. More titles must actually ship with full PhysX hardware support in production code. And the performance issues must be eliminated: stuttering framerates are not why people spend thousands of dollars on a gaming rig.
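To make the "same algorithm on different processors" comparison concrete, here is a minimal sketch of what a CPU-side baseline might look like: a toy debris simulation with naive pairwise collision checks, timed per frame. This is not the article's methodology and not how the PhysX SDK works internally; the object count, timestep, and particle layout are all assumptions chosen purely for illustration.

```python
import time

def step(positions, velocities, dt=1.0 / 60.0, radius=0.1):
    """One simulation step: gravity integration plus a naive O(N^2)
    sphere-sphere overlap count (a stand-in for real contact resolution)."""
    n = len(positions)
    for i in range(n):
        velocities[i][1] -= 9.81 * dt  # gravity on the y axis
        for k in range(3):
            positions[i][k] += velocities[i][k] * dt
    contacts = 0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((positions[i][k] - positions[j][k]) ** 2 for k in range(3))
            if d2 < (2 * radius) ** 2:
                contacts += 1
    return contacts

# Hypothetical scene: a grid of 200 "debris" particles (cf. the 422 vs. 1500
# debris counts discussed for City of Villains).
n = 200
positions = [[(i % 20) * 0.05, (i // 20) * 0.05, 0.0] for i in range(n)]
velocities = [[0.0, 0.0, 0.0] for _ in range(n)]

start = time.perf_counter()
contacts = step(positions, velocities)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"{n} objects, {contacts} contacts, {elapsed_ms:.2f} ms per step")
```

Running the identical step on different hardware backends and comparing milliseconds per frame at matched object counts is the kind of controlled test the paragraph above is asking for; comparing 1500 hardware objects against 422 software objects is not.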
Here's to hoping everything magically falls into place, and games like CellFactor are much closer than we think. (Hey, even reviewers can dream... right?)
67 Comments
yanyorga - Monday, May 22, 2006 - link
Firstly, I think it's very likely that there is a slowdown due to the increased number of objects that need to be rendered, giving credence to the apples/oranges argument.

However, I think it is possible to test where the bottlenecks are. As someone already suggested, testing in SLI would show whether there is an increased GPU load (to some extent). Also, if you test using a board with a second GPU slot which is only 8x and put only one GPU in that slot, you will be left with at least 8x on the PCI bus. You could also experiment with various overclocking options, focusing on the multipliers and bus.
Is there any info anywhere on how to use the PPU for physics, or development software that makes use of it?
Chadder007 - Friday, May 26, 2006 - link
That makes me wonder why City of Villains was tested with the PPU at 1500 debris objects while software mode ran at 422 debris objects. AnandTech needs to go back and test WITH a PPU at 422 debris objects against software-only mode to see if there is any difference.

rADo2 - Saturday, May 20, 2006 - link
Well, people now have a pretty hard time justifying spending $300 on a decelerator.

I am afraid, however, that Ageia will be more than willing to "slow down a bit" their future software drivers, to show some real-world "benefits" of their decelerator. By adding more features to their SW (CPU) emulation, they may very well slow it down, so that new reviews will finally put their HW in first place.
But these reviews will still mean nothing, as they will compare Ageia's SW drivers, intentionally made to perform badly, with their HW.
Ageia PhysX is a totally wrong concept. Havok FX can do the same via SSE/SSE2/SSE3 and/or SM 3.0 shaders, and it can also use dual-core CPUs. That is the future and the right approach, not an additional slow card making a lot of noise.
Ageia's approach is just nonsense and stupid marketing.
Nighteye2 - Saturday, May 20, 2006 - link
Do not take your fears to be facts. I think Ageia's approach is the right one, but it'll need to mature - and to really get used. The concept is good, but the execution so far is still a bit lacking.

rADo2 - Sunday, May 21, 2006 - link
Well, I think Ageia's approach is the worst possible one. If game developers are able to distribute threads between a single-core CPU and the PhysX decelerator, they should be able to do the same with dual-core CPUs and/or SM 3.0 shaders. That is the right approach. With quad-core CPUs they will be able to use 4 cores, within 5-6 years about 8 cores, etc. The PhysX decelerator is a wrong direction; it is useful only for a very limited portfolio of calculations, while the CPU can do them as well (probably even faster).

I definitely do NOT want to see Ageia succeed.
Nighteye2 - Sunday, May 21, 2006 - link
That's wrong. I tested it myself, running CellFactor without a PPU on my dual-core PC. Even without the liquid and cloth physics, large explosions with a lot of debris still caused large slowdowns, and it stayed slow until most of the flying debris stopped moving.

On videos I saw of people playing with a PPU, slowdowns also occurred but lasted only a fraction of a second.
Besides, the CPU is also needed for AI, and it does not have enough memory bandwidth to do proper physics. If you want really detailed physics, dedicated hardware on a PPU is the best way to go.
DigitalFreak - Thursday, May 18, 2006 - link
Don't know how accurate this is, but it might give the AT guys some ideas: http://www.hardforum.com/showthread.php?t=1056037
Nighteye2 - Saturday, May 20, 2006 - link
I tried it without the PPU - and there are very noticeable slowdowns when things explode and lots of crates are moving around. And that's from running 25 FPS without moving objects. I imagine the performance hit at higher framerates will be even bigger. At least without a PPU.

Clauzii - Thursday, May 18, 2006 - link
The German site Hartware.de showed this in their test:

Processor Type: AGEIA PhysX
Bus Technology: 32-bit PCI 3.0 Interface
Memory Interface: 128-bit GDDR3 memory architecture
Memory Capacity: 128 MByte
Memory Bandwidth: 12 GBytes/sec.
Effective Memory Data Rate: 733 MHz
Peak Instruction Bandwidth: 20 Billion Instructions/sec
Sphere-Sphere collision/sec: 530 Million max
Convex-Convex(Complex) collisions/sec.: 533,000 max
If graphics work is moved to the card, 12 GB/s of memory bandwidth will be limiting, I think :)
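The quoted figures are at least self-consistent: a 128-bit bus at a 733 MHz effective data rate works out to roughly the listed 12 GB/s. A quick sanity check, using only the numbers from the list above:

```python
# Sanity-checking the Hartware.de figures quoted above (all values taken
# from that list, not independently verified).
bus_width_bits = 128        # "128-bit GDDR3 memory architecture"
effective_rate_hz = 733e6   # "Effective Memory Data Rate: 733 MHz"

bytes_per_transfer = bus_width_bits / 8              # 16 bytes per transfer
bandwidth_gb_s = bytes_per_transfer * effective_rate_hz / 1e9

print(f"Peak memory bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~11.7 GB/s
```

That ~11.7 GB/s matches the "12 GBytes/sec" line, which suggests the 733 MHz effective rate (not the 500 MHz base clock) is what the bandwidth figure is built from.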
Would be nice to see the PhysX RAM @ the specced 500 MHz, just to see if it has anything to do with that issue.
Clauzii - Thursday, May 18, 2006 - link
Not test - preview, sorry.