Final Words

The sheer amount of data contained in the review is overwhelming, and if you've made it this far, congratulations.

Architecturally, ATI and NVIDIA both base their workstation-level parts on consumer-level boards, while the 3Dlabs workstation-only approach is tried and true in the marketplace. The similarities between the architectures serve to validate all of these parts as high-quality workstation solutions.

Among the disappointments that we suffered during testing was the lack of a GLSL benchmark that could balance out the picture we saw with Shadermark. The consumer-based architectures of ATI and NVIDIA have a natural bias toward HLSL support, while 3Dlabs hasn't needed to put much effort into optimizing its HLSL path. The firm grasp that OpenGL has as a standard among workstation applications goes well beyond inertia: the clean, state-driven approach of OpenGL is predictable, well defined, and powerful. It is only natural for 3Dlabs to support GLSL first and foremost, while NVIDIA and ATI cater to Microsoft before anyone else. We are working to solve this problem and hope to bring a solution to our next workstation article.

We also ran into an issue while testing our Quadro FX 4000 on the DK8N board. Running SPECviewperf without setting the affinity of the process to a single processor resulted in a BSOD (stop 0xEA) error. We are working with NVIDIA to determine the source of this issue.
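
For readers who hit the same problem, the workaround is to pin the SPECviewperf process to one CPU before it runs, for example via Task Manager or the Windows `start /affinity` switch. As a rough, platform-specific sketch of the same idea (not the exact procedure we used), here is how a process can pin itself to a single core using only Python's standard library; note that `os.sched_setaffinity` is Linux-only and the choice of core 0 is arbitrary:

```python
import os

# Restrict the current process (pid 0 means "this process") to CPU 0,
# mirroring the single-processor affinity workaround described above.
# os.sched_setaffinity is only available on Linux; on Windows the same
# effect comes from Task Manager or the SetProcessAffinityMask API.
os.sched_setaffinity(0, {0})

# Verify that the scheduler now sees only one eligible CPU.
print(os.sched_getaffinity(0))  # prints {0}
```

On a dual-processor box like the DK8N testbed, this kind of pinning keeps the benchmark from migrating between CPUs, which is what appeared to trigger the stop error.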

In tallying up the final results of our testing today, we have to take a look at the situation from a couple of different perspectives.

The largest market in workstation graphics is CAD/CAM, and most large-scale engineering and design firms have a very large budget for workstation components. In those cases, top productivity is sought at all times, so the best-performing part for the application in use will be purchased with little regard for cost. As most of our benchmarks show, the NVIDIA Quadro FX 4000 is able to push ahead of the competition; notable exceptions are the EnSight and SolidWorks SPECviewperf viewsets. Generally speaking, an engineer who needs the highest-performing AGP workstation part on the market today will need the Quadro FX 4000, and cost will be no object.

The DCC workstation market is smaller than the CAD/CAM segment and includes more small to mid-sized design houses. Here, cost is more of a factor than at a company that, for instance, designs cars. Productivity is still important when choosing a workstation part, but price/performance matters much more. With the 3Dlabs Wildcat Realizm 200 coming in just behind the Quadro FX 4000 in most cases, its significantly lower cost makes it a much better value for those on a budget. The street price of the Quadro FX 4000 is at least $700 more than either the Realizm 200 or the FireGL X3-256. That's almost enough to pick up a second 3Dlabs or ATI solution.

The ATI FireGL X3-256 is really targeted at an upper mid-range workstation position, and its performance numbers hit that target very solidly. The ATI part is, after all, a 12-pixel-pipe solution clocked at 490MHz, while ATI's high-end consumer part is a 16-pixel-pipe part clocked at 500MHz. Bringing out an AGP-based solution derived from the XT line with 1.6ns GDDR3 (rather than the 2.0ns of the X3) would very likely push ATI up in performance against its competition. It might simply be that ATI doesn't want to step on its FireGL V7100 PCI Express part, which is just about what we want to see in a high-end workstation solution. When all is said and done, the FireGL X3-256 is a very nice upper mid-range workstation card that even tops the high-end AGP workstation parts in a benchmark or two. Its antialiased line support is faster and smoother looking than the competition's in most cases, but when many lines are piled on top of one another, the result can look a little blurrier than on the other two cards.

The real downside of the FireGL X3-256 is that we were able to find Wildcat Realizm 200 cards for lower prices. The FireGL parts are currently selling for very nearly their MSRP, which may indicate that ATI is having some availability issues even on the workstation side. With the 3Dlabs solution priced at the same level as the ATI solution, there is almost no reason not to go with the higher-performing Wildcat Realizm 200.

But if your line of work requires the use of HLSL shaders, or you are a game developer hoping to do double duty with DCC applications and work with in-engine tools, the 3Dlabs Wildcat Realizm 200 is not for you. GLSL shaders are quite well supported on the Realizm line, but anything having to do with HLSL runs very slowly. Many of the Shadermark shaders looked fine, but the more complex ones seemed to break down. This can likely be fixed through driver updates if 3Dlabs addresses its HLSL issues in a timely and efficient manner. If price/performance is an issue, a workstation part is called for, and HLSL is needed (say, you're with a game design firm and you want to test and run your HLSL shaders in your DCC application), then we can give a thumbs up to the FireGL X3-256.

We were also disappointed to see that the Wildcat Realizm didn't produce the expected line stippling under 3D Studio Max 6 SP1. There are line stipple tests in the SPECviewperf 8.0.1 benchmark that appeared to run fine, so we were rather surprised to see this. A fix for the flickering viewports when using the custom driver is also something that we want to see.

The final surprise of the day was how poorly the consumer-level cards performed in comparison to the rest of the lineup. Even though we took the time to select the highest-clocked monstrosities that we could find, there was nothing we could do to push past the workstation parts in performance most of the time. There were some cases where individual tests were faster, but not the types of tests most used in workstation settings. Generally, pushing vertices and lines, accelerating OpenGL state and logic operations, supporting overlay planes, having multiple clip regions, supporting hardware two-sided lighting in the fixed-function pipeline, and all the other extra goodies of workstation-class hardware simply make these applications run a lot faster.

On the high end of performance in the AGP workstation market, we have the NVIDIA Quadro FX 4000. The leader in price/performance for AGP workstations at the end of 2004 is the 3Dlabs Wildcat Realizm 200. Hopefully, 2005 and our first PCI Express workstation graphics review will be as exciting as this one.

  • Jeanlou - Thursday, December 1, 2005 - link

    Hello,
    I just bumped into AnandTech Video Card Tests, and I'm really impressed !

    As a Belgian Vision Systems Integration Consultant (since 1979), I'm very interested in the ability to compare these 3 cards (Realizm 200 vs FireGL X3 256 vs NVIDIA Quadro FX 4000).

    I just had a bad experience with the Realizm 200 (!)

    On an ASUS NCCH-DL motherboard, Dual Xeon 2.8GHz, 2GB DDR 400, Seagate SCSI Ultra 320 HDD, 2 EIZO monitors (Monitor N°1 = L985EX at 1600x1200 px, Monitor N°2 = L565 at 1280x1024 px), Windows XP Pro SP2 x32bit partition C:\ 16GB, Windows XP Pro x64bit edition partition D:\ 16GB, plus extended partitions (2 logical, E:\ and F:\). All NTFS.

    Using the main monitor for image analysis (quality control) and the slave monitor for tools, I was unable to get a stable image at 1600 by 1200 pixels, while the Wildcat4 7110, or even the VP990 Pro, has a very stable screen at maximum resolution. But the 7110 and the VP990 Pro don't have drivers for Windows XP x64bit.

    Tried everything: latest BIOS, latest drivers for the chipset...
    Even 3Dlabs was unable to give me the necessary support and does not answer anymore!

    As soon as I reduced the resolution of the main monitor to 1280 by 1024, everything was stable, but that's not what I want; I need the maximum resolution on the main monitor.

    The specs from the 3Dlabs resolution table give 3840 by 2400 pixels maximum!

    I sent it back, and I'm looking for another card.

    I wonder if the FireGL X3 256 will do the job?
    We also use another monitor from EIZO (S2410W) with 1920 by 1200 pixels!
    What exactly are the possible resolutions of the FireGL X3 256 using 2 monitors? I cannot find it in the specs.

    Any comment will be appreciated,

    Best regards,
    Jean
  • kaissa - Sunday, February 20, 2005 - link

    Excellent article. I hope that you make workstation graphics card comparisons a regular feature. How about an article on workstation notebooks? Thanks a lot.
  • laverdir - Thursday, December 30, 2004 - link

    dear derek wilson,

    could you tell us how big the performance difference between NUMA and UMA is in general in these tests?

    and it would be great if you could post Maya-related results for the Quadro FX 4000 with NUMA enabled.


    seasonal greetings
  • RedNight - Tuesday, December 28, 2004 - link

    This is the best workstation graphics card review I have read in ages. Not only does it present the positives and negatives of each of the principal cards in question, it presents them in relation to high-end mainstream cards, and thereby helps many, including myself, understand the real differences in performance. Also, by innovatively including AutoCAD and gaming tests, one gets a clear indication of when the workstation cards are necessary and when they would be a waste of money. Thanks
  • DerekWilson - Monday, December 27, 2004 - link

    Dubb,

    Thanks for letting us know about that one :-) We'll have to have a nice long talk with NV's workstation team about what exactly is going on there. They very strongly gave us the idea that the feature set wasn't present on GeForce cards.

    #19, NUMA was disabled because most people running a workstation with 4 or fewer GB of RAM on a 32-bit machine will not be running with the PAE kernel installed. We wanted to test with a setup most people would be running under the circumstances. We will test NUMA capabilities in the future.

    #20,

    When we test workstation CPU performance or system performance, POVRay will be a possible inclusion. Thanks for the suggestion.

    Derek Wilson
  • mbhame - Sunday, December 26, 2004 - link

    Please include POVRay benchies in Workstation tests.
  • Myrandex - Saturday, December 25, 2004 - link

    I wonder why NUMA was fully supported but yet disabled. Maybe instabilities or something.
  • Dubb - Friday, December 24, 2004 - link

    http://newbietech.net/eng/qtoq/index.php

    http://forums.guru3d.com/showthread.php?s=2347485b...
  • Dubb - Friday, December 24, 2004 - link

    uhhh.. my softquadro'd 5900 ultra begs to differ. as would all the 6800 > qfx4000 mods being done by people on guru3d's rivatuner forum.

    I thought you guys knew that just because NVIDIA says something doesn't mean it's true?

    they must consider "physically different silicon" to be "we moved a resistor or two"...
  • DerekWilson - Friday, December 24, 2004 - link

    By high-end features, I wasn't talking about texturing or programmatic vertex or fragment shading (which is high-end in the consumer space).

    I was rather talking about hardware support for: AA lines and points, overlay plane support, two-sided lighting (fixed function path), logic operations, fast pixel read-back speeds, and dual 10-bit 400MHz RAMDACs and 2 dual-link DVI-I connectors supporting 3840x2400 on a single display (the IBM T221 comes to mind).

    There are other features, but these are key. In products like Maya and 3D Studio, not having overlay plane support creates an absolutely noticeable performance hit. It really does depend on how you push the cards. We do prefer the in-application benchmarks to SPECviewperf. Even the SPECapc tests can give a better feel for where things will fall -- because the entire system is a factor rather than just the gfx card and CPU.

    #14, Dubb -- I hate to be the one to tell you this -- GeForce and Quadro are physically different silicon now (NV40 and NV40GL). AFAIK, ever since GF4/Quadro4, it has been impossible to softquadro an NVIDIA card. The Quadro team uses the GeForce as its base core, but then adds on workstation-class features.
