Far Cry Performance

Far Cry remains one of the more graphically intense games, and it paints a somewhat different performance picture for ATI and NVIDIA than Doom 3 and Half-Life 2 do. We tested with the 1.33 patch installed, though we didn't attempt to force HDR or instancing. We ran the standard Ubisoft benchmarks for the Regulator, Research, Training, and Volcano levels and then averaged the results across these four benchmarks to come up with the final score. If you'd prefer to see the performance on the individual levels, you can download the 18 graphs in a Zip file. We tested with everything - including water - set to Very High quality. (Remember, this is a CPU overclocking article, not a GPU performance article.)

At 640x480, 800x600, and 1024x768, overclocking nets gains of 41%, 29%, and 13%, respectively. As with Doom 3, adding antialiasing into the mix at 1024x768 really stresses the GPU, limiting overclocking gains to 5%. As we mentioned in our latest Mid-Range Guide, gamers are much better off pairing a slower CPU with a faster GPU rather than the other way around. With some judicious overclocking, even the slowest Athlon 64 Venice part can achieve great performance with the right GPU. We should also mention that we haven't shown results for 640x480 4xAA and 800x600 4xAA on any of the games, for a reason: if you can't run at 1024x768 or higher, you probably can't run 4xAA well either. We would definitely increase resolution to at least 1024x768 before worrying about antialiasing.

The trend of more expensive RAM realizing higher scores continues, as expected. This time, the gap grows to as much as 9%, indicating that Far Cry is slightly more demanding of the memory subsystem than other games. The 2T timing at 9x300 really kills performance, coming in 15% slower than 9x300 OCZ RAM. If you opt for the Venice 3000+ chip, you might want to spend a bit extra for 2-3-2 RAM as opposed to 2.5-3-3 RAM. The extra $15 or so should bring up performance at 9x300 substantially.

101 Comments

  • JarredWalton - Wednesday, October 5, 2005 - link

    Sorry if I missed this in the article. The reason a 3200+ may be better is the 10X multiplier vs. 9X. Sure, the DFI board used worked pretty well at either setting, but there are many boards that won't handle much above 250 MHz CPU bus stably. Needless to say, there's a reason 2800 MHz was only included at one setting. While it still wasn't stable, it would actually run most benchmarks at 10x280. 9x311 wouldn't even load Windows half the time. The extra $50 for added flexibility is also nice: you can try 9x300, 10x270, PC3200, PC2700, etc. to find the most stable, highest performing option.
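
    [Editor's note: a quick, minimal sketch of the multiplier-times-bus arithmetic behind the combinations Jarred lists above, assuming the final core clock is simply CPU multiplier x HTT bus speed. The stability notes are echoed from the comment itself, not something this snippet can verify.]

        # Athlon 64 core clock = CPU multiplier x HTT "bus" speed (MHz).
        combos = [
            (9, 300, "3000+ max multiplier, needs a ~300 MHz bus"),
            (10, 270, "a 3200+ reaches the same 2700 MHz at a lower bus"),
            (10, 280, "ran most benchmarks, but not fully stable"),
            (9, 311, "often wouldn't even load Windows"),
        ]
        for mult, bus, note in combos:
            print(f"{mult} x {bus} MHz = {mult * bus} MHz core clock ({note})")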
  • Bakwetu - Wednesday, October 5, 2005 - link

    Thanks for a great article. I haven't been following the development so carefully since I upgraded last time (with one of the last unlocked Barton 2500+), so this article was a most welcome refresher for me, as I will probably get a x2 3800 rig in the near future.

    Last time I checked, using a naked fingertip to smear out the paste was a big no-no. I have always used either a washed razor blade or a fingertip in a clean plastic bag. The Arctic Silver once sold without silver was a fake, copied product as far as I know. The real stuff, in its many forms over the years, has definitely shown itself to be a good product.
  • javalino - Wednesday, October 5, 2005 - link

    First, great article, Jarred.
    Second, I've been an Anand fan for as long as I can remember (1999-2000).
    Third, since your conclusion focuses on the overclocking dilemma: why spend so much on an overclocked system (or on a powerful system) if your target is games, which are GPU limited? That $125, like you said, would be more useful in a video card.
    My idea is an article about the "Benefits, Costs, and Lessons Learned" of building a system for games: how much performance gain would there be from systems running high-end cards at high resolutions and settings (like 1600x1200, with an extra 4xAA/16xAF) across different systems - an FX vs. an overclocked Athlon 64 vs. an overclocked P4 vs. a Pentium M vs. an overclocked Athlon XP, for example. The conclusion would be how much you "need" to pay for a decent gaming machine that can play all current (and maybe future) games with great image quality and performance.

    Maybe the answer is obvious (go with the best FPS/price option possible), or maybe not.
  • AtaStrumf - Tuesday, October 4, 2005 - link

    Great article Jarred!!! I really like your choice of value parts and how you critically assessed the results based on bang for the buck. And finally, you did away with pages and pages of bar charts and combined them into line-scaling charts. How long have I been asking for something like that??? Now we can finally see the REAL difference (or lack of it) and analyse results properly, without having to go back and forth between tens of bar charts. Tell Anand to upgrade your graphing engine ASAP.

    I am a little worried about those voltages though. This sure looks like a bad chip to me (OC-wise). WAY too high voltages. I would not go over 1.45 - 1.50 V or else you risk screwing up the chip. You see, the memory controller on the chip doesn't like voltages that are too high, and though it will still work, the chip will get slower eventually. Hard to explain really, but I know my new 2.2 GHz A64 is faster and much cooler than my old 2.4 GHz A64 (same core - Newcastle, same cooler, same RPM, same case, same ...), which I bought from some crazy overclocker (last time, BTW). The 2.4 GHz one gave me really shitty results in FAH for weeks. That's the only explanation I have so far anyway. Maybe you can do an investigation into this -- burn in one A64 Venice at, say, 1.6V 24/7 for a few weeks and let's see what happens. I just don't have the $$$ and time to take the risk. I'd be very happy to hear from other forum members on this as well.

    Anyway, glad to see at least part of AT is back to the high quality standards we were used to.
  • AtaStrumf - Tuesday, October 4, 2005 - link

    Or maybe it's the SOI process that is to blame for not taking high voltages too kindly, or maybe both - I don't know yet - but I would definitely advise caution going over 1.5V (the default for 0.13 micron SOI chips). Just think about it: that's already a 15% increase, and +10% is usually the max that is still considered safe.

    You just posted that this chip seems to have changed its behavior (better OC). That may have something to do with the high voltages, and it may not be all good. I'd suggest testing it again in a few benchmarks and comparing the results.
  • JarredWalton - Wednesday, October 5, 2005 - link

    Working on it. I think I ended up benching at 1.850V for the 10x280 setting and then not dropping voltages as much as I was supposed to. I'm a little skeptical that a CPU would get slower, though. Usually, they work or they fail. We'll see.

    My thought on the "safe limit" though: what voltage does the FX-57 run at? Whatever it is, add 10 to 15% to that and you're probably still okay. Good cooling will also help; on the stock HSF, I'd be a lot more nervous going over 1.550V.
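
    [Editor's note: a minimal sketch of the "fastest stock chip's voltage plus 10-15%" rule of thumb above. The 1.40V stock Vcore used for the FX-57 here is an assumption for illustration only; check the actual default VID of your own chip.]

        # Rule-of-thumb headroom over an assumed 1.40V stock Vcore
        # (the FX-57 figure is an assumption, not from the article).
        stock_vcore = 1.40
        for headroom in (0.10, 0.15):
            print(f"+{headroom:.0%}: {stock_vcore * (1 + headroom):.3f} V")
        # Prints roughly 1.540 V and 1.610 V, in line with being
        # "nervous going over 1.550V" on the stock HSF.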
  • OvErHeAtInG - Tuesday, October 4, 2005 - link

    Very useful article - thorough yet concise. And I would like to toss in another request: Add to the test a ULi-based motherboard (such as the recently reviewed ASRock 939Dual-SATA2). How do these Venices overclock when you can only feed them +.05v? As I recall the standard AT Clawhammer was used in that review.

    That would be hugely useful to a lot of us wanting to transition to A64. While the thing to do is probably just get a DFI or other top-end oc'er, what to do for those of us who are not yet ready to upgrade GPUs? On second thought: you could simulate the ASRock motherboard by simply setting the Venices to the lower voltage, on the DFI board, and testing for the max overclock on that. I think that would vary quite a bit from chip to chip, but just to get an idea - how much of a disadvantage is being limited in your voltage? Food for thought.
  • JarredWalton - Tuesday, October 4, 2005 - link

    I played around with voltages a bit more last night. It seems like I can hit about 2.40 GHz with only increasing the CPU voltage to 1.40V, though I didn't run all of the benchmarks to fully test that config. I'm not sure if the CPU has changed behavior over the past month, or if I was just too liberal with the voltages initially.

    For the ASRock, the fact that Wes managed to get a 500 MHz OC even with the minimal voltage adjustments is promising. Truth be told, the DFI Infinity seems to undervolt the CPU slightly, so 1.500V actually shows up as closer to 1.455V. If the ASRock is exact with the voltages, or even a bit high, I think a 2.4+ GHz overclock is a reasonably safe bet.
  • OvErHeAtInG - Wednesday, October 5, 2005 - link

    Thanks for the info, Jarred. I'm sure there's a thread on this somewhere.... :)
  • araczynski - Tuesday, October 4, 2005 - link

    I haven't seen a better argument for not wasting money on the 'better' memory in ages.

    With those kinds of 'gains', I congratulate the companies for milking everyone with their markups on the 'higher end' components.
