Half-Life 2 Performance

Just as Doom 3 favors NVIDIA hardware, Half-Life 2 has done better on ATI cards. Our X800 Pro should provide more than playable results at all of the tested resolutions, as HL2 depends more on GPU fillrate than on memory bandwidth for performance. HL2 is also somewhat more CPU limited than other titles, which should give some good performance scaling with our overclocks. Future Source engine games making use of HDR will probably move the bottleneck back towards the GPU, judging by what we've seen in other HDR-enabled titles. Hopefully, Source can manage to provide more realistic HDR modes without cutting performance in half, but we're doubtful.

As with Far Cry, we benchmarked several levels and averaged the results; in this case, we used Anand's C17_12, Canals_08, Coast_05, Coast_12, and Prison_05 demos. Unfortunately, this may be the last time that we use those benchmarks, as the recent Steam upgrade has broken compatibility with revision 7 demos. All of these benchmarks were completed prior to the 9/23 update, luckily, but future overclocking articles will use different demos and will thus not be directly comparable with these scores. For the 22 graphs of the individual levels, we've once again created a Zip file.
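For readers curious how the composite number is put together, here is a minimal sketch of averaging per-demo results into a single score; the demo names are the ones listed above, but the FPS figures are hypothetical placeholders rather than our actual measurements.

```python
# Minimal sketch: average per-demo benchmark results into one composite score.
# The demo names match the article; the FPS values are hypothetical.
demo_results = {
    "C17_12": 72.4,
    "Canals_08": 85.1,
    "Coast_05": 79.8,
    "Coast_12": 68.2,
    "Prison_05": 91.5,
}

average_fps = sum(demo_results.values()) / len(demo_results)
print(f"Average across {len(demo_results)} demos: {average_fps:.1f} fps")
```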

HL2 gains 41%, 39%, and 33% from overclocking as we increase the resolution. Also of interest is that even with 4xAA enabled, HL2 gains 40%, 37%, and 26% at the same tested resolutions. As we mentioned, HL2 is far more dependent on GPU fillrate than GPU memory bandwidth. At 1024x768 4xAA, the 2600 and 2700 configurations deliver basically equal performance, but all of the lower resolutions still show some increase with CPU clock speed.
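As a quick illustration of how these gain percentages work out, here is a small sketch; the stock and overclocked frame rates below are invented for the example, not numbers from our charts.

```python
# Sketch of the percentage-gain calculation used above.
# The FPS values are hypothetical, purely for illustration.
def percent_gain(stock_fps: float, overclocked_fps: float) -> float:
    """Return the percentage improvement of the overclocked result over stock."""
    return (overclocked_fps / stock_fps - 1.0) * 100.0

# e.g. 70.0 fps at stock vs. 98.7 fps overclocked comes out to about a 41% gain
print(f"{percent_gain(70.0, 98.7):.0f}% gain")
```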

In our biggest margin of victory, the OCZ RAM averages a 9% advantage over the value RAM. That's roughly the equivalent of stepping up one CPU speed grade (assuming that you don't overclock), at about the same cost as upgrading the processor. As in Far Cry, the 2T command rate hurts performance. You may also have noticed that the 10x280 setting trails the 2700 MHz configurations in most of the games. It would fare better if we could get it running stably with a 1T command rate, but we were unable to accomplish this. The value RAM wouldn't even POST at 10x280, so whatever limits we're running into are at least lessened with higher quality RAM.
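To put that 9% in perspective, here is a rough cost-per-percent sketch; the dollar figures and the CPU-step gain below are hypothetical placeholders, not prices or results from our testing.

```python
# Back-of-the-envelope comparison of the two upgrade paths discussed above.
# All costs and gains are hypothetical placeholders.
upgrades = {
    # name: (added cost in USD, typical performance gain in percent)
    "Value RAM -> OCZ RAM": (50.0, 9.0),
    "One CPU speed grade": (60.0, 10.0),
}

for name, (cost, gain) in upgrades.items():
    print(f"{name}: {gain:.0f}% for ${cost:.0f} ({gain / cost:.2f}% per dollar)")
```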

Comments

  • JarredWalton - Wednesday, October 5, 2005 - link

    Sorry if I missed this in the article. The reason a 3200+ may be better is the 10X multiplier vs. 9X. Sure, the DFI board used worked pretty well at either setting, but there are many boards that won't handle much above 250 MHz CPU bus stably. Needless to say, there's a reason 2800 MHz was only included at one setting. While it still wasn't stable, it would actually run most benchmarks at 10x280. 9x311 wouldn't even load Windows half the time. The extra $50 for added flexibility is also nice: you can try 9x300, 10x270, PC3200, PC2700, etc. to find the most stable, highest performing option.
  • Bakwetu - Wednesday, October 5, 2005 - link

    Thanks for a great article. I haven't been following developments so closely since I last upgraded (with one of the last unlocked Barton 2500+ chips), so this article was a most welcome refresher for me, as I will probably get an X2 3800+ rig in the near future.

    Last time I checked, using a naked fingertip to smear out the paste was a big no-no. I have always used either a washed razor blade or a fingertip in a clean plastic bag. The Arctic Silver that was once sold without silver was a fake, counterfeit product as far as I know. The real stuff, in its many forms over the years, has definitely shown that it is a good product.
  • javalino - Wednesday, October 5, 2005 - link

    First, great article, Jarred.
    Second, I've been an Anand fan for as long as I can remember (1999-2000).
    Third, since your conclusion focuses on the overclocking dilemma: why spend so much on an overclocked system (or a powerful system) if your target is games, which are GPU limited? The $125, like you said, would be more useful put toward a video card.
    My idea is an article on the benefits, costs, and lessons learned of building a system for games. How much performance do you gain from systems running high-end cards at high resolutions and settings (like 1600x1200, with an extra 4xAA/16xAF) across different platforms: an FX vs. an overclocked Athlon 64 vs. an overclocked P4 vs. a Pentium M vs. an overclocked Athlon XP, for example? The conclusion would be how much you really "need" to pay for a decent gaming machine that can play all current (and maybe future) games with great image quality and performance.

    Maybe the answer is obvious (go with the best FPS-per-dollar option possible), or maybe not.
  • AtaStrumf - Tuesday, October 4, 2005 - link

    Great article Jarred!!! I really like your choice of value parts and how you critically assessed the results based on bang for the buck. And finally, you did away with pages and pages of bar charts and combined them into line-scaling charts. How long have I been asking for something like that??? Now we can finally see the REAL difference (or lack of it) and analyse the results properly, without having to go back and forth between tens of bar charts. Tell Anand to upgrade your graphing engine ASAP.

    I am a little worried about those voltages though. This sure looks like a bad chip to me (OC-wise). WAY too high voltages. I would not go over 1.45-1.50V, or else you risk screwing up the chip. You see, the memory controller on the chip doesn't like voltages that are too high, and though it will still work, the chip will get slower eventually. Hard to explain really, but I know my new 2.2 GHz A64 is faster and much cooler than my old 2.4 GHz A64 (same core - Newcastle, same cooler, same RPM, same case, same ...), which I bought from some crazy overclocker (last time, BTW). The 2.4 GHz one gave me really shitty results in FAH for weeks. That's the only explanation I have so far anyway. Maybe you can do an investigation into this -- burn in one A64 Venice at, say, 1.6V 24/7 for a few weeks and let's see what happens. I just don't have the $$$ and time to take the risk. I'd be very happy to hear from other forum members on this as well.

    Anyway, glad to see at least part of AT is back to the high quality standards we were used to.
  • AtaStrumf - Tuesday, October 4, 2005 - link

    Or maybe it's the SOI process that is to blame for not taking high voltages too kindly, or maybe both; I don't know yet, but I would definitely advise caution going over 1.5V (the default for 0.13 micron SOI chips). Just think about it: that's already a 15% increase, and +10% is usually the max that is still considered safe.

    You just posted that this chip seems to have changed its behavior (better OC). That may have something to do with the high voltages, and it may not be all good. I'd suggest testing it again in a few benchmarks and comparing the results.
  • JarredWalton - Wednesday, October 5, 2005 - link

    Working on it. I think I ended up benching at 1.850V for the 10x280 setting and then not dropping voltages as much as I was supposed to. I'm a little skeptical that a CPU would get slower, though. Usually, they work or they fail. We'll see.

    My thought on the "safe limit", though: what voltage does the FX-57 run at? Whatever it is, add 10 to 15% to that and you're probably still okay. Good cooling will also help; on the stock HSF, I'd be a lot more nervous going over 1.550V.
  • OvErHeAtInG - Tuesday, October 4, 2005 - link

    Very useful article - thorough yet concise. And I would like to toss in another request: Add to the test a ULi-based motherboard (such as the recently reviewed ASRock 939Dual-SATA2). How do these Venices overclock when you can only feed them +.05v? As I recall the standard AT Clawhammer was used in that review.

    That would be hugely useful to a lot of us wanting to transition to A64. While the thing to do is probably just get a DFI or other top-end oc'er, what to do for those of us who are not yet ready to upgrade GPUs? On second thought: you could simulate the ASRock motherboard by simply setting the Venices to the lower voltage, on the DFI board, and testing for the max overclock on that. I think that would vary quite a bit from chip to chip, but just to get an idea - how much of a disadvantage is being limited in your voltage? Food for thought.
  • JarredWalton - Tuesday, October 4, 2005 - link

    I played around with voltages a bit more last night. It seems like I can hit about 2.40 GHz with only increasing the CPU voltage to 1.40V, though I didn't run all of the benchmarks to fully test that config. I'm not sure if the CPU has changed behavior over the past month, or if I was just too liberal with the voltages initially.

    For the ASRock, the fact that Wes managed to get a 500 MHz OC even with the minimal voltage adjustments is promising. Truth be told, the DFI Infinity seems to undervolt the CPU slightly, so 1.500V actually shows up as closer to 1.455V. If the ASRock is exact with the voltages, or even a bit high, I think a 2.4+ GHz overclock is a reasonably safe bet.
  • OvErHeAtInG - Wednesday, October 5, 2005 - link

    Thanks for the info, Jarred. I'm sure there's a thread on this somewhere.... :)
  • araczynski - Tuesday, October 4, 2005 - link

    I haven't seen a better argument for not wasting money on the 'better' memory in ages.

    With those kinds of 'gains', I congratulate the companies for milking everyone with their markups on the 'higher end' components.
