Application Performance

We'll start with some general application performance, courtesy of Winstones 2004. Winstones runs a series of scripts in a variety of real-world applications. The catch is that many of the scripts simulate user input and operate at speeds no human can approach. Rendering an image, encoding audio or video, and similar tasks still take time regardless, but Word, Excel, and Outlook are almost entirely user-limited. While the fastest systems do post higher scores, in everyday use the typical office applications run so fast that differentiating between the various overclocked settings is difficult, if not impossible.

We get a decent performance increase from overclocking, but nowhere near the theoretical maximum. Going from 1.8 GHz to 2.8 GHz represents a 56% increase in CPU clock speed, although other factors almost never allow us to realize that full gain in benchmarks. In the Business Winstones test, we see a range from 21.9 to 27.6, a 26% increase. The Content Creation test shows a slightly larger spread, ranging from 28.3 to 39.7 - 40% more performance. Viewed that way, the lack of performance scaling in the Business test also "simulates" the user-limited nature of office applications.
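
To make the scaling math explicit, here is a minimal sketch (in Python, using only the clock speeds and Winstones scores quoted above; the helper name pct_gain is ours) of how those percentage gains are calculated. The same formula applies to the PCMark results below.

    def pct_gain(low: float, high: float) -> float:
        """Percentage improvement going from the stock (low) result to the overclocked (high) result."""
        return (high - low) / low * 100

    # Figures quoted in the text above.
    results = {
        "CPU clock (GHz)": (1.8, 2.8),
        "Business Winstones": (21.9, 27.6),
        "Content Creation Winstones": (28.3, 39.7),
    }

    for name, (low, high) in results.items():
        print(f"{name}: {low} -> {high} = {pct_gain(low, high):.0f}% increase")
    # Prints roughly 56%, 26%, and 40% respectively.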

Similar in some ways to Winstones, PCMark attempts to gauge overall system performance. The results are a little more theoretical, as PCMark takes 5 to 10 minutes to run compared to 20 to 30 minutes for the Winstones tests. PCMark also includes some 2D and 3D graphics tests, which make the GPU somewhat important to the overall score. With Windows Vista moving toward more hardware acceleration for windowing tasks, though, that's not necessarily a bad thing.

The difference between the slowest and fastest scores for our configuration is about the same as with Winstones. PCMark04 goes from 3851 to 5567, a 45% increase. PCMark05 shows less of a difference, ranging from 3259 to 4146 (27%). PCMark05 is also the sole benchmark that we couldn't run to completion on the 2.8 GHz overclock; a couple of the tests failed every time. Both PCMark suites serve as great stress tests of CPU overclocks, which is one of the reasons why we included the results. The failure to complete PCMark05 at 2.80 GHz means that we definitely won't run this particular system at that speed long-term.

In case the graphs don't convey this fact well enough, our standard application scores benefited very little from the use of higher-quality RAM. While the 2T command rate meant that the 9x300 value configuration did worse than the 9x289 value configuration, nearly all of the other tests show increasing performance even with slightly lower memory speeds and latencies. The biggest gap between the value and performance RAM was in Business Winstones at 2.4 GHz, and even then, it was only a 5% margin of victory.


101 Comments


  • Deathcharge - Saturday, October 15, 2005 - link

    Also, what do you think of the Opteron 144 or 146? The 144s are very cheap and apparently they OC quite well.
  • pmorcos - Thursday, October 13, 2005 - link

    Before I comment, you should know that I have been overclocking for 8 years now and have literally overclocked all but one of the chips you mentioned at the beginning of this very good article. The HT multiplier was new to me with my most recent DFI NF4-SLI-DR board, so I found that part extremely useful and plan to see if I can up my speeds...but I digress.

    I think it would be extremely valuable to TRY to put into words the order in which an overclocker should approach changing settings. In other words, which is likely to be the most limiting/critical aspect(s), and from there, how to tweak the others to max the system out.
    It would be interesting to say, for example, that you start with "safe" voltage settings (which are pretty obviously the limiting factor). For example, let's say your CPU and memory are rated at 1.3 and 2.8 V respectively. Why not go straight to "safe" settings for the two and tweak from there? It seems that the most useful piece of information that is NOT provided by AnandTech, or anyone else for that matter, is a voltage and temperature graph of stability/viability for these chips. It would be simple to take 3 samples (at a cost) of each chip, run the tests with "average" cooling, and find out what is "safe". For example, if you ran everything else at stock but stepped the voltages up toward, say, 2.4/3.6 V in the example above, you might see stability up to 1.65/3.1 V with the parts catching fire at, say, 1.8/3.3 V, or stability at CPU/memory temperature readings of 44/47C but instability above that. Once armed with these two graphs of information, averaged from the 3 chips tested, the rest is very straightforward.

    You simply set the CPU to 1.65 V and the memory to 3.1 V (the safe settings; check real voltages via BIOS monitoring), then up your FSB, tweak your memory timings, and in a few minutes you are running at max.

    Why do I think this is more valuable than showing us a graph of your results? Because, like many, I'm squeamish about upping the voltage on my processor and memory. I'm worried much more about the power-on effects than I am about the "long-term" effects.
    In computers, there is no long term for an overclocker. An overclocker's computer is 60% hardware and 40% software. Their greatest joy is in posting results on their favorite forum. I want to know that when I hit the power button...that the 1.7V setting does NOT have a 10% chance of blowing my processor.
    My ramblings. Thanks again for another great article from by far the VERY BEST place in the world to find out how computer parts work.
  • JarredWalton - Thursday, October 13, 2005 - link

    Thanks pmorcos.

    I'm working on the X2 3800+ OC followup, and I've gone back and done further testing of temperatures and voltages. Chips differ, so the real advice I have on that subject is to test your own chip extensively. I've heard of people doing 2.8 GHz on 1.500V with the Venice chips, but mine won't even POST at those settings. I think 1.65 or 1.70V was required to POST, and even then I couldn't run stable benchmarks without more voltage.

    I will also be trying to cover a bit more of the "how to" process in the next one. Consider this the foundation, and the next article will refine the approach a bit. Your comments on what you'd like to see more of are definitely welcome, though, and I'll try to address the order and approach I take next.

    Concerning another comment: "I want to know that when I hit the power button...that the 1.7V setting does NOT have a 10% chance of blowing my processor." I'm not quite sure I understand the concern or know how to test that. Are you saying that the power-on process has more voltage fluctuations and may therefore toast the CPU in the first second? (I haven't had that happen over the past several months of testing this chip and others in overclocked setups.) I must admit that I'm extremely nervous about the 1.850V I used for running at 2.80 GHz, but even then the chip continued to function (for now - heheh).

    Cheers!
    Jarred Walton
  • WhipperSnapper - Thursday, October 13, 2005 - link


    That was one of the best computer enthusiast website articles that I've read in a long time, but perhaps I don't get around too much. I'd like to hear more about the problems that spilled over to other components, such as the SATA hard drive (mentioned in the Final Thoughts) and whether or not the overclocking can be isolated to the CPU and RAM. I also wondered if there was a reason why you guys used a SATA hard drive and not an IDE drive and whether overclocking requires a SATA hard drive. (I don't see why it would.)

    Also, have you guys tried to do any tests using memory stick heatsinks? Do they actually do anything? That subject might make for a worthwhile article on its own--RAM cooling.
  • aptinio - Saturday, October 8, 2005 - link

    Bravo! Great article. Very informative but not too bloated. Can't wait to finally upgrade my AMD K6-II with 1MB of L3 cache on the motherboard! lol!
  • Kougar7 - Thursday, October 6, 2005 - link

    Thank you for the excellent, comprehensive, and very thorough article! :-) It must have taken a massive amount of work and time to complete. It’s answered my recent musings about my own Crucial value RAM, which looks much nicer now! It’s also settled a question about OCing with recent AMD 64 chips, as well as correcting a few personal misconceptions I’ve had.

    I just wish to ask if you plan to do a similar article on OCing with P4s? I personally run a 2.8C (Northwood) @ 3.4 rock solid at the 3.4C’s default voltage, but am now wondering exactly what performance hits, if any, I’ve taken from having to use a 5:4 CPU:DRAM ratio instead of the previous 1:1, even though I’ve kept it at DDR390 with timings better than spec.

    I’m planning to bench the differences between a 1:1 ratio, a 3:2 ratio at the highest speed I can get (sub-DDR333), my current setup, and finally one other setting where I got the value memory to run 2-2-2-6 timings, to get a more solid idea of which performs best, backed by some solid figures.

    Although the core and the platform itself have both changed, I’d still be interested in an Intel processor-based test! Perhaps instead of a P4, maybe a Pentium D OCing article similar to what you have planned with the X2 3800+? ;-)

    I’m very much looking forward to your X2 3800+ OCing review!! You rock :-D Thanks in advance for it!
  • JarredWalton - Thursday, October 6, 2005 - link

    I'm trying to get a Socket 775 motherboard that will overclock well with the Pentium D 820. Once I have that, I can give it a go. I've also got a Pentium 4 505 and a 540 that I want to run similar tests on. First, though, I need an appropriate motherboard.
  • clue22 - Thursday, October 6, 2005 - link

    So basically, what everybody is saying about value RAM vs. low-latency, more expensive RAM is that for the Athlon 64 it is basically a waste of money (i.e. you only get about a 5% performance gain), but you usually spend 100% or more to get the "better" RAM. I have to build a couple of systems pretty soon, and now I believe my money would be better spent on 2GB of value RAM vs. 1GB of the more expensive stuff. Does anyone know of a test that has been run with 2.5-3-3-8-1T vs. 2-2-2-5-1T? Also, why does every mid-range/gaming/hot-rod price guide recommend either the Samsung TCCD (or TCC5) or Winbond BH-5/CH-5 based memory if it has so little effect on performance? Finally, is it even important anymore (if it ever was) to get matched pairs of memory that are bundled together (supposedly manufactured at the same time)? I was looking at some Corsair XMS3200XL RAM (I've had good experience with them in the past), but now I think I should get more of their Value Select memory instead.

    thanks
  • RupertS - Wednesday, October 26, 2005 - link

    So basically, what everybody is saying about value RAM vs. low-latency, more expensive RAM is that for the Athlon 64 it is basically a waste of money

    This may not be a general rule.

    It may just be that at this stage of development for GPUs, CPUs, and memory, memory has more than enough capacity - it is not the choke point. If GPU and CPU speed were to improve while memory speed stayed the same, you might reach the point where increasing GPU and CPU speed was non-productive for games, while overclocking memory provided large performance improvements.
  • rabbit fighter - Wednesday, October 5, 2005 - link

    Where was this explained? He said the 3200 was better in the first paragraph and that he would explain later, but I can't find the later explanation!
