The Intel Skylake i7-6700K Overclocking Performance Mini-Test to 4.8 GHz
by Ian Cutress on August 28, 2015 2:30 PM EST

Conclusions
The how, what and why questions that surround overclocking often result in answers that either confuse or dazzle, depending on the mind-set of the user listening. At its heart, overclocking originated from trying to get extra performance for nothing: buying the low-end, cheaper processors and changing a few settings (or an onboard timing crystal) would result in the same performance as a more expensive model. When we were dealing with single core systems, the speed increase was immediate. With dual core platforms there was a noticeable difference as well, and overclocking gave the same performance as a high end component. This was particularly noticeable in games, which would have CPU bottlenecks due to their single/dual core design. In recent years, however, this has changed.
Intel sells mainstream processors in both dual and quad core flavors, each with a subset that enables hyperthreading, along with some other distinctions. This affords five platforms – Celeron, Pentium, i3, i5 and i7, going from weakest to strongest. Overclocking is now reserved solely for the most extreme i5 and i7 processors. Overclocking in this sense means taking the highest performance parts even further; there is no route from low end to high end, and extra money has to be spent in order to get there.
As an aside, in 2014 Intel released the Pentium G3258, an overclockable dual core processor without hyperthreading. When we tested it, it overclocked to a nice high frequency and performed in single threaded workloads as expected. However, a dual core processor is not a quad core, and even a +50% increase in frequency cannot overcome a +100% or +200% advantage in threads held by the high end i5 or i7 processors (a rough illustration follows below). With software and games now taking advantage of multiple cores, having too few cores is the bottleneck, not frequency. Unfortunately, you cannot graft on extra silicon as easily as pressing a few buttons.
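As a back-of-the-envelope illustration, consider ideal throughput as cores multiplied by frequency. This is a minimal sketch: the clock speeds below are assumptions chosen for the example rather than the exact stock values of the parts discussed, and real workloads never scale perfectly.

```python
# Toy model: ideal multithreaded throughput scales with cores x frequency.
# The clock speeds are illustrative assumptions, not exact product specs.

def ideal_throughput(cores, ghz):
    """Best-case throughput for a perfectly parallel workload."""
    return cores * ghz

dual_oc   = ideal_throughput(cores=2, ghz=4.8)  # overclocked dual core (e.g. G3258)
quad_stock = ideal_throughput(cores=4, ghz=3.5)  # assumed stock quad core

print(f"Dual core @ 4.8 GHz: {dual_oc:.1f} core-GHz")
print(f"Quad core @ 3.5 GHz: {quad_stock:.1f} core-GHz")
# Even a +50% frequency boost cannot close a +100% core deficit
# in well-threaded software.
```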
One potential avenue would be an overclockable i3: a dual core with hyperthreading that might perform on par with an i5, despite relying on hyperthreads rather than physical cores. But if it performed well, it might draw sales away from the high end overclocking processors, and since Intel has no competition in this space, I doubt we will see one any time soon.
But what exactly does overclocking the highest performing processor actually achieve? Our results, including all the ones in Bench not specifically listed in this piece, show improvements across the board in all our processor tests.
Here we get three very distinct categories of results. Each +200 MHz step is roughly a 5% frequency jump, yet our CPU tests show nearer a 4% gain per step, and slightly less in our Linux Bench (the arithmetic is sketched below). In both cases there were benchmarks that brought the average down due to other bottlenecks in the system: Photoscan Stage 2 (the complex multithreaded stage) was variable, and in Linux Bench both NPB and Redis-1 gave results that were more DRAM limited. Remove these and the results get closer to the true percentage gain.
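To make the per-step math concrete, here is a minimal sketch. The 4.2 GHz starting point and the 200 MHz step size are assumptions for the example; the exact frequencies tested may differ.

```python
# Minimal sketch: ideal per-step scaling for +200 MHz overclocking steps.
# The 4.2 GHz base is an assumed starting frequency for illustration.

base_ghz = 4.2
step_ghz = 0.2

freqs = [base_ghz + i * step_ghz for i in range(4)]  # 4.2 .. 4.8 GHz

for lo, hi in zip(freqs, freqs[1:]):
    ideal = (hi - lo) / lo * 100  # perfect frequency scaling, in percent
    print(f"{lo:.1f} -> {hi:.1f} GHz: ideal gain {ideal:.1f}%")

# Measured CPU-test gains were nearer ~4% per step: other bottlenecks
# (DRAM, storage) keep real workloads below the ideal scaling line.
```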
Meanwhile, all of our i7-6700K overclocked results are now also available in Bench, allowing direct comparison to other processors. Other overclocked CPUs will be added in due course.
Moving on to our discrete GPU testing with a GTX 980, increased frequency had little impact on our series of games at 1080p, or even on Shadow of Mordor at 4K. Some might argue that this is to be expected, because at high settings the onus is more on the graphics card – but ultimately, with a GTX 980 you would be running at 1080p or better at maximum settings where possible.
Finally, the integrated graphics results are a significantly different ball game. When we left the IGP at default frequencies and just overclocked the processor, average frame rates declined despite the higher CPU frequency, which is perhaps counterintuitive. The explanation lies in power delivery budgets: when overclocked, the majority of the power budget is pushed to the CPU cores so that work is processed quicker. This leaves less of the silicon's power budget for the integrated graphics, either forcing lower IGP frequencies to maintain the status quo, or letting the increased graphical traffic over the DRAM-to-CPU bus create a memory latency bottleneck. Think of it like a see-saw: push harder on the CPU side and the IGP side drops, as the toy model below illustrates. Normally this would be mitigated by raising the processor's overall power limit in the BIOS, but in this case that had no effect.
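The see-saw can be sketched as a shared power budget. This is a deliberately crude toy model, not how Skylake's power management actually works: the budget, voltages and fitting constant are arbitrary numbers chosen only to show the direction of the effect.

```python
# Toy see-saw model of a shared package power budget (illustrative
# numbers only; real Skylake power management is far more complex).
# Dynamic power scales roughly with frequency x voltage^2.

PACKAGE_BUDGET_W = 91.0  # assumed package power limit for the example

def cpu_power(ghz, vcore):
    """Very rough dynamic-power estimate for the CPU cores."""
    k = 3.0  # arbitrary fitting constant for the example
    return k * ghz * vcore ** 2

for ghz, vcore in [(4.2, 1.20), (4.8, 1.40)]:
    igp_budget = PACKAGE_BUDGET_W - cpu_power(ghz, vcore)
    print(f"CPU {ghz} GHz @ {vcore} V -> IGP budget left: {igp_budget:.1f} W")

# Pushing the CPU side harder leaves less headroom for the IGP,
# which can then drop frequency (and frame rates) to compensate.
```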
When we fixed the integrated graphics frequency, however, this issue disappeared.
Taking Shadow of Mordor as the example, raising the graphics frequency via the presets provided on the ASRock motherboard not only gave a boost in performance, but also removed the power-balancing issue between the processor and the graphics: our results fell within expected variance.
103 Comments
Zoeff - Friday, August 28, 2015 - link
As an owner of a 6700K that's running at 4.8GHz, this is a very interesting article for me. :)

I've currently entered 1.470v in the UEFI and I can get up to 1.5v in CPUz. Anything lower and it becomes unstable. So I guess I'm probably on the high side voltage wise...
zepi - Friday, August 28, 2015 - link
Sounds like a scorching voltage for 24/7 operation considering it is a 14nm process... But obviously, we don't really know if this is detrimental over the longer term.

0razor1 - Friday, August 28, 2015 - link
I believe it is. Ion shift. High voltage = breakdown at some level. Enough damage and things go amiss.

When one considers 1.35V+ high for 22nm, I wonder why we're doing this (1.35V+) at 14nm. If it's OK, then can someone illustrate why one should not go over, say, 1.6V on the DRAM at 22nm, and why stick to 1.35V at 14nm? Might as well use standard previous generation voltages and call it a day?
Further, where are the AVX stable loads? Sorry, but no P95 small in-place FFTs with AVX = NOT stable enough for me. It's not the temps (I have an H100i) for sure. For example, my 4670K takes 1.22 VCore for 4.6GHz, but 1.27 VCore when I stress with AVX loads (P95 being one of them).
It's *not* OK to say "hey, that synthetic is too much of a stress" etc. I've used nothing but P95 since K10 and haven't found a better error catcher.
0razor1 - Friday, August 28, 2015 - link
To add to the above, downclocking the core on GPUs and running memcheck in OCCT is *it* for my VRAM stability tests when I OC my graphics cards. I wonder how people just 'look' for corruption in benchmarks like Firestrike and call their OCs stable. It doesn't work.

Run a game and leave it idle for ~10 hours and come back. You will find glitches all over the place on your 'stable' OC.
Just sayin- OC stability testing has fallen to new lows in the recent past, be it graphic cards or processors.
Zoeff - Friday, August 28, 2015 - link
I tend to do quick tests such as Cinebench 15 and HandBrake, then if that passes I just run it for a week with regular usage such as gaming and streaming. If it blue screens or I get any other oddities, I raise the voltage by 0.01v. I had to do that twice in the space of 1 week (started at 1.45v, 4.8GHz).

Oxford Guy - Saturday, August 29, 2015 - link
That's a great way to corrupt your OS and programs.

Impulses - Saturday, August 29, 2015 - link
Yeah, I do all my strenuous testing first; if I have to simulate real world conditions by leaving two tests running simultaneously, I do that too... like running an encode with Prime in the background, or stressing the CPU, GPU, AND I/O simultaneously.

AFTER I've done all that, THEN I'll restore a pre-tinkering OS image, unless I had already restored one after my last BSOD or crash... which I'll do sometimes mid-testing if I think I've pushed the OC far enough that anything might be hinky.
It's so trivial to work with backups like that, it should be SOP.
Oxford Guy - Sunday, August 30, 2015 - link
If a person is using an unstable overclock for daily work it may be hard to know if stealth corruption is happening.

kuttan - Sunday, August 30, 2015 - link
haha that is funny.

kmmatney - Saturday, September 19, 2015 - link
I do the same as the OP (but use Prime95 and HandBrake). If it passes a short test there (say, one movie in HandBrake) I just start using the machine. I've had blue screens, but never any corruption issues. I guess corruption could happen, but the odds are pretty low. My computer gets backed up every night to a WHS server, so I can be fearless.