The Skylake CPU Architecture

As with any new Intel architecture, the devil is in the details. Previously at AnandTech we have been able to provide deep dives into what exactly is going on in the belly of the beast, although the launch of Skylake has posed a fair share of problems.

Nominally we rely on a certain amount of openness from the processor/SoC manufacturer in providing low-level details that we can verify and/or explain. In the past, this information has typically been provided in advance of the launch through several meetings and consultations with the engineers. There are some things we can probe, but others remain a black box, and such elements, like Qualcomm’s Adreno graphics, will stay a mystery until the manufacturer decides to open up.

In the lead-up to the launch of Intel’s Skylake platform, architecture details have been thin on the ground, even when it comes to fundamental details such as the EU counts of the integrated graphics or explanations for the change in processor naming scheme. In almost all circumstances, we’ve been told to wait until Intel’s Developer Forum in mid-August, chiefly because today's launch is not the full-stack Skylake launch, which will take place later in the quarter. Both Ryan and I will be at IDF taking fastidious notes and asking questions for everyone, but at this point in time a good portion of our analysis comes from information provided by sources other than Intel, and while we trust it, we can't fully verify it as we normally would.

As a result, the details on the following few pages have been formed through investigation, discussion and collaboration outside the normal channels, and may be updated as more information is discovered or confirmed. Some of this information is mirrored in our other coverage in order to offer a complete picture in each article as well. After IDF we plan to put together a more detailed architecture piece as a fundamental block in analyzing our end results.

The CPU

As bad as it sounds, the best image we have of the underlying processor architecture is the high-level block diagram:

From a CPU connectivity standpoint, we discussed the DDR3L/DDR4 dual memory controller design on the previous page, so we won’t go over it again here. On the PCI Express graphics side, the Skylake processors have sixteen PCIe 3.0 lanes for devices attached directly to the processor, similar to Intel's previous generation. With basic motherboard design these can be split into a single PCIe 3.0 x16, x8/x8, or x8/x4/x4. (Note that this differs from early reports of Skylake having 20 PCIe 3.0 lanes for GPUs. It does not.)

With this, SLI will work up to x8/x8. If a motherboard supports x8/x4/x4 and a PCIe card is placed into that bottom slot, SLI will not work because only one GPU will have eight lanes. NVIDIA requires a minimum of PCIe x8 in order to enable SLI. Crossfire has no such limitation, which makes the possible configurations interesting. Below we discuss that the chipset has 20 (!) PCIe 3.0 lanes to use in five sets of four lanes, and these could be used for graphics cards as well. That means a motherboard can support x8/x8 from the CPU and PCIe 3.0 x4 from the chipset and end up with either dual-SLI or tri-CFX enabled when all the slots are populated.
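The lane-width rules above can be expressed as a short sketch. This is a hypothetical helper (not anything from Intel or the motherboard vendors): it takes the lanes assigned to each populated slot and reports which multi-GPU modes remain available, given NVIDIA's x8-per-card minimum for SLI and CrossFire's tolerance of x4 links.

```python
# Hypothetical helper: which multi-GPU modes does a given lane split allow?
# NVIDIA SLI requires every card to have at least a PCIe x8 link;
# AMD CrossFire will run over x4 links (including chipset-fed slots).

def multi_gpu_modes(lane_split):
    """lane_split: lanes per populated GPU slot, e.g. [8, 8] or [8, 4, 4]."""
    enough_gpus = len(lane_split) >= 2
    return {
        "SLI": enough_gpus and all(lanes >= 8 for lanes in lane_split),
        "CrossFire": enough_gpus and all(lanes >= 4 for lanes in lane_split),
    }

print(multi_gpu_modes([8, 8]))     # x8/x8: both SLI and CrossFire possible
print(multi_gpu_modes([8, 4, 4]))  # x8/x4/x4: CrossFire only, SLI disabled
```

This also captures the mixed case described above: x8/x8 from the CPU plus an x4 slot from the chipset yields `[8, 8, 4]`, which still qualifies for three-way CrossFire but not three-way SLI.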

DMI 3.0

The processor is connected to the chipset by the four-lane DMI 3.0 interface. DMI 3.0 is an upgrade over the previous generation's DMI 2.0, boosting the speed from 5.0 GT/s (2 GB/sec) to 8.0 GT/s (~3.93 GB/sec), essentially moving DMI from PCIe 2.0 to PCIe 3.0 signaling. The trade-off is that the motherboard traces between the CPU and chipset must be shorter (7 inches rather than 8 inches) in order to maintain signal speed and integrity. This also enables one of the biggest upgrades to the system, chipset connectivity, as shown below in the HSIO section.
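The bandwidth figures quoted above follow directly from the link parameters, as a back-of-the-envelope check shows: DMI 2.0 uses four lanes at 5.0 GT/s with PCIe 2.0's 8b/10b encoding, while DMI 3.0 uses four lanes at 8.0 GT/s with PCIe 3.0's more efficient 128b/130b encoding.

```python
# Verify the DMI bandwidth figures: raw bit rate scaled by encoding efficiency.
LANES = 4  # DMI is a four-lane link in both generations

def dmi_bandwidth_gb_s(gt_per_s, payload_bits, line_bits):
    """Usable one-way bandwidth in GB/s (one bit per transfer per lane)."""
    raw_gbit = gt_per_s * LANES
    return raw_gbit * payload_bits / line_bits / 8  # bits -> bytes

dmi2 = dmi_bandwidth_gb_s(5.0, 8, 10)     # 8b/10b encoding -> 2.0 GB/s
dmi3 = dmi_bandwidth_gb_s(8.0, 128, 130)  # 128b/130b encoding -> ~3.94 GB/s
print(f"DMI 2.0: {dmi2:.2f} GB/s, DMI 3.0: {dmi3:.2f} GB/s")
```

The jump comes from both the higher transfer rate (8.0 vs 5.0 GT/s) and the lower encoding overhead (~1.5% vs 20%), which together roughly double the usable bandwidth.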

CPU Power Arrangements

Moving on to power arrangements, with Skylake the situation changes compared to Haswell. Prior to Haswell, voltage regulation was performed on the motherboard, and the appropriate voltages were then fed to the processor. This was deemed inefficient for power consumption, so for the Haswell/Broadwell processors Intel created a fully integrated voltage regulator (FIVR) in order to reduce motherboard cost and power consumption. This had an unintended side effect: while it was more efficient (good for mobile platforms), it also acted as a source of heat inside the CPU at high frequencies. As a result, overclocking was limited by temperatures, and variation in FIVR quality led to a large spread in results. For Skylake on the desktop, voltage regulation moves back into the hands of the motherboard manufacturers. This should allow for cooler processors, depending on how the silicon behaves, but it will result in slightly more expensive motherboards.

One visible sign of this change is that some motherboards will go back to having a large number of power phases, and some manufacturers will use this as a differentiating point, although the usefulness of such a design is sometimes questionable.

Comments

  • xxxGODxxx - Saturday, October 31, 2015 - link

    Hi guys, I would like to know whether I should buy the 6600K with a Z170 mobo at $417 or a 3930K with an X79 mobo at $330. I'm not sure the extra IPC of the 6600K is enough to warrant the extra $87 over the 3930K, especially since I will be overclocking the CPU and gaming on an R9 390 (maybe adding one more 390 in the future) at 1440p.
  • Toyevo - Wednesday, November 25, 2015 - link

    Even now I hesitate at updating a Phenom II X4 945. The Samsung 950 Pro pushed me over the line, and with it the need for PCIe M.2 only available in recent generations. There's no holy grail in CPUs, only what's relevant for each individual today. Of several other systems I have, none demand any change yet. On the Intel side my 2500K (and up) I wouldn't bother with even Skylake. With AMD my FX6300 (and up) are more power hungry but entirely adequate. And our E5-2xxx servers sit on Ivy Bridge until early 2017.

    What does all this mean? Not a lot. In the same way many of you see Skylake as a non-event, I equally saw Broadwell and Haswell as non-events. 20 years ago the jumps were staggering; overclocking wasn't nearly as trendy, nor as straightforward, but it was entirely necessary, and the cost of new hardware was prohibitively expensive. The generations were so definitive and fast back then.
  • i_will_eat_you - Saturday, December 12, 2015 - link

    This is a good review, especially the look at memory latency. The 4690K is left out of a lot of benchmarks, however. If you include that, then I don't see much of an attraction to Skylake. There is also the concern about the new rootkit potential Skylake introduces with protected code execution. This is not something I see being used for the good of the consumer.

    My one gripe is the lack of benchmarks for intense game engines (simulations, etc). Total war is there which is a step forward but I'm not sure if that benchmark really measures simulation engine performance.

    If you take games such as Sins of a Solar Empire or Supreme Commander then they have a separate thread for graphics so tend to maintain a decent frame rate even when the game engine runs at a crawl. The more units you add to the map and the more that is going on the slower it goes. But this is not in FPS. It means that ordering a ship across the solar system might take 10 s when there are 1000 units in the game but 5 minutes when there are 100000 units in the game. I would love to see some benchmarks measuring engine performance of games such as this with the unit limits greatly increased. It is a bit of a niche but many sim games (RTS, etc) scale naturally which means you can increase the unit limit, map size, AI difficulty, number of AIs, etc as your hardware becomes more powerful.

    This is especially relevant with CPUs such as Broadwell, which might gain a big advantage each game loop when processing a very large simulation engine dataset.
  • systemBuilder - Tuesday, July 19, 2016 - link

    Wow your review really sucked. Where are the benchmarks for the i5-6600k? Did you forget?
  • POPCORNS - Friday, August 19, 2016 - link

    To me, it doesn't matter if there's no IPC improvement over Sandy Bridge, Ivy Bridge or Haswell,
    Because I've upgraded from a Wolfdale Celeron (E3300) to a Skylake (6700K), lol.
  • oranos - Thursday, December 29, 2016 - link

    This article seems to be confused. DDR4 brings more sustained framerates for higher resolutions (especially 4k). Really a waste of time doing a 1080p comparison.
  • oranos - Thursday, December 29, 2016 - link

    if you wanted to do a proper test for DDR4 gaming performance you should run a 6700K and GTX 1080 minimum and run multiple games in 4K for testing.
