A small portion of this page was posted as part of our initial Skylake-K coverage.

The High Level Core

Taking a high level view, the application of the Skylake-S/H architecture features results in the following:

From most perspectives, the central image provided by Intel could have been taken from a Broadwell presentation if you scrubbed out the ‘GT4’ element of the design, but the move to Skylake brings improvements to each of these areas in terms of functionality, bandwidth, power consumption, or all three.

Out of the box, the Skylake IA core is designed to extract instruction-level parallelism from code, and the design allows more operations to be dispatched, queued and retired in a single clock. We cover this in more detail on the next page. To augment this, the read bandwidth of the data transfer ring between the cores, cache, graphics and the system agent is increased to 128 bits, allowing more data to be supplied to the cores. In the cores themselves, the big upgrades center on in-flight management of data, especially with respect to cache misses. We have been told that a number of these upgrades, while they might not be readily usable by current software, should have an effect in certain vertical markets (finance, big data, etc.).
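As a rough back-of-envelope illustration (these are not Intel's figures), the peak bandwidth of a 128-bit read path scales directly with the ring clock; the 3.0 GHz clock below is purely an assumed value for the sake of the arithmetic:

```python
def peak_bandwidth_gb_s(width_bits, clock_ghz):
    """Peak bandwidth of a bus: bytes per cycle times Gcycles per second."""
    return width_bits / 8 * clock_ghz

# A 128-bit read path at an assumed 3.0 GHz ring clock:
print(peak_bandwidth_gb_s(128, 3.0))  # 48.0 GB/s at the assumed clock
```

The real figure depends on the actual ring clock, which scales with the core; the point is simply that doubling the width doubles the data supplied per cycle.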

From a memory connectivity standpoint, most processors will support both DDR3L and DDR4 with a dual memory controller design (or a single memory controller that supports both). Skylake-Y (Core M) is the exception here, and will only work in DDR3L/LPDDR3 modes. As always with multi-DRAM support designs, only one standard can be in use at any time.

On the PCI-Express graphics allocation side, the Skylake-H and Skylake-K processors will have sixteen PCIe 3.0 lanes to use for devices attached directly to the processor, similar to Intel's previous generation processors. These can be split into a single PCIe 3.0 x16, x8/x8, or x8/x4/x4 with a basic motherboard design. (Note that this differs from early reports of Skylake having 20 PCIe 3.0 lanes for GPUs. It does not.)

With this, SLI will work up to x8/x8, and CrossFire up to x8/x8 or x8/x4/x4, in both desktop and mobile designs. For desktops, if a motherboard supports x8/x4/x4 and a PCIe card is placed into that bottom slot, SLI will not work, because only one GPU would have eight lanes: NVIDIA requires a minimum of PCIe x8 in order to enable SLI. CrossFire has no such limitation, which makes the possible configurations interesting. As we discuss below, the chipset has 20 (!) PCIe 3.0 lanes to use in five sets of four lanes, and these could be used for graphics cards as well. That means a motherboard can support x8/x8 from the CPU plus PCIe 3.0 x4 from the chipset and end up with either dual-SLI or tri-CFX enabled when all the slots are populated.
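The multi-GPU rules above can be sketched as a simple check. This is illustrative logic based on the requirements described here, not vendor code:

```python
# CPU lane splits available from the sixteen PCIe 3.0 lanes, per the text.
CPU_SPLITS = [(16,), (8, 8), (8, 4, 4)]

def sli_capable(split):
    """SLI needs two or more populated slots, each with at least 8 lanes."""
    return len(split) >= 2 and all(lanes >= 8 for lanes in split)

def crossfire_capable(split):
    """CrossFire accepts slots down to x4, so two or more x4+ slots suffice."""
    return len(split) >= 2 and all(lanes >= 4 for lanes in split)

for split in CPU_SPLITS:
    print(split, "SLI:", sli_capable(split), "CFX:", crossfire_capable(split))
```

Running this shows why populating the third slot of an x8/x4/x4 board breaks SLI (one GPU drops to x4) while CrossFire keeps working.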

The Skylake-U/Y processors are not typically paired with discrete graphics, and as far as we can tell the PCIe lanes have been removed from these lines. As a result, any PCIe-based storage (such as M.2) on devices with these processors will use the chipset's PCIe lanes. As mentioned later, the chipsets on U/Y also differ from their more desktop-oriented counterparts.

DMI 3.0

The Skylake-S and H processors are connected to the chipset by the four-lane DMI 3.0 interface. The DMI 3.0 protocol is an upgrade over the previous generation's DMI 2.0 – it boosts the speed from 5.0 GT/s (2 GB/sec) to 8.0 GT/s (~3.93 GB/sec), essentially upgrading DMI from PCIe 2.0 to PCIe 3.0, but requires the motherboard traces between the CPU and chipset to be shorter (seven inches rather than eight) in order to maintain signal speed and integrity. This also allows one of the biggest upgrades to the system, chipset connectivity, as shown below in the HSIO section.
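The quoted throughput figures follow from the line rate and the line-code overhead: PCIe 2.0-class links use 8b/10b encoding (80% payload), while PCIe 3.0-class links use 128b/130b (~98.5% payload). A quick sketch of the arithmetic:

```python
def effective_gb_s(gt_per_s, lanes, payload_bits, total_bits):
    """Usable GB/s of a serial link: GT/s x lanes x code rate, over 8 bits/byte."""
    return gt_per_s * lanes * (payload_bits / total_bits) / 8

dmi2 = effective_gb_s(5.0, 4, 8, 10)     # 8b/10b encoding -> 2.0 GB/s
dmi3 = effective_gb_s(8.0, 4, 128, 130)  # 128b/130b encoding -> ~3.94 GB/s
print(dmi2, dmi3)
```

The jump is therefore nearly 2x: a 1.6x faster line rate compounded with the much lower encoding overhead of 128b/130b.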

Skylake-U/Y, by virtue of having the chipset on the processor package, can keep the interface between the CPU and IO very simple, using what Intel calls OPIO – on-package input/output. We have seen OPIO as far back as Haswell, delivering 4GB/s of bandwidth at 1pJ/bit, and it is listed as being very power efficient as well as highly scalable.

CPU Power Delivery – Moving the FIVR

Moving on to power arrangements, with Skylake the situation changes compared to Haswell. Prior to Haswell, voltage regulation was performed on the motherboard, and the appropriate voltages were then fed into the processor. This was deemed inefficient in terms of power consumption, so for the Haswell/Broadwell processors Intel created a fully integrated voltage regulator (FIVR) in order to reduce motherboard cost and power consumption. This had an unintended side effect: while it was more efficient (good for mobile platforms), it also acted as a source of heat inside the CPU at high frequencies. As a result, overclocking was limited by temperatures, and variation in FIVR quality led to a large spread in results. For Broadwell-Y, the FIVR also increased the z-height of the processor, due to leveling transistors on the rear of the package. Consequently, in order to decrease the z-height of any corresponding product, a hole had to be made in the motherboard:

For Skylake, the voltage regulation is moved back into the hands of the motherboard manufacturers. This should allow for cooler processors depending on how the silicon works, but it will result in slightly more expensive motherboards.

An early indication of this is that some motherboards will go back to having a large number of multiplexed phases, and some manufacturers will use this as a differentiating point, although the usefulness of such a design is sometimes questionable.


173 Comments


  • just4U - Wednesday, September 2, 2015 - link

    I have to agree with Jumangi,

    If your gaming plans revolve around an integrated GPU, you're still better served going the AMD route. While the CPU is not as fast, it's no slouch either, and gaming performance is going to be acceptable in comparison on most titles.
  • sundragon - Monday, September 7, 2015 - link

    Um, first hand experience: Macbook Pro 2015, (Iris 6200): Skyrim, ESO, Civilization 5, Homeworld, all run at 1440x - I love all these people talk about intel integrated graphics sucking, meanwhile I'm getting crushed in Civ5 and kicking ass in Homeworld and ESO.
    I'm not lugging an integrated laptop around to play games, I have a laptop and I like to have ONE LAPTOP, and guess what, everything I've thrown on here has played. My MBA 2012 HD4000 struggled with Skyrim and Civ 5 but I still played. Please stop talking theoretical and talk about your actual rig... /end rant
  • BurntMyBacon - Thursday, September 3, 2015 - link

    @retrospooty: The Core2 era was more a return to parity. One of the most even matchups I can remember was the ironically similarly numbered Phenom II 955 and the Core 2 Quad Q9550. Nehalem is what really did the damage. Here's hoping Zen can put AMD back in the ballpark.

    I do think AMD has a pretty significant GPU advantage in the area of gaming over Intel. However, as you've stated, the power/thermal constraints do not allow them to fully exploit this advantage. A CPU intense game, even if not CPU limited, will chew up much of the GPU's available thermal envelope, effectively eliminating any advantage AMD had. Granted, there are cases where the thermal solutions in play provide the necessary thermal headroom, but these are mostly found in laptops that are already using discrete chips.
  • MrBungle123 - Thursday, September 3, 2015 - link

    The Phenom II didn't come out until after Intel had retired the Core 2 line. Everyone wants AMD to be competitive but the fact is they are miles behind Intel.
  • MapRef41N93W - Friday, September 4, 2015 - link

    Guess you didn't read the review of Broadwell Iris Pro on this very site. AMD's GPU loses by as much as 20-30% in most games vs Broadwell Iris Pro. Skylake Iris Pro will be offering up to 50% more performance.
  • V900 - Wednesday, September 2, 2015 - link

    4: Not everybody who are interested in a gaming machine can afford a Core i7 and several 1000$ graphic cards in a SLI configuration. A lot of gamers have a budget between 500$-1000$, and if you can get/get close to XB1 performance with just an Intel IGP, it would be perfect for that kind of budget.

    Also: Why would you think a 13-inch laptop with Iris Pro and 72 execution units would "fail miserably" in comparison with an XB1/PS4?!?

    That's ridiculous. Any advantage the console would have is tiny.

    Just get two wireless controllers and hook up the laptop to your HDTV with a HDMI cable, and the experience would be close to identical....
  • MrSpadge - Wednesday, September 2, 2015 - link

    "Also: Why would you think a 13' laptop with Iris Pro and 72 execution units would "fail miserably" in comparison with an XB1/PS4?!?"

    Because he specifically mentioned this in conjunction with "user experience". The PC gives you freedom but certainly not the ease of use of a console. Which is mainly why these things exist at all.
  • Jumangi - Wednesday, September 2, 2015 - link

    Lolz if you think an Intel-only machine with any sort of integrated graphics (even the best Iris Pro) will give you anything close to an Xbox One game you're seriously naive and ignorant. Stop looking at theoretical Gflops numbers to make comparisons.
  • IanHagen - Wednesday, September 2, 2015 - link

    Well, a few posts back up you're stating that AMD's A10 APU have "far better graphics" when it failed to beat last generation Iris 5200 GPU and now there you are, talking about naiveness and ignorance.
  • Jumangi - Wednesday, September 2, 2015 - link

    Compare actual gaming on the two mr naive one. also compare the huge cost differences of these chips. An Iris Pro laptop will be far far more expensive.
