Skylake's iGPU: Intel Gen9

Both of the Skylake processors here use Intel’s HD 530 graphics solution. When I first heard the name, alarm bells went off: why is the name different, has the architecture changed, and what does this mean fundamentally?

Unable to turn up many details, we did the obvious thing – check what information comes directly out of the processor. Querying the HD 530 via Intel's OpenCL driver reports a 24 EU design running at 1150 MHz. This differs from what GPU-Z indicates, which points to a 48 EU design instead, although GPU-Z is rarely accurate on new graphics parts before launch day. We can confirm that this is a 24 EU design, and it most likely follows on from Intel’s 8th Generation graphics in the sense that we have a base GT2 design featuring three sub-slices of eight EUs each.
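
This sort of query is easy to reproduce. Below is a minimal sketch (assuming an OpenCL runtime and the standard headers are installed; device names and reported values will vary) that reads back the compute unit count and maximum clock the driver advertises – on Intel iGPUs the compute unit count corresponds to the EU count.

```cpp
// Minimal OpenCL device query: list GPU devices with their compute unit
// count and maximum clock frequency, as reported by the installed driver.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;  // platform has no GPU devices

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_uint units = 0, mhz = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_CLOCK_FREQUENCY,
                            sizeof(mhz), &mhz, nullptr);
            // On Intel integrated graphics the compute unit count maps to EUs.
            printf("%s: %u compute units @ %u MHz\n", name, units, mhz);
        }
    }
    return 0;
}
```

Built against a vendor OpenCL SDK (e.g. `g++ query.cpp -lOpenCL`), an HD 530 should report the 24 compute units at 1150 MHz quoted above.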

As far as we can tell, Intel considers the HD 530 graphics part of its 9th Generation (i.e. Gen9). We have been told directly by Intel that the graphics naming scheme has changed from a four digit (e.g. HD 4600) to a three digit (HD 530) arrangement in order "to minimize confusion" (direct quote). Personally, we find that it adds more confusion, because the HD 4600 naming is not directly linked to the HD 530 naming. You could argue that 5 is more than 4, but we already have HD 5200, HD 5500, Iris 6100 and others. So which is better, HD 530 or HD 5200? At this point the new scheme creates a miasma of uncertainty, one that will probably persist until we get a definitive explanation of the stack nomenclature.

Naming aside, Generation 9 graphics comes with some interesting enhancements. The slice and un-slice now have individual power and clock domains, allowing for a more efficient use of resources depending on the load (e.g. parts of the un-slice are not needed for some compute tasks). This lets the iGPU better balance power usage between fixed-function operation and the programmable shaders.

Generation 9 will support a feature called Multi Plane Overlay, which is similar to AMD’s video playback path adjustments in Carrizo. The principle here is that when the 3D engine has to perform certain operations on an image (blend, resize, scale), the data has to travel from the processor into DRAM, then to the GPU to be worked on, then back out to DRAM before it hits the display controller – a small but potentially inefficient set of round trips in mobile environments. What Multi Plane Overlay does is add fixed function hardware to the display controller to perform these operations without ever hitting the GPU, minimizing power consumption by the GPU and removing a good portion of the DRAM data transfers. This comes at a slight cost in overall die area due to the added fixed function units.

This feature will be supported on Windows 8.1 with Skylake’s integrated graphics. That being said, not all image operations can be offloaded in this way, but where possible the data will take the shorter path.
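
A quick way to see whether the operating system actually exposes this path is to ask DXGI. The sketch below (Windows-only, assuming a DXGI 1.3 capable OS such as Windows 8.1 or later) checks each display output for multi-plane overlay support; note that it reports what the OS and driver expose, not the Gen9 hardware internals themselves.

```cpp
// Check each display output for multi-plane overlay support via DXGI 1.3.
#include <d3d11.h>
#include <dxgi1_3.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D11 device on the default (hardware) adapter.
    ComPtr<ID3D11Device> device;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, nullptr)))
        return 1;

    // Walk up to the DXGI adapter, then iterate over its outputs.
    ComPtr<IDXGIDevice> dxgiDevice;
    device.As(&dxgiDevice);
    ComPtr<IDXGIAdapter> adapter;
    dxgiDevice->GetAdapter(&adapter);

    ComPtr<IDXGIOutput> output;
    for (UINT i = 0; adapter->EnumOutputs(i, &output) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<IDXGIOutput2> output2;
        if (SUCCEEDED(output.As(&output2))) {
            printf("Output %u: multi-plane overlays %s\n", i,
                   output2->SupportsOverlays() ? "supported" : "not supported");
        }
        output.Reset();
    }
    return 0;
}
```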

To go along with the reduced memory transfers, Gen9 has support for lossless color stream compression in memory. We have seen this technology come into play on other GPUs: by virtue of fixed function hardware and lossless algorithms, smaller quantities of image and texture data are transferred around the system, again saving power and reducing bandwidth constraints. The compression hardware is also used alongside a scaler and format conversion pipe to reduce the encoding pressure on the execution units, reducing power further.

Added into the mix, we have learned that Gen9 includes a feature called the ‘Camera Pipe’ for quick, standard adjustments to images via hardware acceleration. This puts the programmable shaders to work in tandem with specific DX11 extensions on common image manipulation operations beyond resize/scale. The Camera Pipe is paired with SDKs to help developers hook into the optimized imaging APIs.

Media Encoding & Decoding

In the world of encode/decode, we get the following:

Whereas Broadwell implemented HEVC decoding in a "hybrid" fashion using a combination of CPU resources, GPU shaders, and existing GPU video decode blocks, Skylake gets a full, low power fixed function HEVC decoder. For desktop users this shouldn't change things by too much - maybe improve compatibility a tad - but for mobile platforms this should significantly cut down on the amount of power consumed by HEVC decoding and increase the maximum resolution and bitrate that can be decoded. Going hand-in-hand with HEVC decoding, HEVC encoding is now also an option with Intel's QuickSync encoder, allowing for quicker HEVC transcoding, or more likely real-time HEVC uses such as video conferencing.
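
As with the overlay path, driver exposure of the decoder can be checked from user space. The sketch below (Windows-only, assuming a Windows SDK recent enough to define D3D11_DECODER_PROFILE_HEVC_VLD_MAIN) enumerates the video decode profiles the driver advertises through D3D11 and looks for HEVC Main; both a hybrid implementation and a fixed function block can show up here, so this confirms exposure rather than the exact hardware path.

```cpp
// Enumerate D3D11 video decoder profiles and look for HEVC Main decode.
#include <initguid.h>   // define, not just declare, the profile GUIDs
#include <d3d11.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, nullptr)))
        return 1;

    ComPtr<ID3D11VideoDevice> videoDevice;
    if (FAILED(device.As(&videoDevice)))
        return 1;  // driver exposes no video decode capabilities

    bool hevcMain = false;
    const UINT count = videoDevice->GetVideoDecoderProfileCount();
    for (UINT i = 0; i < count; ++i) {
        GUID profile;
        if (SUCCEEDED(videoDevice->GetVideoDecoderProfile(i, &profile)) &&
            profile == D3D11_DECODER_PROFILE_HEVC_VLD_MAIN)
            hevcMain = true;
    }
    printf("HEVC Main decode profile exposed: %s\n", hevcMain ? "yes" : "no");
    return 0;
}
```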

Intel is also hedging its bets on HEVC by implementing a degree of VP9 support on Skylake. VP9 is Google's HEVC alternative codec, with the company pushing it as a royalty-free option. Intel calls VP9 support on Skylake "partial" for both encoding and decoding, indicating that VP9 is likely being handled in a hybrid manner similar to how HEVC was handled on Broadwell.

Finally, JPEG encoding is new for Skylake, with support for images up to 16K x 16K.

Video Support

The analog (VGA) video connector has now been completely removed from the CPU/chipset combination, meaning that any VGA/D-Sub video connection has to be provided via an active digital-to-analog converter chip. This has been a long time coming, and is part of a commitment Intel made several years ago to remove VGA by 2015. Removing analog display functionality means added cost for anyone who needs to drive legacy analog displays. Arguably this doesn’t mean much for Z170, as the high end platform is typically paired with a discrete graphics card that has HDMI or DisplayPort outputs, but we will still see motherboards with VGA in order to satisfy regional markets with specific requirements.

HDMI 2.0 is not supported by default, and only the following resolutions are possible on the three digital display controllers:

A DisplayPort to HDMI 2.0 converter, specifically an LSPCon (level shifter and protocol converter), is required to make the conversion, be it on the motherboard itself or in an external adapter. We suspect there will not be many takers for a dedicated controller to do this, given the capabilities and added benefits of the Alpine Ridge controller.

Comments

  • vdek - Thursday, August 6, 2015 - link

    I'm still running my X58 motherboard. I ended up upgrading to a Xeon X5650 for $75, which is a 6 core 32nm CPU compatible with X58. Overclocked at 4.2GHz on air, the thing has excellent gaming performance; I see absolutely no reason to upgrade to Skylake.
  • bischofs - Thursday, August 6, 2015 - link

    Absolutely agree, my overclocked 920 still runs like a watch after 8 years. Not sure what Intel is doing these days, but the lack of competition is really impacting this market.
  • stux - Friday, August 7, 2015 - link

    I upgraded my 920 to a 990X; it runs at about 4.4GHz on air in an XPC chassis and has 6 cores/12 threads.

    I bought it off eBay cheap, and with an SSD on a SATA3 card I see no reason to upgrade. It works fantastically well, and is pretty much as fast as any modern 4 core machine.
  • Samus - Sunday, October 25, 2015 - link

    If you run a single GPU and don't go ultra-high-end, then gaming is still relevant on X58, but it really isn't capable of SLI due to PCIe 2.0 and the lanes being reduced to 8x electrical when more than one 16x length slot is used. QPI also isn't very efficient by today's standards, and at the time AMD still had a better on-die memory controller. Intel's first attempt was commendable, but it was completely overhauled with Sandy Bridge, which offered virtually the same performance from two channels. Anybody who has run dual channel on X58 knows how bad it actually is and why triple channel is needed to keep it competitive with today's platforms.

    I loved X58. It is undoubtedly the most stable platform I've had since the 440BX. But as I said, by today's standards it makes Sandy Bridge seem groundbreaking, not because of the IPC, but because of the chipset platform. The reduced power consumption, simplicity, smaller overall size and lower cost of the 60/70 series chipsets, then the incredibly simplified VRM layout of the 80/90 series (due to the on-die FIVR of Haswell), make X58 "look" ancient, but as I said, still relevant.

    Just don't load up the PCIe bus. A GPU, sound card and USB 3.0 controller is about as far as you want to go, and for the most part, as far as you need to!
  • vdek - Thursday, August 6, 2015 - link

    Get a Xeon X5650: a 6 core, 32nm CPU that will run at 4-4.2GHz all day on air. I upgraded my i7 920 to the X5650 and I couldn't be happier. They go for about $70-80 on Amazon or eBay. I'm planning on keeping my desktop for another 2-3 years; I upgraded the GPU to a GTX 970 and it maxes out most of what I can throw at it. I don't really see my CPU as a bottleneck here.
  • mdw9604 - Tuesday, August 11, 2015 - link

    Can you OC a Xeon 5650?
  • mapesdhs - Wednesday, August 12, 2015 - link

    Of course, back then the main oc'ing method was still BCLK-based, though X58 was a little more involved than P55 in that respect (uncore, etc.).
  • LCTR - Saturday, August 15, 2015 - link

    I'd been pondering the 6700K until I saw these posts from 920 users :)
    I use mine for gaming / video editing; it's running non-hyperthreaded at 4.2GHz on air (about 4GHz with HT on).

    I also upgraded my GPU to a 970 and have seen decent gaming performance - if I could jump to an X5650 and stretch things for another 1-2 years that'd be great...

    What sort of performance do you see from the X5650? Would it hit 4GHz with HT enabled?
    The Xeon X5650s don't need any special mobo support or anything, do they? I have a Gigabyte GA-EX58-UD5.

  • Nfarce - Wednesday, August 5, 2015 - link

    Well sadly, ever since SB (I have one that's 4 years old, a 2500K, alongside a newer Haswell 4690K), each new tick/tock has not brought much. The days of getting a 50% boost in performance over a few generations are long gone, let alone a 100% boost or a doubling of performance. Also keep in mind that there is a reason for this slowdown: as dies shrink, electron physics starts becoming an issue, and Intel has been focusing more on decreasing power usage. At some point CPU manufacturers will need to look at an entirely different manufacturing material and design, as silicon and traditional PCB design are coming to their limits.
  • Mr Perfect - Wednesday, August 5, 2015 - link

    It's not even 30% in high-end gaming. There is a clear improvement between SB and Skylake, but why should I build a whole new PC for 5 FPS? I can't justify that expense.

    I'd be curious to see the high-end gaming benchmarks rerun with the next generation of GPUs. Will next gen GPUs care more about the CPU, or does DX12 eliminate the difference altogether?
