Addressing the Memory Bandwidth Problem

Integrated graphics solutions have always bumped into a glass ceiling because they lack the high-speed memory interfaces of their discrete counterparts. As Haswell is predominantly a mobile-focused architecture, designed to span the gamut from 10W to 84W TDPs, relying on a power-hungry, high-speed external memory interface wasn’t going to cut it. Intel’s solution to the problem, like most of Intel’s solutions, involves custom silicon. As the owner of several bleeding-edge foundries, would you expect anything less?

As we’ve been talking about for a while now, the highest end Haswell graphics configuration includes 128MB of eDRAM on-package. The eDRAM itself is a custom design by Intel, built on a variant of Intel’s P1271 22nm SoC process (not P1270, the CPU process). Intel needed a set of low-leakage 22nm transistors rather than the ability to drive very high frequencies, which is why it’s using the mobile SoC 22nm process variant here.

Despite its name, the eDRAM silicon is actually separate from the main microprocessor die - it’s simply housed on the same package. Intel’s reasoning here is obvious. By making Crystalwell (the codename for the eDRAM silicon) a discrete die, it’s easier to respond to changes in demand. If Crystalwell demand is lower than expected, Intel still has a lot of quad-core GT3 Haswell die that it can sell and vice versa.

Crystalwell Architecture

Unlike previous eDRAM implementations in game consoles, Crystalwell is a true fourth-level cache in the memory hierarchy. It acts as a victim buffer to the L3 cache, meaning anything evicted from the L3 immediately goes into the L4. Both CPU and GPU requests are cached, and the cache can dynamically allocate its capacity between CPU and GPU use. If you don’t use the GPU at all (e.g. with a discrete GPU installed), Crystalwell will still cache CPU requests. That’s right: Haswell CPUs equipped with Crystalwell effectively have a 128MB L4 cache.
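To make the victim-cache behavior concrete, here’s a minimal sketch in Python. It’s a toy model built on my own assumptions (LRU replacement at both levels, sizes measured in cache lines, and an L4 that simply drops its own evictions); Crystalwell’s actual replacement and allocation policies are undisclosed.

```python
from collections import OrderedDict

class VictimCacheModel:
    """Toy model of an L3 backed by a victim L4 (eDRAM).

    Sizes are in cache lines, replacement is plain LRU, and the L4 is
    filled only by L3 evictions -- all assumptions of mine; Crystalwell's
    actual policies are undisclosed.
    """

    def __init__(self, l3_lines, l4_lines):
        self.l3 = OrderedDict()   # address -> None, ordered oldest-first
        self.l4 = OrderedDict()
        self.l3_lines = l3_lines
        self.l4_lines = l4_lines

    def access(self, addr):
        if addr in self.l3:                  # L3 hit
            self.l3.move_to_end(addr)
            return "L3"
        if addr in self.l4:                  # L4 hit: promote back into L3
            del self.l4[addr]
            source = "L4"
        else:                                # miss: fetched from main memory
            source = "DRAM"
        self.l3[addr] = None                 # demand fills land in L3 only
        if len(self.l3) > self.l3_lines:     # L3 eviction feeds the L4...
            victim, _ = self.l3.popitem(last=False)
            self.l4[victim] = None
            if len(self.l4) > self.l4_lines:
                self.l4.popitem(last=False)  # ...and L4 evictions are dropped
        return source
```

The property the sketch captures is that the L4 only ever sees L3 victims; demand fills land in L3 first, which matches the victim-buffer behavior Intel describes.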

Intel isn’t providing much detail on the connection to Crystalwell other than to say that it’s a narrow, double-pumped serial interface capable of delivering 50GB/s of bandwidth in each direction (100GB/s aggregate). Access latency after a miss in the L3 cache is 30 - 32ns, nicely in between an L3 hit and a main memory access.

The eDRAM clock tops out at 1.6GHz.
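As a back-of-the-envelope check on those figures: if we assume the link runs at the eDRAM’s 1.6GHz clock (my assumption; Intel hasn’t said what the interface itself is clocked at), the implied per-direction transfer width works out as follows:

```python
clock_hz = 1.6e9                 # eDRAM clock, per Intel
per_direction_bps = 50e9         # bytes/s each way, per Intel
transfers_per_s = clock_hz * 2   # "double-pumped": two transfers per cycle
print(per_direction_bps / transfers_per_s)  # ~15.6 bytes (~125 bits) per direction
print(per_direction_bps * 2 / 1e9)          # 100 GB/s aggregate, matching Intel
```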

There’s only a single eDRAM size offered this generation: 128MB. Since it’s a cache and not a buffer (and a giant one at that), Intel found that hit rate rarely dropped below 95%. It turns out that for current workloads, Intel didn’t see much benefit beyond 32MB of eDRAM; however, it wanted the design to be future proof. Intel doubled the size to deal with any increases in game complexity, and doubled it again just to be sure. I believe the exact wording Intel’s Tom Piazza used when explaining the jump to 128MB was “go big or go home”. It’s very rare that we see Intel be so liberal with die area, which makes me think this 128MB design is going to stick around for a while.
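A 95% hit rate also explains why going beyond 32MB buys so little today: a rough average-access-time model makes the point. The 31ns L4 figure below is the midpoint of Intel’s quoted range; the DRAM latency is my own assumed typical value, not an Intel number.

```python
def amat(hit_rate, l4_ns=31.0, dram_ns=80.0):
    """Average access time for requests that already missed the L3.

    l4_ns is the midpoint of Intel's quoted 30-32ns; dram_ns is an
    assumed typical DDR3 round trip, not a figure Intel provided."""
    return hit_rate * l4_ns + (1 - hit_rate) * dram_ns

for rate in (0.90, 0.95, 0.99):
    print(f"{rate:.0%} hit rate -> {amat(rate):.1f} ns")
# 95% -> ~33.5 ns: misses barely move the average, so extra capacity
# buys future-proofing headroom more than speed today.
```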

The 32MB number is particularly interesting because it’s the same number Microsoft arrived at for the embedded SRAM on the Xbox One silicon. If you felt I was hinting heavily that the Xbox One would be fine if its eSRAM was indeed a cache, this is why. I’d also like to point out the difference in future proofing between the two designs.

The Crystalwell-enabled graphics driver can choose to keep certain things out of the eDRAM; the frame buffer, for example, isn’t stored there.

Peak Theoretical Memory Bandwidth

                            Memory Interface        Memory Frequency           Peak Theoretical Bandwidth
  Intel Iris Pro 5200       128-bit DDR3 + eDRAM    1600MHz + 1600MHz eDRAM    25.6GB/s + 50GB/s eDRAM (bidirectional)
  NVIDIA GeForce GT 650M    128-bit GDDR5           5016MHz                    80.3GB/s
  Intel HD 5100/4600/4000   128-bit DDR3            1600MHz                    25.6GB/s
  Apple A6X                 128-bit LPDDR2          1066MHz                    17.1GB/s
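The non-eDRAM numbers in the table fall straight out of bus width times effective transfer rate; a quick check:

```python
def peak_gb_per_s(bus_bits, transfers_per_s):
    # bytes per transfer * transfers per second, expressed in GB/s
    return bus_bits / 8 * transfers_per_s / 1e9

print(peak_gb_per_s(128, 1.600e9))  # 25.6 -> DDR3-1600 on a 128-bit bus
print(peak_gb_per_s(128, 5.016e9))  # 80.3 -> GT 650M's GDDR5
print(peak_gb_per_s(128, 1.066e9))  # 17.1 -> A6X's LPDDR2
```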

Intel claims it would take a 100 - 130GB/s GDDR memory interface to deliver effective performance similar to Crystalwell’s, since the latter is a cache. Repeatedly accessing the same data (e.g. texture reads) benefits greatly from a large L4 cache on package.
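Intel’s equivalence claim is at least plausible under a simple bottleneck model, where hit traffic is served by the eDRAM and miss traffic by DDR3. The model below is my own simplification, not anything Intel provided:

```python
def effective_bw(hit_rate, edram_aggregate=100.0, ddr3=25.6):
    # Whichever side saturates first caps total serviceable demand (GB/s):
    # hits are limited by the eDRAM link, misses by the DDR3 interface.
    return min(edram_aggregate / hit_rate, ddr3 / (1 - hit_rate))

print(effective_bw(0.95))  # ~105 GB/s, right in Intel's claimed 100-130GB/s range
```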

I get the impression that the plan might be to keep the eDRAM on an n-1 process going forward. When Intel moves to 14nm with Broadwell, it’s entirely possible that Crystalwell will remain at 22nm. Doing so would help Intel put older fabs to use, especially if there’s no need for a near term increase in eDRAM size. I asked about the potential to integrate eDRAM on-die, but was told that it’s far too early for that discussion. Given the size of the 128MB eDRAM on 22nm (~84mm^2), I can understand why. Intel did float an interesting idea by me though. In the future it could integrate 16 - 32MB of eDRAM on-die for specific use cases (e.g. storing the frame buffer).
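For scale, that die size implies a density figure that helps explain Intel’s hesitance to pull the eDRAM on-die; the arithmetic below just works from Intel’s own ~84mm^2 number:

```python
edram_mb = 128
die_mm2 = 84.0               # Intel's ~84mm^2 figure for the 22nm eDRAM die
print(edram_mb / die_mm2)    # ~1.5 MB per mm^2 at 22nm
print(32 / (edram_mb / die_mm2))  # even 32MB on-die would cost ~21mm^2
```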

Intel settled on eDRAM because of its high bandwidth and low power characteristics. According to Intel, Crystalwell’s bandwidth curve is very flat - far more workload independent than GDDR5. The power consumption also sounds very good. At idle, simply refreshing whatever data is stored within, the Crystalwell die will consume between 0.5W and 1W. Under load, operating at full bandwidth, power usage is 3.5 - 4.5W. The idle figures might sound a bit high, but keep in mind that since Crystalwell caches both CPU and GPU memory, it’s entirely possible, depending on the workload, to shut off the main memory controller and operate completely on-package. At the same time, I suspect there’s room for future power improvements, especially as Crystalwell (or a lower power derivative) heads toward ultra mobile silicon.
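Those figures work out to very little energy per bit moved. A rough calculation, treating Intel’s 4.5W upper bound as the full-bandwidth operating point:

```python
load_w = 4.5                  # full-bandwidth power, per Intel's upper figure
aggregate_bytes_s = 100e9     # 50GB/s each direction, both combined
print(load_w / aggregate_bytes_s * 1e12)        # ~45 pJ per byte moved
print(load_w / (aggregate_bytes_s * 8) * 1e12)  # ~5.6 pJ per bit
```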

Crystalwell is tracked by Haswell’s PCU (Power Control Unit) just like the CPU cores, GPU, L3 cache and so on. Paying attention to thermals, workload and even eDRAM hit rate, the PCU can shift power budget between the CPU, GPU and eDRAM.

Crystalwell is only offered alongside quad-core GT3 Haswell. Unlike previous generations of Intel graphics, high-end socketed desktop parts do not get Crystalwell. Only mobile H-SKUs and desktop (BGA-only) R-SKUs have Crystalwell at this point. Given the potential use as a very large CPU cache, it’s a bit insane that Intel won’t even offer a single K-series SKU with Crystalwell on-board.

As for why lower end parts don’t get it: they simply don’t have high enough memory bandwidth demands, particularly in GT1/GT2 graphics configurations. According to Intel, once you get to about 18W, GT3e starts to make sense, but you run into die size constraints there. An Ultrabook SKU with Crystalwell would make a ton of sense, but given where Ultrabooks are headed (price-wise) I’m not sure Intel could get any takers.

Comments

  • Death666Angel - Tuesday, June 4, 2013 - link

    "What Intel hopes however is that the power savings by going to a single 47W part will win over OEMs in the long run, after all, we are talking about notebooks here."
    This plus simpler board designs and fewer voltage regulators and less space used.
    And I agree, I want this in a K-SKU.
  • Death666Angel - Tuesday, June 4, 2013 - link

    And doesn't MacOS support Optimus?
    RE: "In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%."
  • GullLars - Tuesday, June 4, 2013 - link

    Those are strong words in the end, but I agree Intel should make a K-series CPU with Crystalwell. What comes to mind is that they may be doing that for Broadwell.

    The Iris Pro solution with eDRAM looks like a nice fit for what I want in my notebook upgrade coming this fall. I've been getting by on a Core2Duo laptop, and didn't go for Ivy Bridge because there were no good models with a 1920x1200 or 1920x1080 display without dedicated graphics. For a system that will not be used for gaming at all, but needs resolution for productivity, it wasn't worth it. I hope this will change with Haswell, and that I will be able to get a 15" laptop with >= 1200p without dedicated graphics. A 4950HQ or 4850HQ seems like an ideal fit. I don't mind spending $1500-2000 for a high quality laptop :)
  • IntelUser2000 - Tuesday, June 4, 2013 - link

    ANAND!!

    You got the FLOPs rating wrong on the Sandy Bridge parts. They are at 1/2 of Ivy Bridge.

    1350MHz with 12 EUs and 8 FLOPs/EU works out to 129.6 GFLOPS. While it's true that in very limited scenarios Sandy Bridge's iGPU can co-issue, the benefit is small enough to be effectively non-existent. That is why a 6EU HD 2500 comes close to the 12EU HD 3000.
  • Hrel - Tuesday, June 4, 2013 - link

    If they used only the HD 4600 and Iris Pro, that'd probably be better, as long as it's clearly labeled on laptops: HD 4600 (don't expect to do any video work on this), Iris Pro (it's passable in a pinch).

    But I don't think that's what's going to happen. Iris Pro could be great for Ultrabooks; I don't really see any use outside of that though. A low-end GT 740M is still a better option in any laptop that has the thermal room for it. Considering you can put those in 14" or larger ultrabooks, I still think Intel's graphics aren't serious. Then you consider the lack of Compute, PhysX, driver optimization, game-specific tuning...

    Good to see a hefty performance improvement. Still not good enough, though. Also pretty upsetting to see how many graphics SKUs they've released. OEMs are gonna screw people who don't know better just to get the price down.
  • Hrel - Tuesday, June 4, 2013 - link

    The SKU price is 500 DOLLARS!!!! They're charging you 200 bucks for a pretty shitty GPU. Intel's greed is so disgusting it overrides the engineering prowess of their employees. Truly disgusting, Intel, to charge that much for that level of performance. AMD, we need you!!!!
  • xdesire - Tuesday, June 4, 2013 - link

    May I ask a noob question? Do we have no i5s or i7s WITHOUT on board graphics any more? As a gamer I'd prefer to have a CPU + discrete GPU in my gaming machine, and I don't like having extra stuff stuck on the CPU, lying there consuming power and having no use (for my part) whatsoever. No Ivy Bridge or Haswell i5s/i7s without an iGPU or whatever you call it?
  • flyingpants1 - Friday, June 7, 2013 - link

    They don't consume power while they're not in use.
  • Hrel - Tuesday, June 4, 2013 - link

    WHY THE HELL ARE THOSE SO EXPENSIVE!!!!! Holy SHIT! 500 dollars for a 4850HQ? They're charging you 200 dollars for a shitty GPU with no dedicated RAM at all! Just a cache! WTFF!!!

    Intel's greed is truly disgusting... even in the face of their engineering prowess.
  • MartenKL - Wednesday, June 5, 2013 - link

    What I don't understand is why Intel didn't do a "next-gen console like" processor, like taking the 4770R and doubling or even quadrupling the GPU; wasn't there space? The thermal headroom must have been there, as we are used to CPUs with TDPs as high as 130W. Anyhow, combining that with awesome drivers for Linux would have been real competition to AMD/PS4/XONE for Valve/Steam. A complete system under 150W capable of awesome 1080p60 gaming.

    So now I am looking for the best performing GPU under 75W, i.e. no external power. Which is it, still the Radeon HD 7750?
