The Prelude

As Intel got into the chipset business it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP and the FSB), the size of the North Bridge die had to increase in order to accommodate all of the external-facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn't need all of that area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw this problem and contemplated throwing some L3 cache onto its North Bridges. Intel's solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all of the other necessary controllers in the North Bridge were accounted for. As a result, Intel's integrated graphics was never particularly good. Intel didn't care about graphics; it just had some free space on a necessary piece of silicon and decided to do something with it. High performance GPUs need lots of transistors, something Intel would never give its graphics architects - they only got the bare minimum. It also didn't make sense to focus on things like driver optimizations and image quality; investing in people and infrastructure to support something you're giving away for free never made a lot of sense.

Intel hired some very passionate graphics engineers, who always petitioned Intel management for more die area to work with, but the answer always came back no. Intel was a pure-blooded CPU company, and the GPU industry wasn't interesting enough at the time. Intel's GPU leadership needed another approach.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel's chipsets were always built on an n-1 or n-2 process: if Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect helped Intel get more mileage out of its older fabs, which made the accountants at Intel quite happy, as those $2 - $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous-generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die area up once again. This time, however, the problem wasn't as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or it would have to integrate parts of the chipset into the CPU.

Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.

Pure economics and an unwillingness to invest in older fabs made the GPU a first-class citizen in Intel silicon, but Intel management still didn't have the motivation to dedicate more die area to it. That encouragement would come externally, from Apple.

Looking at the past few years of Apple products, you'll recognize one common thread: Apple as a company values GPU performance. When Apple was a small customer of Intel's, its GPU desires didn't really matter, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape the designs. There's no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into silicon. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel's roadmap. That's how we got Intel's HD 3000. And that's how we got here.

Haswell GPU Architecture & Iris Pro
Comments

  • Death666Angel - Tuesday, June 4, 2013 - link

    "What Intel hopes however is that the power savings by going to a single 47W part will win over OEMs in the long run, after all, we are talking about notebooks here."
    This, plus simpler board designs, fewer voltage regulators and less board space.
    And I agree, I want this in a K-SKU.
  • Death666Angel - Tuesday, June 4, 2013 - link

    And doesn't MacOS support Optimus?
    RE: "In our 15-inch MacBook Pro with Retina Display review we found that simply having the discrete GPU enabled could reduce web browsing battery life by ~25%."
  • GullLars - Tuesday, June 4, 2013 - link

    Those are strong words in the end, but I agree Intel should make a K-series CPU with Crystalwell. What comes to mind is that they may be doing that for Broadwell.

    The Iris Pro solution with eDRAM looks like a nice fit for what I want in my notebook upgrade coming this fall. I've been getting by on a Core 2 Duo laptop, and didn't go for Ivy Bridge because there were no good models with a 1920x1200 or 1920x1080 display without dedicated graphics. For a system that will not be used for gaming at all, but needs resolution for productivity, it wasn't worth it. I hope this will change with Haswell, and that I will be able to get a 15" laptop with >= 1200p without dedicated graphics. The 4950HQ or 4850HQ seems like an ideal fit. I don't mind spending $1500-2000 for a high quality laptop :)
  • IntelUser2000 - Tuesday, June 4, 2013 - link

    ANAND!!

    You got the FLOPs rating wrong on the Sandy Bridge parts. They are at 1/2 of Ivy Bridge.

    1350MHz with 12 EUs and 8 FLOPs/EU works out to 129.6 GFLOPS. While it's true that in very limited scenarios Sandy Bridge's iGPU can co-issue, the benefit is small enough to be practically non-existent. That is why a 6 EU HD 2500 comes close to a 12 EU HD 3000.
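
    (A quick back-of-the-envelope check of those numbers, sketched in Python below. The Sandy Bridge figures are the ones quoted in this comment; the Ivy Bridge HD 4000 configuration of 16 EUs at 16 FLOPs/EU per clock and a 1150MHz max clock is assumed here for comparison, and the peak_gflops helper is purely illustrative.)

        # Peak single-precision throughput = clock x EUs x FLOPs per EU per clock
        def peak_gflops(clock_mhz, eus, flops_per_eu_per_clock):
            return clock_mhz * 1e6 * eus * flops_per_eu_per_clock / 1e9

        hd3000 = peak_gflops(1350, 12, 8)    # 129.6 GFLOPS, ignoring co-issue
        hd4000 = peak_gflops(1150, 16, 16)   # 294.4 GFLOPS with the assumed HD 4000 config
        print(hd3000, hd4000)                # the 8 vs 16 FLOPs/EU is the "1/2" referred to above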
  • Hrel - Tuesday, June 4, 2013 - link

    If they use only the HD 4600 and Iris Pro that'd probably be better, as long as it's clearly labeled on laptops: HD 4600 (don't expect to do any video work on this), Iris Pro (it's passable in a pinch).

    But I don't think that's what's going to happen. Iris Pro could be great for Ultrabooks; I don't really see any use for it outside of that, though. A low-end GT 740M is still a better option in any laptop that has the thermal room for it. Considering you can put those in 14" or larger ultrabooks, I still think Intel's graphics aren't serious. Then you consider the lack of compute, PhysX, driver optimization, game-specific tuning...

    Good to see a hefty performance improvement. Still not good enough, though. Also pretty upsetting to see how many graphics SKUs they've released. OEMs are gonna screw people who don't know any better just to get the price down.
  • Hrel - Tuesday, June 4, 2013 - link

    The SKU price is 500 DOLLARS!!!! They're charging you 200 bucks for a pretty shitty GPU. Intel's greed is so disgusting it overrides the engineering prowess of their employees. Truly disgusting, Intel, to charge that much for that level of performance. AMD, we need you!!!!
  • xdesire - Tuesday, June 4, 2013 - link

    May I ask a noob question? Do we have no i5s or i7s WITHOUT onboard graphics anymore? As a gamer I'd prefer to have a CPU + discrete GPU in my gaming machine, and I don't like having extra stuff stuck on the CPU, lying there consuming power and having no use (for my part) whatsoever. No Ivy Bridge or Haswell i5s or i7s without an iGPU or whatever you call it?
  • flyingpants1 - Friday, June 7, 2013 - link

    They don't consume power while they're not in use.
  • Hrel - Tuesday, June 4, 2013 - link

    WHY THE HELL ARE THOSE SO EXPENSIVE!!!!! Holy SHIT! 500 dollars for a 4850HQ? They're charging you 200 dollars for a shitty GPU with no dedicated RAM at all! Just a cache! WTFF!!!

    Intel's greed is truly disgusting... even in the face of their engineering prowess.
  • MartenKL - Wednesday, June 5, 2013 - link

    What I don't understand is why Intel didn't do a "next-gen console-like processor", like taking the 4770R and doubling or even quadrupling the GPU. Wasn't there space? The thermal headroom must have been there, as we are used to CPUs with TDPs as high as 130W. Anyhow, combining that with awesome drivers for Linux would have been real competition to AMD/PS4/XONE for Valve/Steam: a complete system under 150W capable of awesome 1080p60 gaming.

    So now I am looking for the best-performing GPU under 75W, i.e. one that needs no external power connector. Which is it, still the Radeon HD 7750?
