Ivy Bridge Intro: Putting Intel’s Mobile CPUs in Perspective

In retrospect, last year was a phenomenal one for Intel: other than stumbling out of the gate with a chipset bug, Sandy Bridge (the 2nd Generation Core i-Series processor) proved amazingly capable, particularly on the mobile front. Sandy Bridge processors provided excellent performance, great battery life, and reasonable graphics for most uses outside of gaming. Pair a Sandy Bridge CPU with NVIDIA’s Optimus Technology and you could get everything you’d want from a laptop: mobility, performance, gaming…well, everything except a budget price. But as the adage goes, you get what you pay for, and many people were more than happy to pay for Sandy Bridge laptops.

The real reason for Sandy Bridge’s success is that it finally merged Intel’s mobile strategy into one line, along with delivering in spades on the performance front. Prior to Sandy Bridge, Intel serviced the mobile sector with two wildly different architectures. At the high-performance end of the spectrum was Clarksfield, a mobile variant of the desktop Lynnfield CPU. Clarksfield/Lynnfield were in essence the mainstream version of Bloomfield/Nehalem, Intel’s original Core i7 processor, with dual-channel memory and a lower price point. The problem with Clarksfield from the mobile standpoint is that it still used a lot of power, so even with large batteries you were typically limited to three or four hours of battery life at most. Meanwhile, for mainstream users that didn’t need quite as much CPU performance, Intel had the dual-core Arrandale with its newly minted Intel HD Graphics. The result was a substantially lower price, and thanks to the IGP Arrandale could deliver on the battery life front as well—and it really paved the way for the adoption of NVIDIA’s Optimus Technology. With Sandy Bridge, Intel brought the high-end and mainstream mobile CPUs together into one product line, with quad-core and dual-core offerings that could work in the same socket.

Sandy Bridge wasn’t just about unifying their mobile product line, however. Late in the Core 2 era, Intel started a push for decent performance with exceptional battery life, at prices that would no longer break the bank. ULV (Ultra Low Voltage) processors have been around for some time, but they typically ended up in business-oriented ultraportables that could set you back $2000 or more. With the rise of the netbook, such ultraportables could no longer command massive premiums; Intel recognized this and created their CULV products—Consumer Ultra Low Voltage CPUs. Along with the rebranding came a drop in price, and around the end of 2009 and early 2010, CULV laptops came out en masse. Pricing was about 2-2.5X that of Atom-based netbooks for most of the CULV laptops, but performance was often three times as high, and you got a great business laptop that had a full copy of Windows 7 (rather than the crippled Windows 7 Starter) and enough RAM to make it run properly.

So what does all of this have to do with Sandy Bridge? Well, Arrandale never really lived up to the promises of CULV; Arrandale ULV processors improved performance but at the cost of battery life, and pricing on most models was higher than consumers were willing to pay. With Sandy Bridge, Intel came up with a new way to sell people on ULV processors: the Ultrabook. Sure, on the surface it was little more than a rebranded ultraportable with the requirement that all models include an SSD, and Ultrabooks also borrowed heavily from the MacBook Air design document. We’re still waiting for the ultimate Ultrabook, but even so there has been quite a bit of talk about these sleek little laptops, and thanks to improved Turbo Boost and HD 3000 graphics, for thin-and-light users there’s plenty to like.

That brings us up to today’s release of Ivy Bridge. Last year I posited that Sandy Bridge was actually more important to Intel on the mobile side of the equation. The desktop versions were certainly attractive, but saving a few extra watts of power with an IGP instead of a discrete GPU doesn’t matter so much on the desktop, and performance was only moderately faster than Lynnfield. Even Intel seemed to acknowledge Sandy Bridge was more for laptops: many of the desktop CPUs shipped with the trimmed down HD 2000 IGP instead of the full HD 3000 IGP—though ironically the high-end K-series SKUs got the full IGP (which often went unused). Ivy Bridge basically follows in the footsteps of Sandy Bridge, which is in line with Intel’s “tick-tock” cadence.

As a “tick,” Ivy Bridge shifts to a new process technology (22nm tri-gate transistors) but otherwise largely builds off of Sandy Bridge. There will presumably still be dual- and quad-core CPUs that can run in the same socket (Intel is only detailing their quad-core IVB parts right now, though dual-core parts are coming), and what’s more, Ivy Bridge can work as a drop-in replacement for Sandy Bridge (at least on the desktop), provided you have an updated BIOS. But then Intel also decided to make things interesting by doing a “tock” on the GPU side of the equation; Ivy Bridge’s HD 4000 IGP brings Intel into the DX11 arena, promising a fairly sizeable improvement in IGP performance along with compatibility with DX11 games and applications. The result is that Ivy Bridge is a “tick+”.

Intel’s IGP has been the whipping boy of graphics pretty much since its inception, but with Arrandale’s HD Graphics Intel finally started to address performance and driver concerns. Arrandale wasn’t really fast enough for most games, even at minimum detail settings and a low resolution, but it could handle Blu-ray decoding and roughly doubled performance compared to Intel’s previous generation GMA 4500 IGP. Sandy Bridge basically doubled down again, so in the course of two generations Intel went from a completely anemic DX9 IGP to something that was nipping at the heels of the entry-level AMD and NVIDIA discrete GPUs. If Ivy Bridge continues the trend while adding DX11 features, it would end up firmly in the realm of modern GPUs…but Intel isn’t actually promising that much of an improvement over HD 3000. Instead, we’ve been led to expect performance that’s anywhere from 30-60% better (sometimes more) than HD 3000; that’s still enough of an increase that our “Value” gaming settings (basically targeting medium detail at 1366x768) may finally prove playable on most titles.

It’s not just about graphics performance, naturally. Having the best GPU hardware on the planet won’t do you any good unless your hardware works properly with all the latest games and applications, and that means having good drivers. Intel has been promising better drivers for a few years, and for the most part they’ve delivered. Still, AMD and NVIDIA have been doing high-performance graphics for a lot longer, and in general they have larger driver teams and perform compatibility testing with more titles. We can’t match that breadth of testing on our own, but we will run tests on both our 2012 and 2011 gaming suites, along with some other games we don’t normally benchmark, just to see how many driver problems we do—or don’t—encounter.

We’ve already posted a detailed analysis of the Ivy Bridge architecture elsewhere, and others are covering the desktop aspects of Ivy Bridge, so this article will primarily focus on the mobility side of the equation. Will the shift to a new manufacturing process improve thermals and power requirements, and thus deliver better battery life? How will the new and improved—and larger—HD 4000 IGP affect performance as well as power use? Remember that this is Intel’s first 22nm chip, and early silicon off of a new process node often won’t be as efficient as what we’ll see in six months. Finally, we need to mention that the laptop we’re testing is basically pre-release hardware; the final version that ships should look similar to what we have in our hands, but there are a few indications, which we’ll discuss in a moment, that this is a not-for-retail product. What that means is that while our results should be representative of what Ivy Bridge has to offer on a broad scale, firmware tweaks and other differences between laptops may result in slightly higher (or lower) performance on shipping laptops. With that out of the way, let’s take a look at Intel’s mobile Ivy Bridge lineup and then see what the ASUS N56VM has to offer.

Mobile Ivy Bridge Lineup and New Chipsets


Comments

  • krumme - Monday, April 23, 2012 - link

    There is a reason Intel is bringing 14nm to the atoms in 2014.

    The product here doesn't make sense. It's expensive and no better than the one before it, except for better gaming—that is, if the drivers work.

    I don't know if the SB notebooks I have in the house are the same as the ones Jarred has. Mine didn't bring a revolution, but solid battery life, like the Penryn notebook and Core Duo I also have. In my world it's more or less the same if you add an SSD for normal office work.

    Loads of utterly uninteresting benchmarks don't mask the facts. This product excels where it's not needed, and fails where it should excel most: battery life.

    Tri-gate is mostly a failure right now. There's no need to call it otherwise, and the "preview" reads like a press release in my world. At the very least, tri-gate is not living up to expectations. Sometimes that happens with technology development; it's a wonder it normally goes so smoothly for Intel, and a testament to their huge expertise. When the technology matures and Intel makes better use of it in the architecture, we will see huge improvements. Spare the praise until then; this is just wrong and bad.
  • JarredWalton - Monday, April 23, 2012 - link

    Seriously!? You're going to mention Atom as the first comment on Ivy Bridge? Atom is such a dog as far as performance is concerned that I have to wonder what planet you're living on. 14nm Atom is going to still be a slow product, only it might double the performance of Cedar Trail. Heck, it could triple the performance of Cedar Trail, which would make it about as fast as Core 2 CULV from three years ago. Hmmm.....

    If Sandy Bridge wasn't a revolution, offering twice the performance of Clarksfield at the high end and triple the battery life potential (though much of that is because Clarksfield was paired with power hungry GPUs), I'm not sure what would be a revolution. Dual-core SNB wasn't as big of a jump, but it was still a solid 15-25% faster than Arrandale and offered 5% to 50% better battery life--the 50% figure coming in H.264 playback; 10-15% better battery life was typical of office workloads.

    Your statement with regards to battery life basically shows you either don't understand laptops, or you're being extremely narrow-minded with Ivy Bridge. I was hoping for more, but we're looking at one set of hardware (i7-3720QM, 8GB RAM, 750GB 7200RPM HDD, switchable GT 630M GPU, and a 15.6" LCD that can hit 430 nits), and we're looking at it several weeks before it will go on sale. That battery life isn't a huge leap forward isn't a real surprise.

    SNB laptops draw around 10W at idle, and 6-7W of that is going to everything besides the CPU. That means SNB CPUs draw around 2-3W at idle. This particular IVB laptop draws around 10W at idle, and all of the other components (especially the LCD) will easily draw at least 6-7W, which means once again the CPU is using 2-3W at idle. IVB could draw 0W at idle and the best we could hope for would be a 50% improvement in battery life.
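    To put rough numbers on that power-budget argument, here's a back-of-envelope sketch. The 10W total and ~6.7W non-CPU figures are the estimates above; the 60Wh battery capacity is a hypothetical value chosen for illustration.

```python
# Idle power budget: how much battery life can a more efficient CPU buy?
BATTERY_WH = 60.0     # hypothetical 60 Wh battery
TOTAL_IDLE_W = 10.0   # whole-system idle draw (estimate from above)
NON_CPU_W = 6.7       # LCD, chipset, RAM, drive, etc. (estimate from above)

cpu_idle_w = TOTAL_IDLE_W - NON_CPU_W        # leaves ~3.3 W for the CPU

baseline_hours = BATTERY_WH / TOTAL_IDLE_W   # battery life today
best_case_hours = BATTERY_WH / NON_CPU_W     # limit if the CPU drew 0 W

improvement = best_case_hours / baseline_hours - 1
print(f"baseline: {baseline_hours:.1f} h at idle")
print(f"upper bound with a 0 W CPU: {best_case_hours:.1f} h ({improvement:.0%} more)")
```

    With these numbers the ceiling works out to roughly 50% longer runtime, no matter how efficient the CPU gets; the fixed draw of the display and the rest of the platform dominates.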

    As for the final comment, 22nm and tri-gate transistors are hardly a failure. They're not the revolution many hoped for, at least not yet. Need I point out that Intel's first 32nm parts (Arrandale) also failed to eclipse their outgoing and mature 45nm parts? I'm not sure what the launch time frame is for ULV IVB, but I suspect by the time we see those chips 22nm will be performing a lot better than it is in the first quad-core chips.

    From my perspective, to shrink a process node, improve performance of your CPU by 5-25%, and keep power use static is still a definite success and worthy of praise. When we get at least three or four other retail IVB laptops in for review, then we can actually start to say with conviction how IVB compares to SNB. I think it's better and a solid step forward for Intel, especially for lower cost laptops and ultrabooks.

    If all you're doing is office work, which is what it sounds like, you're right: Core 2, Arrandale, Sandy Bridge, etc. aren't a major improvement. That's because if all you're doing is office work, 95% of the time the computer is waiting for user input. It's the times where you really tax your PC that you notice the difference between architectures, and the change from Penryn to Arrandale to Sandy Bridge to Ivy Bridge represents about a doubling in performance just for mundane tasks like office work...and a lot of people would still be perfectly content to run Word, Excel, etc. on a Core 2 Duo.
  • usama_ah - Monday, April 23, 2012 - link

    Tri-gate is not a failure; the move to tri-gate wasn't expected to bring any crazy performance benefits. Tri-gate was necessary because of the limitations (leakage) of ever-smaller transistors. Tri-gate has nothing to do with the architecture of the processor per se; it's about how each individual transistor is created at such a small scale. Architectural improvements are key to significant improvements.

    Sandy Bridge was great because it was a brand new architecture. If you have been even half-reading what they post on AnandTech, Intel's tick-tock strategy dictates that the move to Ivy Bridge would bring small improvements BY DESIGN.

    You will see improvements in battery life with the NEW architecture AFTER Ivy Bridge (when Intel stays at 22nm), the so-called "tock": Haswell. And yes, tri-gate will still be in use at that time.
  • krumme - Monday, April 23, 2012 - link

    As I understand tri-gate, it provides the opportunity for even finer-grained power control in the individual transistor, by using different numbers of gates. If you design your architecture to the process (using that opportunity, as IB does not, but the first 22nm Atom apparently does), there should be "huge" savings.

    I assume by BY DESIGN you mean "by process", btw.

    In my world, process improvement is key to most industrial production, with tools often being the weak link. The process decides what is possible in your design. That's why Intel has spent billions "just" installing the right equipment.
  • JarredWalton - Monday, April 23, 2012 - link

    No, he means Ivy Bridge is not the huge leap forward by design -- Intel intentionally didn't make IVB a more complex, faster CPU. That will be Haswell, the 22nm tock to the Ivy Bridge tick. Making large architectural changes requires a lot of time and effort, and making the switch between process nodes also requires time and effort. If you try to do both at the same time, you often end up with large delays, and so Intel has settled on a "tick tock" cadence where they only do one at a time.

    But this is all old news and you should be fully aware of what Intel is doing, as you've been around the comments for years. And why is it you keep bringing up Atom? It's a completely different design philosophy from Ivy Bridge, Sandy Bridge, Merom/Conroe, etc. Atom is more a competitor to ARM SoCs, which have roughly an order of magnitude less compute performance than Ivy Bridge.
  • krumme - Monday, April 23, 2012 - link

    - Intel speeds up Atom development—not using depreciated equipment for the future.
    - Intel invests heavily to get into new business areas, and has done so for years.
    - Haswell will probably be slimmer on the CPU part.

    The reason they do so is that the need for CPU power outside the server market is stagnating. New third-world markets are emerging. And everything is turning mobile—it's all over your front page now, I can see.

    The new Atom will probably prove adequate for most (like, say, Core 2 CULV). Then they will have the perfect product. It's about mobility and, above all, price. Haswell will probably be the product for the rest of the mainstream market, leaving even less room for the dedicated GPU.

    IB is an old-style desktop CPU, maturing a not-quite-ready 22nm tri-gate process. It was designed to fight a BD that never arrived. That's why it doesn't impress. And you can tell Intel knows, because the mobile lineup is so slim.

    The market has changed. AMD's share price has rocketed even though their high-end CPU failed, because the Atom-sized Bobcat and old-technology Llano could enter the new market. I could not have imagined the success of Llano. I didn't understand the purpose of it, because Trinity was coming so close behind. But the numbers speak for themselves. People buy a user experience where it matters at the lowest cost, not PCMark, encoding times, zip, unzip.

    You have to use new benchmarks. And they have to be reinvented again. They have to make sense. Obviously the CPU has to play a lesser role and the rest a larger one. You have a very strong team, if not the strongest out there. Benchmark methodology should be at the top of your list and consume a lot of your development time.
  • JarredWalton - Monday, April 23, 2012 - link

    The only benchmarks that would make sense under your new paradigm are graphics and video benchmarks, well, and battery life as well, because those are the only areas where a better GPU matters. Unless you have some other suggestions? Saying "CPU speed is reaching the point where it really doesn't matter much for a large number of people" is certainly true, and I've said as much on many occasions. Still, there's a huge gulf between Atom and Core 2 still, and there are many tasks where CULV would prove insufficient.

    By the time the next Atom comes out, maybe it will be fixed in the important areas so that stuff like YouTube/Netflix/Hulu all work without issue. Hopefully it also supports at least 4GB RAM, because right now the 2GB limit along with bloated Windows 7 makes Atom a horrible choice IMO. Plus, margins are so low on Atom that Intel doesn't really want to go there; they'd rather figure out ways to get people to continue paying at least $150 per CPU, and I can't fault their logic. If CULV became "fast enough" for everyone Intel's whole business model goes down the drain.

    Funny thing is that even though we're discussing Atom and by extension ARM SoCs, those chips are going through the exact same rapid increases in performance. And they need it. Tablets are fine for a lot of tasks, but opening up many web sites on a tablet is still a ton slower than opening the same sites on a Windows laptop. Krait and Tegra 3 still deliver about a third of the performance I want from a CPU.

    As for your talk about AMD share prices, I'd argue that AMD share prices have increased because they've rid themselves of the albatross that was their manufacturing division. And of course, GF isn't publicly traded and Abu Dhabi has plenty of money to invest in taking over CPU manufacturing. It's a win-win scenario for those directly involved (AMD, UAE), though I'm not sure it's necessarily a win for everyone.
  • bhima - Monday, April 23, 2012 - link

    I figure Intel wants everyone to want their CULV processors, since they seem to charge OEMs the most for them—or are the profit margins not that great because they're a more difficult/expensive processor to make?
  • krumme - Tuesday, April 24, 2012 - link

    Yes—video and gaming are what matter for the consumer now; everything else is okay, or will hopefully be by 2014. What matters is the SSD, screen quality, and everything else—just not CPU power. The CPU just needs far less space. CPUs getting so much space is just old habit for us old geeks.

    AMD getting rid of the GF burden has been in the plan for years. It's known and cannot influence the share price. Basically, the (late) move to a mobile focus, and the excellent execution of those consumer-shaped (not reviewer-shaped) APUs, is part of the reason.

    The reviewers need to shift their mindset :) BTW, it's my impression Dustin is more in line with what the general consumer wants. Ask him if he thinks the consumer wants a new SSD benchmark with 100 hours of 4K reading and writing.
  • MrSpadge - Monday, April 23, 2012 - link

    No, the finer granularity is just a nice side effect (which could probably be used more aggressively in the future). The main benefit of tri-gate is more control over the channel, which enables IB to reach high clock speeds at comparatively very low voltages, and with very low leakage.
