Twenty-two months ago Intel launched its LGA-2011 platform and Sandy Bridge E, aimed at the high-end desktop enthusiast. The platform brought more cores, more PCIe lanes and more memory bandwidth to users who needed more than what had become of Intel's performance desktop offerings. It was an acknowledgement of a high-end market that seemed to have lost importance over the past few years. On the surface, Sandy Bridge E was a very good gesture on Intel's part. Unfortunately, the fact that it's been nearly two years since we first met LGA-2011 without a single architecture update, despite the arrival of both Ivy Bridge and Haswell, doesn't send a great message to users willing to part with hard-earned money to buy into the platform.

Today we see that long-awaited update. LGA-2011 remains unchanged, but the processor you plug into the socket moves to 22nm. This is Ivy Bridge Extreme.

Ivy Bridge E: 1.86B Transistors, Up to 6 Cores & 15MB L3

There’s a welcome amount of simplicity in the Extreme Edition lineup. There are only three parts to worry about:

With the exception of the quad-core 4820K, IVB-E launch pricing is identical to what we saw with Sandy Bridge E almost two years ago. The 4820K is slightly cheaper than the highest-end Haswell part, but it’s still $25 more expensive than its SNB-E counterpart was at launch. The difference? The 4820K is a K-SKU, meaning it’s fully unlocked, and thus carries a small price premium.

All of the IVB-E parts ship fully unlocked, and are generally capable of reaching the same turbo frequencies as their predecessors. The Core i7-4960X and the i7-3970X before it are the only Intel CPUs officially rated for frequencies of up to 4GHz (although we’ve long been able to surpass that via overclocking). Just as before, none of these parts ship with any sort of cooling (because profit); you'll need to buy a heatsink/fan or closed-loop water cooler separately. Intel does offer a new cooler for IVB-E, the TS13X:

While Sandy Bridge E was an 8-core die with two cores disabled, Ivy Bridge E shows up as a native 6-core design. There’s no die harvesting going on here; all of the transistors on the chip are fully functional. The result is a significant reduction in die area, from the insanity that was SNB-E’s 435mm2 down to an almost desktop-like 257mm2.

CPU Specification Comparison

| CPU | Manufacturing Process | Cores | GPU | Transistor Count (Schematic) | Die Size |
| Haswell GT3 4C | 22nm | 4 | GT3 | ? | 264mm2 (est) |
| Haswell GT2 4C | 22nm | 4 | GT2 | 1.4B | 177mm2 |
| Haswell ULT GT3 2C | 22nm | 2 | GT3 | 1.3B | 181mm2 |
| Intel Ivy Bridge E 6C | 22nm | 6 | N/A | 1.86B | 257mm2 |
| Intel Ivy Bridge 4C | 22nm | 4 | GT2 | 1.2B | 160mm2 |
| Intel Sandy Bridge E 6C | 32nm | 6 | N/A | 2.27B | 435mm2 |
| Intel Sandy Bridge 4C | 32nm | 4 | GT2 | 995M | 216mm2 |
| Intel Lynnfield 4C | 45nm | 4 | N/A | 774M | 296mm2 |
| AMD Trinity 4C | 32nm | 4 | 7660D | 1.303B | 246mm2 |
| AMD Vishera 8C | 32nm | 8 | N/A | 1.2B | 315mm2 |
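The shrink is easy to quantify from the table's figures alone. A quick back-of-the-envelope sketch (pure arithmetic on Intel's published schematic transistor counts and die sizes, nothing else assumed):

```python
# Quantifying the IVB-E die shrink from the spec table's figures.
snb_e_transistors, snb_e_area = 2.27e9, 435  # Sandy Bridge E, 32nm, mm^2
ivb_e_transistors, ivb_e_area = 1.86e9, 257  # Ivy Bridge E, 22nm, mm^2

area_reduction = 1 - ivb_e_area / snb_e_area
density_snb = snb_e_transistors / snb_e_area / 1e6  # million transistors per mm^2
density_ivb = ivb_e_transistors / ivb_e_area / 1e6

print(f"Die area reduction: {area_reduction:.0%}")             # 41%
print(f"SNB-E density: {density_snb:.1f} Mtransistors/mm^2")   # 5.2
print(f"IVB-E density: {density_ivb:.1f} Mtransistors/mm^2")   # 7.2
```

In other words, dropping the two dark cores and moving to 22nm cuts die area by roughly 41% while transistor density climbs by nearly 40%.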

Cache sizes remain unchanged. The highest-end SKU features a full 15MB L3 cache, the mid-range SKU comes with 12MB, and the entry-level quad-core part only gets 10MB. Intel adds official support for DDR3-1866 (1 DIMM per channel) with IVB-E, up from DDR3-1600 in SNB-E and Haswell.
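The DDR3-1866 bump is worth quantifying. Each 64-bit channel moves 8 bytes per transfer, so theoretical peak bandwidth works out as follows (a sketch of the theoretical maximums only; real-world numbers in the memory tests later are of course lower):

```python
# Theoretical peak memory bandwidth: transfers/sec * 8 bytes per 64-bit channel.
def peak_bw_gbs(transfers_per_sec, channels):
    """Peak DDR3 bandwidth in GB/s (decimal) at the given transfer rate."""
    return transfers_per_sec * 8 * channels / 1e9

ivb_e   = peak_bw_gbs(1866e6, 4)  # IVB-E: quad-channel DDR3-1866
snb_e   = peak_bw_gbs(1600e6, 4)  # SNB-E (official spec): quad-channel DDR3-1600
haswell = peak_bw_gbs(1600e6, 2)  # Haswell: dual-channel DDR3-1600

print(f"IVB-E:   {ivb_e:.1f} GB/s")    # 59.7
print(f"SNB-E:   {snb_e:.1f} GB/s")    # 51.2
print(f"Haswell: {haswell:.1f} GB/s")  # 25.6
```

That's nearly 60GB/s of peak bandwidth, more than double what the mainstream dual-channel parts can manage on paper.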

TDPs all top out at 130W, bringing back memories of the high-end desktop SKUs of yesterday. Obviously these days much of what we consider to be high-end exists below 100W.

Of course, processor graphics is a no-show on IVB-E. As IVB-E retains the same socket as SNB-E, there are physically no pins set aside for things like video output. Surprisingly enough, early rumors indicate Haswell E will also ship without an integrated GPU.

The Extreme Cadence & Validated PCIe 3.0

Understanding why we’re talking about Ivy Bridge E now instead of Haswell E is pretty simple. The Extreme desktop parts come from the Xeon family. Sandy Bridge E was nothing more than a 6-core Sandy Bridge EP variant (Xeon E5), and Ivy Bridge E is the same. In the Xeon space, the big server customers require that Intel keep each socket around for at least two generations to increase the longevity of their platform investment. As a result we got two generations of Xeon CPUs (SNB-E/EP, and IVB-E/EP) that leverage LGA-2011. Because of when SNB-E was introduced, the LGA-2011 family ends up out of phase with the desktop/notebook architectures by around a year. So we get IVB-E in 2013 while desktop/notebook customers get Haswell. Next year when the PC clients move to 14nm Broadwell, the server (and extreme desktop) customers will get 22nm Haswell-E.

The only immediate solution to this problem would be for the server parts to skip a generation: either skip IVB-E and go straight to Haswell-E (not feasible, as that would violate the two-generation rule above), or skip Haswell-E and go directly to Broadwell-E next year. Intel tends to want to get the most use out of each of its architectures, so I don’t see a burning desire to skip one.

Server customers are more obsessed with core counts than modest increases in IPC, so I don’t see a lot of complaining there. On the desktop however, Ivy Bridge E poses a more interesting set of tradeoffs.

The big advantages IVB-E brings to the table are a ridiculous number of PCIe lanes, a quad-channel memory interface and two more cores in its highest-end configuration.

While the standard desktop Sandy Bridge, Ivy Bridge and Haswell parts all feature 16 PCIe lanes from the CPU’s native PCIe controller, the Extreme parts (SNB-E/IVB-E) have more than twice that.

There are 40 total PCIe 3.0 lanes that branch off of Ivy Bridge E. Since IVB-E and SNB-E are socket compatible, that’s the same number of lanes we got last time. The difference this time around is that IVB-E’s PCIe controller has been fully validated with PCIe 3.0 devices. While Sandy Bridge E technically supported PCIe 3.0, its controller was finalized before PCIe 3.0 devices were on the market and thus wasn’t validated with any of them. The most famous case was NVIDIA’s Kepler cards, which by default run in PCIe 2.0 mode on SNB-E systems. Forcing PCIe 3.0 mode on SNB-E worked in many cases, while in others you’d see instability.
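The practical difference between the two modes is substantial. PCIe 3.0 runs at 8 GT/s with 128b/130b encoding versus PCIe 2.0's 5 GT/s with 8b/10b encoding, so per-lane usable bandwidth nearly doubles. A quick sketch of the theoretical per-direction peaks:

```python
# Theoretical per-direction PCIe bandwidth: raw transfer rate scaled by the
# line-code efficiency, divided by 8 bits per byte, times lane count.
def pcie_bw_gbs(gt_per_sec, encoding_efficiency, lanes):
    return gt_per_sec * encoding_efficiency / 8 * lanes

gen2_x40 = pcie_bw_gbs(5, 8 / 10, 40)     # PCIe 2.0: 8b/10b encoding
gen3_x40 = pcie_bw_gbs(8, 128 / 130, 40)  # PCIe 3.0: 128b/130b encoding

print(f"40 lanes of PCIe 2.0: {gen2_x40:.1f} GB/s")  # 20.0
print(f"40 lanes of PCIe 3.0: {gen3_x40:.1f} GB/s")  # 39.4
```

Across all 40 lanes, validated PCIe 3.0 operation is the difference between roughly 20GB/s and roughly 39GB/s of aggregate theoretical bandwidth in each direction.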

NVIDIA tells us that it plans to enable PCIe 3.0 on all IVB-E systems. Current drivers (including the 326.80 beta driver) treat IVB-E like SNB-E and force all Kepler cards to PCIe 2.0 mode, but NVIDIA has a new driver going through QA right now that will default to PCIe 3.0 when it detects IVB-E. SNB-E systems will continue to run in PCIe 2.0 mode.

Intel’s X79: Here for One More Round

Unlike its mainstream counterpart, Ivy Bridge E does not come with a new chipset. That’s right, not only is IVB-E socket compatible with SNB-E, it ships with the very same chipset: X79.

As a refresher, Intel’s X79 chipset has no native USB 3.0 support and only features two native 6Gbps SATA ports. Motherboard makers have worked around X79’s limitations for years now by adding a plethora of third-party controllers. I personally prefer Intel’s native solutions to third-party ones, but with X79 you’ve got no choice.

The good news is that almost all existing X79 motherboards will see BIOS/EFI updates enabling Ivy Bridge E support. The key word there is almost.

When it exited the desktop motherboard market, Intel only promised to release new Haswell motherboards and to support them through the end of their warranty period. It never promised to release updated X79 motherboards for Ivy Bridge E, nor to update its existing X79 boards to support the new chips. In a very disappointing move, Intel confirmed to me that none of its own X79 boards will support Ivy Bridge E. I verified this myself by trying to boot a Core i7-4960X on my Intel DX79SI; the system wouldn’t POST.

While most existing X79 motherboards will receive BIOS updates enabling IVB-E support, anyone who bought an Intel-branded X79 board is out of luck. Given that LGA-2011 owners are by definition some of the most profitable/influential/dedicated customers Intel has, I don’t think I need to point out how damaging this is to customer relations. If it’s any consolation, IVB-E doesn’t actually offer much of a performance boost over SNB-E, so if you’re stuck with an Intel X79 motherboard without IVB-E support, you’re not missing out on too much.

The Testbed: ASUS’ New X79 Deluxe

As all of my previous X79 boards were made by Intel, I actually had no LGA-2011 motherboards that would work with IVB-E on hand. ASUS sent over the latest revision of its X79 Deluxe board with official IVB-E support:

The board worked relatively well, but it seems there’s still some work to be done on the BIOS side. When loaded with 32GB of RAM, I saw infrequent instability at stock voltages. It’s my understanding that Intel didn’t provide final BIOS code to the motherboard makers until a couple of weeks ago, so don’t be too surprised by some early teething pains. For what it’s worth, this makes Ivy Bridge E the second high-end desktop launch in a row that hasn’t lived up to Intel’s previously high standards.

Corsair supplied the AX1200i PSU and 4 x 8GB DDR3-1866 Vengeance Pro memory for the testbed.

For more comparisons be sure to check out our performance database: Bench.

Testbed Configurations

Motherboard(s): ASUS X79 Deluxe, ASUS P8Z77-V Deluxe, ASUS Crosshair V Formula, Intel DX58SO2
Memory: Corsair Vengeance DDR3-1866 9-10-9-27
SSD: Corsair Neutron GTX 240GB, OCZ Agility 3 240GB, OCZ Vertex 3 240GB
Video Card: NVIDIA GeForce GTX Titan x 2 (only 1 used for power tests)
PSU: Corsair AX1200i
OS: Windows 8 64-bit, Windows 7 64-bit, Windows Vista 32-bit (for older benchmarks)


Memory & General Purpose Performance
119 Comments

  • ShieTar - Tuesday, September 03, 2013 - link

    What's the point? A 10-core only runs at 2GHz, and an 8-core only runs at 3GHz, so both have less overall performance than a 6-core overclocked to more than 4GHz. You simply cannot put more computing power into a reasonable power envelope for a single socket. If a water-cooled Enthusiast 6-core is not enough for your needs, you automatically need a 2-socket system.

    And its not like that is not feasible for enthusiasts. The ASUS Z9PE-D8 WS, the EVGA Classified SR-X and the Supermicro X9DAE are mainboard aiming at the enthusiast / workstation market, combining two sockets for XEON-26xx with the capability to run GPUs in SLI/CrossFire. And if you are looking to spend significantly more than 1k$ for a CPU, the 400$ on those boards and the extra cost for ECC Memory should not scare you either.

    Just go and check AnandTech's own benchmarking: http://www.anandtech.com/show/6808/westmereep-to-s... . It's clear that you need two 8-cores to be faster than the enthusiast 6-cores even before overclocking is taken into account.

    Maybe with Haswell-E we can get 8 cores with >3.5GHz into <130W, but with Ivy Bridge, there is simply no point.
    Reply
  • f0d - Tuesday, September 03, 2013 - link

    who cares if the power envelope is "reasonable"?
    i already have my SBE overclocked to 5.125Ghz and if they release a 10core i would oc that thing like a mutha******

    that link you posted is EXACTLY why i want a 10/12 core instead of dual socket (which i could afford if it made sense performance wise) - its obvious that video encoding doesnt work well with NUMA and dual sockets but it does work well with multi cored single cpu's

    so i say give me a 10 core and let me OC it like crazy - i dont care if it ends up using 350W+ i have some pretty insane watercooling to suck it up (3k ultra kaze's in push/pull on a rx480rad 24v laingd5s raystorm wb - a little over the top but isnt that what these extreme cpu's are for?)
    Reply
  • 1Angelreloaded - Tuesday, September 03, 2013 - link

    I have to agree with you in the extreme market who gives a damn about being green, most will run 1200watt Plat mod PSUs with an added extra 450 watt in the background, and 4GPUs as this is pretty much the only reason to buy into 2011 socket in the first place 2 extra cors and 40x PCIe lanes. Reply
  • crouton - Tuesday, September 03, 2013 - link

    I could not agree with you more! I have a OC'd i920 that just keeps chugging along and if I'm going to drop some coin on an upgrade, I want it to be an UPGRADE. Let ME decide what's reasonable for power consumption. If I burn up a 8/10 core CPU with some crazy cooling solution then it's MY fault. I accept this. This is the hobby that I've chosen and it comes with risks. This is not some elementary school "color by numbers" hobby where you can follow a simple set of instructions to get the desired result in 10 minutes. This is for the big boys. It takes weeks or more to get it right and even then, we know we can do better. Not interested in XEON either. Reply
  • Assimilator87 - Tuesday, September 03, 2013 - link

    The 12 core models run at 2.7Ghz, which will be slightly faster than six cores at 5.125Ghz. You could also bump up the bclk to 105, which would put the CPU at 2.835Ghz. Reply
  • Casper42 - Tuesday, September 03, 2013 - link

    2690 v2 will be 10c @ 3.0 and 130W. Effectively 30Ghz.
    2697 v2 will be 12c @ 2.7 and 130W. Effectively 32.4Ghz

    Assuming a 6 Core OC'd to 5Ghz Stable, 6c @ 5.0 and 150W? (More Power due to OC)
    effectively 30Ghz.

    So tell me again how a highly OC'd and large unavailable to the masses 6c is better than a 10/12c when you need Multiple Threads?
    Keep in mind those 10 and 12 core Server CPUs are almost entirely AIR cooled and not overclocked.

    I think they should have released an 8 and 10 core Enthusiast CPU. Hike up the price and let the market decide which one they want.
    Reply
  • MrSpadge - Tuesday, September 03, 2013 - link

    6c @ 5.0 will eat more like 200+ W instead of 130/150. Reply
  • ShieTar - Wednesday, September 04, 2013 - link

    For Sandy Bridge, we had:
    2687, 8c @ 3.1 GHz => 24.8 GHz effectively
    3970X, 6c @ 3.5 GHz => 21 GHz before overclocking, only 4.2 GHz required to exceed the Xeon.

    Fair enough, for Ivy Bridge Xeons, the 10core at 3 GHz has been announced. I'll believe that claim when I see some actual benchmarks on it. I have some serious doubts that a 10core at 3 GHz can actually use less power than an 8 core at 3.4 GHz. So lets see on what frequency those parts will actually run, under load.

    Furthermore, the effective GHz are not the whole truth, even on highly parallel tasks. While cache seems to scale with the number of cores for most Xeons, memory bandwidth does not, and there are always overheads due to the common use of the L3 cache and the memory.

    Finally, not directly towards you but to several people talking about "green": Entirely not the point. No matter how much power your cooling system can remove, you are always creating thermal gradients when generating too much heat on a very small space. Why do you guys think there was no 3.5GHz 8 core for Sandy Bridge-EP? The silicon is the same for 6-core and 8-core, the core itself could run the speed. But INTEL is not going to verify the continued operation of a chip with a TDP >150W.

    They give a little leeway when it comes to the K-class, because there the risk is with customer to a certain point. But they just won't go and sell a CPU which reliably destroys itself or the MB the very moment somebody tries to overclock it.
    Reply
  • psyq321 - Thursday, September 05, 2013 - link

    I am getting 34.86 @Cinebench with dual Xeon 2697 v2 running @3 GHz (max all-core turbo).

    Good luck reaching that with superclocked 4930/4960X ;-)
    Reply
  • piroroadkill - Tuesday, September 03, 2013 - link

    All I really learn from these high end CPU results is that if you actually invested in high end 1366 in the form of 980x all that time ago, you've got probably the longest lasting system in terms of good performance that I can even think of. Reply
