GPU Performance: Iris Pro in the Wild

The new iMac is pretty good, but what drew me to the system was that it's among the first implementations of Intel's Iris Pro 5200 graphics in a shipping system. There are some pretty big differences between what ships in the entry-level iMac and what we tested earlier this year, however.

We benchmarked a Core i7-4950HQ, a 2.4GHz 47W quad-core part with a 3.6GHz max turbo and 6MB of L3 cache (in addition to the 128MB eDRAM L4). The new entry-level 21.5-inch iMac offers no CPU options in its $1299 configuration: you get a Core i5-4570R, a 65W part clocked at 2.7GHz with a 3.0GHz max turbo and only 4MB of L3 cache (still 128MB of eDRAM). The 4570R also features a lower max GPU turbo clock of 1.15GHz vs. 1.30GHz for the 4950HQ. In other words, you should expect lower performance across the board from this iMac compared to what we reviewed over the summer. At launch Apple provided a fairly old version of the Iris Pro drivers for Boot Camp; I updated to the latest available driver revision before running any of these tests under Windows.
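
To put rough numbers on that expectation, here's a minimal back-of-the-envelope sketch (mine, not part of the original testing) comparing the two SKUs purely on the clocks quoted above, under the simplifying assumption that GPU-bound performance scales roughly with peak GPU clock:

```python
# Back-of-the-envelope comparison of the two Haswell SKUs discussed above.
# Assumes GPU-bound workloads scale roughly with peak GPU clock, which is an
# approximation -- memory bandwidth and eDRAM behavior matter too.

i7_4950hq = {"turbo_ghz": 3.6, "l3_mb": 6, "gpu_turbo_ghz": 1.30, "tdp_w": 47}
i5_4570r  = {"turbo_ghz": 3.0, "l3_mb": 4, "gpu_turbo_ghz": 1.15, "tdp_w": 65}

gpu_clock_deficit = 1 - i5_4570r["gpu_turbo_ghz"] / i7_4950hq["gpu_turbo_ghz"]
cpu_turbo_deficit = 1 - i5_4570r["turbo_ghz"] / i7_4950hq["turbo_ghz"]

print(f"GPU turbo clock deficit: {gpu_clock_deficit:.0%}")  # ~12%
print(f"CPU max turbo deficit:   {cpu_turbo_deficit:.0%}")  # ~17%
```

That alone suggests a low-double-digit deficit in GPU-bound tests, before accounting for the smaller L3 cache.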

Iris Pro 5200's performance is still amazingly potent for what it is. With Broadwell I'm expecting to see another healthy increase in performance, and hopefully we'll see Intel continue down this path with future generations as well. I do have concerns about the area efficiency of Intel's Gen7 graphics. I don't normally care about performance per mm^2, but in Intel's case it matters given how stingy the company tends to be with die area.

The comparison of note is the GT 750M, as it's likely the closest in performance to the GT 640M that shipped in last year's entry-level iMac. With a few exceptions, the Iris Pro 5200 in the new iMac is performance competitive with the 750M. Where it falls short, however, it does so by a fairly large margin. We noticed this back in our Iris Pro review: Intel needs some serious driver optimization if it's going to compete with NVIDIA even in the mainstream mobile segment. Low resolution performance in Metro is great, but crank up the resolution/detail settings and the 750M pulls far ahead of Iris Pro. The same is true for Sleeping Dogs, although the penalty here appears to come with AA enabled at our higher quality settings. There's a hefty advantage across the board in BioShock Infinite as well. Look at Tomb Raider or Sleeping Dogs (without AA), however, and Iris Pro is hot on the heels of the 750M. I suspect the 750M configuration in the new iMacs is even faster still, as it uses GDDR5 memory instead of DDR3.
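
As a rough illustration of why AA in particular can hurt, here's a back-of-the-envelope sketch (my own speculation, not something measured for this review) of how quickly multisampled render targets eat into the 128MB of eDRAM:

```python
# Rough estimate of render-target footprint vs. the 128MB eDRAM (Crystalwell).
# Assumes 4 bytes of color + 4 bytes of depth/stencil per sample; real drivers
# compress and tile these buffers, so treat the numbers as an upper bound.

def render_target_mb(width, height, msaa_samples, bytes_per_sample=8):
    return width * height * msaa_samples * bytes_per_sample / (1024 ** 2)

EDRAM_MB = 128
# msaa_samples = 1 means no MSAA.
for (width, height), samples in [((1366, 768), 1), ((1920, 1080), 1), ((1920, 1080), 4)]:
    mb = render_target_mb(width, height, samples)
    print(f"{width}x{height} at {samples}x: ~{mb:.0f} MB ({mb / EDRAM_MB:.0%} of eDRAM)")
```

At 1080p with 4x MSAA the color and depth buffers alone approach half of the eDRAM before textures or other render targets are counted, which is at least consistent with the AA-heavy cases being where Iris Pro falls furthest behind.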

It's clear to me that the Haswell SKU Apple chose for the entry-level iMac is, understandably, optimized for cost and not max performance. I would've liked to see an option with a high-end R-series SKU, although I understand I'm in the minority there.

[Benchmark charts: Metro: Last Light, BioShock: Infinite, Sleeping Dogs, Tomb Raider (2013), Crysis: Warhead, and GRID 2]

These charts put the Iris Pro’s performance in perspective compared to other dGPUs of note as well as the 15-inch rMBP, but what does that mean for actual playability? I plotted frame rate over time while playing through Borderlands 2 under OS X at 1080p with all quality settings (aside from AA/AF) at their highest. The overall experience running at the iMac’s native resolution was very good:

With the exception of one dip into single-digit frame rates (unclear whether that was due to some background HDD activity), I could play consistently above 30 fps.
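
For anyone who wants to reproduce this kind of plot, here's a minimal sketch; the frametimes.csv file (one frame time in milliseconds per line) and the logging tool that produced it are hypothetical stand-ins, not what was used for this review:

```python
# Plot frame rate over time from a per-frame timing log.
# "frametimes.csv" is a hypothetical log with one frame time (in ms) per line.

from itertools import accumulate
import matplotlib.pyplot as plt

with open("frametimes.csv") as f:
    frame_ms = [float(line) for line in f if line.strip()]

fps = [1000.0 / ms for ms in frame_ms]            # instantaneous frame rate
t = [ms / 1000.0 for ms in accumulate(frame_ms)]  # elapsed time in seconds

plt.plot(t, fps, linewidth=0.8)
plt.axhline(30, color="red", linestyle="--", label="30 fps")
plt.xlabel("Time (s)")
plt.ylabel("Frame rate (fps)")
plt.title("Borderlands 2, 1080p, OS X")
plt.legend()
plt.show()
```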

Using BioShock Infinite I was also able to run some OS X vs. Windows 8 gaming performance numbers:

OS X 10.8.5 vs. Windows Gaming Performance - BioShock Infinite

                1366 x 768 Normal Quality    1600 x 900 High Quality
OS X 10.8.5     29.5 fps                     23.8 fps
Windows 8       41.9 fps                     23.2 fps

Unsurprisingly, when we're not completely GPU bound there's a pretty large gap between OS X and Windows gaming performance: at 1366 x 768 the same hardware is over 40% faster under Windows 8. I've heard some developers complain about this in the past, partly blaming it on a lack of lower level API access, as OS X doesn't support DirectX and must use OpenGL instead. In our mostly GPU bound test, however, performance is identical between OS X and Windows - at least in BioShock Infinite.

Comments

  • g1011999 - Monday, October 7, 2013

    Finally. I've been checking AnandTech several times recently for the Iris Pro based 21" iMac review.
  • malcolmcraft - Thursday, October 9, 2014

    It's nice, I agree. But for a full-size workstation I wouldn't recommend a Mac. /Malcolm from http://www.consumertop.com/best-desktop-guide/
  • Shivansps - Monday, October 7, 2013

    I suspect the big loss in performance at high detail settings relative to the 750M may be related to the L4 eDRAM running short rather than a driver issue. As for AA, Intel has never had good performance with filters; do they even support hardware 2x AA yet?
  • tipoo - Monday, October 7, 2013

    Yeah, doesn't AA hammer bandwidth? The eDRAM helps performance, but its bandwidth is still quite low compared to what the other cards are paired with, even in best case scenarios.
  • IntelUser2000 - Tuesday, November 12, 2013

    I don't think it's just that. Compared to the competition, like Trinity's iGPU and the GT 650M, the texture fill rate is rather low. That impacts performance not only in texture bound scenarios with settings cranked up, but in anti-aliasing as well. The fill rate of the top of the line Iris Pro 5200 is about equal to Trinity, while the version in the iMac would fall short. The GT 650M is 40% better than the top of the line Iris Pro and over 55% better than the iMac version.

    Intel's AA implementation also leaves something to be desired. Hopefully Broadwell improves on this.
  • IanCutress - Monday, October 7, 2013

    Interestingly, we see Crystalwell not having any effect on CPU benchmarks, although we can probe its latency as seen before.
  • willis936 - Monday, October 7, 2013

    This seems counterintuitive. It's acting as a CPU+GPU shared cache, correct? Intel architectures are relatively cache bandwidth starved, and you'd think that 128MB of L4 would help keep the lower cache levels filled.
  • Flunk - Monday, October 7, 2013

    Perhaps it means that the assumption that Intel architectures are relatively cache bandwidth starved is faulty.
  • name99 - Monday, October 7, 2013

    Or that the working set of most benchmarks (if not most apps) is captured with a 4 or 6MB cache?
    Caching's basically irrelevant for data that is streamed through.
  • tipoo - Thursday, October 10, 2013

    The L4 is pretty low bandwidth for a cache though.
