GPU Choices

The modern Apple is a big fan of GPU power. This is true regardless of whether we’re talking about phones, tablets, notebooks or, more recently, desktops. The new Mac Pro is no exception as it is the first Mac in Apple history to ship with two GPUs by default.

AMD won the contract this time around. The new Mac Pro comes outfitted with a pair of identical Pitcairn, Tahiti LE or Tahiti XT derived, FirePro-branded GPUs. These are 28nm Graphics Core Next 1.0 based GPUs, so not the absolute latest tech from AMD, but the latest you’ll find carrying a FirePro name.

The model numbers are unique to Apple. The FirePro D300, D500 and D700 are the only three options available on the new Mac Pro. The D300 is Pitcairn based, the D500 appears to use a Tahiti LE with a wider 384-bit memory bus, while the D700 is a full blown Tahiti XT. I’ve tossed the specs into the table below:

Mac Pro (Late 2013) GPU Options

                                     AMD FirePro D300   AMD FirePro D500   AMD FirePro D700
Stream Processors (SPs)              1280               1536               2048
GPU Clock (base)                     800MHz             650MHz             650MHz
GPU Clock (boost)                    850MHz             725MHz             850MHz
Single Precision GFLOPS              2176               2227               3481
Double Precision GFLOPS              136                556.8              870.4
Texture Units                        80                 96                 128
ROPs                                 32                 32                 32
Transistor Count                     2.8 billion        4.3 billion        4.3 billion
Memory Interface                     256-bit GDDR5      384-bit GDDR5      384-bit GDDR5
Memory Data Rate (effective)         5080MHz            5080MHz            5480MHz
Peak GPU Memory Bandwidth            160 GB/s           240 GB/s           264 GB/s
GPU Memory                           2GB                3GB                6GB
Apple Upgrade Cost (Base Config)     -                  +$400              +$1000
Apple Upgrade Cost (High End Config) -                  -                  +$600
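
The peak compute figures in the table fall straight out of the shader count and boost clock: each GCN stream processor retires one single precision FMA (two FLOPS) per clock, and double precision runs at 1/16 rate on Pitcairn (D300) versus 1/4 rate on Tahiti (D500/D700), which is consistent with the numbers above. A quick sketch of the arithmetic:

```c
#include <stdio.h>

/* Peak GFLOPS = SPs x 2 FLOPS per clock x boost clock in GHz.
   DP divisor: 16 for Pitcairn (D300), 4 for Tahiti (D500/D700). */
static void peak(const char *name, int sps, double boost_ghz, int dp_div)
{
    double sp = sps * 2.0 * boost_ghz;
    printf("%-14s %7.1f SP GFLOPS  %6.1f DP GFLOPS\n", name, sp, sp / dp_div);
}

int main(void)
{
    peak("FirePro D300", 1280, 0.850, 16);  /* 2176.0 / 136.0  */
    peak("FirePro D500", 1536, 0.725,  4);  /* 2227.2 / 556.8  */
    peak("FirePro D700", 2048, 0.850,  4);  /* 3481.6 / 870.4  */
    return 0;
}
```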

Despite the FirePro brand, these GPUs have at least some features in common with their desktop Radeon counterparts. FirePro GPUs normally ship with ECC memory; in the case of the FirePro D300/D500/D700, however, ECC isn’t enabled on the GPU memories. Similarly, CrossFire X isn’t normally supported by FirePro (instead you get CrossFire Pro), but in the case of the Dx00 cards you do get CrossFire X support under Windows.

Each GPU gets a full PCIe 3.0 x16 interface to the Xeon CPU via a custom high density connector and flex cable on the bottom of each GPU card in the Mac Pro. I believe Apple also integrated CrossFire X bridge support over this cable. 

With two GPUs standard in every Mac Pro configuration, there’s obviously OS support for the setup. Under Windows, that amounts to basic CrossFire X support. Apple’s Boot Camp drivers ship with CFX support, and you can also download the latest Catalyst drivers directly from AMD and enable CFX under Windows that way. I did the latter and found that, despite the option being there, I couldn’t truly disable CrossFire X under Windows: toggling it off dropped power consumption, but I didn’t always see a corresponding decrease in performance.

Under OS X the situation is a bit more complicated. There is no system-wide CrossFire X equivalent that will automatically split up rendering tasks across both GPUs. By default, one GPU is set up for display duties while the other is used exclusively for GPU compute workloads. GPUs are notoriously bad at context switching, which can severely limit compute performance if the GPU also has to deal with the rendering workloads associated with driving a modern OS. NVIDIA sought to address a similar problem with its Maximus technology, combining Quadro and Tesla cards in a single system for display and compute.
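
The display/compute split is a scheduling policy rather than anything that hides hardware: an OpenCL application should still enumerate both FirePros as compute devices and can target either one. As a rough illustration (standard OpenCL 1.x host API, nothing Mac Pro specific; on OS X you’d build against the OpenCL framework), listing the available GPUs looks something like this:

```c
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>   /* OS X exposes OpenCL as a system framework */
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id gpus[8];
    cl_uint num_gpus = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, gpus, &num_gpus);

    for (cl_uint i = 0; i < num_gpus; i++) {
        char name[128];
        cl_ulong vram = 0;
        clGetDeviceInfo(gpus[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(gpus[i], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(vram), &vram, NULL);
        /* On the Mac Pro both FirePros should show up here, regardless of
           which one is currently handling display duties. */
        printf("GPU %u: %s, %llu MB\n", i, name, (unsigned long long)(vram >> 20));
    }
    return 0;
}
```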

Due to the nature of the default GPU division under OS X, all games will only use a single GPU by default. It is up to the game developer to recognize and split rendering across both GPUs, which no one is doing at present. Firing up two instances of a 3D workload won’t load balance across the two GPUs either: when I ran the Unigine Heaven and Valley benchmarks in parallel, both were scheduled on the display GPU, leaving the compute GPU completely idle.

The same is true for professional applications. By default you will see only one GPU used for compute workloads. Just as in the gaming example, however, applications can be written to spread compute workloads across both GPUs if they need the horsepower. The latest update to Final Cut Pro (10.1) is one example of an app that has been specifically written to take advantage of both GPUs for compute tasks.
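
This isn’t necessarily how Final Cut Pro does it, but the general OpenCL pattern for an application that wants both GPUs is straightforward: put both devices in one context, give each its own command queue, and hand each half of the data set. A bare-bones sketch under those assumptions (the trivial scale kernel is just a stand-in; error handling omitted):

```c
#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
    "__kernel void scale(__global float *d, float f) {"
    "    size_t i = get_global_id(0);"
    "    d[i] *= f;"
    "}";

int main(void)
{
    enum { N = 1 << 20 };
    float *data = malloc(sizeof(float) * N);
    for (int i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id gpu[2];
    cl_uint n = 0;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 2, gpu, &n);   /* expect n == 2 on a Mac Pro */

    /* One context spanning both GPUs, one command queue per GPU. */
    cl_context ctx = clCreateContext(NULL, n, gpu, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, n, gpu, NULL, NULL, NULL);

    size_t half = N / n;
    float factor = 2.0f;
    cl_command_queue q[2];
    cl_mem buf[2];
    cl_kernel k[2];

    for (cl_uint d = 0; d < n; d++) {
        q[d]   = clCreateCommandQueue(ctx, gpu[d], 0, NULL);
        /* Each GPU gets its own buffer holding its half of the data set. */
        buf[d] = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(float) * half, data + d * half, NULL);
        k[d]   = clCreateKernel(prog, "scale", NULL);
        clSetKernelArg(k[d], 0, sizeof(buf[d]), &buf[d]);
        clSetKernelArg(k[d], 1, sizeof(factor), &factor);
        /* Non-blocking enqueue: both GPUs can crunch their half concurrently. */
        clEnqueueNDRangeKernel(q[d], k[d], 1, NULL, &half, NULL, 0, NULL, NULL);
        clFlush(q[d]);
    }
    for (cl_uint d = 0; d < n; d++)
        clEnqueueReadBuffer(q[d], buf[d], CL_TRUE, 0, sizeof(float) * half,
                            data + d * half, 0, NULL, NULL);

    printf("data[N-1] = %.1f\n", data[N - 1]);   /* expect (N-1) * 2 */
    free(data);
    return 0;
}
```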

The question of which GPU to choose is a difficult one. There are substantial differences in performance between all of the options. The D700, for example, offers 60% more single precision compute than the D300 and 56% more than the D500. All of the GPUs have the same number of render backends, however, so all of them should be equally capable of driving a 4K display. In many professional apps, the bigger driver for the higher end GPU options will likely be the larger VRAM configurations.

I was particularly surprised by how much video memory Final Cut Pro appeared to take up on the primary (non-compute) GPU. I measured over 3GB of video memory usage while on a 1080p display, editing 4K content. The D700 is the only configuration Apple offers with more than 3GB of video memory. I’m not exactly sure how the experience would degrade if you had less, but throwing more VRAM at the problem doesn’t seem to be a bad idea. The compute GPU’s memory usage is very limited (obviously) until the GPU is actually in use. OS X reported ~8GB of usage when idle, which I can only assume is a bug and a backwards way of saying that none of the memory was in use. Under a GPU compute load (effects rendering in FCP), I saw around 2GB of memory usage on the compute GPU.

Since Final Cut Pro 10.1 appears to be a flagship app for the Mac Pro’s CPU + GPU configuration, I did some poking around to see how the three separate processors are used in the application. Basic rendering still happens on the CPU. With 4K content and the right effects I see 20 - 21 threads in use, maxing out nearly all available cores and threads. I still believe the 8-core version may be a slightly better choice if you're concerned about cost, but that's a guess on my part since I don't have a ton of 4K FCP 10.1 projects to profile. The obvious benefit to the 12-core version is you get more performance when the workload allows it, and when it doesn't you get a more responsive system.

Live preview of content that has yet to be rendered also falls to the CPU. I don’t see substantial GPU compute use here, and even the CPU isn’t heavily taxed: scrubbing through and playing back non-rendered content seems to use between 1 and 3 CPU cores. Even if you apply video effects to the project, prior to rendering this ends up being a predominantly CPU workload, with the display (non-compute) GPU spending some cycles.

It’s when you actually go to render visual effects that the compute GPU kicks in. Video rendering/transcoding, as I mentioned earlier, is still a CPU-bound affair, but all effects rendering takes place on the GPUs. The GPU workload scales with the number of effects layered on top of one another. Effects rendering appears to be spread over both GPUs, with the compute GPU taking the brunt of the workload in some cases, while in others the two appear more balanced.


GPU load while running my 4K CPU+GPU FCP 10.1 workload

Final Cut Pro’s division of labor between CPU and GPUs exemplifies what you’ll need to see happen across the board if you want big performance gains going forward. If you’re not bound by storage performance and want more than double-digit increases in performance, your applications will have to take advantage of GPU computing to get significant speedups. There are some exceptions (e.g. leveraging AVX hardware in the CPU cores), but for the most part this heterogeneous approach is what needs to happen. What we’ve seen from FCP also shows that the solution won’t come in the form of CPU performance no longer mattering and GPU performance being all we care about. A huge portion of my workflow in Final Cut Pro is still CPU bound; the GPUs are used to accelerate certain components within the application. You need the best of both to build good, high-performance systems going forward.

Comments

  • Dandu - Friday, January 10, 2014 - link

    Hi,

    It's possible to use a 2560 x 1440 HiDPI resolution with an NVIDIA card, a 4K display and the (next) version of SwitchResX.

    I have tested this: http://www.journaldulapin.com/2014/01/10/ultra-144...
  • Haravikk - Sunday, January 12, 2014 - link

    The news about the USB3 ports is a bit strange: doesn't that mean a maximum throughput of 4Gbps? I know most USB3 storage devices will struggle to push past 500MB/sec, but that seems pretty badly constrained. Granted, Thunderbolt is the interface that any storage *should* be using, but the choices are still pretty poor for the prices you're paying, and no one offers Thunderbolt to USB3 cables (only insanely priced hubs with external power).

    Otherwise the review is great, though it'd be nice to see more on the actual capabilities of Apple's FirePro cards. Specifically, how many of the FirePro-specific features do they have, such as 30-bit colour output, EDC, ECC cache memory, order-independent transparency (under OpenGL) and so on? I'm assuming they do given that they're using the FirePro name, but we really need someone to cover it in-depth to finally put to rest claims that consumer cards would be better ;)
  • eodeot - Monday, February 24, 2014 - link

    I'd love a realistic comparison with an i7 4770K and, say, a 780 Ti.

    You also compare the 12-core version to older 12-core versions that hide behind (fairly) anonymous Xeon labeling that hides their chip age (Sandy Bridge/Ivy Bridge/Haswell...). I'd like to see in how many real world applications a 12-core chip actually performs faster. Excluding 3D work and select video rendering, I doubt there is much need for extra cores. You note how it's nice to have a buffer of free cores for everyday use while heavy rendering is going on, but I never noticed a single hiccup or slowdown with 3D rendering on my i7 4770K with all 8 logical cores taxed to their max. How much better than the "butter smooth" performance already provided by a much cheaper CPU can you get?

    Also, you compare non-Apple computers with the same ridiculous CPU/GPU combinations. Who in their right mind would choose a 4-core Xeon chip over a Haswell i7? The same goes for a silly "workstation" GPU over, say, a Titan. Excluding dated OpenGL 3D apps, no true modern workstation benefits from a "workstation" GPU, unless we count select CUDA-based 3D renderers like Iray and V-Ray RT that can benefit from 12GB of RAM. The GPUs included with the Apple Mac Pro have 2GB... Not a single valid reason a sane person would buy such a card. Not one.

    Also, you point out how gaming makes the most sense on Windows, but make no such recommendation for 3D work. Like games, 3D programs perform significantly better under DirectX, and that leaves Windows as the sole option for any serious 3D work...

    I found this review interesting for the design Apple took, but everything else appears to be one-sided praise...
  • pls.edu.yourself - Wednesday, February 26, 2014 - link

    QUOTE: "The shared heatsink makes a lot of sense once you consider how Apple handles dividing compute/display workloads among all three processors (more on this later)."

    Can anyone help point me to this? I think one of my GPUs is not being used.
  • PiMatrix - Saturday, March 8, 2014 - link

    Apple fixed the HiDPI issue on the Sharp K321 in OS X 10.9.3. Works great. Supported resolutions are the native 3840x2160, plus HiDPI at 3200x1800, 2560x1440, 1920x1080, and 1280x720. You can also define more resolutions with QuickResX, but the above seem to be enough. Using 3200x1800 looks fantastic on this 4K display. Great job, Apple!
  • le_jean - Monday, March 10, 2014 - link

    Any information on updated 60Hz compatibility concerning Dell's UP 2414Q in 10.9.3?
    I would be very interested to get some feedback in relation to:
    nMP & Dell UP 2414Q
    rMBP & Dell UP 2414Q

    I remember in the AnandTech review of the late 2013 nMP there were issues concerning that specific display, while the Sharp and ASUS performed just fine.
  • philipus - Monday, April 14, 2014 - link

    As a happy photo amateur, I have to say the previous Mac Pro is good enough for me. I have the early 2008 version which I like because of its expandability. Over the years I have added drives, RAM and most recently a Sonnet Tempo Pro with two Intel 520 in order to get a faster system. As cool and powerful as the new Mac Pro is, it would cost me quite a lot to add Thunderbolt boxes for the drives I currently use, so it is not worth it for me.

    I do agree that it is about time a manufacturer of desktop computers pushed the platform envelope. It's been tediously samey for a very long time. I'm not surprised it was Apple that made the move - it's in Apple's DNA to be unexpected design-wise. But as much as it is nice to see a radical re-design of the concept of the desktop computer, I think a future version of the Mac Pro needs to be a bit more flexible and allow more user-based changes to the hardware. Even if I could afford the new Mac Pro - and I would also place it on my desktop because it's really pretty - I wouldn't want to have several Thunderbolt boxes milling around with cables variously criss-crossing and dangling from my desk.
  • walter555999 - Saturday, June 7, 2014 - link

    Dear Anand, could you post how to connect a UP2414Q to a MacBook Pro with Retina display (2013)? I have tried a Mini DisplayPort-to-HDMI cable, but there is no image on the Dell monitor. Thank you very much. Walter
  • Fasarinen - Saturday, August 9, 2014 - link

    Thanks for an excellent review. (And hello, everybody; this is my first post on this site.)

    I noticed, in the "GPU choices" section, what seems to be a very useful utility for monitoring the GPU. The title on the top of the screen is "OpenCL Driver Monitor"; the individual windows (which are displaying graphs of GPU utilisation) seem to be titled "AMDRadeonXL4000OpenCLDriver".

    I'm probably just being dim, but a bit of googling doesn't shed much light. If anybody could point me to where this utility can be obtained, I'd be most grateful.

    Thanks ....
  • pen-helm - Friday, September 12, 2014 - link

    I showed this page to a Mac user. They replied:

    I'm pretty sure that this simple fix takes care of the issue with monitors where OS X doesn't offer a HiDPI mode:

    http://cocoamanifest.net/articles/2013/01/turn-on-...
