Memory Subsystem

With the same underlying CPU and GPU architectures, porting games between the two consoles should be easier than ever before. Making the situation even better, both systems ship with 8GB of system memory and Blu-ray disc support. Game developers can look forward to the same amount of storage per disc, and similar amounts of main memory. That’s the good news.

The bad news is the two wildly different approaches to memory subsystems. Sony’s approach with the PS4 SoC was to use a 256-bit wide GDDR5 memory interface running at around a 5.5GHz data rate, delivering peak memory bandwidth of 176GB/s. That’s roughly the amount of memory bandwidth we’ve come to expect from a $300 GPU, and great news for the console.
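That 176GB/s figure falls straight out of the data rate and the bus width. A quick sketch of the arithmetic, using the figures quoted above (the helper function name is just for illustration):

```python
# Peak bandwidth = effective data rate (MT/s) x bus width (bytes per transfer).
def peak_bandwidth_gbs(data_rate_mts, bus_width_bits):
    """Peak memory bandwidth in GB/s (using 1 GB/s = 1000 MB/s)."""
    return data_rate_mts * (bus_width_bits // 8) / 1000

# PS4: 256-bit GDDR5 at a ~5.5GHz (5500 MT/s) data rate.
print(peak_bandwidth_gbs(5500, 256))  # 176.0
```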

Xbox One Motherboard, courtesy Wired

Die size dictates memory interface width, so the 256-bit interface remains but Microsoft chose to go for DDR3 memory instead. A look at Wired’s excellent high-res teardown photo of the motherboard reveals Micron DDR3-2133 DRAM on board (16 x 16-bit DDR3 devices to be exact). A little math gives us 68.3GB/s of bandwidth to system memory.
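The “little math” in question, sketched out with the device count and widths as read off the board photo:

```python
# Xbox One: sixteen x16 DDR3-2133 devices give a 256-bit bus.
devices = 16
bits_per_device = 16
data_rate_mts = 2133                        # DDR3-2133 effective transfer rate
bus_width_bits = devices * bits_per_device  # 256 bits total
bw_gbs = data_rate_mts * bus_width_bits / 8 / 1000
print(round(bw_gbs, 1))  # 68.3
```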

To make up for the gap, Microsoft added embedded SRAM on die (not eDRAM; SRAM is less area efficient, but offers lower latency and doesn’t need refreshing). All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors for this memory. It’s not immediately clear whether this is a true cache or software managed memory. I’d hope for the former, but it’s quite possible that it isn’t. At 32MB the eSRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to achieve a substantial hit rate in current workloads (although there’s not much room for growth).
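The 1.6 billion transistor estimate follows directly from the 6T cell. A sketch of the arithmetic, counting array cells only and ignoring peripheral logic:

```python
# 6T-SRAM spends six transistors per stored bit.
capacity_bytes = 32 * 1024 * 1024      # 32MB of eSRAM
transistors = capacity_bytes * 8 * 6   # bits x 6 transistors per bit
print(transistors)  # 1610612736, i.e. roughly 1.6 billion
```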

Vgleaks has a wealth of info, likely supplied by game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total bandwidth). The combination of the two, plus the 30GB/s CPU-GPU connection, is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on GPU memory bandwidth requests, which would give the GPU much more effective bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn’t managed as a cache, however, this all gets much more complicated.
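Microsoft’s 200GB/s number appears to be a straight sum of per-link peak figures, which is exactly why it doesn’t reflect what any single client can actually observe. The addition, spelled out:

```python
# Summing per-link peak bandwidths the way Microsoft's 200GB/s figure does.
# No single client can actually see this aggregate number.
ddr3_bw  = 68.3   # GB/s, 256-bit DDR3-2133 system memory
esram_bw = 102.0  # GB/s, ~50GB/s in each direction to the eSRAM
cpu_gpu  = 30.0   # GB/s, coherent CPU-GPU link
total = ddr3_bw + esram_bw + cpu_gpu
print(round(total))  # 200
```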

Microsoft Xbox One vs. Sony PlayStation 4 Memory Subsystem Comparison

                             Xbox 360               Xbox One            PlayStation 4
  Embedded Memory            10MB eDRAM             32MB eSRAM          -
  Embedded Memory Bandwidth  32GB/s                 102GB/s             -
  System Memory              512MB 1400MHz GDDR3    8GB 2133MHz DDR3    8GB 5500MHz GDDR5
  System Memory Bus          128-bit                256-bit             256-bit
  System Memory Bandwidth    22.4GB/s               68.3GB/s            176.0GB/s

There are merits to both approaches. Sony has the most present-day-GPU-centric approach to its memory subsystem: give the GPU a wide and fast GDDR5 interface and call it a day. It’s well understood and simple to manage. The downsides? High speed GDDR5 isn’t the most power efficient, and Sony is now married to a more costly memory technology for the life of the PlayStation 4.

Microsoft’s approach leaves some questions about implementation, and is potentially more complex to deal with depending on that implementation. Microsoft specifically called out its 8GB of memory as being “power friendly”, a nod to the lower power operation of DDR3-2133 compared to 5.5GHz GDDR5 used in the PS4. There are also cost benefits. DDR3 is presently cheaper than GDDR5 and that gap should remain over time (although 2133MHz DDR3 is by no means the cheapest available). The 32MB of embedded SRAM is costly, but SRAM scales well with smaller processes. Microsoft probably figures it can significantly cut down the die area of the eSRAM at 20nm and by 14/16nm it shouldn’t be a problem at all.

Even if Microsoft can’t deliver the same effective memory bandwidth as Sony, it also has fewer GPU execution resources; it’s entirely possible that the Xbox One’s memory bandwidth demands will be inherently lower to begin with.

245 Comments

  • Thermalzeal - Wednesday, May 29, 2013 - link

    Anand, any information on whether the Xbox One will utilize HMA (Hybrid Memory Access) in comparison to the PS4?
  • tipoo - Wednesday, May 29, 2013 - link

    Do you mean HUMA by any chance? Yes, both would have that.
  • Buccomatic - Friday, May 31, 2013 - link

    xbox one - everything we don't want in a video game console, except the controller.
    ps4 - everything we do want in a video game console, except the controller.

    that's how i see it.
  • Buccomatic - Friday, May 31, 2013 - link

    can anyone tell me if the following statement is correct or incorrect?

    pc games will be ports from the games made for consoles. both consoles (xbox one and ps4) will have 5gb vram in their gpu. so that means the system requirements for pc games, as early as december when they start porting games over from the consoles to pc, will require a pc gamer to have a video card of at least 5gb vram, or more, just to run the game.

    ?

    yes or no and why?
  • fteoath64 - Monday, June 10, 2013 - link

    Before the hardware is released and analysed, we have no idea how much of the PS4 GDDR5 ram is going to be shared and/or dedicated for gpu use and how much of those are going to be available to user data. It is anyone's guess at this stage. But the improvements in hUMA design with dual ported frame buffer for gpu and cpu makes it a rather quick gpu by PC standards. Since only one game is loaded at a time, there can be shared memory reconfiguration going on just before the game loading so it can depend on the game and how much ram it can grab. The cpu counts very little in the process and it is why it can be clocked at 1.6Ghz rather than storming at 3.6Ghz as in Trinity chips. Still with faster gpu and globs of ram now, there is certainly greater leeway in the development process and optimizing process for game developers. One can assume at 3X the Trinity gpu core counts, the PS4 must be at least 2.5X the speed of Trinity gpus since those ran at 900Mhz. With good cooling, the PS4 could well clock their gpu cores at 1.2Ghz since Intel is going 1.3Ghz on the GT3 core.
  • SnOOziie - Sunday, June 2, 2013 - link

    Looking at the motherboard, they have used solder balls on the CPU (BGA); it's going to RROD
  • Wolfpup - Monday, June 3, 2013 - link

    This has never been an easier choice - Microsoft doesn't let you buy games, Sony does, and their system is 50% more powerful and more focused on games, while Microsoft's off doing yet more Kinect.
  • SirGCal - Thursday, June 13, 2013 - link

    YUP, and as a cripple, what good is flailing my arms about and hopping around going to do me? Kinect is about the dumbest thing I've seen people use. Except for work-out stuff and kids' stuff, sure, it makes sense. But then they give those in dial-up and cellular internet locations the finger and say 'stick with the 360' when they know damn well developers won't make games for it within a year... Morons. I'm done with M$. If I do get a new console, it will be the PS4. Besides, I've always loved the Kingdom Hearts series more than any others...
  • NoKidding - Monday, June 24, 2013 - link

    i am glad that these consoles have finally seen the light of day. though a bit underpowered compared to an average mid range rig, at least game developers will be forced to utilize each and every available core at such low clock rates on these consoles. heavily threaded games will finally be the norm and not just a mere exception. if the playing field no longer relies heavily on ipc advantages, will amd's "more cores but cheaper" strategy finally catch intel's superior ipc advantage? will amd finally reclaim the low to mid range market? no, not likely. but one can hope so. i yearn for the good old c2d days when intel was forced to pull all the stops.
  • kondor999 - Tuesday, July 16, 2013 - link

    Who gives a shit about heat and power consumption in a console? Both machines are miserly, and they're not notebooks for God's sake. Looks like MS simply cheaped out to me. Letting them off the hook by pointing out the tiny heat/power savings as a "benefit" is a real reach. By this logic, why not just cut the compute power even more?

    No thanks.
