The Future of DDR4

DDR4 was first launched in the enthusiast space for several reasons. On the server side, any opportunity to use lower power and drive cooling costs down is a positive, so aiming at Xeons and high-end consumer platforms was priority number one. The big players in the datacenter space most likely had hardware in place and running for several months before the consumer side got hold of it. Being such a new element in the twisting dynamics of the memory market, the modules command a premium and the big purchasers got first pick. The downside comes when the focus shifts to consumers, where budgets are tighter and some of the intended benefits of DDR4, such as lower power, matter less. When we first published our Haswell-E piece, the cost of a 4x4GB kit of JEDEC DRAM for even a basic eight-core system was over $250, and not much has changed since. Memory companies are keeping stock levels low, driving up the cost, and will only make and sell more if people start buying. At this point, Haswell-E and DDR4 are really restricted to early adopters or those with a professional requirement to go down this route.

DDR4 will start to get interesting when we see it at the mainstream consumer level. This means regular Core i3/i5 desktops, and eventually SO-DIMM variants in notebooks. The big question, as always, is when. If you believe the leaks, all arrows point towards a launch with Skylake on the Intel side, after Broadwell. Most analysts are also in this camp, with the question being how long the Broadwell platform will last on desktops. The 14nm process node had plenty of issues, meaning that Q1 2015 is when we started to see more Core-M (Broadwell-Y) products in notebooks and the launch of Broadwell-U, aimed at the AIO and mini-PC (such as the NUC and BRIX) markets as well as laptops. This staggered launch would suggest that Broadwell on desktops should be due in the next few months, but there is no official indication as to when Skylake will hit the market, or in what form first. As always, Intel does not comment on unreleased products when asked.

On the AMD side of the equation, despite talk of a Kaveri refresh popping up in our forums and discussions about Carrizo focusing only on the sub-45W market with Excavator cores, we look to the talk surrounding Zen, K12 and everything that points to AMD’s architecture refresh with Jim Keller at the helm sometime around 2016. In a recent round table talk, Jim Keller described Zen as scaling from tablet to desktop but also probing servers. One would hope (as well as predict and imagine) that AMD is aiming for DDR4 with the platform. It makes sense to approach the memory subsystem of the new architecture from this angle, although for any official confirmation we might have to wait a few months at the earliest, when AMD starts releasing more information.

When DDR4 comes to the desktop we will start to see a shift in the market share split between DDR4 and DDR3. The bulk memory market for desktop designs and mini-PCs will be a key demographic, one that will move towards an even DDR3-DDR4 split, and we can hope to reach price parity before then. If we are to see mainstream DDR4 adoption, the bulk markets have to be interested in the performance of the platforms that specifically require DDR4, but those platforms also have to remain price competitive. It essentially means that companies like G.Skill, which rely on DRAM sales for the bulk of their revenue, have to make predictions on the performance of platforms like Skylake in order to tell their investors how quickly DDR4 will take over the market. It could be the difference between 10% and 33% adoption by the end of 2015.

One of the questions that sometimes comes up with DDR4 is ‘what about DDR5?’. As it stands, there appear to be no plans to develop a DDR5 version at all, for a number of reasons.

Firstly, and perhaps a minor point, is the nature of the DRAM interface. It relies on a parallel connection, and if other standards are indicative of the direction of travel, it should probably be upgraded to a serial connection, similar to how PCI/PCI Express and PATA/SATA have evolved in order to increase throughput while decreasing pin counts and making designs easier for the same bandwidth.
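To put rough numbers on the pin-count argument, here is a minimal back-of-the-envelope sketch. The figures are approximate and purely illustrative: a DDR3-1600 channel pushes 12.8 GB/s across a 240-pin DIMM interface, whereas a serial link such as PCIe 3.0 needs only a handful of signal pins per lane to reach around 1 GB/s per direction.

```python
# Rough, illustrative comparison of bandwidth per pin for a parallel DRAM
# channel versus a serial link. All figures are approximations assumed for
# the sake of the example.

def ddr3_1600_channel():
    transfers_per_s = 1600e6          # 1600 MT/s
    bus_width_bytes = 8               # 64-bit data bus
    dimm_pins = 240                   # standard unbuffered DDR3 DIMM pin count
    bandwidth = transfers_per_s * bus_width_bytes  # bytes/s
    return bandwidth, bandwidth / dimm_pins

def pcie3_lane():
    raw_rate = 8e9                    # 8 GT/s per lane
    encoding = 128 / 130              # 128b/130b encoding overhead
    signal_pins = 4                   # two differential pairs (TX + RX)
    bandwidth = raw_rate * encoding / 8  # bytes/s, one direction
    return bandwidth, bandwidth / signal_pins

for name, (bw, per_pin) in (("DDR3-1600 channel", ddr3_1600_channel()),
                            ("PCIe 3.0 lane", pcie3_lane())):
    print(f"{name}: {bw/1e9:.1f} GB/s total, {per_pin/1e9:.3f} GB/s per pin")
```

Even on these crude numbers, the serial link moves several times more data per signal pin, which is one reason most other interfaces have gone serial over time.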

Secondly, and more importantly, are the other memory standards currently being explored in the research labs. Rather than attempt to copy verbatim a piece from ExtremeTech, I’ll summarize it here. The three standards of interest, whilst mostly mobile focused, are:

Wide I/O 2: Designed to be placed on top of processors directly, using a large number of I/O pins via through-silicon vias (TSVs) while keeping frequencies down in order to reduce heat generation. This has benefits in industries where space is at a premium, saving some PCB area in exchange for processor Z-height.

Hybrid Memory Cube (HMC): Similar to current monolithic DRAM dies but using stacked slices over a logic base, allowing for much higher density and much higher bandwidth within a single module. This also increases energy efficiency per bit, but introduces higher cost and requires higher power consumption per module.

High Bandwidth Memory (HBM): This is almost a combination of the two above, aimed specifically at graphics, with multiple DRAM dies stacked on or near the memory controller to increase density and bandwidth. It is best described as a specialized implementation of Wide I/O 2, but should afford up to 256GB/s of bandwidth on a 128-bit bus with 4-8 stacks on a single interface (a rough bandwidth calculation follows after the image credit below).

Image from ExtremeTech
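As a quick sanity check on the headline numbers quoted for these stacked standards, peak bandwidth is simply bus width multiplied by the per-pin data rate. The sketch below is a minimal illustration; the per-pin rates and stack counts are assumptions chosen to show how the quoted figures can be reached, not values taken from any specification.

```python
# Peak theoretical bandwidth from interface width and per-pin data rate.
# The example inputs are illustrative assumptions, not specification values.

def peak_bandwidth_gbs(bus_width_bits: int, rate_gbps_per_pin: float,
                       interfaces: int = 1) -> float:
    """Return peak bandwidth in GB/s for one or more identical interfaces."""
    return bus_width_bits * rate_gbps_per_pin * interfaces / 8

# A single 64-bit DDR4-2133 channel: 2133 MT/s, i.e. ~2.133 Gb/s per data pin.
print(peak_bandwidth_gbs(64, 2.133))               # ~17.1 GB/s

# Eight narrow 128-bit stacked interfaces at an assumed 2 Gb/s per pin
# reach the ~256 GB/s figure quoted above for HBM-class memory.
print(peak_bandwidth_gbs(128, 2.0, interfaces=8))  # 256.0 GB/s
```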

Moving some of the memory power consumption onto the processor package brings thermal issues to consider, which means that memory bandwidth and cost might be improved at the expense of operating frequencies. Adding packages onto the processor also introduces a heavy element of cost, which might leave these specialist technologies to the super-early adopters to begin with.

Given the time from DDR4 first being considered to it actually entering the desktop market, we can safely say that DDR4 will become the standard memory option over the next four years, just as DDR3 is right now. Beyond DDR4 is harder to predict, and depends on how Intel and AMD want to approach a solution that offers higher memory bandwidth, and at what cost. Both companies will be looking at how their integrated graphics are performing, as that will ultimately be the biggest beneficiary of the design. AMD has some leverage in the discrete GPU space and will be able to transfer any knowledge gained over to the CPU side, but Intel has a big wallet. Both Intel and AMD have experimented with eDRAM/SRAM as extra cache levels (Crystal Well, and the ESRAM in the Xbox One), which puts less stress on external memory when it comes to processor graphics. That leads me to the prediction that DDR4 will be in the market longer than DDR3 or DDR2 were.

Whether any of the major CPU/SoC manufacturers will invest heavily in Wide I/O 2, HBM or HMC, we will have to wait and see. And if it is to change what we see on the desktop, in the mini-PC or in the laptop, we might have to wait even longer.

Comments

  • jabber - Friday, February 6, 2015

    Well I've added into my T5400 workstation USB 3.0, eSATA, a 7870 GPU, an SSHD and an SSD. I haven't added SATA III as it's way too costly for a decent card, plus even though I can only push 260MBps from an SSD, with 0.1ms access times I really can't notice the difference in the real world. The main chunk of the machine only cost around £200 to put together.
  • Striker579 - Friday, February 6, 2015

    omg those retro color mb's....good times
  • Wardrop - Saturday, February 7, 2015

    Wow, how did you accidentally insert your motherboard model in the middle of the word "provide"? Quite an impressive typo, lol
  • msroadkill612 - Saturday, September 2, 2017

    To be the devil's advocate, many say there are few downsides for most people in running the GPU at 8 lanes instead of 16.

    If an NVMe SSD means dropping the GPU to 8 lanes in order to free some up, I would be tempted.
  • FlushedBubblyJock - Sunday, February 15, 2015

    Core 2 is getting weak - right click and open Task Manager, then see how often your quad is maxed at 100% usage (you can minimize and check the green rectangle by the clock for percent used).

    That's how to check it - if it's hammered it's time to sell it and move up. You might be quite surprised what a large jump it is to Sandy Bridge.
  • blanarahul - Thursday, February 5, 2015

    TOTALLY OFF TOPIC but this is how Samsung's current SSD lineup should be:

    850: 120 GB, 250 GB TLC with TurboWrite

    850 Pro: 128 GB, 256 GB MLC

    850 EVO: 500/512 GB, 1000/1024 GB TLC w/o TurboWrite

    Because:
    a) 500 GB and 1000 GB 850 EVOs don't get any speed benefit from TurboWrite.
    b) 512/1024 GB PRO has only 10 MB/s sequential read, 2K IOPS and 12/24 GB capacity advantage over 500/1000 GB EVO. Sequential write speed, advertised endurance, random write speed, features etc. are identical between them.
    c) Remove TurboWrite from 850 EVO and you get a capacity boost because you are no longer running TLC NAND in SLC mode.
  • Cygni - Thursday, February 5, 2015

    Considering what little performance impact these memory standards have had lately, DDR2 is essentially just as useful and relevant as the latest stuff... with the added advantage that you already own it.
  • FlushedBubblyJock - Sunday, February 15, 2015

    If you screw around long enough on Core 2 boards with slight and various CPU OCs, differing FSBs and the resulting memory divisors and timings, with mechanical drives present, you can sometimes produce an enormous performance increase and reduce boot times massively - the key seems to have been a different sound in the speedy access of the mechanical hard drive - though it often coincided with memory access time, but not always.
    I assumed and still do assume it is an anomaly in the exchanges on the various buses where the CPU, RAM, hard drive, and north and south bridge timings just happen to all jibe together - so no subsystem is delayed waiting for some other overlap to "re-access".

    I've had it happen dozens of times on many differing systems but never could figure out any formula, and it was always just luck goofing with CPU and memory speed in the BIOS.
    I'm not certain if it works with SSDs on Core 2 (socket 775, let's say) - though I assume it very well could, but the hard drive access sound would no longer be a clue.
  • retrospooty - Thursday, February 5, 2015

    I love reviews like this... I will link it and keep it for every time some newb doof insists that high bandwidth RAM is important. We saw almost no improvement going from DDR400 CAS2 to DDR3-1600 CAS10, and now the same going to DDR4-3000+ at CAS freegin 80 LOL
  • menting - Thursday, February 5, 2015

    Depends on usage. For applications that require high total bandwidth, new generations of memory will be better, but for applications that require short latency there won't be much improvement, due to physical constraints such as the speed of light.
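As a quick check on the latency point raised in these comments, converting CAS latency from cycles to nanoseconds shows it has barely moved across generations. The timings below are assumed typical values, used purely for illustration.

```python
# Convert CAS latency from clock cycles to nanoseconds: ns = CL * 2000 / (MT/s).
# The CL values below are assumed typical timings, not from any kit reviewed here.

def cas_ns(cl: int, transfer_rate_mts: int) -> float:
    return cl * 2000 / transfer_rate_mts

for label, cl, rate in (("DDR-400 CL2", 2, 400),
                        ("DDR3-1600 CL10", 10, 1600),
                        ("DDR4-3000 CL15", 15, 3000)):
    print(f"{label}: {cas_ns(cl, rate):.1f} ns")   # roughly 10-12.5 ns each
```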
