I'm serious: what's the point of touting lower voltage all the time? Intel chips way back when ran at 3.3 V; now they run at 1 V yet draw 100 W+.
DDR4 memory has roughly a 0.5 ohm resistance (varies with make, model, speed, etc). Power = Voltage ^2 / resistance. So at 1.2 V it will use roughly (1.2 V)^2/(0.5 ohm) = 2.88 W power.
The same memory at 1.1 V would then use (1.1 V)^2/(0.5 ohm) = 2.42 W power. At 1.2 V, it uses 19% more power than at 1.1 V. Of course, that all depends on the load on the memory. It only uses full power when in use.
Newer CPUs run at lower voltage, which vastly helps power. But they are also getting larger in a parallel fashion. Overall resistance drops when more resistors are in parallel. Thus, a many-core chip could have a very low resistance and use even more power than a low-core-count chip with a higher voltage.
But, since power goes with voltage squared, the most important thing for power reduction is a voltage reduction.
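To make that arithmetic easy to replay, here is a quick sketch. The 0.5 ohm figure is the rough effective value quoted in the comment above, not a spec number:

```python
# Back-of-the-envelope ohmic power: P = V^2 / R.
# R is the ~0.5 ohm effective resistance assumed above (illustrative).
def ohmic_power(volts: float, ohms: float) -> float:
    return volts ** 2 / ohms

R = 0.5  # ohm, assumed effective load resistance
p_12 = ohmic_power(1.2, R)  # DDR4 at 1.2 V
p_11 = ohmic_power(1.1, R)  # DDR4 at 1.1 V

print(f"1.2 V: {p_12:.2f} W")       # 2.88
print(f"1.1 V: {p_11:.2f} W")       # 2.42
print(f"ratio: {p_12 / p_11:.3f}")  # 1.190, i.e. ~19% more power at 1.2 V
```

The squared dependence on voltage is why even a 0.1 V reduction is worth chasing.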
Uh... your simplification with regard to resistance would only hold water if CPUs were clocked with DC (or a rather low-frequency clock). But they aren't. The transistors in the circuits switch at frequencies of several hundred or thousand MHz. With regard to power and power losses, frequency-dependent costs such as gate-charge losses utterly dominate any resistive costs at such high frequencies.
So, no. Your assumption that more cores "in parallel" is like wiring resistors in parallel is wrong, unless you only consider the silly scenario of clocking the CPU/switching the CPU transistors at a few (kilo)hertz. The more cores, the more transistors. The more transistors, the more gates with their associated gate losses. The higher the switching frequency of the transistors, the higher the gate losses. Etc, etc...
You mean dynamic power consumption has to live within the limits granted by Ohm's Law. :-) There are definitely moments when a dynamic load can temporarily appear to ignore Ohm's Law, i.e. exhibiting imaginary impedance while a magnetic field collapses. But an AC circuit can never draw more current than a DC one with the same voltage and resistance.
Yes, with AC circuits (anything with signalling is AC) the formula is Voltage*Charge*Frequency, not Voltage*Voltage/Resistance
Resistance does still become a limiting factor in AC circuits though. If the power supply to the DRAM modules has a 1/2 Ohm resistance and a supply voltage of 1.1V then the maximum possible wattage that can be drawn from it is still V*V/R = 2.42W. For those of you not familiar with basic electronics concepts, google 'impedance matching'.
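A sketch of that switching-power formula: P = V * Q * f, and with gate charge Q = C * V this becomes P = C * V^2 * f. The capacitance and frequency below are made-up illustrative values, not DRAM figures:

```python
# Dynamic (CMOS switching) power: P = C * V^2 * f,
# derived from P = V * Q * f with Q = C * V.
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts ** 2 * f_hz

C = 1e-9   # 1 nF of total switched capacitance (assumed)
f = 1e9    # 1 GHz switching rate (assumed)

print(dynamic_power(C, 1.2, f))  # ~1.44 W at 1.2 V
print(dynamic_power(C, 1.1, f))  # ~1.21 W at 1.1 V
```

Note the same ~19% saving from dropping 1.2 V to 1.1 V: voltage enters squared in the dynamic formula too, which is why both camps in this argument agree that lowering voltage matters most.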
> Thus, a many core chip could have a very low resistance and use even more power than a low core count chip with a higher voltage.
You know what else has a very low resistance (relatively speaking)? Mains wiring. So, does mains wiring use a lot of power? No. Why not? It has such a low resistance... Why is the mains wiring not using more power than, for example, a vacuum cleaner, despite the mains wiring clearly and measurably having a much lower resistance than the vacuum cleaner? Methinks you need some more coffee ;-)
Let me simplify. DC power is measured in W (watts) while AC power is measured in VA (volt-amperes). Digital electronics is a combination of both. Today most digital logic is produced in CMOS, where most of the current is consumed while switching from 0 -> 1 and 1 -> 0. In steady state almost no current flows, so power consumption is almost linear with frequency.
But in real circuits the situation is different because of trace inductance and capacitance, decoupling capacitors, etc. When CMOS changes state, for a brief period of time both transistors are conductive and a current spike occurs. Trace inductance (traces in the chip, bonding wires, pads, PCB traces) tries to reduce current (the purpose of an inductor), and this lowers the voltage. Decoupling caps supply power for this brief period, and a bit later, when both transistors are closed, the cap starts to charge from the power supply. The result is a lot of oscillations - spurs. Caps (any two separated conductors), on the other side, resist changes in voltage - they smooth voltage changes.
Inside a chip that's not as big a problem as it is between chips. That's the reason memories sit as close as possible to the CPU. The problem comes when the distance is larger, e.g. PCIe or SATA. The only way to transmit the signal is to switch to sinusoidal currents, lower the voltage, and use two wires to reduce generated noise (a single wire behaves as an antenna). This is called differential signaling. Longer wires require more power to fight cable inductance/capacitance, and amplifiers at both ends. X570 motherboards already have huge problems because of PCIe 4, which is designed with this purpose in mind, but DDR is not.
Above I wrote that digital uses a digital signal, 0 and 1, but at high speed and over a slightly longer wire the signal becomes analog, sinusoidal. This requires PCIe-like signaling or very expensive motherboards, and for sure some error detection and correction. 8.4 GHz single-ended is a really big task for PCB design. Let's see, but it won't be cheap. Today, DDR4 at 3600 has problems and only a few motherboards work reliably.
Watts are the unit of measure of power in both AC and DC. The volt ampere is a different unit and is used to measure apparent power. Apparent power has two components, one of which is the real power measured in watts. The other component of apparent power is reactive power, measured in volt-ampere reactive (VAR).
You are confusing data rate with frequency: the signaling frequency of DDR4-3200 is 1.6 GHz, while the data rate is 3.2 GT/s. You are correct that 8.4 GHz signaling would be very difficult; luckily, we don't need signaling that fast for DDR5.
Who will get the best solutions for this? Fuzzy layers toward the memory connectors (or at least partly integrated into the CPU as L4 cache), for interconnecting cores on different sockets, and toward peripherals on PCBs? Interesting times at that level, thx
A vacuum cleaner has a high resistance at startup (thanks to either a motor-start capacitor or a varistor), which prevents it from drawing so much power that it would blow the fuse or melt the windings. Once the motor has spun up a bit, the resistance drops; instead, back-EMF from the rotor's magnetic fields rotating relative to the windings creates a high impedance, which effectively resists current flow and prevents the motor spinning faster than its rated maximum unloaded RPM.
At that maximum rate of rotation, the motor draws the least power thanks to that high impedance. It also has the least torque, which is why the frequency of the vacuum motor climbs so much when you block the nozzle - very little work can be done in that high-RPM range. Kinda the opposite of a car engine. Anyhow, the voltage drop is very low when the vacuum cleaner has finished spinning up, and it's even lower than that when you block the nozzle. For a big voltage drop, try a K15 kettle.
Consumption (and average power) will differ for task-dependent customer profiles, e.g. server, office desktop, or mobile usage patterns, each having lower and upper limits (a question for the experts). Considering upper limits that come close to steady full power: (first) what ringing/settling time and time to reach this state are to be expected for these usage patterns, (second) what are the safety margins, and (third) what effects on aging stability compared to DDR4 are to be expected? (I don't know if these questions are worked on outside of presumably NDA'd manuals, if at all.)
Those tech specs look really nice! It would be really neat if Zen 3 for the desktop surprised everyone by supporting DDR4 & DDR5 at launch, although that would be a longshot.
Probably Zen4 will be the first platform to support DDR5, as it's likely going to have a lot of new things with it (i.e. new CPU socket, maybe PCIe 5.0, etc.).
Zen 3 will almost certainly be DDR4 only. Whether or not Zen 4 supports DDR5 depends on market availability and cost. We could see a situation where Zen 4 is released on both AM4 and AM5, supporting DDR4 and DDR5, respectively.
I feel like next year AMD could do something like they did in 2019 with the 7's -- the Ryzen 5000 series, with PCIe 5.0 and DDR5. Release it on May 5th and make a party of it!
It would be quite a departure for AMD if the follow-up to AM4 didn't support DDR5 - after all, platform longevity is an explicit goal of theirs, at least currently. Of course Zen 3 is supposed to launch on AM4, not "AM5"(?), so Zen 3 will in all likelihood be DDR4-only. There's no way of retrofitting AM4 with DDR5 support, nor would AMD launch a single-generation platform - that would be too expensive (and just plain stupid, pissing off customers by giving them no upgrade path).
Still, this is extremely promising for future APUs. A 2021/2022 Zen3/Zen4 "AM5" APU with DDR5-8400 and RDNA 2 would be _amazing_. That is nearly quadruple the memory bandwidth from laptop-standard DDR4-2133, or nearly 3x desktop-"standard" 3200. And if there's one thing limiting APU iGPUs, it's memory bandwidth. Bring on DDR5!
Sure, that's possible, but it also doesn't matter for anyone outside of datacentres. For most tech enthusiasts, what they said is quite likely to be true.
It's possible that AMD will offer a different I/O die with DDR5. Chiplets certainly allow for this. That said, my guess would be that the first DDR5 support will be in Cézanne. I expect laptops of next year to support LPDDR5, and AMD is hopefully getting ready for this.
... LPDDR4X support arrived in late 2019/ early 2020 for PCs after being officially launched in early 2017. What is causing your extreme optimism as to the uptake of LPDDR5 in PC SoCs?
Ok, on-Die ECC. Does that mean that the DRAM DIMMs have the ability to perform their own ECC functions, and just inform the OS about ECC corrections? Would that also mean that you're separating ECC from the motherboards and that it's now only limited by OS support?
If so - that would be amazing - it would simplify things and make them more reliable across the board.
On-die ECC for consumer workloads *actually* has some benefits, particularly for lowering power draw. Micron gave a presentation on this some years ago: https://www.micron.com/about/blog/2017/february/th...
Slightly higher active power usage for far, far, far more efficient standby (i.e., refresh) power usage.
"Because the refresh rates are set very conservatively, a DRAM with ECC can be refreshed at a rate approximately one quarter of what is set forth in the specification."
I also get the impression that on-die ECC is (partly) there to make it easier to run these chips at absurdly high data rates - or is that just the pessimist in me talking?
Also, @mode_13h, couldn't on-die ECC be entirely transparent to the OS? As long as the OS gets the data it needs, does it matter to it whatsoever if ECC corrected anything or not?
Of course the OS doesn't *need* to be involved, but you'd ideally like it to mark the page as bad, in case another bit starts flipping, as well. That's just one example.
But that's not really any different to the situation where the CPU's own memory controller is doing the ECC, which is the situation we have today.
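The correction logic itself is conceptually simple wherever it lives. Below is a toy Python sketch of a SECDED (single-error-correct, double-error-detect) code, namely Hamming(7,4) plus an overall parity bit. Traditional side-band DDR ECC works the same way but over a much wider word (e.g. 64 data + 8 check bits); the names and 4-bit scale here are purely illustrative:

```python
# Toy SECDED: an (8,4) extended Hamming code.
def secded_encode(nibble: int) -> int:
    """Encode 4 data bits into an 8-bit codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]  # positions 1..7
    overall = 0
    for b in bits:
        overall ^= b          # overall parity over the 7 code bits
    code = 0
    for i, b in enumerate(bits):
        code |= b << i
    return code | (overall << 7)

def secded_decode(code: int):
    """Return (data, was_corrected); raise on a detected double-bit error."""
    bits = [(code >> i) & 1 for i in range(7)]
    overall = (code >> 7) & 1
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # position of a single flipped bit
    total = overall
    for b in bits:
        total ^= b            # parity over all 8 stored bits
    corrected = False
    if total == 1:            # odd number of flips: assume one, repair it
        if syndrome:
            bits[syndrome - 1] ^= 1
        corrected = True
    elif syndrome:            # even flips but nonzero syndrome: two errors
        raise ValueError("uncorrectable double-bit error")
    data = bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
    return data, corrected
```

Any single flipped bit is silently repaired (what the DIMM would log as a corrected error), while any two flipped bits raise an uncorrectable-error condition; whether that report ever reaches the OS is exactly the question being debated above.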
"> which allows the system the ability to use other banks while others are in use" Not JUST refreshing.
The fundamental unit of DDR4 was the 64-bit wide channel which, feeding an 8-beat burst, delivers 64 bytes, aka one cache line on most devices.
How do you double the number of beats while still retaining 64-byte cache lines? By making the fundamental unit a 64-bit wide "physical" channel which is split into two 32-bit wide "logical" and independent channels. So even on a nominally single-channel device (ie something like a normal phone with just a single 64-bit connection to DRAM) both these channels can now operate simultaneously and independently. Which means that, as much as possible, you'd like them to be hitting different "lower-level" structures (ie different banks) which can both simultaneously provide data.
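A quick sanity check of that arithmetic, using the figures described above (a sketch, not a spec reference):

```python
# Bytes delivered per burst: channel width (bits) times beats, over 8.
def bytes_per_burst(channel_bits: int, burst_beats: int) -> int:
    return channel_bits * burst_beats // 8

ddr4 = bytes_per_burst(64, 8)   # one 64-bit channel, 8-beat burst
ddr5 = bytes_per_burst(32, 16)  # one 32-bit subchannel, 16-beat burst
print(ddr4, ddr5)  # 64 64 -- both deliver exactly one 64-byte cache line
```

Doubling the beats while halving the channel width keeps each burst at one cache line, which is what lets the two subchannels operate independently without wasting bandwidth.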
> "In the 4th Industrial Revolution, which is represented by 5G, autonomous vehicle, AI, augmented reality (AR), virtual reality (VR), big data, and other applications, DDR5 DRAM can be utilized for next-gen high-performance computing and AI-based data analysis".
So, did this guy lose a bet, or is he just *really* trying to help anyone playing buzzword bingo? I don't recall the last time I saw so many packed into a single sentence!
TBF, faster & higher-density DDR5 RAM really *is* needed for those advancements, honestly. I don't know how Startup.io is doing anything, but DDR5 is absolutely critical (i.e., sine qua non) for advancements in AR, VR GPUs, and AI / neural nets.
As always, industrial/enterprise/commercial drive the bleeding edge of the market. The consumer is the "2nd half" volume customer for new technologies.
Pretty cool about the DFE, though I'm surprised to learn it wasn't present in previous DDR generations. I know there must be some form of trained EQ, or else those data rates simply would not work in those channels. It appears that EQ has been outside the scope of the JEDEC spec, but vendors need to implement it. I think a lot would be gained in adding a side channel in the physical layer for handling these things. Especially when the BER is so stringent (1E-16 is not trivial to validate).
Most consumers may not be excited by any of this, but it surely is exciting for servers. We are already running into a bandwidth wall for some applications, and high-density memory is insanely expensive. Hopefully DDR5 fixes both while bringing down TCO with lower energy usage.
Assuming everything is perfectly executed, AMD's Zen 4 may hit a home run with PCIe 5.0 and DDR5.
At best it'll be a year before DIMMs are available. I doubt Zen 3 will support it; they have to design the support in years in advance. Zen 4, maybe, but even that may be a long shot.
I really want to see greater adoption of ECC in system DRAM (as distinct from the GDDR used in consumer GPUs). There's a good reason why Apple, Dell, etc. deploy ECC in their workstation computers.
There's minimal real-world performance degradation from using slower ECC RAM compared to non-ECC RAM. But there are tangible benefits to having one's system not silently corrupt data (and eventually produce all sorts of unexplained system errors).
ECC RAM is more expensive than non-ECC RAM because it is not currently being produced on a large scale. But consumers like me who value stability and reliability would not mind the ~20% higher price. The problem is, it's virtually impossible to find anyone who sells ECC RAM to consumers.
To clarify, I'm only advocating for ECC in system RAM. I'm not advocating for registered memory.
DDR4 ECC RAM is nearly twice the cost and half the speed of DDR4 non-ECC RAM. What's more, CPUs are heavily, heavily limited by memory throughput. That is why most of their transistors are spent on speeding up the memory subsystem (see: SRAM cache and SMT). And now we're in the core wars, where main memory throughput requirements have never been higher. We're still only on 2 channels of main memory, but we want to feed 12 cores at 4 GHz? Look into memory scaling performance on current-gen CPUs. Memory performance matters.
"If you absolutely need the fastest system possible and if fractions of a percent actually do make a difference, then ECC might not be right for you. But our testing has only furthered our belief that in any other situation, ECC memory is simply a better choice than non-ECC memory due to its incredible reliability with only a tiny loss in performance."
And perhaps G.Skill et al. will be willing to bin high speed ECC RAM and sell it to consumers. There's nothing to stop manufacturers from "overclocking" ECC RAM past the SPD speeds and timing, and including XMP profiles into ECC RAM.
Weirdly, the dual-channel ECC often edges out the dual-channel non-ECC, even though they allegedly have the same timings. Interestingly, the quad-channel config only helps in a minority of benchmarks, even on a 10-core CPU @ 3.1 GHz. However, note that the CPU is Haswell-EP, and this could be an artifact of its core topology or the sophistication of its DDR4 controller.
Here are memory scaling tests for the scenario I described (12 core 4 GHz CPU with dual channel DDR4). The difference between 2400 and 3200 is up to 10% in many applications, and that isn’t even testing for the increased latency from ECC.
The link in the post above didn't measure any increased latency from ECC vs. non-ECC (both unbuffered). Moreover, ECC RAM is currently available in speeds as high as DDR4-3200.
Your response does not make any sense. I explicitly stated “that isn’t even testing for the increased latency from ECC” and then you repeat that statement in other words. That doesn’t refute the performance difference between DDR4-2400 and DDR4-3200. I would expect higher latencies to make the performance difference larger and in other applications. Non-ECC DDR4 is available in up to DDR4-5000. Your point can’t be that DDR4 ECC is twice the cost and half the speed of DDR4 non-ECC, since that’s a fact.
> I explicitly stated “that isn’t even testing for the increased latency from ECC”
Your statement presumes there is some increase in latency to be measured.
Now, the point of confusion seems to be that I was referring to the link in the post above yours, not the link in your post, which was above mine. So, to eliminate all potential for confusion, check this link, and note that they found zero difference in latency between the dual-channel ECC and non-ECC setups:
> That doesn’t refute the performance difference between DDR4-2400 and DDR4-3200.
It wasn't meant to. My point about ECC being available at up to DDR4-3200 was a separate point, but I'll concede that I've only found registered memory at those speeds.
> since that’s a fact.
Not one supported by any evidence you've so far provided.
ECC isn't intrinsically slower. It's just that the market for ECC memory doesn't *want* DDR4-5000, because it's too expensive, power-hungry, and/or error-prone.
I agree with your conclusion. At the end of the day, someone still has to put up a lot more money for equivalent performance ECC memory compared to non-ECC. So someone making a system needs to evaluate how much ECC is worth and how much performance is worth in their application.
Why should ECC be half the speed of non-ECC? I don't follow this at all. The underlying RAM is the same, other than the number of dies. So, ECC just means more data lines to the CPU, and that its memory controller needs to check/correct each "word".
I just don't believe that last part is such a bottleneck. CPU caches often have ECC as well, and they run at much higher throughputs.
You stated "half the speed" and now you seem to be backing away from that. It seems your only case against the speed of ECC was simply that vendors don't offer it in such high speeds and low-latencies as *gaming* RAM, as if that were any kind of surprise.
I'm currently seeing a number of options for Registered DDR4-3200 and Unbuffered DDR4-2666.
In the context of 4-, 6-, and 8- channel memory configurations, this supposed speed penalty of ECC is a non-issue.
Why are you worried about gaming? I am backing away from nothing. Show me a set of DDR4-5000 ECC RAM, please. You should attempt to carefully read comments rather than construct whatever opposing narrative makes the commenter appear like an idiot. You know who does things like that? Self-conscious idiots.
If there's anything intrinsically slower about ECC RAM, you have yet to demonstrate that, not least to the degree you stated. The 2x pricing disparity is also something you've not even attempted to justify, either empirically or fundamentally.
And resorting to insults just shows you lack good evidence to support your outlandish claims.
"DDR5-8400 will use on-die ECC (Error Correction) and ECS (Error Check and Scrub)"
Does this mean that (almost?) all DDR5 will be ECC by default?
"with density up to 64 gigabit"
So does this mean 64 GB unbuffered ECC modules will be (relatively) mainstream, without the need for buffered/registered/load-reduced modules (if I want high-capacity ECC)?
If so, I'll be looking forward to the ability to load up a next(-next?)-gen Threadripper with 8x 64 GB modules, instead of having to go EPYC or Xeon-W.
I've been learning a new statistical technique that requires generating a 100,000 by 100,000 matrix of 64 bit doubles. Just that matrix by itself is about 80 GB. And it's only an input into a statistical computation that is estimated with quad precision. I tried to run the computation and got an error that Windows tried but failed to allocate 200 GB of memory.
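For scale, a quick check of those numbers (the 16 bytes per quad-precision value is my assumption):

```python
# Footprint of a dense n x n matrix: n^2 values times bytes per value.
n = 100_000
gb_double = n * n * 8 / 1e9    # float64 input matrix
gb_quad = n * n * 16 / 1e9     # quad-precision working copy (assumed 16 B each)
print(f"{gb_double:.0f} GB, {gb_quad:.0f} GB")  # 80 GB, 160 GB
```

The 80 GB input plus a quad-precision working set lands right around the 200 GB allocation Windows refused.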
It feels like you need something other than brute force, though. The obvious first question would be whether you tried virtual memory, on a fast NVMe SSD. Also, try a matrix library designed to use coherent access patterns.
So, do they have a working sample of their DDR5-8400 DRAM? Also, what about other memory makers like Samsung or Micron? Any word on their plans or working samples?
shabby - Friday, April 3, 2020 - link
1.1v at what amps?

mode_13h - Friday, April 3, 2020 - link
All of them.

CaedenV - Friday, April 3, 2020 - link
1.1V but with absolutely 0 resistance

willis936 - Friday, April 3, 2020 - link
https://en.wikipedia.org/wiki/Dynamic_frequency_sc...
Brane2 - Saturday, April 4, 2020 - link
Dynamic power consumption scales roughly the same as pure resistive losses would.
willis936 - Saturday, April 4, 2020 - link
Because the voltage drop on the mains wiring is low. P = V^2/R. A vacuum cleaner has a high voltage drop and low resistance.

Anyway, your original point is correct. Dynamic power is dominant in current CPUs. See: the Wikipedia page I linked (and the ones it links to).
supdawgwtfd - Saturday, April 4, 2020 - link
Watts can be used for both DC and AC... it is a measure of power. Not just electrical power, either.

Not sure where you got the idea that AC power is volt-amps. Nothing needs to reference volt-amps (other than a UPS's output rating).
S20802 - Friday, April 3, 2020 - link
Lol "source" SK Hynix Link goes back to Anandtechs.yu - Friday, April 3, 2020 - link
And "DDR4-8400", "This is expected to reduce overall cost reduction"...this is one of the more messier write ups here.Slash3 - Friday, April 3, 2020 - link
Looks like they were fixed. Good to see.valinor89 - Sunday, April 5, 2020 - link
Give them a break, after all we have been saying DDR4-XXXX for 6 years now, that muscle memory will not go away that fast. ;)romrunning - Friday, April 3, 2020 - link
Irata - Friday, April 3, 2020 - link
Would certainly be a possibility - use the same cores but a different I/O die.

senttoschool - Friday, April 3, 2020 - link
Don't forget 5nm.

Valantar - Saturday, April 4, 2020 - link
Should have timed that for their 50th anniversary though. Such a lack of foresight! :P
mode_13h - Saturday, April 4, 2020 - link
You think only about consumers.
With their memory controller(s) on a separate I/O die, AMD could support DDR4 on consumer platforms and DDR5 in EPYC.
Valantar - Sunday, April 5, 2020 - link
Sure, that's possible, but it also doesn't matter for anyone outside of datacentres. For most tech enthusiasts, what they said is quite likely to be true.
ET - Sunday, April 5, 2020 - link
It's possible that AMD will offer a different I/O die with DDR5. Chiplets certainly allow for this. That said, my guess would be that the first DDR5 support will be in Cézanne. I expect laptops of next year to support LPDDR5, and AMD is hopefully getting ready for this.
Valantar - Sunday, April 5, 2020 - link
... LPDDR4X support arrived in late 2019/ early 2020 for PCs after being officially launched in early 2017. What is causing your extreme optimism as to the uptake of LPDDR5 in PC SoCs?bill.rookard - Friday, April 3, 2020 - link
Ok, on-die ECC. Does that mean that the DRAM DIMMs have the ability to perform their own ECC functions, and just inform the OS about ECC corrections? Would that also mean that you're separating ECC from the motherboards and that it's now only limited by OS support?
If so - that would be amazing - it would simplify things and make them more reliable across the board.
mode_13h - Friday, April 3, 2020 - link
> Does that mean that the DRAM DIMMs have the ability to perform their own ECC functions, and just inform the OS about ECC corrections?
That's the way I read it.
> Would that also mean that you're separating ECC from the motherboards and that it's now only limited by OS support?
With history as a guide, I'm guessing that on-die ECC is an optional feature that you won't get in typical consumer memory.
Also, there should be some parity on the link, itself, which probably won't be implemented on mainstream consumer boards.
ikjadoon - Friday, April 3, 2020 - link
On-die ECC for consumer workloads *actually* has some benefits, particularly for lowering power draw. Micron gave a presentation on this some years ago:
https://www.micron.com/about/blog/2017/february/th...
Direct download of the white paper: https://www.micron.com/-/media/client/global/docum...
Slightly higher active power usage for far, far, far more efficient standby (i.e., refresh) power usage.
"Because the refresh rates are set very conservatively, a DRAM with ECC can be refreshed
at a rate approximately one quarter of what is set forth in the specification."
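For a sense of scale, here's the arithmetic behind that quote — a rough sketch assuming DDR4's base refresh interval (tREFI = 7.8 µs at normal temperature); the white paper doesn't spell out these exact numbers:

```python
# Refreshing at ~1/4 the specified rate means ~4x the interval between
# refresh commands; standby refresh energy drops roughly in proportion.
t_refi_spec_us = 7.8   # assumed: JEDEC DDR4 base refresh interval
relaxed_factor = 4     # "approximately one quarter" of the specified rate
print(round(t_refi_spec_us * relaxed_factor, 1))  # -> 31.2 us between refreshes
```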
Valantar - Saturday, April 4, 2020 - link
I also get the impression that on-die ECC is (partly) there to make it easier to run these chips at absurdly high data rates - or is that just the pessimist in me talking?
Also, @mode_13h, couldn't on-die ECC be entirely transparent to the OS? As long as the OS gets the data it needs, does it matter to it whatsoever if ECC corrected anything or not?
mode_13h - Saturday, April 4, 2020 - link
Of course the OS doesn't *need* to be involved, but you'd ideally like it to mark the page as bad, in case another bit starts flipping, as well. That's just one example.
But that's not really any different to the situation where the CPU's own memory controller is doing the ECC, which is the situation we have today.
mode_13h - Saturday, April 4, 2020 - link
Okay, so maybe it'll be enabled for DDR5L?
I'm still not assuming it'll be the default for consumer desktop memory.
mode_13h - Friday, April 3, 2020 - link
Um...
> which allows the system the ability to use other banks while others are in use,
" to use other banks while one is refreshing," ?
> This is expected to reduce overall cost reduction,
" provide overall cost reduction," ?
> with ECS recording any defects present and counts the error count to the host.
" and sends the error count to the host." ?
name99 - Friday, April 3, 2020 - link
"> which allows the system the ability to use other banks while others are in use"Not JUST refreshing.
The fundamental unit of DDR4 was the 64-bit wide channel which, feeding an 8-beat burst, delivers 64 bytes, aka one cache line on most devices.
How do you double the number of beats while still retaining 64-byte cache lines?
By making the fundamental unit a 64-bit wide "physical" channel which is split into two 32-bit wide "logical" and independent channels. So even on a nominally single-channel device (ie something like a normal phone with just a single 64-bit connection to DRAM) both these channels can now operate simultaneously and independently.
Which means that, as much as possible, you'd like them to be hitting different "lower-level" structures (ie different banks) which can both simultaneously provide data.
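A quick sanity check of the channel arithmetic above (burst lengths per the specs: BL8 for DDR4, BL16 for DDR5):

```python
def bytes_per_burst(channel_bits, burst_length):
    """Bytes delivered by one full burst on one channel."""
    return channel_bits // 8 * burst_length

print(bytes_per_burst(64, 8))   # DDR4: one 64-bit channel, 8 beats  -> 64 bytes
print(bytes_per_burst(32, 16))  # DDR5: one 32-bit sub-channel, 16 beats -> 64 bytes
```

Doubling the beats while halving the channel width keeps each burst at exactly one 64-byte cache line, which is what lets the two sub-channels operate independently.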
Great_Scott - Friday, April 3, 2020 - link
Awesome performance! Shame that the overall latency is the same as DDR4... and DDR3... and DDR2...
mode_13h - Friday, April 3, 2020 - link
> "In the 4th Industrial Revolution, which is represented by 5G, autonomous vehicle, AI, augmented reality (AR), virtual reality (VR), big data, and other applications, DDR5 DRAM can be utilized for next-gen high-performance computing and AI-based data analysis".So, did this guy lose a bet, or is he just *really* trying to help anyone playing buzzword bingo? I don't recall the last time I saw so many packed into a single sentence!
drexnx - Friday, April 3, 2020 - link
I've heard a similar spiel from a totally different non-PC tech source. Same buzzwords, similar phrasing.
drexnx (cont.)
ikjadoon - Friday, April 3, 2020 - link
TBF, faster & higher-density DDR5 RAM really *is* needed for those advancements. I don't know how Startup.io is doing anything, but DDR5 is absolutely critical (e.g., sine qua non) for advancements in AR, VR GPUs, and AI / neural nets.
As always, industrial/enterprise/commercial drive the bleeding edge of the market. The consumer is the "2nd half" volume customer for new technologies.
mode_13h - Saturday, April 4, 2020 - link
Yo, I get that DDR5 is a buzzword-enabling technology. That doesn't mean you have to sling them like a 6-shooter at the OK Corral.
BTW, I think HBM2 is doing more to power AI. Especially training.
willis936 - Friday, April 3, 2020 - link
Pretty cool about the DFE, though I'm surprised to learn it wasn't present in previous DDR generations. I know there must be some form of trained EQ, or else those data rates simply would not work in those channels. It appears that EQ has been outside the scope of the JEDEC spec, but vendors need to implement it. I think a lot would be gained by adding a side channel in the physical layer for handling these things. Especially when the BER is so stringent (1E-16 is not trivial to validate).
https://ibis.org/summits/feb18/wolff.pdf
ksec - Friday, April 3, 2020 - link
Most consumers may not be excited by any of this, but it surely is exciting for servers. We are already running into a bandwidth wall for some applications, and high-density memory is insanely expensive. Hopefully DDR5 fixes both while bringing down TCO with lower energy usage.
Assuming everything is perfectly executed, AMD Zen 4 may hit a home run with PCIe 5.0 and DDR5.
rahvin - Friday, April 3, 2020 - link
At best it'll be a year before DIMMs are available. I doubt Zen3 will support it. They have to design in the support years in advance. Zen4 maybe, but even that may be a long shot.
Irata - Friday, April 3, 2020 - link
That's the beauty of the I/O die - the CPU chiplets don't have a memory controller.
Of course, the interconnect needs to support the increased bandwidth.
Pro-competition - Friday, April 3, 2020 - link
ECC please.
I really want to see greater adoption of ECC in system DRAM (as distinct from GDDR used in consumer GPUs). There's a good reason why Apple, Dell etc. deploy ECC in their workstation computers.
There's minimal real-world performance degradation from using slower ECC RAMs compared to non-ECC RAMs. But there are tangible benefits to having one's system not silently corrupting data (and eventually giving all sorts of unexplained system errors).
ECC RAMs are more expensive than non-ECC RAMs because ECC RAMs are not currently being produced on a large scale. But consumers like me, who value stability and reliability, would not mind the ~20% higher price. The problem is, it's virtually impossible to find anyone who sells ECC RAMs to consumers.
To clarify, I'm only advocating for ECC in system RAM. I'm not advocating for registered memory.
Pro-competition - Friday, April 3, 2020 - link
Oh, ECC also requires extra silicon, so that's another reason why it's more expensive.
mode_13h - Saturday, April 4, 2020 - link
Meh, 72 bits per 64? That *should* be only 12.5% overhead.
PixyMisa - Saturday, April 4, 2020 - link
With DDR5, there are two independent 32-bit channels per module, and each needs 7 bits of ECC.
Which doesn't justify the pricing of DDR4 ECC.
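The 7-bit figure, and why the relative overhead grows versus classic 72-bit ECC, falls straight out of textbook SECDED Hamming arithmetic; a small sketch of that calculation (not a claim about any particular DIMM layout):

```python
def secded_check_bits(data_bits):
    # Hamming SEC: smallest r such that 2**r >= data_bits + r + 1,
    # plus one overall-parity bit for double-error detection (SECDED).
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1

print(secded_check_bits(64))  # 8 check bits -> 72-bit word, 12.5% overhead
print(secded_check_bits(32))  # 7 check bits -> 39-bit word, ~22% overhead
```

Halving the word width only saves one check bit, so protecting two 32-bit channels costs more silicon per data bit than protecting one 64-bit channel.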
willis936 - Saturday, April 4, 2020 - link
DDR4 ECC RAM is nearly twice the cost and half the speed of DDR4 non-ECC RAM. What's more, CPUs are heavily, heavily limited by memory throughput. That is why most of the transistors are spent on speeding up the memory subsystem (see: SRAM cache and SMT). And now we're in the core wars, where main memory throughput requirements have never been higher. We're still only on 2 channels of main memory, but we want to feed twelve 4 GHz cores? Look into memory scaling performance on current-gen CPUs. Memory performance matters.
mode_13h - Saturday, April 4, 2020 - link
price != cost
willis936 - Saturday, April 4, 2020 - link
It is if you're a consumer rather than a producer.
Pro-competition - Saturday, April 4, 2020 - link
Hopefully a test similar to the one below will be conducted on a more modern system:
https://www.techspot.com/article/845-ddr3-ram-vs-e...
"If you absolutely need the fastest system possible and if fractions of a percent actually do make a difference, then ECC might not be right for you. But our testing has only furthered our belief that in any other situation, ECC memory is simply a better choice than non-ECC memory due to its incredible reliability with only a tiny loss in performance."
Pro-competition - Saturday, April 4, 2020 - link
And perhaps G.Skill et al. will be willing to bin high-speed ECC RAM and sell it to consumers. There's nothing to stop manufacturers from "overclocking" ECC RAM past the SPD speeds and timings, and including XMP profiles in ECC RAM.
mode_13h - Sunday, April 5, 2020 - link
Thanks for posting. Here are some benchmarks I found for DDR4:
https://www.techpowerup.com/forums/threads/worksta...
Weirdly, the dual-ECC often edges out the dual non-ECC, even though they allegedly have the same timings. Interestingly, the quad-channel config only helps in a minority of benchmarks, even on a 10-core @ 3.1 GHz CPU. However, note that the CPU is Haswell-EP, and this could be an artifact of its core topology or the sophistication of its DDR4 controller.
willis936 - Sunday, April 5, 2020 - link
Here are memory scaling tests for the scenario I described (12-core 4 GHz CPU with dual-channel DDR4). The difference between 2400 and 3200 is up to 10% in many applications, and that isn't even testing for the increased latency from ECC.
https://www.techpowerup.com/review/amd-zen-2-memor...
mode_13h - Monday, April 6, 2020 - link
The link in the post above didn't measure any increased latency from ECC vs. non-ECC (both unbuffered). Moreover, ECC RAM is currently available in speeds as high as DDR4-3200.
willis936 - Monday, April 6, 2020 - link
Your response does not make any sense. I explicitly stated "that isn't even testing for the increased latency from ECC" and then you repeat that statement in other words. That doesn't refute the performance difference between DDR4-2400 and DDR4-3200. I would expect higher latencies to make the performance difference larger and in other applications. Non-ECC DDR4 is available in up to DDR4-5000. Your point can't be that DDR4 ECC is twice the cost and half the speed of DDR4 non-ECC, since that's a fact.
mode_13h - Tuesday, April 7, 2020 - link
> I explicitly stated "that isn't even testing for the increased latency from ECC"
Your statement presumes there is some increase in latency to be measured.
Now, the point of confusion seems to be that I was referring to the link in the post above yours, not the link in your post, which was above mine. So, to eliminate all potential for confusion, check this link, and note that they found zero difference in latency between the dual-channel ECC and non-ECC setups:
https://www.techpowerup.com/forums/threads/worksta...
> That doesn’t refute the performance difference between DDR4-2400 and DDR4-3200.
It wasn't meant to. My point about ECC being available at up to DDR4-3200 was a separate point, but I'll concede that I've only found registered memory at those speeds.
> since that’s a fact.
Not one supported by any evidence you've so far provided.
ECC isn't intrinsically slower. It's just that the market for ECC memory doesn't *want* DDR4-5000, because it's too expensive, power-hungry, and/or error-prone.
willis936 - Tuesday, April 7, 2020 - link
I agree with your conclusion. At the end of the day, someone still has to put up a lot more money for equivalent-performance ECC memory compared to non-ECC. So someone making a system needs to evaluate how much ECC is worth and how much performance is worth in their application.
mode_13h - Tuesday, April 7, 2020 - link
Thanks for the follow-up. I'm glad that we could converge on a position.
mode_13h - Sunday, April 5, 2020 - link
Why should ECC be half the speed of non-ECC? I don't follow this, at all. The underlying RAM is the same, other than the number of dies. So, ECC just means more data lines to the CPU and that its memory controller needs to check/correct each "word".
I just don't believe that last part is such a bottleneck. CPU caches often have ECC, as well, and they run at much higher throughputs.
willis936 - Sunday, April 5, 2020 - link
Who's saying it has to be? I said that it is. Ask the vendors. They'll likely tell you it's to keep SER low, and they'd be right.
mode_13h - Monday, April 6, 2020 - link
You stated "half the speed" and now you seem to be backing away from that. It seems your only case against the speed of ECC was simply that vendors don't offer it in such high speeds and low latencies as *gaming* RAM, as if that were any kind of surprise.
I'm currently seeing a number of options for Registered DDR4-3200 and Unbuffered DDR4-2666.
In the context of 4-, 6-, and 8- channel memory configurations, this supposed speed penalty of ECC is a non-issue.
willis936 - Monday, April 6, 2020 - link
Why are you worried about gaming? I am backing away from nothing. Show me a set of DDR4-5000 ECC RAM, please. You should attempt to carefully read comments rather than construct whatever opposing narrative makes the commenter appear like an idiot. You know who does things like that? Self-conscious idiots.
mode_13h - Tuesday, April 7, 2020 - link
Dude, get over yourself.
If there's anything intrinsically slower about ECC RAM, you have yet to demonstrate that, not least to the degree you stated. The 2x pricing disparity is also something you've not even attempted to justify, either empirically or fundamentally.
And resorting to insults just shows you lack good evidence to support your outlandish claims.
willis936 - Tuesday, April 7, 2020 - link
Again, not backtracking here, but I never made such a claim. You assigned a claim to my statements.
mode_13h - Saturday, April 4, 2020 - link
> ECC RAMs are not currently being produced on a large scale.
BS. The cloud runs on ECC RAM. That's scale, for you.
> The problem is, it's virtually impossible to find anyone who sells ECC RAMs to consumers.
BS. You can buy ECC RAM on Newegg, Amazon, or directly from a number of memory vendors. You just have to look for it.
The bigger issue is that you need a CPU and mobo combination that support it. Intel only supports it on select CPUs, for market segmentation reasons:
https://ark.intel.com/content/www/us/en/ark/search...
AMD supports it on non-APU Ryzens, but among APUs, only on the Pro models.
In both cases, you need a mobo that supports it, though. Usually, that puts you into the workstation/server segment, and that'll cost you extra.
Brane2 - Saturday, April 4, 2020 - link
They are citing less power consumption _per_ bandwidth. Which means that those chips will end up churning more than DDR4...
DPete27 - Saturday, April 4, 2020 - link
What's the CAS latency though.....?
willis936 - Saturday, April 4, 2020 - link
The same as it has always been for DRAM. Literally 50-100 ns since before CPUs could crack 50 MHz.
Mikewind Dale - Sunday, April 5, 2020 - link
"DDR5-8400 will use on-die ECC (Error Correction) and ECS (Error Check and Scrub)"Does this mean that (almost?) all DDR5 will be ECC by default?
"with density up to 64 gigabit"
So does this mean 64 GB unbuffered ECC modules will be (relatively) mainstream, without the need for buffered/registered/load-reduced (if I want high-capacity ECC)?
If so, I'll be looking forward to the ability to load up a next(-next?)-gen ThreadRipper with 8x64 GB modules, instead of having to go EPYC or Xeon-W.
mode_13h - Sunday, April 5, 2020 - link
> Does this mean that (almost?) all DDR5 will be ECC by default?
You wish. ...but I doubt it.
> I'll be looking forward to the ability to load up ... with 8x64 GB modules
Whatchu gonna do with all that RAM? ...all that RAM? ...all that RAM?
Mikewind Dale - Sunday, April 5, 2020 - link
I've been learning a new statistical technique that requires generating a 100,000 by 100,000 matrix of 64-bit doubles. Just that matrix by itself is about 80 GB. And it's only an input into a statistical computation that is estimated with quad precision. I tried to run the computation and got an error that Windows tried but failed to allocate 200 GB of memory.
mode_13h - Monday, April 6, 2020 - link
LOL. Ouch.
It feels like you need something other than brute force, though. The obvious first question would be whether you tried virtual memory, on a fast NVMe SSD. Also, try a matrix library designed to use coherent access patterns.
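As a sketch of that suggestion — assuming the workload can be expressed in NumPy (the filename and the toy size here are made up) — `numpy.memmap` backs the array with a file so the OS pages it in and out instead of holding it all in RAM:

```python
import numpy as np

# Toy size; on a real machine you'd use shape=(100_000, 100_000) -> 80 GB,
# ideally with the backing file on a fast NVMe SSD.
n = 1_000
m = np.memmap("big_matrix.dat", dtype=np.float64, mode="w+", shape=(n, n))

block = 100
for i in range(0, n, block):  # fill in row blocks for coherent access
    m[i:i + block, :] = np.random.default_rng(i).standard_normal((block, n))
m.flush()

print(m.nbytes / 1e9)  # size in GB: 0.008 here, 80.0 at n = 100_000
```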
willis936 - Monday, April 6, 2020 - link
Are you part of a research group? If so, you should float the use of supercomputer time to your group.
mode_13h - Tuesday, April 7, 2020 - link
If he could access a supercomputer that easily, do you seriously think he'd be planning on building a new workstation to run his program?
That said, maybe some cloud instances have that much RAM. It'd be worth looking into.
eastcoast_pete - Monday, April 6, 2020 - link
So, do they have a working sample of their DDR5-8400 DRAM?
Also, what about other memory makers like Samsung or Micron? Any word on their plans or working samples?