131 Comments

  • R0H1T - Thursday, March 13, 2014 - link

    "This is actually the same motherboard as our 2014 SSD testbed but with added SATAe functionality."

    Does this mean you're going to test next-gen SSDs with this (SATAe), and if so, perhaps at some point during the current 2014 calendar year?
  • ddriver - Thursday, March 13, 2014 - link

    So why not use two-lane PCIe for the SSD instead? It does look like it uses less power and has higher bandwidth than SATAe.
  • DanNeely - Thursday, March 13, 2014 - link

    Mini ITX with a discrete GPU (or any other card) or mATX with dual GPU setups either don't have anywhere to put a PCIe SSD or don't have anywhere good to put one.
  • SirKnobsworth - Saturday, March 15, 2014 - link

    That's what M.2 is for.
  • Bigman397 - Friday, April 4, 2014 - link

    Which is a much better solution than retrofitting controllers and protocols meant for rotational media.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    The motherboard in our 2014 testbed is the normal Z87 Deluxe without SATAe. There aren't any official SATAe products yet so we're not sure how we'll test those but the ASUS board is certainly an option.
  • MrPoletski - Thursday, March 13, 2014 - link

    I wonder what ridiculously fast SSDs we are going to start seeing with this tech. Quite exciting, really.
  • nathanddrews - Friday, March 14, 2014 - link

    The Future!

    http://www.tomsitpro.com/articles/intel-silicon-ph...
  • thevoiceofreason - Thursday, March 13, 2014 - link

    "because after all we are using cabling that should add latency"
    Why would you assume that?
  • DiHydro - Thursday, March 13, 2014 - link

    In one nanosecond, a signal travels approximately 30 cm, or about 1 foot. If you add length to the signal path, you add delay to the transmission.
  • frenchy_2001 - Friday, March 14, 2014 - link

    No, it does not. It adds latency, which is the delay before any command is received. Speed stays the same, and unless your transmission depends on handshakes and verification and can block, latency is irrelevant.
    See the internet as a great example. Satellite gives you high bandwidth (it can send a lot of data at a time) but awful latency (it takes a long time for the data to arrive).
    As one point of these new technologies is to add a lot of queuing, latency becomes irrelevant, as there is always some data to send...
  • nutjob2 - Saturday, March 15, 2014 - link

    You're entirely incorrect. Speed is a combination of both latency and bandwidth and both are important, depending on how the data is being used.

    Your dismissal of latency because "there is always data to send" is delusional. That's just saying that if you're maxing out the bandwidth of your link then latency doesn't matter. Obviously. But in the real world disk requests are small and intermittent and not large enough to fill the link, unless you're running something like a database server doing batch processing. As the link speed gets faster (exactly what we're talking about here) and typical data request sizes stay roughly the same then latency becomes a larger part of the time it takes to process a request.

    Perceived and actual performance on most computers are very sensitive to disk latency since the disk link is the slowest link in the processing chain.
  • MrPoletski - Thursday, March 13, 2014 - link

    wait:
    by Kristian Vättö on March 13, 2014 7:00 AM EST

    It's currently March 13, 2014 6:38 AM EST - You got a time machine over at Anandtech?
  • Ian Cutress - Thursday, March 13, 2014 - link

    I think the webpage is in EDT now, but still says EST.
  • Bobs_Your_Uncle - Saturday, March 15, 2014 - link

    PRECISELY the point of Kristian's post. It's NOT a time machine in play, but rather the dramatic effects of reduced latency. (The other thing that happens is the battery in your laptop actually GAINS charge in such instances.)
  • mwarner1 - Thursday, March 13, 2014 - link

    The cable design, and especially its lack of power transmission, is even more short-sighted and hideous than that of the Micro-B USB 3.0 cable.
  • 3DoubleD - Thursday, March 13, 2014 - link

    Agreed, what a terrible design. Not only is this cable a monster, but I can already foresee the slow and painful rollout of PCIe 2.0 SATAe when we should be skipping directly to PCIe 3.0 at this point.

    Also, the reasons given for needing faster SATA SSDs are sorely lacking. Why do we need this hideous connector when we already have PCIe SSDs? Plenty of laptop vendors are having no issue with this SATA bottleneck. I also question whether a faster, more power-hungry interface is actually better for battery life. The SSD doesn't always run at full speed when being accessed, so the battery life saved will be less than the 10 minutes calculated in the example... if not worse than the reference SATA 6Gbps case! And the very small number of people who edit 4K videos can get PCIe SSDs already.
  • DanNeely - Thursday, March 13, 2014 - link

    Blame Intel and AMD for only putting PCIe 2.0 on the southbridge chips that everything not called a GPU is connected to in consumer/enthusiast systems.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    A faster SSD does not mean higher power consumption. The current designs could easily go above 550MB/s if SATA 6Gbps wasn't bottlenecking, so a higher power controller is not necessary in order to increase performance.
  • fokka - Thursday, March 13, 2014 - link

    i think what he meant is that while the actual workload may be processed faster and an idle state is reached sooner on a faster interface, the faster interface itself uses more power than sata 6g. so the question now is in what region the savings of the faster ssd are and in what region the additional power consumption of the faster interface.
  • mkozakewich - Friday, March 14, 2014 - link

    Ooh, or what if we had actual M.2 slots on desktop motherboards that could take a ribbon to attach 2.5" PCIe SSDs?
  • phoenix_rizzen - Thursday, March 13, 2014 - link

    Yeah. Seems strange that they wouldn't re-use the M.2 or mSATA connector for this. Why take up two complete SATA slots and add an extra connector? What are they doing with the SATA connectors when running in SATAe mode?

    It almost would have made sense to make a cable that plugged into <whatever> at the drive end and just slotted into a PCIe x1, x2, or x4 slot on the mobo, skipping the dedicated slot entirely. Then they wouldn't need that hokey power dongle off the drive connector.
  • frenchy_2001 - Friday, March 14, 2014 - link

    They were looking for backward compatibility with current storage and in that context, the decision makes sense. No need to think about how to plug it, it just slots right where the rest of the storage goes and can even accept its predecessor.
    It's a desktop/server/storage centric product, not really meant for laptop/portable.

    But I agree its place is getting squeezed between full PCIe (already used in data centers) and mini-PCIe/M.2 used in portables. As the requirement is already 2x PCIe lanes (like the others), it will be hard to use for lots of storage: you cannot fit 24 of those in a rack (which is how most servers use SATA/SAS), as few servers have 48 lanes of PCIe hanging around unused. So it seems reserved for desktops/workstations, and those can easily use PCIe storage...
  • phoenix_rizzen - Friday, March 14, 2014 - link

    Yeah, until you try to connect more than 2 of those to a motherboard. And good luck getting that to work on a mini-ATX/micro-ATX board. Why use up two whole SATA ports, and still use an extra port for PCIe side of it?

    How are you going to make add-in controller cards for 4+ drives? There's no room for 4 of those connectors anywhere. And trying to do a multi-lane setup like SFF-8087 for this would be ridiculous.

    The connector is dumb, no matter how you look at it. Especially since it doesn't support power.
  • jasonelmore - Saturday, March 15, 2014 - link

    It looks like the only reason to be excited about this connector is using older 2.5" or 3.5" form factor hard drives and putting them on a faster bus.

    Other than that, other solutions exist and they do it quicker and with less power. It's just a solution to let people use old hardware longer.
  • phobos512 - Thursday, March 13, 2014 - link

    It's not an assumption. The cabling adds distance to the signal path, which increases latency. Electrons don't travel at infinite speed; merely the speed of light (in a vacuum; in a cable it is of course reduced).
  • ddriver - Thursday, March 13, 2014 - link

    You might be surprised how negligible the effect of the speed of electrons is on the total overall latency.
  • Khenglish - Thursday, March 13, 2014 - link

    It's negligible.

    The worst cables carry a signal at 66% of the speed of light, with the best over 90%. If we take the worst case scenario of 66% we get this:

    speed of light = 3*10^8 m/s
    1m / (.66 * 3*10^8 m/s) = 5ns per meter

    If we have a really long 5m cable that's 25ns. Kristian says it takes 115us to read a page. You never read less than 1 page at a time.

    25ns/115us = .0217% for a long 5m cable. Completely insignificant latency impact.
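    A minimal sketch (Python, not part of the original comment) that reproduces the back-of-the-envelope propagation numbers above; the 66% velocity factor, 5 m cable and 115 µs page read are the values quoted in this thread:

```python
# Minimal check of the cable-propagation numbers in the comment above.
# Assumed inputs (from the comment): 66% velocity factor, 5 m cable, 115 us page read.
C = 3e8                       # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66        # worst-case signal speed in a cable, as a fraction of c
CABLE_LENGTH_M = 5.0
PAGE_READ_S = 115e-6          # NAND page read time quoted in the thread

prop_delay_s = CABLE_LENGTH_M / (VELOCITY_FACTOR * C)   # one-way propagation delay
print(f"propagation delay: {prop_delay_s * 1e9:.1f} ns over {CABLE_LENGTH_M:.0f} m")
print(f"fraction of one page read: {prop_delay_s / PAGE_READ_S:.4%}")
# Prints ~25.3 ns and ~0.0220%, in line with the 25 ns / 0.0217% figures above.
```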
  • willis936 - Thursday, March 13, 2014 - link

    The real latency number to look at is the one cited on the NVMe page: 2.8µs. It's not so negligible then. It does affect control overhead a good deal.

    Also, I have a practical concern about channel loss. You can't just slap a PCIe lane onto a 1m cable. PCIe is designed to ride a vein of traces straight to a socket, straight to a card. You're now increasing the length of those traces, still putting it through a socket, and now putting it through a long, low-cost cable. Asking for more than 1.5GB/s might not work as planned going forward.
  • DanNeely - Thursday, March 13, 2014 - link

    Actually, you can. PCIe cabling has been part of the spec since 2007, and while there isn't an explicit max length in the spec, at least one vendor is selling PCIe 2.0 cables that are up to 7m long for passive versions and 25m for active copper cables. Fiber-optic PCIe 3.0 cables are available up to 300m.
  • Khenglish - Thursday, March 13, 2014 - link

    That 2.8 µs figure you found is driver interface overhead from an interface that doesn't even exist yet. You need to add it to the access latency of the drive itself to get the real latency.

    Real world SSD read latency for tiny 4K data blocks is roughly 900us on the fastest drives.

    It would take an 18000 meter cable to add even 10% to that.
  • willis936 - Thursday, March 13, 2014 - link

    Show me a consumer phy that can transmit 8Gbps over 100m on cheap copper and I'll eat my hat.
  • Khenglish - Thursday, March 13, 2014 - link

    The problem with long cables is attenuation, not latency. Cables can only be around 50m long before you need a repeater.
  • mutercim - Friday, March 14, 2014 - link

    Electrons have mass, they can't ever travel at the speed of light, no matter the medium. The signal itself would move at the speed of light (in vacuum), but that's a different thing.

    /pedantry
  • Visual - Friday, March 14, 2014 - link

    It's a common misconception, but electrons don't actually need to travel the length of the cable for a signal to travel through it.
    In layman's terms, you don't need to send an electron all the way to the other end of the cable, you just need to make the electrons that are already there react in a certain way as to register a required voltage or current.
    So a signal is a change in voltage, or a change in the electromagnetic fields, and that travels at the speed of light (no, not in vacuum, in that medium).
  • AnnihilatorX - Friday, March 14, 2014 - link

    Just to clarify, it is like pushing a tube full of tennis balls from one end. Assuming the tennis balls are all rigid so deformation is negligible, the 'cause and effect' making the tennis ball on the other end move will travel at speed of light.
  • R3MF - Thursday, March 13, 2014 - link

    having 24x PCIe 3.0 lanes on AMD's Kaveri looks pretty far-sighted right now.
  • jimjamjamie - Thursday, March 13, 2014 - link

    if they got their finger out with a good x86 core the APUs would be such an easy sell
  • MrSpadge - Thursday, March 13, 2014 - link

    Re: "Why Do We Need Faster SSDs"

    Your power consumption argument ignores one fact: if you use the same controller, NAND and firmware, it costs you x Wh to perform a read or write operation. If you simply increase the interface speed and hence perform more of these operations per unit time, you also increase the energy required per unit time, i.e. power consumption. In your example the faster SSD wouldn't continue to draw 3 W with the faster interface: assuming a 30% throughput increase, expecting a power draw of 4 W would be reasonable.

    Obviously there are also system components actively waiting for that data. So if the data arrives faster (due to lower latency & higher throughput) they can finish the task quicker and race to sleep. This counterbalances some of the actual NAND power draw increases, but won't negate it completely.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    "If you simply increase the interface speed and hence perform more of these operations per time, you also increase the energy required per time, i.e. power consumption."

    The number of IO operations is a constant here. A faster SSD does not mean that the overall number of operations will increase because ultimately that's up to the workload. Assuming that is the same in both cases, the faster SSD will complete the IO operations faster and will hence spend more time idling, resulting in less power drawn in total.

    Furthermore, a faster SSD does not necessarily mean higher power draw. As the graph on page one shows, PCIe 2.0 increases baseline power consumption by only 2% compared to SATA 6Gbps. Given that SATA 6Gbps is a bottleneck in current SSDs, more processing power (and hence more power) is not required to make a faster SSD. You are right that it may result in higher NAND power draw, though, because the controller will be able to take better advantage of parallelism (more NAND in use = more power consumed).

    I understand the example is not perfect as in real world the number of variables is through the roof. However, the idea was to debunk the claim that PCIe SSDs are just a marketing trick -- they are that too but ultimately there are gains that will reach the average user as well.
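    As a rough illustration of the race-to-idle argument above, here is a toy Python model; the 3 W active and 0.05 W idle figures echo numbers discussed in the article and comments, while the throughputs and the 10 GB-per-hour workload are purely illustrative assumptions:

```python
# Toy "race to idle" model for a fixed workload, illustrating the reasoning above.
def drive_energy_joules(workload_bytes, throughput_bps, active_w, idle_w, window_s):
    busy_s = workload_bytes / throughput_bps      # time spent actively transferring
    idle_s = max(window_s - busy_s, 0.0)          # the rest of the window is spent idle
    return active_w * busy_s + idle_w * idle_s

WORKLOAD_BYTES = 10e9                             # same IO workload in both cases (assumed)
WINDOW_S = 3600.0                                 # one-hour window
sata = drive_energy_joules(WORKLOAD_BYTES, 550e6, active_w=3.0, idle_w=0.05, window_s=WINDOW_S)
pcie = drive_energy_joules(WORKLOAD_BYTES, 1000e6, active_w=3.0, idle_w=0.05, window_s=WINDOW_S)
print(f"SATA 6Gbps (~550 MB/s): {sata:.0f} J, PCIe x2 (~1000 MB/s): {pcie:.0f} J")
# The faster interface finishes the same work sooner and idles longer, so total
# energy over the window drops (unless active power rises disproportionately).
```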
  • MrSpadge - Friday, March 14, 2014 - link

    You're right, the increased interface power consumption won't matter much and will be counterbalanced by the quicker execution time. But as far as I understand, the bulk of SSD power draw under load, especially for writes, comes from the actual NAND power draw (unless the controller is really inefficient). If this is true, higher performance automatically equates to higher SSD power draw under load.
  • willis936 - Thursday, March 13, 2014 - link

    I love SATAe and NVMe, but whenever SAS is mentioned as a comparison it would be nice to use 12G numbers. I noticed a Microsoft graph showed the 6G figure but didn't even label it. A doubling of bandwidth is nothing to sneeze at. That said, SAS is expensive and is for a very different market.
  • Flunk - Thursday, March 13, 2014 - link

    I think they're really making a mistake trying to keep the same connector as SATA. Tacking on a new cable that looks so unwieldy just seems silly. And why not just use M.2 slots? Especially if this is for notebooks (and based on the power usage comparisons it seems like it is, otherwise why would it matter?).

    I suspect this will go nowhere. Reminds me of ISA 2.0.
  • Rajinder Gill - Thursday, March 13, 2014 - link

    Backwards compatibility with existing SATA devices was the primary reason for keeping the connector as part of the interface. :)
  • Flunk - Thursday, March 13, 2014 - link

    I understand that, but there isn't much reason not to have 2 sets of ports with the legacy ones slowly disappearing on a desktop. The ports are not large and there is plenty of space. This way we're stuck with a future of badly-designed ports far past the end of SATA's lifetime.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    SATA Express is mainly for desktops -- in mobile M.2 will be the dominant form factor (though SATAe might have some place there too as I mentioned in the article).

    As for power consumption and battery life, that was about PCIe in general.
  • phoenix_rizzen - Thursday, March 13, 2014 - link

    So why not add M.2 slots to the desktop, in a vertical orientation, and just make M.2->M.2 cables? Then add the M.2 connector to desktop drives?
  • TheinsanegamerN - Monday, March 24, 2014 - link

    because, silly, that would mean being progressive and eliminating all backwards compatibility, and we CANT do that! /s

    in all seriousness, that would be much nicer. manufacturers would probably throw a temper tantrum, but aside from that, it would be a great solution.
  • grahaman27 - Thursday, March 13, 2014 - link

    I would like to see USB 3.1 replace SATA 6Gbps. It sounds unusual, but with the combination of 10Gbps speeds, the new two-way small connector, and integrated power, I think it would really be useful for the expandability and tidiness inside my computer.
  • Veramocor - Thursday, March 13, 2014 - link

    Just posted that later on. I have an external USB 3.0 hard drive; why can't I have an internal one? Even better would be Thunderbolt 2 at 20 Gbps.
  • SirKnobsworth - Thursday, March 13, 2014 - link

    Thunderbolt 2 is really PCIe x4 + DisplayPort in disguise, and you don't need DisplayPort to your SSD.
  • MrSpadge - Thursday, March 13, 2014 - link

    Couldn't you build a nice M.2 to SATAe adapter in a 2.5" form factor and thereby reuse your existing M.2 designs for SATAe?
  • Kristian Vättö - Thursday, March 13, 2014 - link

    Technically yes, but the problem is that M.2 is shaped differently. You could certainly fit a small M.2 drive with only a few NAND packages in there, but the longer, faster ones don't really fit inside 2.5".
  • Kevin G - Thursday, March 13, 2014 - link

    "At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 450MB/s of bandwidth, which is still (barely) within the limits of SATA 6Gbps."

    This is incorrect:

    3840 * 2160 * 12 bit per channel * 3 channels / 8 bits per byte * 24 fps ~ 896 MByte/s

    And that figure is with good byte packing. For raw recording, the algorithm may pack the 12 bits into two bytes for speed purposes, meaning you'd need about 1.2 GByte/s of bandwidth. Jumping to 4096 x 2160 resolution at 12-bit color and 30 fps, the bandwidth need grows to about 1.6 GByte/s.

    The other thing worth noting is that uncompressed recording is going to take a lot of storage. A modern phone recording at the highest quality settings with 64 GB of storage would last less than 40 seconds before running out.
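    A small Python sketch that reproduces these figures; the resolutions, bit depths and the 2-bytes-per-sample packing assumption are the ones used in the comment above:

```python
# Reproduces the uncompressed-video bandwidth figures from the comment above.
def uncompressed_bw_bytes_per_s(width, height, fps, bits_per_sample=12, channels=3,
                                bytes_per_sample=None):
    if bytes_per_sample is None:                 # samples packed tightly into bits
        bytes_per_frame = width * height * channels * bits_per_sample / 8
    else:                                        # e.g. 12-bit samples stored in 2 bytes
        bytes_per_frame = width * height * channels * bytes_per_sample
    return bytes_per_frame * fps

packed = uncompressed_bw_bytes_per_s(3840, 2160, 24)                      # ~896 MB/s
padded = uncompressed_bw_bytes_per_s(3840, 2160, 24, bytes_per_sample=2)  # ~1.2 GB/s
dci_30 = uncompressed_bw_bytes_per_s(4096, 2160, 30, bytes_per_sample=2)  # ~1.6 GB/s
print(f"UHD 24p, packed:     {packed / 1e6:.0f} MB/s")
print(f"UHD 24p, 2 B/sample: {padded / 1e9:.2f} GB/s")
print(f"4096x2160 30p:       {dci_30 / 1e9:.2f} GB/s")
print(f"64 GB of storage lasts ~{64e9 / dci_30:.0f} s at the last rate")
```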
  • Kristian Vättö - Thursday, March 13, 2014 - link

    Oh, you're absolutely right. I used the below calculator to calculate the bandwidth but accidentally left "interlaced" box ticked, which screwed up the results. Thanks for the heads up, fixing...
  • Kristian Vättö - Thursday, March 13, 2014 - link

    And the calculator... http://web.forret.com/tools/video_fps.asp?width=38...
  • JarredWalton - Thursday, March 13, 2014 - link

    Aren't there *four* channels, though? RGB and Alpha? Or is Alpha not used with 12-bit?
  • Kevin G - Thursday, March 13, 2014 - link

    No real way to record with an Alpha channel value to my knowledge. Cameras and scanners etc all presume a flattened image as if everything were solid. The only exception to this would be direct frame buffer capture from video memory which can independently process an Alpha channel.

    Input media would generally be 36 bit. During the editing phase an Alpha channel can be added as part of compositing pipeline bringing the total bit depth to 48 bit. Final rendering can be done to a 48 bit RGBA file. Display output on screen will be reduced to 36 bit due to compositing for the frame buffer.
  • Nightraptor - Thursday, March 13, 2014 - link

    When I saw the daughterboard ASUS provided, my instant thought was actually using this (in PCIe 3.0 form) to somehow provide the option to add an external GPU to a tablet. I may be the outlier, but my dream would be to have an 11.6" 16:10 1920 x 1200 tablet with the ability to connect a keyboard dock to function as a laptop, or another dock with a discrete graphics card to function as a desktop for occasional gaming (1080p at high settings would be all I'd ask for, so PCIe 3.0 x4 should be sufficient). If you could somehow get a SATAe cable on a tablet, I think this would do it.
  • vladman - Thursday, March 13, 2014 - link

    If you want speed from storage, get a nice Areca PCIe RAID controller, attach 4 or more fast SSDs, do RAID 0, and you've got anywhere from 1.7 to 2GB/s of data transfer. Done deal.
  • Guspaz - Thursday, March 13, 2014 - link

    The only justification for why anybody might need something faster than SATA6 seems to be "Uncompressed 4K video is big"...

    Except nobody uses uncompressed 4K video. Nobody uses it precisely BECAUSE it's so big. 4K cameras all record to compressed formats. REDCODE, ProRes, XAVC, etc. It's true that these still produce a lot of data (they're all intra-frame codecs, which mean they compress each frame independently, taking no advantage of similarities between frames), but they're still way smaller than uncompressed video.
  • JarredWalton - Thursday, March 13, 2014 - link

    But when you edit videos, you end up working with uncompressed data before recompressing, in order to avoid losing quality.
  • willis936 - Thursday, March 13, 2014 - link

    The case you described (4K, 12bpc, 24fps) would also take an absolutely monumental amount of RAM. I can't think of using a machine with less than 32GB for that and even then I feel like you'd run out regularly.
  • Guspaz - Thursday, March 13, 2014 - link

    Are you rendering from Premiere to uncompressed video as an intermediate format before recompressing in some other tool? If you're working end-to-end with Premiere (or Final Cut) you wouldn't have uncompressed video anywhere in that pipeline. But even if you're rendering to uncompressed 4K video for re-encoding elsewhere, you'd never be doing that to your local SSD, you'd be doing it to big spinning HDDs or file servers. One hour of uncompressed 4K 60FPS video would be ~5TB. Besides, disk transfer rates aren't going to be the bottleneck on rendering and re-encoding uncompressed 4K video.
  • Kevin G - Thursday, March 13, 2014 - link

    That highly depends on the media you're working with. 4K consumes far too much storage to be usable in an uncompressed manner. Up to 1.6 GByte/s is needed for uncompressed recording. A 1 TB drive would fill up in less than 11 minutes.

    As mentioned by others, lossless compression is an option without any reduction in picture quality, though at the expense of the high-performance hardware needed for recording and rendering.
  • JlHADJOE - Thursday, March 13, 2014 - link

    You pretty much have to do it during recording.

    Encoding 4k RAW needs a ton of CPU that you might not have inside your camera, not to mention you probably don't want any lossy compression at that point because there's still a lot of processing work to be done.
  • JlHADJOE - Friday, March 14, 2014 - link

    Here's the Red Epic Dragon, a 6k 100fps camera. It uses a proprietary SSD array (likely RAID 0) for storage:

    http://www.red.com/products/epic-dragon#features
  • popej - Thursday, March 13, 2014 - link

    "idling (with minimal <0.05W power consumption)"
    Where did you get this value from? I'm looking at your SSD reviews and clearly see that idle power consumption is between 0.3 and 1.3W, far away from the quoted 0.05W. What is wrong, your assumption here or the measurements in the reviews? Or maybe you measure some other value?
  • Kristian Vättö - Thursday, March 13, 2014 - link

    <0.05W is normal idle power consumption in a mobile platform with HIPM+DIPM enabled: http://www.anandtech.com/bench/SSD/732

    We can't measure that in every review because only Anand has the equipment for that. (requires a modified laptop).
  • dstarr3 - Thursday, March 13, 2014 - link

    How does the bandwidth of a single SATAe SSD compare to two SSDs on SATA 6Gbps in RAID 0? Risk of failure aside.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    In theory two SSDs in RAID 0 should achieve twice the bandwidth of one. In practice that's almost true and you should expect maximum bandwidth of around ~1060MB/s (vs 550MB/s with one SSD).
  • Exodite - Thursday, March 13, 2014 - link

    External power, and we might get it as a *!*%?1* molex?

    Wake me up when we get to a usable revision.
  • grahaman27 - Thursday, March 13, 2014 - link

    My thoughts exactly. I like sata, but this revision looks like a mess!
  • invinciblegod - Thursday, March 13, 2014 - link

    Why did they do translation between PCI Express and SATA in the first place? Was it because, in the event of a "PCI Super Express", SATA could just make new chips and the current hard drives would be compatible (like going from PCI to PCI Express)?
  • npaladin2000 - Thursday, March 13, 2014 - link

    I wonder if M.2 is the better solution here, and could be adapted to serve the enterprise niches that SATAe is aiming for? After all, it provides the direct PCIe interface, and also provides power and is a small connector.
  • SirKnobsworth - Thursday, March 13, 2014 - link

    For bigger, higher performance SSDs I think that actual PCIe cards are going to remain dominant. M.2 provides an x4 interface but the highest performance SSDs will have no trouble maxing that out. I think we may have already seen x8 cards demoed but I'm not sure.
  • Veramocor - Thursday, March 13, 2014 - link

    Could an internal motherboard USB header be used instead? USB 3.1 does about 10 Gbps (with overhead) and would supply power. The cabling would be much cleaner.

    Or an internal Thunderbolt 2.0 connection, which does 20 Gbps. Imagine a single internal wire supplying both power and data instead of the mess this looks like. It would beat the speed of everything save PCIe 3.0 x4.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    USB has massive overhead. USB 3.0 manages only around 280MB/s in the real world, whereas the theoretical maximum is 625MB/s (5Gbps). That is over 50% overhead! Assuming similar overhead, USB 3.1 would do 560MB/s, which is in line with SATA 6Gbps. However, USB also uses the CPU a lot more, making it very inefficient.

    As for Thunderbolt, it's basically just cabled PCIe. The difference is that TB requires expensive controllers and cabling to work, whereas PCIe alone is much cheaper.

    I think SATA-IO just needs to get back to the drawing board and get rid of the external power requirement. PCIe supplies power, so there really shouldn't be a need for more than that.
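    A quick sketch of that arithmetic in Python; the 280 MB/s real-world figure and the assumption that USB 3.1 would keep roughly the same efficiency both come from the comment above:

```python
# Interface-efficiency arithmetic from the comment above; the 280 MB/s real-world
# USB 3.0 figure is the one quoted there, not a new measurement.
def efficiency(measured_mb_s, line_rate_gbps):
    theoretical_mb_s = line_rate_gbps * 1000 / 8    # raw line rate expressed in MB/s
    return measured_mb_s / theoretical_mb_s, theoretical_mb_s

eff, theoretical = efficiency(280, 5)               # USB 3.0: 5 Gbps signalling
print(f"USB 3.0: {eff:.0%} of the {theoretical:.0f} MB/s theoretical maximum")

usb31_theoretical = 10 * 1000 / 8                   # USB 3.1: 10 Gbps signalling
print(f"USB 3.1 at the same efficiency: ~{eff * usb31_theoretical:.0f} MB/s")
```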
  • phoenix_rizzen - Thursday, March 13, 2014 - link

    Just make a PCIe x2 connector, stick it on a cable, and plug it between the drive and the mobo PCIe slot.

    Then it's up to mobo makers to decide whether to just add PCIe x2 slots in the normal space (for use with either PCIe add-in cards or SATAe drives) or to add dedicated PCIe x2 slots over near the normal SATA slots for use with only SATAe drives.
  • SirKnobsworth - Thursday, March 13, 2014 - link

    Why does it need to be attached with a cable? There are already PCIe form factor SSDs. If that takes up too much space then provide M.2 sockets on the motherboard. M.2 cards provide less area than 2.5" devices but that shouldn't be an issue for smaller SSDs, beyond which you really want more lanes anyway.
  • phoenix_rizzen - Friday, March 14, 2014 - link

    I was thinking more for the situation where you replace the current SATA ports on a mobo with PCIe x2 slots.

    So you go from cabling your drives to the SATA ports to cabling your drives to the PCIe ports. Without using up any of the slots on the back of the board/case.
  • SirKnobsworth - Saturday, March 15, 2014 - link

    If you don't want to use actual PCIe slots then have M.2 sockets on the motherboard. There's no reason to have another cabling standard.
  • phoenix_rizzen - Monday, March 17, 2014 - link

    That works too, and is something I mention in another comment above.

    This cable and connector doesn't make sense, any way you look at it.
  • Kracer - Thursday, March 13, 2014 - link

    Are you able to run any sort of PCIe device over SATAe (GPUs, capture cards, etc.)?
    Two lanes are not enough for GPU use, but it would open up many more possibilities.
    Are you able to use it as a boot device?
  • The Von Matrices - Thursday, March 13, 2014 - link

    I understand the desire for faster SSDs, but I still fail to see the purpose of SATA express over competing standards. There's nothing compelling about it over the competition.

    M.2 already provides the PCIe x2 interface and bandwidth (albeit without the ability to use cables).
    Motherboards that support PCIe 3.0 SATA Express without either a high priced PCIe switch or compromising discrete graphics functionality are one to two years away.
    SF3700 is PCIe 2.0 x4, meaning that SATA express can only use half its performance and PCIe x4 cards will still be the enthusiast solution.
    NVMe can already be implemented on other standards.
    The cables are bulky, which is unusual considering that SAS at 12Gb/s (which is available) is using the same small connectors as 6Gb/s.
  • SirKnobsworth - Thursday, March 13, 2014 - link

    M.2 provides a PCIe x4 interface in certain configurations. I think the SATAe specification has the provision for adding another two lanes at some point in the future but that's not going to happen for a long time.
  • Kevin G - Thursday, March 13, 2014 - link

    SATAe and NVMe are fast and important for expandable IO. However, I believe they will be secondary over the long term. I fathom that the NAND controller will simply move on-die for mobile SoCs. The reasons for this will be power savings, lower physical area and performance. Some of the NVMe software stack will be used here, but things like lane limitations will be entirely bypassed since it is all on-die. Bandwidth would scale with the number of NAND channels. Power savings will come from the removal of an external component (SATAe controller and/or external chipset) and the ability to integrate with the SoC's native power management controller. Desktop versions of these chips will put the NAND on a DIMM form factor for expansion.

    SATAe + NVMe will be huge in the server market though. Here RAS plays a bigger role. Features like redundancy and hot-swap are important, even with SSDs being more reliable than their hard drive predecessors. I eventually see a backplane version of a connector like mSATA or M.2 replacing 2.5" hard drives/SSDs in servers. This would be great for 1U servers as they would no longer be limited to 10 drives. The depth required on a 1U server wouldn't be as much either. PCIe NVMe cards fill the same niche today: radically high storage bandwidth at minimal latencies.

    One other thing worth pointing out is that since Thunderbolt encapsulates PCIe, using external SATAe storage at full speed becomes a possibility. Working in NVMe mode is conceptually possible over Thunderbolt too.
  • xdrol - Thursday, March 13, 2014 - link

    Parallel ATA is back, just look at the cable size..
  • JDG1980 - Thursday, March 13, 2014 - link

    A ribbon cable *plus* a Molex? Oh, goody. This looks like a massive step backward.
  • sheh - Thursday, March 13, 2014 - link

    Who doesn't love them flatcables?
  • SirKnobsworth - Thursday, March 13, 2014 - link

    We have both eSATA and SATAe now. This is going to be fun...
  • sheh - Thursday, March 13, 2014 - link

    Sadly eSATA isn't that common.
  • kwrzesien - Wednesday, April 30, 2014 - link

    SATAe won't be either.
  • fokka - Thursday, March 13, 2014 - link

    this is cable design straight from hell. an ide connector is more attractive than this.
  • tspacie - Thursday, March 13, 2014 - link

    Oh, please let there be a full review of that Plextor M6e in the near future. I have a computer with no 6Gbps SATA ports, but plenty of PCIe slots just desperate for faster storage.
  • Gc - Thursday, March 13, 2014 - link

    TheSSDReview has looked at a Plextor M6e a couple times now with different host cards.
    http://www.thessdreview.com/our-reviews/plextor-m6...
    http://www.thessdreview.com/our-reviews/ioswitch-r...
  • Kristian Vättö - Friday, March 14, 2014 - link

    It's coming ASAP. I've had the drive for ~2 months now but unfortunately there have been issues with testing (it's the first PCIe drive I'm testing). The drive runs in PCIe 1.0 (i.e. ~350MB/s max) if it's connected to a PCIe 2.0 slot and my motherboard doesn't offer the option to force certain PCIe mode, so I've been waiting for a firmware update to fix this. Similarly, some of our benchmarks don't like the combination of PCIe SSDs and our new testbed and we are still in the process of figuring those issues out. As soon as I'm sure the drive is operating as it should, I'll start working on the review :)
  • tspacie - Friday, March 14, 2014 - link

    Thanks for doing all the leg work!
  • bj_murphy - Thursday, March 13, 2014 - link

    Could you clarify what you mean by "half height/length PCIe" when you are speaking about the 4 major form factors of flash storage on the final page? Isn't that the exact same connector as mSATA or am I thinking of something else?
  • sheh - Thursday, March 13, 2014 - link

    http://en.wikipedia.org/wiki/MSATA#mSATA
    Same connector as mPCIe, different signals.
  • SunLord - Thursday, March 13, 2014 - link

    I like the idea, but they should have rolled their own custom connector instead of twisting the SATA connector to meet their needs; it looks stupid. A custom high-density connector and cable designed specifically for the task would make far more sense than this hodgepodge, but I guess they needed to cut corners to "keep costs down" on something already aimed at the high end, which is even stupider. A nice clean high-density interface with a SATA adapter would have been far better.
  • androticus - Thursday, March 13, 2014 - link

    Ugh. What an immensely cumbersome and kludgy design.
  • asuglax - Thursday, March 13, 2014 - link

    Kristian, I completely agree with your final thoughts. I would actually take it a step further and say that Intel should completely do away with the DMI interface and corresponding PCH; they should limit the I/O off the processor to as many PCIe lanes as possible, 3 DisplayPort outputs (which can be exposed as dual-mode), and however many memory channels. Enterprise could have QPI additionally. I would like to see I/O controllers embedded into the physical interconnects, where PCIe could be routed to the interconnects and however many USB, SATA, or other connections could be switched and exposed through the devices (I suppose it could be argued that this would be a PCH in itself, only connected through PCIe instead of DMI). Security protection measures (such as TPM's functionality) should be built in to all components and, while being independently operative, be able to communicate with one another through the presented I/O channels.
  • fteoath64 - Saturday, March 15, 2014 - link

    @asuglax: Intel is known for this and has done it for years: provide small incremental additions to the processor and chipset features so they can offer as many iterations of SKUs as possible over a period of time. If they made a radical change, they would risk not being able to manage the incremental changes they wanted. It is a strategy that allows for a large variety of product units, hence expanding the market for themselves. Lately, you see that they have reduced the number of CPU SKUs while expanding the mobile SKUs. This is possible since in both market segments they are the majority leader, and it allows them to maximise profits with minimal changes to production. It is a different strategy from AMD's, and a completely different one yet from the ARM SoC vendors'. Intel's strategy seems like it is coercing the market to move to a place and pace they want. The ARM guys just give their best shot on every product they have, so we got a lot more than we paid for.
    You just cannot teach an old dog new tricks.
  • Babar Javied - Thursday, March 13, 2014 - link

    This SATA 3.2 really doesn't make a lot of sense to me, and others seem to agree from what I've read in the comments. Is this supposed to be a temporary thing, or the middle man before we get to the good stuff, like SATA 4.0? Is that the reason why it's called SATA 3.2?

    So here is a genuine question: why not just use Thunderbolt? It is owned by Intel and they can implement it into their next chipset(s). Also, Thunderbolt uses PCIe lanes, so it is plenty fast without wasting lanes. Sure, the controller and cables are expensive, but once it starts to be mass produced they should come down in price, as is common with electronics.

    It seems to me that SATA is going through a lot of trouble to bring out 3.2 when it is only marginally better. I also get the feeling that SSDs are going to get even faster by using more channels (the current standard is 8) and NAND chips (the current standard is 16) as they become the new standard in storage. Of course the transition from HDD to SSD is not going to happen overnight, but it is going to happen, and I get the feeling that 750MB/s is going to become a bottleneck very quickly.

    And finally, by switching to Thunderbolt, we also help kickstart the adoption of this standard and hopefully see it flourish. Allowing us to daisy chain monitors, storage drives (SSDs and HDDs), external graphic cards and so much more.
  • SirKnobsworth - Thursday, March 13, 2014 - link

    There's no point to implementing Thunderbolt internally, which is what SATAe is for. For external purposes you can already buy Thunderbolt SSDs.
  • SittingBull - Thursday, March 13, 2014 - link

    I don't feel like you have proven that there is any need for these faster hard drive interfaces, as you hoped to in the title of your article. The need for, let alone the desire for, higher resolution video is anything but proven by anyone that I know of. 4K video offers only dubious benefits, as only very large displays, i.e. 70 or 80 inches, can show the difference between it and 1080p! The wider colour gamut would be nice but is not really compelling, and those are the only benefits I am aware of. I seriously doubt that the TV or electronics industry is going to be able to sell the 4K idea to the public as a whole. Even 720p is not shown to be lacking until we get into displays larger than 50 inches.

    It is always nice to read up on the tech of the future and I thank you for explaining the SATAe and other interfaces that are in the works. Eventually these advances will be implemented but I can't see it happening until there is some sort of substantial demand, and your entire article is built on the premise that we will need the bandwidth to support 4k video quite soon. But we don't ... :(
  • BMNify - Sunday, March 16, 2014 - link

    SittingBull, perhaps you should stick your head out of the Native American Law Students offices and look to your alumni of the Indian Institute of Science for inspiration in the tech world today,

    given that it's clear and public knowledge that the NHK/BBC R&D years of UHD development http://www.bbc.co.uk/rd/blog/2013/06/defining-the-... , now ratified by the International Telecommunication Union, are the minimum base for any new SoC design to adhere to and comply with IF they want to actually reuse their current UHD IP for the longest time scales...

    The main point is whether the PR is trying to cover up, by acts of omission, the fact that they don't actually comply with the new Rec. 2020 real colour space, which gives better colour coverage by using 10 bits per pixel for UHD-1 consumer grade panels, and later 12-bit UHD-2 grade panels for the 8192×4320 [8K] consumer in 4 years or so.

    To put it simply, the antiquated Rec. 709 (HDTV and below) 8-bit pseudocolor = only 256 bands of usable colour.

    Rec. 2020 real colour space at 10 bits per pixel = 1000+ bands of usable colour, so you get far less banding in lower bit rate encodes/decodes and more compression for a given bit rate, and so better visual quality at a smaller size.

    As it happens, NHK announced they are to give another UHD-2/8K (3840 pixels wide by 2160 pixels high) broadcast demo at the coming NAB Show: "Japanese public broadcaster NHK is planning to give a demonstration of '8K' resolution content over a single 6MHz bandwidth UHF TV channel at the National Association of Broadcasters (NAB) Show coming up in Las Vegas, Nevada, April 5 to 10."
    In order to transmit the 8K signal, whose raw data requirement is 16 times greater than an HDTV signal, it was necessary to deploy additional technologies. These include ultra multi-level orthogonal frequency domain multiplex (OFDM) transmission and dual-polarized multiple input multiple output (MIMO) antennas. This was in addition to image data compression. The broadcast uses 4096-point QAM modulation and MPEG-4 AVC H.264 video coding.

    We could also have a debate about how Qualcomm and other Cortex vendors might finally provide the needed UHD-2 data throughput at far lower power with either integrated JEDEC Wide IO2 25.6GBps/51.2GBps or Hybrid Memory Cube 2.5D interposer-based architectures, and using MRAM in-line computation etc.

    Did you notice how the ARM SoC with its current NoC (network on chip) can already beat today's QPI real-life data throughput (1Tb/s, 2Tb/s etc.) at far lower power, never mind the slower MCI as above? They only need to bring that NoC capability to the external interconnect to take advantage of it in any number of IO ports.
  • Popskalius - Friday, March 14, 2014 - link

    I haven't even taken my Asus z87 Plus out of its shrink wrap and it's becoming obsolete.
  • SittingBull - Friday, March 14, 2014 - link

    I just put together my own system with an Asus Z87 Plus mb, an i7 4770k, 16 GB of RAM and an SSD. It is not and will not be obsolete anytime in the near future, i.e., for at least 3 years. Worry not. There isn't anything on the horizon our systems won't be able to deal with.
  • willis936 - Friday, March 14, 2014 - link

    A 4.5GHz 4770k doesn't render my video, crunch my matlab, and host my minecraft at arbitrarily amazingly fast speeds, but it's a big step up from a Q6600 :p
  • MrBungle123 - Friday, March 14, 2014 - link

    That cable looks horrible. I'd rather they just move SSDs to a card.
  • TEAMSWITCHER - Friday, March 14, 2014 - link

    Second That! Hardware makers need to abandon SATA Express and start working on new motherboard form factors that would allow for attaching the flash drives directly to the motherboard. SATA Express is another compromised design-by-committee. Just what the struggling PC industry needs right now! Jeepers!!!
  • iwod - Friday, March 14, 2014 - link

    The future is mobile, where laptops have already overtaken desktops in numbers. So why another chunky, ugly old hack for SSDs? Has Apple not taught them a lesson that design matters?

    And the speed is just too slow. We should at least be at 16Gbps, and since none of these standards are coming out fast enough, I would have expected the interface to leap to 32Gbps. Plenty of headroom for SSD controllers to improve and work with. And Intel isn't even bundling enough PCIe lanes direct from the CPU.

    Why can't we build something that is a little future proof?
  • willis936 - Friday, March 14, 2014 - link

    Cost matters. The first thing they'll tell you in economics 101 is that we live in a world with finite resources and infinite wants. There's a reason we don't all have i7 processors, 4K displays, and 780 GPUs right now. Thunderbolt completely missed its window for adoption because the cost vs. benefit wasn't there and OEMs didn't pick it up. The solutions will be made as the market wants them. The reason the connector is an ugly hack is so you can have the option of a high-bandwidth single drive or multiple slower drives. It's not pretty, and I'd personally like to just see it as a PHY/protocol stack that uses the PCIe connector with some auto-negotiation to figure out whether it's a SATAe or PCIe device, but that might cause problems if PCIe doesn't already handle things like that.

    Your mobile connector will come, or rather is already here.
  • dszc - Saturday, March 15, 2014 - link

    Thanks Kristian. Great article.
    I vote for PCIe / NVMe / M.2. SATAe seems like a step in the wrong direction. Time to move on. SATA SSDs are great for backward compatibility to help a legacy system, but seem a horrible way to design a new system. Too big. Too many cables. Too much junk. Too expensive. SATAe seems to be applying old thinking to new technology.
  • watersb - Sunday, March 16, 2014 - link

    I don't get the negative reactions in many of the comments.

    Our scientific workloads are disk-IO bound, rather than CPU-bound. The storage stack is ripe for radical simplification. SATAe is a step in that direction.
  • rs2 - Sunday, March 16, 2014 - link

    This will never fly. For one thing the connectors are too massive. Most high-end mainboards allow 6 to 8 SATA drives to be connected, and some enthusiasts use nearly that many. That will never be possible with the SATAe connector design; there's just not enough space on the board.

    And consuming 2 PCIe lanes per connector is the other limiting factor. It's a reasonable solution when you just need one or two ports. But in the 8-drive case you're talking about needing 16 extra lanes. Where are those meant to come from?
  • willis936 - Sunday, March 16, 2014 - link

    How many SSDs do you plan to use at once? I can't think of a single use case where more than one SSD is needed, or even wanted, if bandwidth isn't an issue. One SSD and several hard drives is certainly plausible. So there are 6 instead of 8 usable ports for hard drives. How terrible.
  • Shiitaki - Monday, March 17, 2014 - link

    So exactly what problem is this fixing? The problem of money; this is a pathetic attempt at licensing fees. SSD manufacturers could simply change the software and have their drives appear to the operating system as a PCIe-based SATA controller with a permanently attached drive TODAY. It would be genius to be able to plug a drive into a slot and be done with it. We don't need anything new. We already have mini-PCIe. Moving to a mini-PCIe x4 would have been a better idea. Then you could construct backplanes with the new mini-PCIe x4 connectors that aggregate and connect to a motherboard using a PCIe x8/x16 slot.

    This article covers the story of an organization fighting desperately not to disappear into the history books of the computer industry.
  • Kristian Vättö - Tuesday, March 18, 2014 - link

    Bear in mind that SATA-IO is not just some random organization that does standards for fun - it consists of all the players in the storage industry. The current board has members from Intel, Marvell, HP, Dell, SanDisk etc...
  • BMNify - Thursday, March 20, 2014 - link

    Indeed, and yet it's now clear that these and the other design-by-committee organizations are no longer fit for purpose, producing far too little, far too late...

    ARM IP = the current generic CoreLink CCN-508, which can deliver up to 1.6 terabits per second of sustained usable system bandwidth, with a peak bandwidth of 2 terabits per second (256 GigaBytes/s), scaling all the way up to 32 processor cores total.

    Intel IP QPI = Intel's Knights Landing Xeon Phi, due in 2015 with its antiquated QPI interconnect; its expected ultra short-reach (USR) interconnection of only up to 500MB/s data throughput seems a little/lot short on real data throughput by then...
  • Hrel - Monday, March 17, 2014 - link

    Cost: currently PCIe SSDs are inexplicably expensive. If this is gonna be the same way, it won't sell no matter how many PCIe lanes Intel builds into its chipset. My main concern with using the PCIe bus is cost. Can someone explain WHY those cost so much more? Is it just the niche market or is there an actual legitimate reason for it? Like, PCIe controllers are THAT much harder to create than SATA ones?

    I doubt that's the case very much. If it is, then I guess prices will drop as that gets easier, but for now they've priced themselves out of competition.

    Why would I buy a 256GB SSD on PCIe for $700 when I can buy a 256GB SSD on SATA for $120? That shit makes absolutely no sense. I could see like a 10-30% price premium, no more.
  • BMNify - Tuesday, March 18, 2014 - link

    "Can someone explain WHY those cost so much more?"
    greed...
    Due mostly to "not invented here", which is the reason we are not yet using a version of Everspin's MRAM 240-pin, 64MByte DIMM with x72 configuration and ECC, for instance: http://www.everspin.com/image-library/Everspin_Spi...

    It can be packaged in any of the above form factors (M.2 etc.) too, rather than having motherboard vendors put extra DDR3 RAM slots dedicated to this DDR3-slot-compatible Everspin MRAM today, with the needed extra DDR3 RAM controllers included in any CPU/SoC....

    Rather than licence this existing (for 5 years) commercial MRAM product, collaborate to improve the yield, and help them shrink it down to 45nm to get it below all of today's fastest DRAM speeds etc., they all want an invented-here product and will make the world markets wait for no good reason...
  • Kristian Vättö - Tuesday, March 18, 2014 - link

    Because most PCIe SSDs (the Plextor M6e being an exception) are just two or four SATA SSDs sitting behind a SATA to PCIe bridge. There is added cost from the bridge chip and the additional controllers, although the main reason is the laws of economics. Retail PCIe SSDs are low volume because SATA is still the dominant interface, and that increases production costs for the OEMs. Low order quantities are also more expensive for the retailers.

    In short, OEMs are just trying to milk enthusiasts with PCIe drives, but once we see PCIe entering the mainstream market, you'll no longer have to pay extra for them (e.g. SF3700 combines SATA and PCIe in a single chip, so PCIe isn't more expensive with it).
  • Ammohunt - Thursday, March 20, 2014 - link

    Disappointed there wasn't a SAS offering compared; 6Gb SAS != 6Gb SATA.
  • jseauve - Thursday, March 20, 2014 - link

    Awesome computer
  • westfault - Saturday, March 22, 2014 - link

    "The SandForce, Marvell, and Samsung designs are all 2.0 but at least OCZ is working on a 3.0 controller that is scheduled for next year."

    When you say OCZ is developing a PCIe 3.0 controller, do you mean that they were working on one before they were purchased by Toshiba, or was this announced since they were acquired by Toshiba? I understand that Toshiba has kept the OCZ name, but is it certain that they have continued all R&D from before OCZ's bankruptcy?
  • dabotsonline - Monday, April 28, 2014 - link

    Roll on SATAe with PCIe 4.0, let alone 3.0 next year!
  • MRFS - Tuesday, January 20, 2015 - link

    I've felt the same way about SATAe and PCIe SSDs --
    kludgy and expensive, respectively.

    Given the roadmaps for PCIe 3.0 and 4.0, it makes sense to me, imho,
    to "sync" SATA and SAS storage with 8G and 16G transmission clocks
    and the 128b/130b "jumbo frame" now implemented in the PCIe 3.0 standard.

    Ideally, end users will have a choice of clock speeds, perhaps with pre-sets:
    6G, 8G, 12G and 16G.

    In actual practice now, USB 3.1 uses a 10G clock and 128b/132b jumbo frame:

    max headroom = 10G / 8.25 bits per byte = 1.212 GB/second.

    132 bits / 16 bytes = 8.25 bits per byte, using the USB 3.1 jumbo frame (see the sketch after this comment).

    To save a lot of PCIe motherboards, which are designed for expansion,
    PCIe 2.0 and 3.0 expansion slots can be populated with cards
    which implement 8G clocks and 128b/130b jumbo frames.

    That one evolutionary change should put pressure on SSD manufacturers
    to offer SSDs with support for both features.

    Why "SATA-IV" does not already sync with PCIe 3.0 is anybody's guess.

    We tried to discuss this with the SATA-IO folks many moons ago,
    but they were quite committed to their new SATAe connector. UGH!
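    A small Python sketch of the clock-and-encoding arithmetic in the comment above; the 8b/10b, 128b/130b and 128b/132b line codes are standard, and the clock list mirrors the presets suggested in the comment:

```python
# Payload headroom for a few link clocks and line encodings, following the
# bits-per-byte arithmetic in the comment above (framing/protocol overhead excluded).
ENCODING_BITS_PER_BYTE = {
    "8b/10b (SATA 6G, PCIe 2.0)": 10.0,       # 10 line bits carry one payload byte
    "128b/130b (PCIe 3.0)": 130 / 16,         # 8.125 bits per byte
    "128b/132b (USB 3.1)": 132 / 16,          # 8.25 bits per byte
}

for clock_gbps in (6, 8, 10, 12, 16):         # the preset clocks suggested above
    cells = [f"{name}: {clock_gbps / bpb:.3f} GB/s"
             for name, bpb in ENCODING_BITS_PER_BYTE.items()]
    print(f"{clock_gbps}G clock -> " + "; ".join(cells))
# e.g. a 10G clock with 128b/132b encoding gives 10 / 8.25 = 1.212 GB/s,
# matching the USB 3.1 figure above.
```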
  • aaronhance - Sunday, May 6, 2018 - link

    AHCI controllers are also on the PCIe bus.
