44 Comments

  • MajGenRelativity - Thursday, June 21, 2018 - link

    Forgive me if I'm wrong, but I thought the M.2 spec doesn't allow drives to pull anywhere close to 20W of power, which is the point at which they'd need a PCIe power connector. This seems like massive overkill.
  • The Chill Blueberry - Thursday, June 21, 2018 - link

    It is massive overkill. I guess it's mostly a joke about their M.2 thermal armor covers on their motherboards, which ended up being thermal traps more than anything else. It's just MSI going: "You want M.2 cooling?? REALLY?? Well HERE! Graphics card level thermal solution for you!"
  • eek2121 - Thursday, June 21, 2018 - link

    My Samsung 960 EVO throttles unless I have a fan blowing directly on it. 4 M.2 drives would generate considerable heat and would likely throttle. Also, as I stated above, I believe the PCIE bus as a whole has a max power limit of around 150 watts. As each M.2 drive can consume up to 20 watts by itself, that's 80 watts of power, leaving you with 70 watts. Now say you have 2 additional M.2 drives onboard; that leaves the PCIE bus with only 30 watts of potential power. By including the power connector, they don't have to take power from the PCIE bus at all, and since they have all that extra power, they included a fan. There are already heatsinks for M.2 drives out there, so I can understand the need for an 'overkill' heatsink to cool 4 drives.
  • notashill - Thursday, June 21, 2018 - link

    The PCIe bus does not have a 150 watt power limit; it has 75 watts, which is not enough for 4 NVMe drives drawing the max the spec allows (plus the fan power draw and VRM losses). There was a big outcry back when the RX480 came out because it was drawing 86W from the slot.
  • eek2121 - Thursday, June 21, 2018 - link

    PCIE 2.0 has 75 watts, IIRC PCIE 3.0 doubled that to 150.
  • Jared13000 - Thursday, June 21, 2018 - link

    What!? Then why did people get so upset over the RX480? There can't be that many people still running GPUs in PCI-e 2.0 slots.
  • EpicPlayer - Thursday, June 21, 2018 - link

    No, PCIe 3.0 still only allows 75 watts through the slot. I'm not sure where they got the 150 watt figure. And yeah, I think the whole RX 480 thing was overblown. It never damaged anyone's motherboard drawing more than 75 watts anyway.
  • frenchy_2001 - Thursday, June 21, 2018 - link

    The PCIe 3.0 spec says that an x16 slot can provide up to 75W.
    An x8 slot or smaller can only provide up to 25W.
    So bundling 4 PCIe x4 slots on one card can pull up to 100W, hence the 6-pin PCIe power connector (allowing for an additional 75W).
    The card can deliver up to 150W (75W slot + 75W power connector), but the spec says it should only pull 100W max.
    Some enterprise drives (Micron...) had a "Turbo" mode, pulling 35W (from the port) for additional performance. Not sure if M.2 drives can do something like that...

    So their power delivery is on point, and their cooling too, as a 100W max cannot be expected to just disappear (otherwise Intel would not bundle fans with their processors).
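
A quick back-of-envelope check of the budget described above, written as a small Python sketch. The 75W slot, 75W 6-pin, and 20W-per-drive figures come straight from the comments here, and the fan wattage is just a guess, so treat it as an illustration rather than a spec quote:

```python
# Worst-case power budget for a quad-M.2 card, using the numbers quoted
# in the comments above (assumptions, not verified spec values).
SLOT_X16_W  = 75   # what an x16 slot is said to supply
SIX_PIN_W   = 75   # what a 6-pin PCIe power connector adds
DRIVE_MAX_W = 20   # claimed worst-case draw per M.2 drive
FAN_W       = 3    # rough guess for the blower fan

worst_case = 4 * DRIVE_MAX_W + FAN_W
print(f"worst-case draw: {worst_case} W")                               # 83 W
print(f"slot alone enough?   {worst_case <= SLOT_X16_W}")               # False
print(f"slot + 6-pin enough? {worst_case <= SLOT_X16_W + SIX_PIN_W}")   # True
```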
  • DanNeely - Friday, June 22, 2018 - link

    Confusion about slot vs cable power I think. The 8 pin cable allowed 150W vs the 75W from the 6 pin one.
  • PeachNCream - Thursday, June 21, 2018 - link

    Thus the egg and Buick point made in the article.
  • eek2121 - Thursday, June 21, 2018 - link

    The connector is likely there to ensure there is enough power for both the fan and all 4 drives. I actually don't see an issue with this.
  • eek2121 - Thursday, June 21, 2018 - link

    Oh, and you also have to realize that the entire PCIE bus has a max power limit. By using this connector, the card leaves that budget free, so other PCIE devices can be used as well.
  • TheinsanegamerN - Thursday, June 21, 2018 - link

    PCIE has a 75 watt limit. Anandtech shows 6.2W max consumption for a 970 EVO drive.

    So 4 drives would be 24.8 watts, which would leave more than enough power for the fan. The connector is a bit pointless.
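
For contrast, the same rough math with the measured 6.2W figure quoted above instead of the 20W worst case (fan wattage again a guess) shows how far under the 75W slot limit a realistic load sits:

```python
# Realistic budget using the measured ~6.2 W peak per drive quoted above.
SLOT_X16_W       = 75
DRIVE_MEASURED_W = 6.2
FAN_W            = 3    # still a guess

realistic = 4 * DRIVE_MEASURED_W + FAN_W
print(f"realistic draw: {realistic:.1f} W of a {SLOT_X16_W} W slot budget")  # 27.8 W
```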
  • FullmetalTitan - Thursday, June 21, 2018 - link

    He means that you can effectively remove those 25W from the bus overhead if you use the 6-pin instead. Only applicable in the corner case where you already have >50W of other add-ins running via PCIE bus power.
  • DanNeely - Thursday, June 21, 2018 - link

    PCIe is 75W per x16 slot (25W per x1/x4), not for the entire bus as a whole. You will occasionally see high-end boards designed around 4 GPUs adding a supplemental PCIe power connector to the mobo in the card area to bring in extra power, but that's only needed in extreme edge cases.
  • vgray35@hotmail.com - Thursday, June 21, 2018 - link

    And therefore all this talk of overkill is itself zealous ranting. Come on people, do the math! All high-end M.2 PCIe devices throttle, which means all those devices are by definition UNDER KILL, with insufficient power capability. Are you saying throttling should be tolerated? No, we have waited long enough for a proper power handling solution, and here it is, properly designed to handle throttling. SORRY BUT THIS IS NOT OVERKILL, it is what was needed from the start on all M.2's - a proper power supply.

    MSI knows power on the PCIe bus needs to be reserved for other power-hungry PCIe devices, and they are ensuring they do not steal needed power from those other devices. MSI is being considerate to other vendors who also need that power. Any questions? I do not like MSI as a company because their arrogance has screwed me in the past (insufficient cooling on a gaming laptop which they refused to fix) - BUT they did get this one right.
  • MajGenRelativity - Thursday, June 21, 2018 - link

    M.2 drives throttle because of thermal constraints, NOT power constraints. Hence the well deserved fan. The power adapter is the overkill part, not the fan.
  • Spunjji - Friday, June 22, 2018 - link

    This. TBH with a heatsink that size the fan may be overkill too; it's not going to be doing any real harm in the system they've designed it for though, so why not eh?
  • vgray35@hotmail.com - Friday, June 22, 2018 - link

    Really, power and heat are directly correlated. The more power you draw, the greater the heat dissipation, which upon reaching the thermal limit causes power to be throttled back. There is no difference here. I never suggested the fan was overkill, as it is needed to deal with the thermal dissipation. But likewise the power adapter is not overkill, because it is already established that each M.2 can reach 20W, and 4 of them at 80W exceeds the PCIe power spec, so one needs the power adapter. A 4-pin adapter is too small. As memory chips get denser, M.2 module power draw increases. What about next year's crop of M.2's - likely to be higher power devices. The power connector guarantees it will work over the next 3 or 4 years of M.2 upgrades. No-brainer really.
  • philehidiot - Thursday, June 21, 2018 - link

    Am I being an idiot here... The drives are mounted on the back with the fan on the other side?

    What's conducting the heat to the fan? The PCB?
  • FullmetalTitan - Thursday, June 21, 2018 - link

    Looked to me like the shroud is removed in that image showing the actual M.2 connectors
  • 29a - Thursday, June 21, 2018 - link

    Yes, you're being an idiot. The drives are clearly on the same side of the PCB as the fan.
  • philehidiot - Thursday, June 21, 2018 - link

    Ah yeh, you're right. The way I read the article it sounded like they were fitted on the opposite side... plus I was supposed to be with a patient when I was reading it. Ahem.

    Part of me wants one of these for ABSOLUTELY NO GOOD REASON WHATSOEVER.

    I am an idiot.
  • LauRoman - Thursday, June 21, 2018 - link

    The picture showing the m.2 connections has the heatsink removed.
  • eddman - Thursday, June 21, 2018 - link

    This is going to seem nitpicky but perhaps the correct term would be graphics card or video card. A GPU is simply a chip, right?
  • jrs77 - Thursday, June 21, 2018 - link

    And again I raise the question of whether this card can be run in the PEG slot of an mITX board, as most cards like TV tuners, sound cards, etc. don't run in these slots.
  • DanNeely - Thursday, June 21, 2018 - link

    I'm not sure what you're getting at, but it's a full-height, dual-slot x16 PCIe card. You can use it in any board/case combo that can fit a card that large.

    If you have a different card in the single PCIe slot on an mITX board, obviously you can't plug this one in instead; otherwise just pick a case that isn't super compact so the card will fit.
  • vgray35@hotmail.com - Friday, June 22, 2018 - link

    I would point out that the single x16 PCIe slot on an mITX board is actually dual-slot width by design, so it will fit.
  • DanNeely - Saturday, June 23, 2018 - link

    Only the slot itself is on the mobo and required. There are plenty of non-gaming mITX cases that don't extend the mobo compartment an extra half inch to support a dual-slot card, just as there are plenty of cases that keep it short enough to only support half-height cards.
  • jrs77 - Saturday, June 23, 2018 - link

    Size of the card is not the question. The question is whether anything other than a graphics card is recognized in that PEG slot. PEG = PCI Express for Graphics. So far I've had no luck using a TV tuner or sound card in that slot on the last 4 or 5 mITX boards I've had.
  • jrs77 - Saturday, June 23, 2018 - link

    I don't use a dedicated graphics card, so I always have that slot free and thought it would be nice to use it for a TV tuner or whatever, but all the mITX boards I've had so far from either ASUS or Gigabyte didn't work with anything other than a graphics card in that PEG slot. PEG = PCI Express for Graphics. Somehow this type of slot is addressed differently than a normal PCIe slot and only accepts graphics cards.
  • danwat1234 - Thursday, June 21, 2018 - link

    No pics of the underside of the heatsink?
  • zodiacfml - Thursday, June 21, 2018 - link

    Lacks RGB
  • koaschten - Friday, June 22, 2018 - link

    Am I the only one thinking:
    Hmmm, it would have been neat to see a picture of the heatsink's contact area to the SSDs.

    I wonder if it is just one huge thermal pad? Or is it 4 strips? Re-usable?

    Are there replacement strips in the box?

    What do I do when I gradually go from 1 to 4 SSDs?
  • FuzzDad - Monday, June 25, 2018 - link

    Given this, it will not be long before EKWB introduces an M.2 waterblock, and all discussions of power overkill will be overwhelmed by a $100 block and assorted fittings, etc.
