Something funny about the Gigabyte TRX40 Designare is that they go out of their way to not include Thunderbolt branding for the bundled card. They only call it "a 40GB/s GC-Titan Ridge add-in card which allows you to take advantage of exceptionally fast transfer speeds!"
Does the lack of Thunderbolt 3 on 11 of the 12 point to it still being too expensive to manufacture? Or something else? Seems odd to me that 8 out of 12 boards have Ethernet faster than 1G, but only a single board has TB3. Doesn't seem very HEDT!
Could TB3 be spec'd out? I mean, at 12V/60W (max TB3?), is it asking too much of cabling/hardware in the never-ending quest for speed/bandwidth in exchange for heat?
Disclaimer: this is going solely off memory and is based off stuff I read somewhere. IIRC the MacBook Pro has 4 Thunderbolt 3 ports. More than likely, it's because Intel provides TB3 on the CPU separate from PCIe lanes, whereas AMD only has dedicated PCIe lanes. This means that TB3 uses PCIe lanes on AMD systems.
The MacBook Pros (and the 2018 Mac Mini) all run 2 Alpine Ridge (or whatever) controllers off 2x x4 PCIe lanes. The 15/16” version connects to the dGPU using only x8.
Thunderbolt, regardless of version number, is owned by Intel. I would think that board manufacturers probably don't have to pay a license fee to add it to Intel boards but have to pay a fee for AMD boards they design and sell. It is most likely a cost issue rather than a compatibility issue.
TB3 has not been open sourced. It's been royalty-free from the start, but any TB3 device still needs to be certified by Intel. Thus far the only TB3 devices that exist integrate Intel TB3 controllers, and very few non-Intel platforms have integrated TB3 (basically just a couple of X570 ASRock boards).
In order to integrate Thunderbolt, board makers need access to Intel microcode, which is why very few boards even on AM4 come with it, and even those solutions are iffy at best.
Untrue, TB has been open sourced and will be a part of the USB 4.0 standard. The real answer is likely one I provided earlier: Intel CPUs have dedicated bandwidth for TB3, AMD CPUs hang it off the PCIE bus.
I love the TB3 port on my laptop and docking station. It's way convenient. Honestly though I've never understood the use case on a desktop. If you've got an ATX motherboard and a decent sized case what need does it really solve?
According to this video, the GIGABYTE TRX40 AORUS XTREME has a Thunderbolt 3 header called THB_C, but on the site the only mention of this I can find is a "GIGABYTE add-in card connector", which the AORUS Master and WiFi Pro also have mention of. I don't know why it is listed differently from the Designare, or not mentioned in this article, but it appears that all the Gigabyte TRX40 boards support Thunderbolt 3 with the add-in card. https://www.youtube.com/watch?v=o21xINJF1tE&fe...
It might as well be -BetaMax-. Thunderbolt is Intel's baby, and you gotta dance to their tune to get the engineering specs -- Intel doesn't publish 'em. Only well-resourced (i.e., volume) manufacturers can feasibly spend to design and incorporate it, then produce to a scale that justifies the investment. Sure, that's not precisely a licensing fee, but it's one heckuva barrier to entry.
These firms can all afford it, but, since VHS (USB) is good enough, why bother? USB "3.2" is pretty darn close and even uses the same Type-C port. In fact, you can even play your VHS tapes on this BetaMax -- USB devices will run at their native speeds when connected to Thunderbolt.
And with USB 4, there will be no difference in speed. Is there even a practical difference in speed now? Do ya really need more than 10 Gbps? A few of you might, but not enough to pay the piper.
This is a no-brainer for the board makers: USB 3.1 Gen 2 ("3.2") Type-C offers a lot more speed than most devices can hope to keep up with internally. In the instances where somebody wants to daisy-chain video, they're either mining (which just needs the chain, not so much the speed), or they're using a laptop and don't have space for a video card. Well, these are mainboards, folks. You've got a bunch of fat-pipe PCIe 4.0 16-lane slots that your graphics cards won't even make full use of 99.99% of the time they're running, as they throttle down to 2.0 or 1.0.
BetaMax was better, but it died even before S-VHS was a real thing. ThunderBolt just got similarly voted down (massively) by pretty much all of the big-name manufacturers that users trust enough and -might- have paid extra to get a board that has it.
Looks like we're goin' with VHS once again, boys and girls... ;-)
I wish those boards had more Type-C ports and dropped some of those A ports. A Type-C port can easily be turned into an A, but vice versa is against the spec.
Also serious question: what is the reason to keep A 2.0 ports around? Are there any devices that don’t work on modern ports?
Probably much easier to route one old and slow data pair vs 1-4 high speed data pairs.
I have many devices that will likely never need more than USB 2.0 - my Mouse and KB included. USB microphone as well. The best external device I've got that benefits from USB3 speeds is my Bluray burner, and we all know how popular those are. External USB flash drives are usually limited by the cheap NAND inside, and most of my external storage is on my network.
For others, I suppose USB capture cards? Really decent USB 3.0 flash drives? Even if I connected my phone to my PC, it's still limited to USB 2.0. Maybe a decent external card reader? These boards reviewed here are all ATX, so I'll rule out USB NICs. I've got to be missing something in my list.
There is still the odd device that doesn't work on USB 3.0. Also, the last 2 machines I've built did not have fully functioning USB 3.0/3.1 ports in Linux, indicating lack of driver support for operating systems other than Windows. In short: USB 3.x is still a WIP despite being out on the market for quite a while.
I'm a little confused by comments on the X570 boards that will probably apply to these also. With these new PCIe4 slots (and M.2 slots), is it the case that they are all completely independent and you can mix/match PCIe2/PCIe3/PCIe4 cards/drives freely at each one's maximum possible negotiated link speed? Or will putting (say) a PCIe2 RAID controller in any slot reduce all slots to the lowest common denominator, PCIe2 speed?
PCIe lanes are wired directly to the PCIe controller on either the CPU or chipset, so link speeds between slots are independent. PCIe 4.0 is backward compatible with previous generations 3.0, 2.0 & 1.0, so running a PCIe 2.0 card in a slot capable of 4.0 will run at 2.0 speeds and not affect adjacent lanes on other slots.
Some motherboards allow you to reduce the maximum speed of PCIe lanes from 4.0 to something lower -- this can help to troubleshoot signal integrity issues. This setting sometimes does affect lanes across multiple slots. But as long as you leave it to Auto, the lanes will run at the highest compatible speed between card and controller.
Yes, in most cases the slots auto-configure to the device connected. Gen 3 devices should happily coexist with Gen 4 devices, with each running at spec. In the case of the GPU, you can run it at Gen 3 if you prefer, even if it is a Gen 4 GPU natively; there's a separate switch for that in the BIOS, but the slots auto-configure for other devices and the GPU BIOS switch doesn't affect any other slots.
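If anyone wants to verify what a given slot actually negotiated, `lspci` on Linux reports both the capability and the live link state per device. A quick sketch (the sample text below is made up for illustration; on real hardware you'd pipe `sudo lspci -vv` into the grep instead):

```shell
# Made-up sample of what `sudo lspci -vv` prints for one device.
# LnkCap = what the slot/device pair CAN do; LnkSta = what was negotiated.
sample='LnkCap: Port #0, Speed 16GT/s, Width x16
LnkSta: Speed 8GT/s, Width x16'

# On a real box:  sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'
echo "$sample" | grep 'LnkSta'
```

A Gen 4 slot advertises 16GT/s under LnkCap; a Gen 3 card sitting in it negotiates down to 8GT/s in LnkSta without affecting any other slot.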
I was surprised to see that several of the mboards had no rear clear-CMOS button on their backplates, and thought that was an interesting omission from the article. The article also failed to mention dual-BIOS mboards, which the GB Aorus Xtreme & Master have (pictured mechanical switches); one would hope they all might have them. Seems as if both these important features would be worth a mention...
Is there any downsides of going to PCIe 4.0? Maybe something similar to DDR3 and DDR4 memory where the bandwidth increases, but the latency goes up too?
PCIe is a serial point to point topology so each link or "lane" is independent (ignoring things like PCIe switches). This is different to the legacy PCI bus which is a shared parallel bus which would behave as you've described.
Has anyone from ASUS actually thought even for a second about the PCI-Express slot placement? Using dual GPUs, until converted truly to single-slot with water cooling, blocks most of the slots. In my case I'd need 4 or 5 slots, which leaves the ROG Zenith II Extreme from their lineup. And the ASRock Creator. As much as I hate Gigabyte, I must admit their Aorus line has sensible layouts, and MSI's are a mixed bag.
These boards are clearly not designed for Dual GPU purposes, but instead actually offer quite some space for the primary GPU (3 slots is mandatory for many high-end air cooled cards these days), and additional slots for other 1 slot cards.
I noticed you said 3 slots. I have a high-end GPU; it takes 2 slots. The 3rd slot is extremely far away from the 2nd slot and could comfortably fit a GPU. Factor in the width of an M.2 drive when looking at the pictures above and you'll realize you are mistaken: many of the boards have M.2 slots in between, and that is all the space you need for air cooling a GPU. Since most high-end hardware only takes up 2 slots, the 3rd 'slot' is actually where an M.2 drive would sit, and the real third slot is below it, leaving plenty of space for cooling-fan air circulation.
I don't know about the "blocking most of the slots" terminology. On my X399 board, only 1 slot is blocked (and technically you still could put a card in that slot; I actually had a low-profile x4 card next to my GPU without any heat issues). On many X570 boards, spacing is such that no slots are blocked. In both cases, there are single-slot GPUs, just not high-end ones. As you've stated, using a custom loop allows even high-end GPUs to use only 1 slot.
Still just a bit bummed .... that 1st/2nd Gen TRs have been left hangin'
As we roll into 2020, we gotta love where AMD is going BUT, here's hoping that Dr Su does not make the same mistakes on HEDTs that Chipzillah has been notorious in making in the past. With DDR5 on the horiZen, could sTRX4 be yet another *2 and Done* in the next 18 months?
I'm all for $800 mobos -- just as long as they don't become $50 moo-boards in January, 2021.
Based on prior experience of AMD processors, it seems more likely that they'd have to offer new boards for DDR5 support but allow the new processors to run in older boards with DDR4.
Chances are that the TRX* series of boards will end in 2021 (or 2022 at the latest), when DDR5 is expected to roll out along with possibly Zen 5 (if 2022). That being said, I have an X399 board and a 1950X. I don't see a need to upgrade yet. I may eventually pick up a 2950X next year, but I'm hanging onto this platform. It games pretty much all current games at 4k, with the majority at maximum or high details (even on a 1080ti), and it's excellent for the development and content creation workloads that I do. Don't let the listed benchmarks fool you, the 1950X is capable of much more. Running Linux brings a rather large performance increase due to better thread scheduling among other things. I have no problems running GTA V or any other games that I play, at full 4k and maximum details.
Question: is the number of phases important when it comes to performance, or when having more devices on the motherboard? If so, how many is overkill for these motherboards?
Those power delivery components are only for the CPU package, and take all their power input from the auxiliary CPU power connectors (usually 8-pin, 8+4 or 8+8-pin these days).
The rest of the motherboard gets its power through the 24-pin.
More phases typically means better performance (thermals, quality of power, power limits) from the CPU, unless the vendor cheaps out on VRMs. I'd stay away from any board offering only a single 8-pin, as that can be a sign they are using lower quality VRMs, fewer phases, etc. Contrary to popular belief, phase doublers don't really hurt anything. A few in the youtube community have tested this, both with a CPU and also with a CPU 'emulator' that plugs into the socket and measures power output.
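To put rough numbers on why more phases run cooler, here's a back-of-the-envelope sketch (the 280 W and 1.1 V figures are illustrative assumptions, not measured values for any specific board):

```python
def per_phase_current(watts: float, vcore: float, phases: int) -> float:
    """Package current split evenly across phases (idealized)."""
    return watts / vcore / phases

# Illustrative: a 280 W package at ~1.1 V Vcore
for n in (8, 16):
    print(n, "phases:", round(per_phase_current(280, 1.1, n), 1), "A each")
```

Since conduction loss in each phase scales roughly with the square of its current, halving per-phase current cuts per-phase heat by about three quarters, which is the main thermal argument for more phases (doublers included).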
The question was about "devices on the motherboard", which I assume means things other than the CPU. That's why I pointed out that the phases are irrelevant to the question.
Just to say: just "cause" the box labels it as 280W TDP, this does not automatically mean it USES 280W. I am sure Intel or NVDA (likely many, many others) will lambast the crud out of AMD for this without giving the "full story".
e.g. Intel will say "our product X is only a TDP of Y vs this massive 280W number, choose us, save the world", then when the user actually uses said "product X" they find out either (A) it is much, much slower than all the review sites list it as, and/or (B) it shoots ACTUAL power use through the roof, therefore not matching the "claims" that product X's TDP is "better" than TR gen 3's "listed" 280W TDP.
Intel and NVDA have far more often proven themselves to be "fibbing" their numbers to make the sales than AMD has, overall, over the many years I have been involved in computing (consumer or otherwise).
............
Thanks for the review overall, at least it seems the various "partners" are not being overly foolish in terms of pricing and feature set, MSI IMO even "better" than some of the others (such as ASUS)
I truly hope these turn out to be the "cat's meow" for those who can afford and use them; it helps AMD, helps their partners, and in the long run, helps us all.
In the power testing, our chips hit 280W without issues, especially the 32-core. While the definition of TDP is up for question, the CPUs seem bang on the power figures we saw.
At least one reviewer got ~285 - 295 W power consumption testing Threadripper 3rd at stock, until they realized they had memory overclocked to 3600 MT/s.
With the RAM also at stock (3200 MT/s), the power consumption ended up between 279 - 280 W, so just within the given TDP.
TDP != power consumed. TDP is thermal design power. The type of cooler itself can change the TDP formula in some cases (due to being part of the formula), and AMD, NVIDIA, and Intel all have different ways of calculating TDP.
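For what it's worth, AMD has described its TDP as thermal headroom divided by the cooler's thermal resistance rather than as a power draw, which is why the cooler appears in the formula. A rough sketch (all numbers below are illustrative assumptions, not AMD's official figures for these chips):

```python
def amd_style_tdp(t_case_max_c: float, t_ambient_c: float,
                  theta_ca: float) -> float:
    """TDP as (max case temp - ambient) / cooler thermal resistance (C/W).
    A better cooler (lower theta_ca) raises the computed figure."""
    return (t_case_max_c - t_ambient_c) / theta_ca

# Illustrative inputs: 61.8 C case limit, 42 C ambient, 0.0707 C/W cooler
print(round(amd_style_tdp(61.8, 42.0, 0.0707)))  # ~280
```

Note that actual electrical draw never enters the formula, which is exactly why "280W TDP" and "280W consumed" can coincide, as in the review's testing, without being the same quantity.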
Thanks Gavin, interesting article. Question: Your initial mentioning of the chipset says it's made on GloFo's 12 nm node, but it's 14 nm a bit later in the article. Can you clarify? Thanks!
Since the last page has a picture of the chipset saying Made in Taiwan, it's probably either TSMC or UMC... unless packaging somehow counts as "made in."
Good spotting, & there may be more to it than you think.
Dunno, but others may?
I recall reading that the exciting new IO chip on Zen 2, & the TR chipset, are ~"cut and pastes" of each other: one is made by TSMC & the other by GloFo.
Thanks for this writeup. I'm currently drawn to Gigabyte's TRX40 Designare and TRX40 Aorus Xtreme. Does the "40GB/s GC-Titan Ridge add-in card" work on any board?
Any info on bifurcation support? Gigabyte is quite clear about that and offers x4x4x4x4 for the x16 slots and x4x4 for the x8 slots. Sadly no x8x4x4 or x8x8. MSI's manual explains the BIOS option "PCIe SlotX Lanes Configuration" with the sentence "PCIe lanes configuration for MSI M.2 XPANDER series cards/ Other M.2 PCIe storage card.", which sounds like x4x4x4x4 bifurcation to me, but is quite vague. Is x8x8 or x8x4x4 supported on any board?
I can't speak to the current MSI offerings, but my x399 Gaming Carbon (off the top of my head, I don't use this feature, however) supports x4x4x4x4 and x8x8. Other modes may be possible, but I haven't looked.
Making a CPU that fits in a socket but doesn't work in it is idiotic. Especially considering the target market, did AMD really need to save a few pennies on getting Lotes to make slight modifications to their TR3 tooling?
All of them have fans. Bleh. I remember chipset fans. No thanks. X570 is a piece of shit to me for the same reason (apart from that one gigabyte board that costs way too much).
All because of a chipset fan?? That's borderline crazy. Have you even heard them? Chances are, the other fans in your case would drown it out and you wouldn't even hear it.
They fail, they're usually a weird size or fitment, and they whine. Case fans are usually much larger and have a far different (and much more pleasant) tone.
Yeah, they do fail sometimes (or used to, anyway), and it's kinda silly that they nowadays have these weird shapes because of aesthetics, making them hard to replace. Not everyone uses windowed cases.
With that said, it shouldn't be a big problem to strap a case fan on in case of failure.
I have the X570 with chipset fan and do wish they would have solved it with a beefier heatsink instead. Seems like a cost issue (in fact I think there is at least one X570 board w/o chipset fan)
It's not how 'beefy' a chipset is, but rather the size of it. PCIe 4.0 is pushing the chipset, on the current node, to its limits. A die shrink might fix this, or it might actually make the problem worse.
Chances are IF they fail, they are under warranty. If not, you can replace them. However, I've had (non-chipset) fans last for decades. I still have a fan from an old 386 system that works just fine and dandy.
My case fans are Noctua NF-S12A running at max 500rpm. CPU and GPU are watercooled with an external pump and radiator sitting a few meters away with acoustic isolation. So I'm pretty sure I would hear the chipset fans. I was expecting to shell out ~$1000 for a completely passive Gigabyte board, or even more if it had a PEX chip to use even more PCIe cards, and am very disappointed that that doesn't exist. Any suggestions for a DIY mod?
You are nuts if you think a tiny little low RPM chipset fan is bad. Chipset fans are inevitable (though a die shrink may temporarily make this go away until PCIE5), and the fact is, the fan on your PSU, GPU, or case fans, even at low levels, will drown out any noise from a chipset fan. Even if the PSU fan is off and you have water cooling, the case fans, at even 400 rpm, make more noise than the chipset fan. Note that it's not currently possible to have every fan in a system shut off on high end platforms, except the chipset fan itself might shut off. Even with an AIO, there must be some airflow for the radiator.
It's really more a matter of long-term reliability based on my past experience. If a 120mm CPU fan starts to die, get loud, burns out due to dust, or otherwise becomes damaged, it isn't an issue to replace it even 5 years from now. With a proprietary motherboard CPU/heatsink, we are at the mercy of the vendor's long-term support.
"the TRX40 chipset, and offers 24 PCIe 4.0 lanes to the system. That being said, eight of those are used for the CPU-to-chipset connection, leaving 16 for ports and other devices. This is on top of the 64 PCIe 4.0 lanes for the CPU: 64 + 24 = 88 PCIe 4.0 lanes total, but the x8 link in each direction between CPU and chipset gives a usable 72 PCIe 4.0 lanes for the platform."
WHAT???
howsabout?:
The CPU gives up 8 of its 64 lanes for the chipset link; the chipset in turn offers 24 lanes, 8 of which form its side of that link, leaving 16 lanes for various configurations of additional IO, at the discretion of the mobo maker.
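The arithmetic in the quoted paragraph checks out once you subtract the x8 link on both ends of the CPU-to-chipset connection:

```python
cpu_lanes = 64       # PCIe 4.0 lanes from the Threadripper 3000 CPU
chipset_lanes = 24   # PCIe 4.0 lanes provided by the TRX40 chipset
link_width = 8       # x8 link, consumed once on each end

usable = (cpu_lanes - link_width) + (chipset_lanes - link_width)
print(usable)  # 72 usable lanes: 56 from the CPU + 16 from the chipset
```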
The last thing an audio creator wants is some MacGyvered / red-necked USB-bridge hack-job of a motherboard. In this regard, the S1220 codec models are the only ones having my attention, the ASUS TRX40-Pro in particular, since any Real content creator is going to stick their nose up at Wi-Fi. Does it have a secondary codec, though? Thank you very much for the timely post, which will hopefully prompt much discussion regarding the audio peculiarities.
Can you explain further? Why would an audio creator pay attention to the onboard audio if he will use his own audio interface? Even if it's only a cheap Focusrite Scarlett, why does the S1220 matter?
Your point is valid and I'm not ready to switch from external interfaces to an internal RME unit yet. However, the performance and quality of the peculiar on-board audio arrangement is still of great interest. Experiencing AMD's AM3 FX chipset USB implementation (See: "Silicon Errata for SB950") was rather eye opening and very helpful in understanding why running USB audio across that implementation was less than optimal. The USB arrangement of AM4 seems to be an improvement over the AM3 and AM3+. But AMD's TRX40 seems to reveal a non-satisfactory level of concern for PC audio -suggesting that AM4 might be more appropriate. This motherboard review is a great start but there are still many holes to fill in regarding this, in particular with the S1220.
Selecting an appropriate motherboard upfront before throwing thousands of dollars worth of audio software and hardware at it is critical. I did note compatibility issues between earlier AM4 systems and some Universal Audio cards and the desired RME card is around $900. So, I'm just not ready to ride the bleeding edge with these new boards but will eagerly listen to the experiences of others and cheer them along.
As a side note, I did recommend to my favored audio repair software vendor that they contact AnandTech to provide, or work out, some audio benchmarking tests or packages.
I still don't get your point. I agree that the USB implementation is important, that AMD messed that up in the past (thanks for the ref to the errata list) and that the way onboard audio works on TRX40 is maybe more error prone. But why is that different / better with the S1220? And how do you define an audio creator? I was thinking of an audio engineer, someone who does tracking/mixing/mastering/sound design. I can't imagine someone in that field would ever use onboard audio, except maybe for mobility reasons on a notebook.
I will probably use my RME Madiface XT with a StarTech USB card (PEXUSB3S44V) as I don't trust any onboard USB.
Regarding the compatibility issues, do you have links/detailed information? The only thing I found was an issue where the card wasn't detected in PCIe slots connected to the chipset. Which is a shame, but less of a problem with TRX40, as most slots are directly connected to the CPU.
I'm no expert here, but could it perhaps be because of the audio problems of some CPUs (crackling, cutting out) caused by the high latency of Ryzen and the first Threadrippers? Or perhaps power issues (delivery to PCIe ports, given the big power consumption of the new TR chips)?
Looks like the two ASRock and at least two, if not all three, of the MSI boards use LOTES sockets. I expect Foxconn to be the same trash that freaked me the f out trying to screw down the CPU cover on my X399 Designare EX (see HardOCP's Kyle having the same difficulty tightening his down, but mine seemed even worse).
Although the ASRock TRX40 Creator is classified as ATX, the last PCIe slot probably cannot take a double-width card (even though the manual on page 41 talks about installing 4 double-width GPUs in SLI). The ATX size specification says 7 expansion slots, but 4x2=8. Am I right?
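The commenter's 4x2=8 math can be sanity-checked with a tiny sketch (the slot positions below are my assumption of a typical two-slot-spaced x16 layout, not taken from the Creator's manual):

```python
ATX_BRACKET_SLOTS = 7            # ATX spec: 7 expansion-slot positions
x16_slots = [1, 3, 5, 7]         # assumed double-spaced x16 slot positions
CARD_WIDTH = 2                   # a double-width GPU covers 2 positions

# The bottom card would occupy positions 7 and 8, but position 8
# doesn't exist in the ATX bracket area.
bottom_card_end = x16_slots[-1] + CARD_WIDTH - 1
print(bottom_card_end > ATX_BRACKET_SLOTS)  # True: it overhangs the bracket
```

So under that assumed layout, the fourth double-width card does hang past the seventh slot position, matching the commenter's suspicion.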
Do you guys think that the MSI TRX40 Pro would be a good candidate for a 3960X with some light overclocking? I've had multiple really bad experiences with gigafail, and if you check Gigabyte X399 reviews on Newegg, Amazon and other places, other people did too. So Gigabyte is a hard pass for me.
Prices are going up and up with AMD, much more than anything Intel had ever priced. Zenith II is $850 and TR flagship model is expected to be at least $4000. AMD "wins again" but will AMD fans win yet?
dwade123, and look what Intel did to the prices before they released Zen. How much were you paying for QUAD core CPUs, where the performance increase over the previous gen was 10% or less? Intel can't compete with AMD in multi-thread performance; the ONLY way Intel has any performance advantage over AMD is due to clock speed, that's all...
imagine where intel cpu's would be if there was NO Zen....
dwade123: "Prices are going up and up with AMD, much more than anything Intel had ever priced." You sure about that?? Intel's top-end i9 chips were VERY expensive here, and most of their i7 line was also expensive. But then, HOW was Intel able to drop the prices like they HAD to do with the 10xxx series over the 9xxx series??? People now complain that AMD is priced too high; where were all of these people when Intel was priced just as high, if not higher??? It seems it's OK for Intel to do something, but when AMD does it, all of a sudden it's wrong and it's a crime?? Come on... Intel can't compete with AMD in almost anything now. The ONLY thing Intel has left is single-thread performance, and even that isn't by much, and it's ONLY 'cause of clock speed. It's about time AMD was able to charge what they are for some of their chips, because the performance is there. When Intel catches up, Intel will probably charge the same. dwade123, you better be complaining about Intel's prices then as well...
Although the Designare TRX40 is the only Gigabyte mobo that supports TB3 out of the box, I noticed the Aorus WiFi has a THB-C port, same as the Designare, which it uses to connect to the Titan Ridge card. Does anybody know if the Titan Ridge card works with the Aorus WiFi as well?
REF: Page 4 ASRock TRX40 Taichi, last paragraph, first sentence
"The ASRock TRX40 Taichi is the premier board for enthusiasts in its line-up with each of the four full-length PCIe 4.0 slots supporting x16 across the board"
The ASRock TRX40 Taichi only has three (3) full-length x16 slots.
@gavbon could you check if you guys have access to a block diagram for the ASRock TRX40 Taichi? Now that the CPUs are slowly becoming available and should be in stock shortly, I've been considering this board to upgrade. My use case is 2x 2080 Ti NVLink with a quad-x4 NVMe SSD AIB, so the Taichi is one of the only boards that can actually support this with its PCIe slot configuration.
I also have 2x U.2 NVMe SSDs and I'm trying to figure out if the two on-board M.2 KeyM sockets are coming from the CPU or the chipset and the ASRock manual doesn't include a block diagram.
Has anyone had an issue with the XL size of this TRX40 Designare board fitting into ATX cases? There don't seem to be too many out there, and they are all terribly bland or built for custom loops. I plan to use an AIO and would love to put it all in a Lancool 2 when they ship later this month. Any case recommendations here?
There will definitely be compatibility issues with the length of it. Most cases designed for E-ATX should be ok for the width. I have an Enthoo Evolv X case that I would absolutely recommend, however, the TRX40 Designare board definitely wouldn't fit as I have an SSI-CEB spec'd board and it is a sliver away from the bottom case shroud. Based on the dimensions and spec of the Lancool 2 I'd say you'd have the same issue with the TRX40 Designare fitting in that case, e.g. it won't "vertically" fit. Something like the older HAF-X case would fit it
Will any of these TRX40 motherboards permit bifurcation of one of the gen4x16 slots into gen4 x8x8? Based on current motherboard users guides, some allow gen4x16 -> gen4 x4x4x4x4 but none seem to do gen4 x8x8 (unlike the Aorus X570 for example). Thanks for any pointers.
Arsenica - Thursday, November 28, 2019 - link
Something funny about the Gigabyte TRX40 Designare is that they go out their way to not include Thunderbolt branding for the bundled card. They only call it "a 40GB/s GC-Titan Ridge add-in card which allows you to take advantage of exceptionally fast transfer speeds!"YB1064 - Thursday, November 28, 2019 - link
$800 for a motherboard? I don't think any number of Xtreme XXX in the name justifies such a ridiculous price tag.colonelclaw - Thursday, November 28, 2019 - link
Does the lack of Thunderbolt 3 on 11 of the 12 point to it still being too expensive to manufacture? Or something else? Seems odd to me that 8 out of 12 boards has ethernet > 1G, but only a single board has TB3. Doesn't seem very HEDT!gavbon - Thursday, November 28, 2019 - link
Not to mention the single option is via an add-on card. I will reach out and see what I can find outSmell This - Thursday, November 28, 2019 - link
Could TB3 be spec'd-out?
I mean, at 12v/60w (max TB3?) asking too much for cabling/hardware in the ever-ending quest for speed/bandwidth in exchange for heat?
Is the add-on proprietary to AsRock?
eek2121 - Friday, November 29, 2019 - link
Disclaimer, this is going solely off memory and is based off stuff I read somewhere. IIRC The Macbook Pro has 4 thunderbolt 3 ports. More than likely, it's because Intel provides TB3 on the CPU separate from PCIE lanes, whereas AMD only has dedicated PCIE lanes. This means that TB3 uses PCIE lanes on AMD systems.phildj - Sunday, December 8, 2019 - link
The MacBooks Pro (and the 2018 Mac Mini) all run 2 Alpine Ridge (or whatever) controllers off 2x x4 PCIe lanes. The 15/16” version connects to the DGPU using only x8.Digispa - Thursday, November 28, 2019 - link
Thunderbolt, regardless of version number is owned by Intel. I would think that board manufacturers probably don't have to pay a license fee to add it to Intel boards but have to pay a fee for AMD boards they design and sell. It is most likely a cost issue versus a compatible spec issue.eek2121 - Friday, November 29, 2019 - link
Untrue, TB3 has been open sourced. It will be a part of the USB 4.0 standard.dotes12 - Saturday, November 30, 2019 - link
Is it actually going to be called USB 4.0? They were really getting on a roll with USB 3.2 Gen 2×2 SuperSpeed+.amb9800 - Saturday, November 30, 2019 - link
TB3 has not been open sourced. It's been royalty-free from the start, but any TB3 device still needs to be certified by Intel. Thus far the only TB3 devices that exist integrate Intel TB3 controllers, and very few non-Intel platforms have integrated TB3 (basically just a couple of X570 ASRock boards).Chaitanya - Friday, November 29, 2019 - link
In order to integrate Thuberbolt, Intel needs access to microcode which is why very few boards even on AM4 come with it and even those solutions are iffy at best.eek2121 - Friday, November 29, 2019 - link
Untrue, TB has been open sourced and will be a part of the USB 4.0 standard. The real answer is likely one I provided earlier: Intel CPUs have dedicated bandwidth for TB3, AMD CPUs hang it off the PCIE bus.amb9800 - Saturday, November 30, 2019 - link
TB3 being incorporated into USB 4.0 definitely does not mean it has been "open sourced." Every TB3 device must still be certified by Intel.ender8282 - Saturday, November 30, 2019 - link
I love the TB3 port on my laptop and docking station. It's way convenient. Honestly though I've never understood the use case on a desktop. If you've got an ATX motherboard and a decent sized case what need does it really solve?TechKnowbabble - Friday, December 20, 2019 - link
According to this video the GIGABYTE TRX40 AORUS XTREME has a Thunderbolt 3 header called THB_C, but on the site the only mention to this i can find is a "GIGABYTE add-in card connector" which the AORUS Master and Wifi Pro have mention of also. I dont know why it is listed differently from the Designare or not mentioned in this article but it appears that all the Gigabyte TRX40 boards support thunderbolt 3 with add in card.https://www.youtube.com/watch?v=o21xINJF1tE&fe...
NelsonK - Saturday, January 18, 2020 - link
It might as well be -BetaMax-. Thunderbolt is Intel's baby, and you gotta dance to their tune to get the engineering specs -- Intel doesn't publish 'em. Only well-resourced (i.e., volume) manufacturers can feasibly spend to design and incorporate it, then produce to a scale that justifies the investment. Sure, that's not precisely a licensing fee, but it's one heckuva barrier to entry.These firms can all afford it, but, since VHS (USB) is good enough, why bother? USB "3.2" is pretty darn close and even uses the same Type-C port. In fact, you can even play your VHS tapes on this BetaMax -- USB devices will run at their native speeds when connected to Thunderbolt.
And with USB 4, there will be no difference in speed. Is there even a practical difference in speed now? Do ya really need more than 10 Gbps? A few of you might, but not enough to pay the piper.
This is a no-brainer for the board makers: USB 3.1 Gen 2 ("3.2") Type-C offers a lot more speed than most devices can hope to keep up with internally. In the instances where somebody wants to daisy-chain video, they're either mining (which just needs the chain, not so much the speed), or they're using a laptop and don't have space for a video card. Well, these are mainboards, folks. You've got a bunch of fat-pipe PCIe 4.0 16-lane slots that your graphics cards won't even make full use of 99.99% of the time they're running, as they throttle down to 2.0 or 1.0.
BetaMax was better, but it died even before S-VHS was a real thing. Thunderbolt just got similarly voted down (massively) by pretty much all of the big-name manufacturers users trust enough and -might- have paid extra to get a board that has it.
Looks like we're goin' with VHS once again, boys and girls... ;-)
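The "do ya really need more than 10 Gbps?" question above can be put in rough numbers. A quick back-of-envelope sketch (line rates only, ignoring protocol overhead and device limits, so treat these as best-case floors):

```python
# Line-rate-only transfer times for a 100 GB folder; real-world numbers
# will be worse for all three interfaces, but the ratios hold.
SIZE_GB = 100

for name, gbps in [("USB 3.2 Gen 2", 10), ("USB 3.2 Gen 2x2", 20), ("Thunderbolt 3", 40)]:
    seconds = SIZE_GB * 8 / gbps   # gigabytes -> gigabits, over line rate
    print(f"{name}: {seconds:.0f} s")
```

At these rates the gap is about a minute per 100 GB moved, which is arguably only felt by people shuttling large video projects to external SSDs.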
wilsonkf - Thursday, November 28, 2019 - link
Check your last page. Do you really mean "ASUS X570" Product Stack? Also other brands...
gavbon - Thursday, November 28, 2019 - link
Good spot Wilson, I really appreciate it. I've been neck-deep in X570; I must have been in AM4 mode!
tamalero - Thursday, November 28, 2019 - link
hey Anandtech, any chance you could build a full comparison table of the number of ports, PCIe slots, WiFi, Ethernet, etc.?
dan82 - Thursday, November 28, 2019 - link
I wish those boards had more Type-C ports and dropped some of those A ports. A Type-C port can easily be turned into an A, but vice versa is against the spec. Also, serious question: what is the reason to keep A 2.0 ports around? Are there any devices that don’t work on modern ports?
jeremyshaw - Thursday, November 28, 2019 - link
Probably much easier to route one old and slow data pair vs 1-4 high speed data pairs. I have many devices that will likely never need more than USB 2.0 - my mouse and KB included. USB microphone as well. The best external device I've got that benefits from USB3 speeds is my Bluray burner, and we all know how popular those are. External USB flash drives are usually limited by the cheap NAND inside, and most of my external storage is on my network.
For others, I suppose USB capture cards? Really decent USB 3.0 flash drives? Even if I connected my phone to my PC, it's still limited to USB 2.0. Maybe a decent external card reader? These boards reviewed here are all ATX, so I'll rule out USB NICs. I've got to be missing something in my list.
eek2121 - Friday, November 29, 2019 - link
There is still the odd device that doesn't work on USB 3.0. Also, the last 2 machines I've built did not have fully functioning USB 3.0/3.1 ports in Linux, indicating lack of driver support for operating systems other than Windows. In short: USB 3.x is still a WIP despite being out on the market for quite a while.
Llawehtdliub - Saturday, November 30, 2019 - link
Plz no. Don't do an Apple. Just because you can't think of a reason to use them doesn't mean others can't. There is a reason to leave them.
dotes12 - Saturday, November 30, 2019 - link
Bring back PS/2 ports too? /s
asmian - Thursday, November 28, 2019 - link
I'm a little confused by comments on the X570 boards that will probably apply to these also. With these new PCIe4 slots (and M.2 slots), is it the case that they are all completely independent and you can mix/match PCIe2/PCIe3/PCIe4 cards/drives freely at each one's maximum possible negotiated link speed? Or will putting (say) a PCIe2 RAID controller in any slot reduce all slots to the lowest common denominator, PCIe2 speed?
voicequal - Thursday, November 28, 2019 - link
PCIe lanes are wired directly to the PCIe controller on either the CPU or chipset, so link speeds between slots are independent. PCIe 4.0 is backward compatible with previous generations 3.0, 2.0 & 1.0, so running a PCIe 2.0 card in a slot capable of 4.0 will run at 2.0 speeds and not affect adjacent lanes on other slots. Some motherboards allow you to reduce the maximum speed of PCIe lanes from 4.0 to something lower -- this can help to troubleshoot signal integrity issues. This setting sometimes does affect lanes across multiple slots. But as long as you leave it on Auto, the lanes will run at the highest compatible speed between card and controller.
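The per-slot independence described here can be sketched as a toy negotiation model. The function name and the approximate post-encoding per-lane throughput figures below are my illustration, not a real API:

```python
# Toy model of PCIe link negotiation: each link independently settles on
# the highest generation both ends support, so a slow card never drags
# down a neighboring slot.
GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}  # approx GB/s per lane

def negotiate(slot_gen: int, card_gen: int, lanes: int) -> tuple[int, float]:
    """Return (negotiated generation, usable bandwidth in GB/s)."""
    gen = min(slot_gen, card_gen)   # highest mutually supported generation
    return gen, GBPS_PER_LANE[gen] * lanes

# A PCIe 2.0 RAID card in a Gen 4 x8 slot runs at Gen 2 speeds...
print(negotiate(4, 2, 8))   # (2, 4.0)
# ...while a Gen 4 SSD in another slot is unaffected:
print(negotiate(4, 4, 4))   # (4, 7.876)
```

The key point is the `min()`: negotiation is per link, so nothing in one slot's result feeds into another's.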
WaltC - Friday, November 29, 2019 - link
Yes, in most cases the slots auto-configure to the device connected. Gen 3 devices should happily coexist with Gen 4 devices, with each running at spec. In the case of the GPU, you can run it at Gen 3 if you prefer, even if it is a Gen 4 GPU natively -- there's a separate switch for that in the BIOS, but the slots auto-configure for other devices and the GPU BIOS switch doesn't affect any other slots. I was surprised to see that several of the mboards had no rear clear-CMOS button on their backplates, and thought that was an interesting omission from the article -- and the article also failed to mention dual-BIOS mboards -- which the GB Aorus Xtreme & Master have (pictured mechanical switches) -- one would hope they all might have them. Seems as if both these important features would be worth a mention...
dotes12 - Saturday, November 30, 2019 - link
Is there any downside to going to PCIe 4.0? Maybe something similar to DDR3 and DDR4 memory, where the bandwidth increases but the latency goes up too?
PopinFRESH007 - Sunday, December 29, 2019 - link
Not really, no.
PopinFRESH007 - Sunday, December 29, 2019 - link
PCIe is a serial point-to-point topology, so each link or "lane" is independent (ignoring things like PCIe switches). This is different from the legacy PCI bus, which is a shared parallel bus that would behave as you've described.
Dionysos1234 - Thursday, November 28, 2019 - link
Any information on what memory is supported? ECC?
Llawehtdliub - Saturday, November 30, 2019 - link
Yes, ECC is supported.
Vatharian - Thursday, November 28, 2019 - link
Has anyone from ASUS actually thought even for a second about the PCI-Express slot placement? Using dual GPUs, until converted truly to single-slot with water cooling, blocks most of the slots. In my case I'd need 4 or 5 slots, which leaves the ROG Zenith II Extreme from their lineup. And the ASRock Creator. As much as I hate Gigabyte, I must admit their Aorus line has sensible layouts, and MSI's are a mixed bag.
nevcairiel - Thursday, November 28, 2019 - link
These boards are clearly not designed for dual-GPU purposes, but instead actually offer quite a bit of space for the primary GPU (3 slots is mandatory for many high-end air-cooled cards these days), and additional slots for other 1-slot cards.
eek2121 - Friday, November 29, 2019 - link
Nearly every board I looked at in the article has spacing for multiple GPUs.
eek2121 - Friday, November 29, 2019 - link
I noticed you said 3 slots. I have a high-end GPU; it takes 2 slots. The 3rd slot is extremely far away from the 2nd slot and could comfortably fit a GPU. Factor in the width of an M.2 drive when looking at the pictures above and you'll realize you are mistaken: many of the boards have M.2 slots in between, and that is all the space you need for air cooling a GPU. Since most high-end hardware only takes up 2 slots, the 3rd 'slot' is actually where an M.2 drive would sit, and the real third slot is below it, leaving plenty of space for cooling air circulation.
Spunjji - Friday, November 29, 2019 - link
Serious question - are dual GPUs even used these days? I know they're out for gaming, but I don't know the state of play regarding GPU compute.
Bccc1 - Friday, November 29, 2019 - link
For GPU rendering (e.g. Redshift, Octane and VRay Next) dual GPUs are quite common, and even quad GPUs can be used quite efficiently.
eek2121 - Friday, November 29, 2019 - link
I don't know about the "blocking most of the slots" terminology. On my X399 board, only 1 slot is blocked (and technically you could still put a card in that slot; I actually had a low-profile x4 card next to my GPU without any heat issues). On many X570 boards, spacing is such that no slots are blocked. In both cases, there are single-slot GPUs, just not high-end ones. As you've stated, using a custom loop allows even high-end GPUs to use only 1 slot.
Smell This - Thursday, November 28, 2019 - link
Still just a bit bummed .... that 1st/2nd Gen TRs have been left hangin'
As we roll into 2020, we gotta love where AMD is going BUT, here's hoping that Dr Su does not make the same mistakes on HEDTs that Chipzillah has been notorious in making in the past. With DDR5 on the horiZen, could sTRX4 be yet another *2 and Done* in the next 18 months?
I'm all for $800 mobos -- just as long as they don't become $50 moo-boards in January, 2021.
Spunjji - Friday, November 29, 2019 - link
Based on prior experience of AMD processors, it seems more likely that they'd have to offer new boards for DDR5 support but allow the new processors to run in older boards with DDR4.
eek2121 - Friday, November 29, 2019 - link
Chances are that the TRX* series of boards will end in 2021 (or 2022 at the latest), when DDR5 is expected to roll out along with possibly Zen 5 (if 2022). That being said, I have an X399 board and a 1950X. I don't see a need to upgrade yet. I may eventually pick up a 2950X next year, but I'm hanging onto this platform. It games pretty much all current games at 4k, with the majority at maximum or high details (even on a 1080ti), and it's excellent for the development and content creation workloads that I do. Don't let the listed benchmarks fool you, the 1950X is capable of much more. Running Linux brings a rather large performance increase due to better thread scheduling, among other things. I have no problems running GTA V or any other games that I play, at full 4k and maximum details.
Llawehtdliub - Saturday, November 30, 2019 - link
At 30fps
scineram - Wednesday, December 4, 2019 - link
300.
masmosmeaso - Thursday, November 28, 2019 - link
Question: is the amount of phases important when it comes to performance or having more devices on the motherboard? If so, how many is overkill for these motherboards?
Hul8 - Thursday, November 28, 2019 - link
Those power delivery components are only for the CPU package, and take all their power input from the auxiliary CPU power connectors (usually 8-pin, 8+4 or 8+8-pin these days). The rest of the motherboard gets its power through the 24-pin.
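On the phase-count question itself, a rough illustration of why more phases can help thermally. All numbers below are hypothetical, not from the article:

```python
# Illustrative only: the same CPU current spread over more phases means
# less current -- and less resistive (I^2 * R) heat -- per phase.
def amps_per_phase(cpu_watts: float, vcore: float, phases: int) -> float:
    return cpu_watts / vcore / phases

# A 280 W Threadripper at a hypothetical 1.0 V Vcore:
print(amps_per_phase(280, 1.0, 8))    # 35.0 A per phase
print(amps_per_phase(280, 1.0, 16))   # 17.5 A per phase
```

Since resistive losses scale with the square of current, halving the per-phase current roughly quarters the heat each phase dissipates, which is the usual argument for higher phase counts on 280 W-class boards.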
eek2121 - Friday, November 29, 2019 - link
More phases typically means better performance (thermals, quality of power, power limits) from the CPU, unless the vendor cheaps out on VRMs. I'd stay away from any board offering only a single 8-pin, as that can be a sign they are using lower quality VRMs, fewer phases, etc. Contrary to popular belief, phase doublers don't really hurt anything. A few in the youtube community have tested this, both with a CPU and also with a CPU 'emulator' that plugs into the socket and measures power output.
Hul8 - Tuesday, December 10, 2019 - link
The question was about "devices on the motherboard", which I assume means things other than the CPU. That's why I pointed out that the phases are irrelevant to the question.
Dragonstongue - Thursday, November 28, 2019 - link
Just to say: just 'cause the box is labeled as 280w TDP, this does not automatically mean it USES 280w. I am sure Intel or NVDA (likely many many others) will lambast the crud out of AMD for this, without giving the "full story"
eg. Intel will say "our product X is only a TDP of Y vs this massive 280w number, choose us, save the world", then when the user actually uses said product X, they find out either (A) it is much much slower than all review sites list it, and/or (B) it shoots ACTUAL power use through the roof, therefore not matching the "claims" of said product X's TDP being "better" than TR gen 3's "listed" 280w TDP
Intel and NVDA have far more proven themselves to be "fibbing" their numbers to make the sales than AMD has "overall", over the many years I have been involved with computing (consumer or otherwise)
............
Thanks for the review overall, at least it seems the various "partners" are not being overly foolish in terms of pricing and feature set, MSI IMO even "better" than some of the others (such as ASUS)
I truly hope these turn out to be the "cat's meow" for those who can afford and use them; it helps AMD, helps their partners, and in the long run, helps us all
(^.^)
gavbon - Thursday, November 28, 2019 - link
We tested the 3970X and 3960X in our review (https://www.anandtech.com/show/15044/the-amd-ryzen... ). In the power testing, our chips hit 280w without issues, especially the 32-core. While the definition of TDP is up for question, the CPUs seem bang on the power figures we saw
Hul8 - Thursday, November 28, 2019 - link
At least one reviewer got ~285 - 295 W power consumption testing 3rd Gen Threadripper at stock, until they realized they had memory overclocked to 3600 MT/s. With the RAM also at stock (3200 MT/s), the power consumption ended up between 279 - 280 W, so just within the given TDP.
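Worth noting how AMD frames TDP: it has been described as a thermal formula (dependent on the assumed cooler) rather than a direct power-draw promise. A hedged sketch - the constants below are my illustration, chosen only to land near a 280 W rating, not AMD's published per-SKU values:

```python
# Sketch of TDP as a thermal quantity:
#   TDP (W) = (max case temp - ambient temp) / cooler thermal resistance
# where thermal resistance is in degC per watt. Constants are hypothetical.
def tdp_watts(t_case_max: float, t_ambient: float, theta_ca: float) -> float:
    return (t_case_max - t_ambient) / theta_ca

print(round(tdp_watts(61.0, 42.0, 0.0679), 1))  # roughly 280
```

Under this framing it is unsurprising that measured package power can hover right at, or drift slightly past, the headline number depending on the cooler and settings.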
tamalero - Saturday, November 30, 2019 - link
Also, don't some motherboards (particularly ASUS and Gigabyte) do a minimal overclock by default on the "recommended settings"?
eek2121 - Friday, November 29, 2019 - link
TDP != power consumed. TDP is thermal design power. The type of cooler itself can change the TDP formula in some cases (due to being part of the formula), and AMD, NVIDIA, and Intel all have different ways of calculating TDP.
eastcoast_pete - Thursday, November 28, 2019 - link
Thanks Gavin, interesting article. Question: Your initial mention of the chipset says it's made on GloFo's 12 nm node, but it's 14 nm a bit later in the article. Can you clarify? Thanks!
jeremyshaw - Friday, November 29, 2019 - link
Since the last page has a picture of the chipset saying Made in Taiwan, it's probably either TSMC or UMC... unless packaging somehow counts as "made in."
msroadkill612 - Friday, November 29, 2019 - link
Good spotting, and there may be more to it than you think. Dunno, but others may?
I recall reading that the exciting new IO chip on Zen 2 and the TR chipset are ~"cut and pastes" of each other - one is made by TSMC and the other by GloFo.
This may be the source of the confusion?
Bccc1 - Thursday, November 28, 2019 - link
Thanks for this writeup. I'm currently drawn to Gigabyte's TRX40 Designare and TRX40 Aorus Xtreme. Does the "40GB/s GC-Titan Ridge add-in card" work on any board? Any info on bifurcation support? Gigabyte is quite clear about that and offers x4x4x4x4 for the x16 slots and x4x4 for the x8 slots. Sadly no x8x4x4 or x8x8. MSI's manual explains the BIOS option "PCIe SlotX Lanes Configuration" with the sentence "PCIe lanes configuration for MSI M.2 XPANDER series cards/ Other M.2 PCIe
storage card." which sounds like x4x4x4x4 bifurcation to me, but is quite vague.
Is x8x8 and x8x4x4 supported on any board?
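For reference, the bifurcation labels being compared are just different ways of splitting a x16 slot's 16 lanes. A small sketch of what each mode means (whether a given board's firmware exposes each one is exactly the open question above):

```python
# Each bifurcation mode is a partition of the slot's 16 lanes; the label
# reads off the width given to each downstream device.
SPLITS = {
    "x16":      [16],          # single device, full width
    "x8x8":     [8, 8],        # e.g. two GPUs or HBAs on a riser
    "x8x4x4":   [8, 4, 4],     # one x8 card plus two NVMe drives
    "x4x4x4x4": [4, 4, 4, 4],  # quad-NVMe expander cards
}

for name, widths in SPLITS.items():
    assert sum(widths) == 16, name   # every split must account for all 16 lanes
    print(f"{name}: {len(widths)} device(s)")
```

This also shows why x8x8 matters for the quad-NVMe-plus-GPU builds mentioned later in the thread: it is the only split that keeps a x8 device at full width while freeing the rest of the slot.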
msroadkill612 - Friday, November 29, 2019 - link
Bifurcation obfuscation?
eek2121 - Friday, November 29, 2019 - link
I can't speak to the current MSI offerings, but my X399 Gaming Carbon (off the top of my head; I don't use this feature, however) supports x4x4x4x4 and x8x8. Other modes may be possible, but I haven't looked.
The_Assimilator - Thursday, November 28, 2019 - link
Making a CPU that fits in a socket but doesn't work in it is idiotic. Especially considering the target market, did AMD really need to save a few pennies on getting Lotes to make slight modifications to their TR3 tooling?
Spunjji - Friday, November 29, 2019 - link
"Especially considering their target market"System integrators, enthusiasts and experts?
yetanotherhuman - Friday, November 29, 2019 - link
All of them have fans. Bleh. I remember chipset fans. No thanks. X570 is a piece of shit to me for the same reason (apart from that one Gigabyte board that costs way too much).
jeremyshaw - Friday, November 29, 2019 - link
Also one ASRock board which costs even more!
Korguz - Friday, November 29, 2019 - link
All because of a chipset fan?? That's borderline crazy. Have you even heard them? Chances are the other fans in your case would drown it out and you wouldn't even hear it.
yetanotherhuman - Friday, November 29, 2019 - link
They fail, they're usually a weird size or fitment, and they whine.. case fans are usually much larger and have a far different (and much more pleasant) tone
Korguz - Friday, November 29, 2019 - link
I have an Athlon 64 board with a fan on the chipset; it still works just fine, no issues, nothing. So the whining about these fans is unfounded.
Larch - Friday, November 29, 2019 - link
Yeah, they do fail sometimes (or used to, anyway), and it's kinda silly that they nowadays have these weird shapes because of aesthetics, making them hard to replace. Not everyone uses windowed cases. With that said, it shouldn't be a big problem to strap a case fan on in case of failure.
I have the X570 with chipset fan and do wish they would have solved it with a beefier heatsink instead. Seems like a cost issue (in fact I think there is at least one X570 board w/o chipset fan)
eek2121 - Friday, November 29, 2019 - link
It's not how 'beefy' a chipset is, but rather the size of it. PCIe 4.0 is pushing the chipset, on the current node, to its limits. A die shrink might fix this, or it might actually make the problem worse.
eek2121 - Friday, November 29, 2019 - link
Chances are IF they fail, they are under warranty. If not, you can replace them. However, I've had (non-chipset) fans last for decades. I still have a fan from an old 386 system that works just fine and dandy.
Bccc1 - Friday, November 29, 2019 - link
My case fans are Noctua NF-S12A running at max 500rpm. CPU and GPU are watercooled with an external pump and radiator sitting a few meters away with acoustic isolation. So I'm pretty sure I would hear the chipset fans. I was expecting to shell out ~$1000 for a completely passive Gigabyte board, or even more if it had a PEX chip to use even more PCIe cards, and am very disappointed that that doesn't exist. Any suggestions for a DIY mod?
eek2121 - Friday, November 29, 2019 - link
You are nuts if you think a tiny little low RPM chipset fan is bad. Chipset fans are inevitable (though a die shrink may temporarily make this go away until PCIE5), and the fact is, the fan on your PSU, GPU, or case fans, even at low levels, will drown out any noise from a chipset fan. Even if the PSU fan is off and you have water cooling, the case fans, at even 400 rpm, make more noise than the chipset fan. Note that it's not currently possible to have every fan in a system shut off on high end platforms, except the chipset fan itself might shut off. Even with an AIO, there must be some airflow for the radiator.
Sivar - Monday, December 2, 2019 - link
It's really more a matter of long-term reliability, based on my past experience. If a 120mm CPU fan starts to die, gets loud, burns out due to dust, or otherwise becomes damaged, it isn't an issue to replace it even 5 years from now. With a proprietary motherboard fan/heatsink, we are at the mercy of the vendor's long-term support.
realbabilu - Friday, November 29, 2019 - link
Any TRX40 motherboards with IPMI? I mean, it would be a workstation or a server; a nice IPMI remote would be nice.
msroadkill612 - Friday, November 29, 2019 - link
"the TRX40 chipset, and offers 24 PCIe 4.0 lanes to the system. That being said, eight of those are used for the CPU-to-chipset connection, leaving 16 for ports and other devices. This is on top of the 64 PCIe 4.0 lanes for the CPU: 64 + 24 = 88 PCIe 4.0 lanes total, but the x8 link in each direction between CPU and chipset gives a usable 72 PCIe 4.0 lanes for the platform."WHAT???
howsabout?:
The chipset uses 8 of the 64 lanes to create (multiplex?) 24 lanes - 8 of which are used for chipset USB & SATA ports, leaving 16 lanes for various configurations of additional IO, at the discretion of the mobo maker.
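However the lanes are sourced, the review's arithmetic itself can be checked directly. A sketch of the accounting as the article states it (variable names are mine):

```python
# The review's lane budget: 64 CPU lanes + 24 chipset lanes, minus the
# x8 CPU-to-chipset uplink, which consumes 8 lanes on BOTH ends.
CPU_LANES = 64       # PCIe 4.0 lanes from the sTRX4 CPU
CHIPSET_LANES = 24   # PCIe 4.0 lanes provided by the TRX40 chipset
UPLINK = 8           # x8 link between CPU and chipset

total = CPU_LANES + CHIPSET_LANES                         # 88
usable = (CPU_LANES - UPLINK) + (CHIPSET_LANES - UPLINK)  # 72
print(total, usable)  # 88 72
```

So the article's "88 total, 72 usable" figure follows from counting the uplink against both the CPU's and the chipset's lane pools, which may be the source of the confusion.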
sailorchou - Friday, November 29, 2019 - link
As far as I know, some boards have a Type-C USB 3.2 Gen 2x2 port (20Gbps aggregated). Totally ignored?
HJay - Friday, November 29, 2019 - link
The last thing an audio creator wants is some MacGyvered / red-necked USB-bridge hack-job of a motherboard. In this regard, the S1220 codec models are the only ones having my attention - the ASUS TRX40-Pro in particular, since any Real content creator is going to stick their nose up at Wi-Fi. Does it have a secondary codec, though? Thank you very much for the timely post, which will, hopefully, prompt much discussion regarding the audio peculiarities.
HJay - Friday, November 29, 2019 - link
I suppose audio creators will want to pay close attention to which socket is better suited to their work: AM4 or TR.
Bccc1 - Friday, November 29, 2019 - link
Can you explain further? Why would an audio creator pay attention to the onboard audio if he will use his own audio interface? Even if it's only a cheap Focusrite Scarlett, why does the S1220 matter?
Llawehtdliub - Saturday, November 30, 2019 - link
Because he's young and ignorant but highly opinionated.
HJay - Saturday, November 30, 2019 - link
Life really does begin at 50.
HJay - Saturday, November 30, 2019 - link
Your point is valid and I'm not ready to switch from external interfaces to an internal RME unit yet. However, the performance and quality of the peculiar on-board audio arrangement is still of great interest. Experiencing AMD's AM3 FX chipset USB implementation (See: "Silicon Errata for SB950") was rather eye opening and very helpful in understanding why running USB audio across that implementation was less than optimal. The USB arrangement of AM4 seems to be an improvement over the AM3 and AM3+. But AMD's TRX40 seems to reveal a non-satisfactory level of concern for PC audio - suggesting that AM4 might be more appropriate. This motherboard review is a great start but there are still many holes to fill in regarding this, in particular with the S1220. Selecting an appropriate motherboard upfront before throwing thousands of dollars worth of audio software and hardware at it is critical. I did note compatibility issues between earlier AM4 systems and some Universal Audio cards and the desired RME card is around $900. So, I'm just not ready to ride the bleeding edge with these new boards but will eagerly listen to the experiences of others and cheer them along.
As a side note, I did recommend to my favored audio repair software vendor that they contact AnandTech to provide, or work out, some audio benchmarking tests or packages.
Bccc1 - Saturday, November 30, 2019 - link
I still don't get your point. I agree that the USB implementation is important, that AMD messed that up in the past (thanks for the ref to the errata list), and that the way onboard audio works on TRX40 is maybe more error-prone. But why is that different / better with the S1220? And how do you define an audio creator? I was thinking of an audio engineer, someone who does tracking/mixing/mastering/sound design. I can't imagine someone in that field would ever use onboard audio, except maybe for mobility reasons on a notebook.
I will probably use my RME Madiface XT with a StarTech USB card (PEXUSB3S44V) as I don't trust any onboard USB.
Regarding the compatibility issues, do you have links/detailed information? The only thing I found was an issue where the card wasn't detected in PCIe slots connected to the chipset. Which is a shame, but less of a problem with TRX40, as most slots are directly connected to the CPU.
tamalero - Saturday, November 30, 2019 - link
I'm no expert here, but could they perhaps say this because of the audio problems of some CPUs (crackling, cutting out) caused by the high latency of Ryzen and the first Threadrippers? Or perhaps power issues (delivery to PCIe ports, because of the big power consumption of the new TR chips)?
Dug - Saturday, November 30, 2019 - link
I think it's time to move past USB if you are a "Real content creator"
valinor89 - Friday, November 29, 2019 - link
"The TRX40 chipset is based on the 14 nm process node from Global Foundries"
"AMD leveraged GlobalFoundries 12nm to build the TRX40 chipset"
Is it 12 or 14?
tamalero - Saturday, November 30, 2019 - link
I remember that Global Foundries is 14nm while TSMC is 14+ (12nm)
gavbon - Monday, December 2, 2019 - link
I have corrected it; it is the GlobalFoundries 14 nm process. Thank you for the heads up.
scineram - Wednesday, December 4, 2019 - link
That's not what Ian said.
PopinFRESH007 - Sunday, December 29, 2019 - link
I believe what Ian was referring to is the IO chip on the CPU package, which is 12nm.
plonk420 - Friday, November 29, 2019 - link
Looks like the two ASRock and at least two, if not all three, of the MSI boards use LOTES sockets. I expect FOXCONN to be the same trash that freaked me the f out trying to screw down the CPU cover on my X399 Designare EX (see HardOCP's Kyle having the same difficulty tightening his down, but mine seemed even worse).
omasoud - Friday, November 29, 2019 - link
Although the ASRock TRX40 Creator is classified as ATX, the last PCIe slot probably cannot take a double-width card (even though the manual on page 41 talks about installing 4 double-width GPUs in SLI). The ATX size specification says 7 PCI slots, but 4x2=8. Am I right?
Llawehtdliub - Saturday, November 30, 2019 - link
Good review, thank you.
solomonshv - Saturday, November 30, 2019 - link
Do you guys think that the MSI TRX40 Pro would be a good candidate for a 3960X with some light overclocking? I've had multiple really bad experiences with gigafail, and if you check Gigabyte X399 reviews on Newegg, Amazon, and other places, other people did too. So Gigabyte is a hard pass for me.
dwade123 - Sunday, December 1, 2019 - link
Prices are going up and up with AMD, much more than anything Intel has ever priced. The Zenith II is $850 and the TR flagship model is expected to be at least $4000. AMD "wins again", but will AMD fans win yet?
Qasar - Sunday, December 1, 2019 - link
dwade123, look at what intel did to the prices before they released Zen. How much were you paying for QUAD core cpus, where the performance increase over the previous gen was 10% or less? intel can't compete with amd in multi-thread performance; the ONLY way intel has any performance advantage over amd is due to clock speed, that's all. Imagine where intel cpus would be if there was NO Zen....
scineram - Wednesday, December 4, 2019 - link
Yes.
Korguz - Wednesday, December 4, 2019 - link
dwade123: "Prices are going up and up with AMD, much more than anything Intel had ever priced" - you sure about that?? intel's top end i9 chips were VERY expensive here, and most of their i7 line was also expensive... but yet, HOW was intel able to drop the prices like they HAD to do with the 10xxx series over the 9xxx series??? People now complain that amd is priced too high; where were all of these people when intel was priced just as high, if not higher??? It seems it's ok for intel to do something, but when amd does it, all of a sudden it's wrong and it's a crime?? Come on... intel can't compete with amd in almost anything now; the ONLY thing intel has left is single-thread performance, and even that isn't by much, and it's ONLY 'cause of clock speed. It's about time amd was able to charge what they are for some of their chips, because the performance is there. When intel catches up, intel will probably charge the same. dwade123, you had better be complaining about intel's prices then as well....
prophet001 - Monday, December 2, 2019 - link
Why the heck would you use the same physical socket keying?
Korguz - Monday, December 2, 2019 - link
The socket is the same, but the pins, I think, are different.
Memo.Ray - Sunday, December 8, 2019 - link
Three tables on page 14 have headers that refer to X570 instead of TRX40.
heimo - Wednesday, December 11, 2019 - link
"passthough audio in the chipet." should read "passthrough audio in the chipset."
mzo - Friday, December 13, 2019 - link
Although the Designare TRX40 is the only Gigabyte mobo that supports TB3 out of the box, I noticed the Aorus WiFi has a THB-C port, same as the Designare, which it uses to connect to the Titan Ridge card. Does anybody know if the Titan Ridge card works with the Aorus WiFi as well?
PopinFRESH007 - Sunday, December 29, 2019 - link
REF: Page 4, ASRock TRX40 Taichi, last paragraph, first sentence:
"The ASRock TRX40 Taichi is the premier board for enthusiasts in its line-up with each of the four full-length PCIe 4.0 slots supporting x16 across the board"
The ASRock TRX40 Taichi only has three (3) full-length x16 slots.
PopinFRESH007 - Sunday, December 29, 2019 - link
@gavbon could you check if you guys have access to a block diagram for the ASRock TRX40 Taichi? Now that the CPUs are slowly becoming available and should be in-stock shortly I've been considering this board to upgrade. My use case is for 2x 2080Ti NVLINK with a quad x4 NVMe SSD AIB, so the Taichi is one of the only boards that can actually support this with its PCIe slot configuration. I also have 2x U.2 NVMe SSDs and I'm trying to figure out if the two on-board M.2 Key M sockets are coming from the CPU or the chipset, and the ASRock manual doesn't include a block diagram.
oc3ddesign - Wednesday, January 8, 2020 - link
Has anyone had an issue with the XL size of this TRX40 Designare board fitting into ATX cases? There don't seem to be too many out there, and they are all terribly bland or built for custom loops. I plan to use an AIO and would love to put it all in a Lancool 2 when they ship later this month. Any case recommendations here?
PopinFRESH007 - Saturday, January 18, 2020 - link
There will definitely be compatibility issues with the length of it. Most cases designed for E-ATX should be ok for the width. I have an Enthoo Evolv X case that I would absolutely recommend; however, the TRX40 Designare board definitely wouldn't fit, as I have an SSI-CEB spec'd board and it is a sliver away from the bottom case shroud. Based on the dimensions and spec of the Lancool 2, I'd say you'd have the same issue with the TRX40 Designare fitting in that case, i.e. it won't "vertically" fit. Something like the older HAF-X case would fit it.
aCuria - Thursday, January 30, 2020 - link
There is an error: "ASRock TRX40 Taichi ... four full-length PCIe 4.0 slots". This board only has 3 full-length PCIe 4.0 slots, not 4.
jangray - Friday, February 14, 2020 - link
Will any of these TRX40 motherboards permit bifurcation of one of the Gen4 x16 slots into Gen4 x8x8? Based on current motherboard user guides, some allow Gen4 x16 -> Gen4 x4x4x4x4, but none seem to do Gen4 x8x8 (unlike the Aorus X570, for example). Thanks for any pointers.