7 Comments
mdriftmeyer - Wednesday, November 11, 2020 - link
Any way to reduce legacy on Audio Interfaces/Video Interfaces for rack mount studio equipment would be welcomed.

mode_13h - Wednesday, November 11, 2020 - link
Sorry, but what does that have to do with anything?

And I assume you meant to say "reduce latency"? That depends a lot on where the latency is coming from.
Brane2 - Wednesday, November 11, 2020 - link
10ns is far from insignificant on PCIe 5.0. That's 320 bits' worth of delay per lane. With one retimer every couple of inches, this is becoming a futile exercise.

It looks like the whole point of PCIe is for the industry to push retimers by the shovelful.
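As a sanity check on the delay math, here is a quick back-of-the-envelope sketch (assuming the nominal per-lane transfer rates of 8/16/32 GT/s, where one unit interval equals one bit time per lane):

```python
# How many unit intervals (bit times per lane) fit into 10 ns of
# retimer latency at each PCIe generation. Per-lane rates in GT/s.
rates = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}

latency_ns = 10
for gen, gt_per_s in rates.items():
    bits = gt_per_s * latency_ns  # GT/s * ns: the 1e9 factors cancel
    print(f"{gen}: {bits} UI ({bits // 8} bytes) per lane in {latency_ns} ns")
```

At 32 GT/s this comes out to 320 UI (40 bytes) per lane, per retimer hop.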
tygrus - Wednesday, November 11, 2020 - link
Maybe they will split one PCIe 5.0 link into two or more PCIe 4.0 links to make use of the bandwidth & extend distance to extra devices: PCIe 5.0 for the closest slot & then PCIe 4.0/3.0 for the rest.

The 10ns latency looks sufficient to capture a whole packet with checksum/ECC & then retransmit it with an accurate clock & cleaner signals. Analog amps would just multiply the blur.
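For the bandwidth side of that suggestion, the arithmetic works out cleanly because each PCIe generation doubles the per-lane rate; a minimal sketch (raw transfer rates only, ignoring encoding and protocol overhead):

```python
# Per-lane transfer rates in GT/s (raw, ignoring 128b/130b overhead).
GEN3, GEN4, GEN5 = 8, 16, 32

# One Gen5 lane carries as much raw bandwidth as two Gen4 lanes or
# four Gen3 lanes, so a switch could fan a Gen5 uplink out to slower,
# longer-reach downstream links without losing aggregate bandwidth.
assert GEN5 == 2 * GEN4 == 4 * GEN3

# e.g. a hypothetical x8 Gen5 uplink feeding two x8 Gen4 slots:
uplink_gts = 8 * GEN5          # 256 GT/s aggregate
downlinks_gts = 2 * (8 * GEN4)  # also 256 GT/s aggregate
print(uplink_gts, downlinks_gts)
```

The x8 uplink example is illustrative, not a real product configuration; the trade is the same at any width.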
Jorgp2 - Thursday, November 12, 2020 - link
Wat?

azfacea - Wednesday, November 11, 2020 - link
Cables, not PCB traces, are what's needed inside servers; preferably optics, with transmitters built into the silicon.

Billy Tallis - Wednesday, November 11, 2020 - link
Yes, cables help a lot—even copper rather than optical. A server designed to provide a few dozen front drive bays running at gen5 speeds really should put OCuLink or similar connectors as close to the CPU as possible, and spread out on the backplane so that each drive (or switch) is within a few inches of its cable. But motherboard layout doesn't always allow for optimal positioning of PCIe connectors, what with power delivery and DRAM slots taking up so much prime real estate.

It'll be interesting to see what kind of system layouts are used in PCIe gen5 OCP servers, where all the storage and NICs are in front EDSFF slots, and no traditional PCIe add-in card slots in the rear.