Final Words

We can't wait to get our hands on a board. Now that NVIDIA has announced this type of I/O scaling over HyperTransport connections, we wonder why the industry hasn't been pushing it all along. It seems rather obvious in hindsight that putting the extra HT links on Opteron processors to work would be advantageous and relatively simple, especially when all the core logic fits on a single chip. Kudos to NVIDIA for bringing the 2200 and 2050 combination to market.

Though much of the nForce Professional series is very similar to the nForce4, NVIDIA has likely made good use of those two million extra transistors. We can't be sure exactly what went in there - the TCP/IP offload engine is a likely candidate, along with some server-level error reporting. Aside from that, the nForce Pro is essentially the same as the nForce4.

The flexibility that the nForce Pro 2050 MCP offers vendors is enormous. We've already seen what everyone has tried with the nForce4 Ultra and SLI chipsets, and now that there's a part designed specifically for scalability and multiple configurations, we are sure to see some ingenious designs spring forth.

NVIDIA mentioned that many of its partners wanted a launch in December, and also told us that IWill and Tyan are already shipping boards, though we aren't sure how widespread availability is yet; we will follow up with both companies on that. As far as we are concerned, the faster NVIDIA can get nForce Professional out the door, the better.

The last thing to look at is how the new NVIDIA solution compares to its competition from Intel. Here's a handy comparison of what you can get in terms of I/O from NVIDIA and from Intel on server and workstation boards.


Server/Workstation Platform Comparison

                        NVIDIA nForce Pro (single)   NVIDIA nForce Pro (quad)   Intel E7525/E7520
PCI Express lanes       20                           80                         24
SATA                    4x SATA II                   16x SATA II                2x SATA 1.0
Gigabit Ethernet MACs   1                            4                          1
USB 2.0 ports           10                           10                         4
PCI-X support           No                           No                         Yes
Memory type             DDR                          DDR                        DDR2

Opteron boards based on nForce Pro can still offer PCI-X when paired with the appropriate AMD-8000 series chips, but NVIDIA didn't build PCI-X support into the MCPs themselves. It's clear how far beyond Lindenhurst and Tumwater (the E7520 and E7525) the nForce Pro will scale in dual and quad Opteron solutions. Even in a single MCP configuration, NVIDIA has a lot of flexibility thanks to its configurable PCI Express controllers, while Intel's solutions are locked into either one x16 slot plus one x8 (E7525) or three x8 connections (E7520). Each x8 connection to the MCH can instead run two physical devices (up to 2 x4), and if the motherboard vendor adds Intel's additional hub chip for more PCI-X slots, either four or eight of those PCI Express lanes go away.
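To put that flexibility in concrete terms, here is a minimal sketch of the kind of lane budgeting a board vendor can do with a single nForce Pro MCP compared to Intel's fixed layouts. It assumes each MCP provides 20 PCIe lanes (per the table above) split across four physical controllers (our assumption); the fits_single_mcp helper and the example layouts are purely illustrative.

```python
# Minimal sketch of PCIe lane budgeting on a single nForce Pro MCP.
# The 20-lane / 4-controller budget is assumed from the discussion above;
# the helper and the example layouts below are purely illustrative.

def fits_single_mcp(slot_widths, lanes=20, controllers=4):
    """True if the requested slot layout fits one MCP's budget."""
    return sum(slot_widths) <= lanes and len(slot_widths) <= controllers

# Intel's MCH layouts are fixed rather than configurable:
E7525_LAYOUT = [16, 8]      # 1 x16 (graphics) + 1 x8
E7520_LAYOUT = [8, 8, 8]    # 3 x8, each splittable into 2 x4

if __name__ == "__main__":
    print(fits_single_mcp([16, 4]))          # True:  20 lanes, 2 controllers
    print(fits_single_mcp([8, 8, 2, 2]))     # True:  20 lanes, 4 controllers
    print(fits_single_mcp([16, 4, 1]))       # False: 21 lanes over budget
    print(fits_single_mcp([4, 4, 4, 4, 4]))  # False: 5 devices, only 4 controllers
```

The point is simply that any mix of widths works as long as it stays within the lane and controller budget, whereas Intel's MCH offers only the fixed arrangements listed above.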

Unfortunately, there isn't a whole lot more that we can say until we get our hands on hardware for testing. Professional series products can take longer to reach our lab, so it may be some time before we can get a review out, but we will do our best to secure product as soon as possible. Of course, these boards will cost a lot, and the more exciting the board, the less affordable it will be, but that won't stop us from reviewing them. On paper, this is definitely one of the most intriguing advancements that we've seen in AMD-centered core logic, and it could be one of the best things ever to happen to high-end AMD servers.

On the workstation side, we are very interested in testing a full 2 x16 PCI Express SLI setup, as well as the multiple display possibilities of such a system. It's an exciting time for the AMD workstation market, and we're really looking forward to getting our hands on systems.


Comments

  • smn198 - Friday, January 28, 2005 - link

    It does do RAID-5!
    http://www.nvidia.com/object/IO_18137.html

    w00t!
  • smn198 - Friday, January 28, 2005 - link

    #18
    It can do RAID-5 according to http://www.legitreviews.com/article.php?aid=152

    Near bottom of page:
    "Update: NVIDIA contacted us to let us know that RAID 5 is also supported on the 2200 and 2050. They also didn't hesitate to point out that when the 2200 is matched with three 2050's, the RAID array can be spanned across 16 drives!"

    However, NVIDIA's site does not mention it! http://www.nvidia.com/object/feature_raid.html

    I wonder. Would be nice!
  • DerekWilson - Friday, January 28, 2005 - link

    #50,

    each lane in PCIe consists of a serial uplink and downlink. This means that x16 actually has 4GB/s up and down at the same time (thus the 8GB/s number everyone always quotes). Saying 8GB/s of bandwidth without saying 4 up and 4 down is a little misleading, because all of that bandwidth can't move in one direction when needed.

    #53,

    4x SATA 3Gb/s -> 12Gb/s -> 1.5GB/s, + 2x GbE -> 0.25GB/s, + USB 2.0 ~-> 0.5GB/s = 2.25GB/s ... so this is really manageable bandwidth, especially since it's unlikely for all of this to be moving while all 5GB/s up and down of the 20 PCIe lanes are moving at the same time.

    It's more likely that we'll see video cards leaving around 30% of their PCI Express bandwidth nearly idle (as, again, the upstream direction is often not used). Unless we're talking 2 x16 SLI ... we're still not quite sure how much bandwidth that will use over the top and through the PCIe bus, but one card is definitely going to send data back upstream.

    Each MCP has a 16x16 HT link @ 1GHz to the system... Bandwidth is 8GB/s (4 up and 4 down) ...
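For readers who want to sanity-check those figures, here is the same back-of-the-envelope math as a short script. The per-interface peaks are the rough numbers quoted in the comment above; the variable names are just for illustration, and real-world throughput will be well below these theoretical figures.

```python
# Back-of-the-envelope tally of the peak I/O feeding one MCP, using the
# rough per-interface numbers from the comment above.

sata = 4 * 3 / 8        # 4 ports  * 3 Gb/s   = 1.5  GB/s
gbe  = 2 * 1 / 8        # 2 MACs   * 1 Gb/s   = 0.25 GB/s
usb  = 0.5              # 10 ports * 480 Mb/s ~= 0.5 GB/s (rounded, as above)
pcie = 20 * 0.25        # 20 lanes * 250 MB/s = 5.0  GB/s per direction

ht_link = 2 * 2 * 2     # 2 bytes * 2 GT/s * 2 directions = 8 GB/s (16-bit link @ 1GHz DDR)

print(f"SATA + GbE + USB : {sata + gbe + usb:.2f} GB/s")  # 2.25 GB/s
print(f"PCIe, one way    : {pcie:.2f} GB/s")
print(f"HT link, total   : {ht_link:.2f} GB/s")
```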
  • guyr - Thursday, January 27, 2005 - link

    Can anyone explain how these MCPs work regarding throughput? What kind of clock rate do they have? 4 SATA II drives alone is 12 Gbps. Add 2 GigE and that is 14. Throw in 8 USB 2.0 and that's almost an additional 4 Gbps. So if you add everything up, it looks to be over 20 Gbps! Oops, sorry, forgot about 20 lanes of PCIe. Anyway, has anyone identified a realistic throughput that can be expected? These specs are wonderful, but if the chip can only pass 100 MB/s, it doesn't mean anything.
  • jeromechiu - Thursday, January 27, 2005 - link

    #12, if you have a gigabit switch that supports port trunking, then you could use BOTH of the gigabit ports for faster intranet file-transfer. Hell! Perhaps you could add another two 4-port gigabit adaptors and give your PC a sort-of-10Gbps connection to the switch! ;)
  • philpoe - Wednesday, January 26, 2005 - link

    Being a newbie to PCI-E, if I read a PCI-Express FAQ correctly, aren't the x16 slots in use for graphics cards today 1 way only? Too bad the lanes can't be combined, or you could get to a 1-way x32 slot (apparently in the PCI-E spec). In any case, 4 x8 full duplex cards would be just the ticket for Infiniband (making all that Gbe worthless?) and 4 x2 slots for good measure :). Just think of 16x SATA-300 drives attached and RAID. Talk about a throughput monster.
    Imagine Sun, with the corporate-credible Solaris OS selling such a machine.
  • DerekWilson - Tuesday, January 25, 2005 - link

    #32 henry, and anyone who saw my wrong math :-)

    You were right in your setup even though you only mentioned hooking up 4 x1 lanes -- 2 more could have been connected. Oops. I've corrected the article to reflect a configuration that actually can't be done (for real this time, I promise). Check my math again to be sure:

    1 x16, 2 x4, 6 x1

    that's 9 slots with only 8 physical connections, still with 10 lanes left over. In the extreme, I could have said that you can't do 9 x1 connections on one board, but I wanted to maintain some semblance of reality.

    Again, it looks like the nForce Pro is able to throw out a good deal of firepower ....
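To make the counting in that example explicit, here is a quick check - our arithmetic, assuming a hypothetical 2200 + 2050 board where each MCP contributes 20 lanes and four physical PCIe connections.

```python
# Quick check of the layout above on a hypothetical 2200 + 2050 board
# (2 MCPs, each assumed to provide 20 lanes and 4 physical PCIe connections).

slots = [16, 4, 4, 1, 1, 1, 1, 1, 1]   # 1 x16, 2 x4, 6 x1

lanes_available       = 2 * 20   # 40
connections_available = 2 * 4    # 8

print(len(slots), "slots vs", connections_available, "physical connections")  # 9 vs 8
print(lanes_available - sum(slots), "lanes left over")                        # 40 - 30 = 10
```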
  • ceefka - Tuesday, January 25, 2005 - link

    Plus I can't wait to see a rig like this doing benchies :-)
  • ceefka - Tuesday, January 25, 2005 - link

    In one word: amazing!

    Some of this logic eludes me, however.

    There's no board that can fully exploit the theoretical connectivity of a 4-way Opteron config with these chipsets?
  • SunLord - Tuesday, January 25, 2005 - link

    I'd pay up to $450 for a dual cpu/chipset board as long as it gave me 2 x16, 1 x4, and 1-3 x1 connectors... as I see no use for PCI-X when PCI-E cards are coming out... Would make for one hell of a workstation to replace my aging Athlon MP on a Tyan Thunder K7 Pro board. Even if the onboard RAID doesn't do RAID 5, I can use the x4 slot for a SATA II RAID card with little to no impact! Though 2 gigabit ports is kinda overkill. mmm 8x74GB(136GB) Raptor RAID 0/1 and 12x500GB(6TB) RAID 5 3Ware/AMCC controller.

    I can dream, can't I? No clue what I would do with that much disk space though... and still have enough room for 4 DVD±RW dual layer burners hehe
