Power Consumption

AMD did list a slight increase in power consumption for the 5870 Eyefinity 6 cards. In real-world usage it amounts to a 6 - 7W increase at both idle and under load. Hardly anything to be too concerned about.

It's worth mentioning that these power numbers were obtained in a benchmark that showed no real advantage to the extra 1GB of frame buffer. It is possible that under a more memory-intensive workload (say, for example, driving six displays) the 5870 E6 would draw much more power than a hypothetical 6-display 1GB 5870.

Power Consumption Comparison
Total System Power        Radeon HD 5870 1GB    Radeon HD 5870 E6 2GB
Idle                      179.1W                186.0W
Load (Crysis Warhead)     290.0W                296.0W

If you are power conscious, however, then an Eyefinity 6 setup may not be right for you. Our six 22" Dell displays consumed 114W by themselves while playing Crysis. That's the power consumption of an entire Core i5 system under load, just for your displays!
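
The arithmetic behind those statements is simple enough. A minimal sketch (values from the table above; the per-display figure assumes the 114W is split evenly across the six panels):

    # Measured total system power, in watts (from the table above)
    idle = {"5870 1GB": 179.1, "5870 E6 2GB": 186.0}
    load = {"5870 1GB": 290.0, "5870 E6 2GB": 296.0}

    # Delta attributable to the E6's extra 1GB of GDDR5 and sixth output
    print(f"Idle delta: {idle['5870 E6 2GB'] - idle['5870 1GB']:.1f}W")  # 6.9W
    print(f"Load delta: {load['5870 E6 2GB'] - load['5870 1GB']:.1f}W")  # 6.0W

    # Six 22" panels drew 114W combined while gaming; assume an even split
    print(f"Per display: {114 / 6:.0f}W")  # 19W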

Final Words

I spoke with Carrell Killebrew a few days ago (no, not for the RV970 story) and our conversation drifted over to future applications for GPUs. When Carrell first introduced me to Eyefinity he told me that this was the first step towards enabling a Holodeck-like environment in about 6 years. 

Carrell envisions a world where, when you want to watch a football game with your friends or just hang out and play video games, you'll do so in a virtual room inside your home. You'll have a display occupying the majority, if not all, of your vision. On it will be fully rendered, lifelike models of your friends, whom you can interact with in real time. After all, sending model data requires far less bandwidth than exchanging high resolution encoded video between a dozen people in a room.

Sound will have to be calculated on a per-person basis. Existing surround sound setups work well for a single user, but not for multiple people spread out across a virtual room. The GPU will have the task of not only rendering the characters in the room, but also calculating phase- and position-accurate sound for everyone.

Today we play games like Rock Band or Guitar Hero facing a screen. In Carrell's world, 6 years from now we'll be facing a crowd of fans and it'll look, feel and sound like we're on stage performing. There's a lot that has to be solved between now and then, but in Carrell's eyes this is the beginning. And like most beginnings, this one has its rough patches.

The good news is that a single Radeon HD 5870 Eyefinity 6 Edition card can drive a total of six displays. That's something that we couldn't have imagined from a consumer card even just a couple of years ago. If you've ever found yourself wanting 6 monitors for a particular application, workload or even game - this is your solution. 

As a general gaming card however, there are definite issues. In existing titles, with 3 or fewer screens, we just didn't see a tremendous performance advantage to the 5870 E6. The larger frame buffer did help raise minimum frame rates, but not enough to positively impact the average frame rates in our tests. Even in triple display setups we didn't see any reason to get the E6 card.

If you are looking to make the jump to six displays, however, the issues stop being with the card itself and become about what you want to do with the setup. Having two 3x1 groups makes sense. It's a bit pricey, but it works if you like mixing work and pleasure on your desktop. The single 3x2 group is the problematic configuration. For games you play in the third person it's great. For first person shooters, however, playing on an Eyefinity 6 setup puts you at a disadvantage due to the crosshair problem. What AMD really needs to do here is enable a 5x1 configuration for folks serious about FPSes.
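
To make the crosshair problem concrete, here's a minimal sketch (the 1920x1080 panel resolution is an assumption for illustration). Any grid with an even number of rows puts the vertical midpoint of the combined surface on the bezel seam between rows - exactly where a first person shooter draws your crosshair:

    # Does the aim point of a rows x cols Eyefinity grid land on a bezel?
    def crosshair_on_bezel(rows: int, cols: int) -> bool:
        # An even number of rows puts the center on a horizontal seam;
        # an even number of columns puts it on a vertical seam.
        return rows % 2 == 0 or cols % 2 == 0

    for rows, cols in [(1, 3), (2, 3), (1, 5)]:  # 3x1, 3x2, 5x1
        w, h = cols * 1920, rows * 1080          # assumed 1920x1080 panels
        print(f"{cols}x{rows}: {w}x{h}, crosshair on bezel: {crosshair_on_bezel(rows, cols)}")

Only the 3x2 group fails the test, which is exactly why a 5x1 mode would sidestep the issue.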

The bigger problem is simply the state of game support for Eyefinity. Many titles, even new ones shipping today, have gross incompatibilities with Eyefinity setups. AMD is hard at work on making this better, but it means you can't plop down $1500 for six monitors and two stands, drop another $900 on a pair of video cards, and have it all work perfectly in everything you'd ever want to play. It's a tough pill to swallow.

If you want an immersive gaming experience and you've got the wall space, you're better off buying a 720p projector, building a screen (or painting one on the wall) and calling it a day. On the other hand, if you just need more desktop resolution, then a 30" monitor is probably in your future. If you must combine the two needs and have them serviced by a single setup, that's where Eyefinity 6 can offer some value.

Comments

  • BoFox - Wednesday, March 31, 2010 - link

    If I wanted huge screen real estate, I'd definitely go for a 1080p projector that can do anywhere from 100" to 20'. Of course, a top-of-the-line one would cost upwards of $10000, but a really nice one would only be a bit over $1000. Give me this over "jail" bars of bezels anytime!

    I'm a bit puzzled at why ATI is doing a 2GB version to counter the GTX 480, and not a slightly faster version. Right now is AMD/ATI's real chance to seize the bull's horns with a death grip. By all means they should release a 950-1000MHz version of the 5870, named the 5890! Even if the power consumption is 25-50W more, it would still be considerably lower than the GTX 480's, while actually pwning it in nearly all game benchmarks. Even better would be to release a 512-bit version just like they did 4 generations ago with the HD 2900XT. With up to 100% greater memory bandwidth, there would be roughly 20% more performance at 1000MHz core clock across all benchmarks, if not more.

    I say this with mercy.. if AMD does not truly seize the moment with a death grip by the horns, AMD will regret it for a long time, if not forever.



  • bunnyfubbles - Wednesday, March 31, 2010 - link

    Why not go with 3 cheaper projectors and use them with Eyefinity? One of the oft-neglected advantages of Eyefinity is that a properly supported game can actually give a player a FOV advantage - they can see more of the game world than other players without distorting the image (see the sketch below).

    This was never a counter to the GTX 480; the E6 edition card had been planned long before we knew anything concrete about Fermi. And considering the benches, it's quite obvious that 2GB is not needed for today's games. If ATI were going to introduce a counter to Fermi it would simply be a higher-clocked 5870, but even that's not necessary save for bragging rights.

    And a 512-bit memory interface is the last thing I'd expect. It's actually bizarre that you bring up the HD 2900XT as if it were something ATI should look back on for inspiration. If anything, the HD 2900XT was ATI's own GTX 480 debacle.
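
    A quick sketch of the FOV point above, assuming a game with proper Hor+ scaling (vertical FOV held constant, horizontal FOV widened with the aspect ratio) and an assumed 90-degree base FOV on a single 16:9 panel:

        import math

        def horplus_hfov(base_hfov_deg, base_aspect, new_aspect):
            # Hor+ scaling: hold the vertical FOV constant and widen the
            # horizontal FOV in proportion to the aspect ratio.
            half = math.radians(base_hfov_deg) / 2
            return math.degrees(2 * math.atan(math.tan(half) * new_aspect / base_aspect))

        single = 16 / 9      # one 1920x1080 panel
        triple = 3 * single  # a 3x1 group, 5760x1080
        print(f"{horplus_hfov(90.0, single, triple):.0f} degrees")  # ~143, vs. 90

    A 3x1 group takes a 90-degree view out to roughly 143 degrees: more of the game world, while the center panel's projection is unchanged.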
  • 1reader - Wednesday, March 31, 2010 - link

    I hadn't thought about it that way, but the 2900XT situation was very similar to nVidia's GTX 480 situation now. Like you said, it's definitely not something ATI needs to look back on for inspiration. That's why (I believe) ATI switched to GDDR5 as quickly as possible: to get as much throughput as possible out of that 256-bit memory interface.

    On the other hand though, I have a 2600XT with GDDR3 that makes a perfectly satisfying backup card. It definitely wouldn't have enough power to drive 6 displays though.

    Also, what's up with AnandTech? I don't check back for two days, and the site disappears, only to be replaced by this sexy tech website. ;)
  • brysoncg - Wednesday, March 31, 2010 - link

    Here's a thought: get a theater room with 6 hi-def projectors and set them up in an Eyefinity 6 configuration. If you spent a little bit of time on it, you could perfectly line up the edges of each projection, and you'd then have an Eyefinity 6 setup without the need for bezel correction (no bezels!), and therefore no crosshair problem. The only problem would be the cost....
  • erple2 - Friday, April 2, 2010 - link

    I think that the other problem would be the space. If a 1080p projector can comfortably drive a 100" screen, having a large enough wall for a 3x2 array of projections would become problematic, I'd think. I don't know too many people that have a 21' wide by 8' tall wall they could reasonably project onto... (quick math below)

    Plus the screen for that would be ... pricey.

    However, some cheaper 720p projectors would be an interesting proposition, particularly projecting on a smaller wall - maybe 1/2 the size? so about 11' wide by 4' tall?
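
    Those dimensions check out. A quick sanity check, assuming 16:9 projections with a 100" diagonal:

        import math

        def screen_wh_inches(diagonal, aspect=(16, 9)):
            # Width and height of a screen, from its diagonal and aspect ratio.
            ax, ay = aspect
            d = math.hypot(ax, ay)
            return diagonal * ax / d, diagonal * ay / d

        w, h = screen_wh_inches(100)                           # one 100" projection
        print(f"Single: {w / 12:.1f} x {h / 12:.1f} ft")       # ~7.3 x 4.1 ft
        print(f"3x2: {3 * w / 12:.1f} x {2 * h / 12:.1f} ft")  # ~21.8 x 8.2 ft

    Halving the diagonal to 50" per projector brings the 3x2 wall down to roughly 10.9 x 4.1 feet - right in line with the 11' x 4' guess.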
  • Xpl1c1t - Wednesday, March 31, 2010 - link

    the ring bus is definitely worth looking back upon
  • Calin - Thursday, April 1, 2010 - link

    Also, unlike the computing units (which you can mostly disable at will in a finished product), any bad transistor in that ring bus would brick the entire chip.
  • BoFox - Thursday, April 1, 2010 - link

    True.. 3 cheaper projectors with Eyefinity would be an ideal solution.. and the screen could be a bit curved, like at many cinema movie theaters today!

    On the same day Nvidia released GTX 480, AMD released this 2GB version to counter Nvidia's offering. Of course, AMD promised this 2GB version a long while ago, so it's about time. Perhaps it won't be long before AMD releases the faster 5890.

    About the 512-bit bus: it is certainly doable on a 40nm process, compared to when it was done on an 80nm process with a 1024-bit ring bus a while ago on that HD 2900XT (I will agree with you here that it was overkill for the 2900XT)..

    "Does a 512-bit bus require a die size that's going to be in the neighbourhood (or bigger) of R600 going forward?"

    "No, through multiple layers of pads, or through distributed pads or even through stacked dies, large memory bit widths are certainly possible. Certainly a certain size and a minimum number of consumers is required to enable this technology, but it's not required to have a large die."
    - Sir Eric Demers, architecture lead on R600 (still the basis of today's 5870s)
    http://www.beyond3d.com/content/interviews/39/5

    If a 4890 simply performs around 19% better overall than a 5770 in all games (DX11 aside), what shall we point at as the cause of the difference? The GPU cores are nearly identical in terms of clock speed, shaders, ROPs, etc., with perhaps slightly better optimization in the R800 architecture and better drivers. The main "obvious" difference is a 62.5% increase in memory bandwidth over the 5770 (sketch at the end of this post). A 5870 is basically two 5770s in one GPU with everything doubled, and it has been shown that a 5870 certainly does benefit from greater memory bandwidth - let's say about a 0.2% increase in performance per 1% increase in bandwidth.

    By the way, Nvidia made quite an interesting statement on the memory bus a short while ago:

    "With 3-D interconnects, it can vertically connect two much smaller die. Graphics performance depends in part on the bandwidth for uploading from a buffer to a DRAM. "If we could put the DRAM on top of the GPU, that would be wonderful," Chen said. "Instead of by-32 or by-64 bandwidth, we could increase the bandwidth to more than a thousand and load the buffer in one shot."

    Based on any defect density model, yield is a strong function of die size for a complicated manufacturing process, Chen said. A larger die normally yields much worse than the combined yield of two die with each at one-half of the large die size. "Assuming a 3-D die stacking process can yield reasonably well, the net yield and the associated cost can be a significant advantage," he said. "This is particularly true in the case of hybrid integration of different chips such as DRAM and logic, which are manufactured by very different processes.""
    http://www.semiconductor.net/article/print/438968-...

    Nvidia's own John Chen mentioned increasing the bandwidth from "by-32 or by-64" per chip to "more than a thousand". This translates to 8x1024, which is an 8192-bit bus. Hopefully vertically stacked dies are the future. They would effectively reduce the need for increasingly larger buffers and act just like embedded RAM that can instantly load the buffer in one shot.. a bit like SSDs today (small, but "instant"), and thought to be a pipe dream a few years ago.
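
    The bandwidth arithmetic behind that 62.5% figure, assuming reference memory clocks (4890: 256-bit GDDR5 at 975MHz; 5770: 128-bit GDDR5 at 1200MHz; GDDR5 transfers 4 bits per pin per clock):

        def peak_bandwidth_gbs(bus_bits, mem_clock_mhz, transfers_per_clock=4):
            # Peak memory bandwidth in GB/s: bytes per transfer x transfer rate
            return bus_bits / 8 * mem_clock_mhz * transfers_per_clock / 1000

        hd4890 = peak_bandwidth_gbs(256, 975)   # 124.8 GB/s
        hd5770 = peak_bandwidth_gbs(128, 1200)  # 76.8 GB/s
        print(f"4890 advantage: {(hd4890 / hd5770 - 1) * 100:.1f}%")  # 62.5%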
  • Ramon Zarat - Wednesday, March 31, 2010 - link

    The 2900XT used a dual 512-bit ring bus topology. The fact that neither ATI nor Nvidia uses this technology today is a hint that it was not efficient enough, or too complex to be commercially viable. In that sense, it was not a classic 512-bit wide bus as used by Nvidia's previous generation, or the 256/384/448-bit buses in use today.

    A 512-bit bus would be impossible to implement on the 5000 series simply because the memory controller is physically limited in hardware to "talking" to a 256-bit bus. You need twice the traces on the PCB to go from 256-bit to 512-bit, and those traces must be, one way or the other, physically linked to the GPU. The only way to speed up memory access on the 5000 series would be to use faster GDDR5 chips.
  • SoCalBoomer - Wednesday, March 31, 2010 - link

    Unfortunately, a 1080p projector just won't get you the pixels that this thing will.

    I use a 2x2 setup on my desk at work and it has far more pixels (at a far, FAR lower price) than a 1080p projector (which is what, 1920x1080? something like that? - I'm working with 2560x2048; math below).

    My question would be whether you can set these up as individual monitors, just extending the desktop, or if you HAVE to use Eyefinity. I'd love to be able to do this instead of running dual cards, with the limitations that brings on the motherboard...
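
    For reference, the pixel math on that 2x2 setup (2560x2048 implies four 1280x1024 panels):

        grid = (2 * 1280) * (2 * 1024)  # 2x2 grid of 1280x1024 panels: 5.24 MP
        projector = 1920 * 1080         # a single 1080p projector: 2.07 MP
        print(f"{grid / projector:.1f}x the pixels of 1080p")  # ~2.5x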
