At this point, I doubt there's much difference between vendors. Since the card is still relatively new and supply still low, I would think most vendors are sticking to the reference design.
Warranties and support aside, they're all identical.
I'd investigate the warranties offered, any trade-up programs (not that there's much to trade up to yet, but still good to know), and anything else along those lines.
I find it rather peculiar that even though Crysis Warhead runs better on single ATI GPUs (the 4890 faster than the GTX 275, the 4870 faster than the GTX 260, the 4850 almost as fast as the GTX 260), Nvidia scales well enough in SLI to take the performance lead from ATI, as shown by the 5850 CF losing to GTX 285 SLI (SLI scaling of almost 100%!). It makes one wonder what all the hype is about ATI releasing a new driver once a month if the drivers don't seem to make much of a difference in the performance department.
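For readers who want to check scaling claims like that against a review's charts, multi-GPU scaling efficiency is simply the dual-card frame rate divided by twice the single-card rate. A minimal Python sketch; the frame rates below are made-up placeholders, not figures from the review:

def sli_scaling(single_fps, dual_fps):
    # 1.0 means a perfect 2x speedup; 0.5 means the second card added nothing.
    return dual_fps / (2.0 * single_fps)

print(sli_scaling(40.0, 78.0))  # 0.975 -> ~97% scaling, i.e. near-perfect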
Sorry, the f-bomb quote was in the previous article. But it kind of adds to the point. Please, no silly fanboy comments, as I have stock in AMD and Nvidia :) (yeah, I know, probably silly to have stock in these two companies)
I'll probably get locked out of the site for saying this, but it seems like there is always an aggression toward NVIDIA, kind of like ATI gives free samples and NVIDIA does not, or not in a timely manner. I mean, why would an f-bomb be quoted on a professional site? It's not just this one statement, and not just this one article, that makes me wonder.
Otherwise, as usual, a great article with great content.
I wish you guys would benchmark Company of Heroes, World in Conflict, and Supreme Commander together so we could see the differences between the cards clearly. Today we only see reviews based on first-person shooters. I like FPS games, but my main games are all RTS... I bought the HD 4890 based on reviews, but it didn't run as well as my GTX 275 in RTS titles.
From what I've heard at other sites, the Vapor-X 4890 is significantly quieter than the ATI cooler on the 5850. That's not a knock on the 5850; it's just that the Vapor-X cooler on the 4890 is dead quiet. I love mine. It even has HDMI, VGA, DisplayPort and DVI on the back of the card!
I'm in the process of getting a new graphics card, and I'm trying to find the most silent one that doesn't give up much performance versus the top cards. The three cards I'm considering are:
This 5850 from ATI (probably Sapphire or Asus)
Sapphire HD4890 Vapor-X
MSI N275GTX Twin Frozr OC
I was looking at the exact same cards. Both the Twin Frozr GTX 275 and the Radeon 4890 Vapor-X are VERY QUIET. I would pick based on which games you play. If you have a home theater and audio passthrough is a concern, I would lean towards the ATI card. Both come pre-overclocked and both have stellar performance. The Twin Frozr is slightly longer; however, its power connectors face up (not out the back of the card like usual).
Yes. That's the noise of our test rig that we measure at idle when using an entirely passive video card; we can get our rigs down to about 36dB, but that requires removing all the fans other than the CPU fan. Bear in mind that a dead quiet room is 30dB and that we're measuring roughly 6" away from the card, so our methodology may differ from how other people do it.
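For rough cross-site comparisons, a point-source approximation says sound pressure level falls by about 6dB per doubling of microphone distance. A small Python sketch; the 43dB reading is hypothetical, and real cases and rooms are not free fields, so treat this as an estimate only:

import math

def spl_at_distance(spl_db, d_measured, d_target):
    # Free-field point source: SPL changes by 20*log10(d_measured/d_target) dB.
    return spl_db + 20.0 * math.log10(d_measured / d_target)

# A hypothetical 43 dB reading taken 6" from the card, estimated at 1 m (~39.4"):
print(round(spl_at_distance(43.0, 6.0, 39.4), 1))  # ~26.7 dB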
Thank you Ryan!
So which do you think is the most silent: this 5850 with the reference cooler, or the 4890 Vapor-X? I find them very hard to compare because of the different methodologies...
Right now, the only thing in my case making noise is the fan on my old 8800GTS, and I'd like my new card to make it completely silent if possible.
I didn't read all the comments yet, so perhaps this has been mentioned, but where are all the 5850 Crossfire results? Hopefully the Crossfire benches weren't done exclusively at 2560x1600? From some of Ryan's text in the article, it seems there were Crossfire results that are accidentally missing or got pulled. For instance, this quote from the Battleforge text on page 6:
quote: The 5850 Crossfire on the other hand loses once again to the GTX 285 SLI in spite of beating the GTX 285 in a single card matchup.
Yet there are no Crossfire results on any of the Battleforge graphs. Hopefully they'll be added later. In the meantime, Guru3D and HardOCP have Crossfire results for those interested.
Having had a chance to go through all the comments now, it seems TheOne pointed out the same thing on page 6 of the comments and Ryan attempted to fix it. The fix still ain't showing up in my browser, though.
The HD 5800 series is bandwidth limited, the 5850 less severely than the 5870. The 5850 has about 77% of the computing power and about 83% of the memory bandwidth of the 5870. So in theory the 5850 should perform about 77% as fast as the 5870, but that wasn't the case here. If you work out its relative performance across all the benchmarks and resolutions, it's surprisingly consistent: the 5850 always lands around 82-85% of the 5870's performance. It never drops below 80%, let alone close to the 77% its compute deficit would predict. Different games have different bandwidth requirements, and the percentage improvement from the 4800 series to the 5800 series fluctuates, but the 5870, for example, rarely doubles the 4870, let alone the 4890. So in the end it's not a parallelization or scaling problem, nor a geometry or vertex limitation (possible, but less likely); the 5800 series really is limited by restricted memory bandwidth. Those of you who have 5800 cards: overclock the memory, check the scaling, and you'll see what I mean.
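Those percentages do check out against the reference specs, assuming 850MHz core, 1600 SPs and 4.8Gbps GDDR5 for the 5870 versus 725MHz, 1440 SPs and 4.0Gbps for the 5850, both on a 256-bit bus:

# Theoretical HD 5850 : HD 5870 ratios from the reference specs.
compute_ratio = (725 * 1440) / (850 * 1600)          # ~0.77: 77% of the shader throughput
bandwidth_ratio = (4.0 * 256 / 8) / (4.8 * 256 / 8)  # 128 vs 153.6 GB/s: ~0.83
clock_ratio = 725 / 850                              # ~0.85

print(compute_ratio, bandwidth_ratio, clock_ratio)

The observed 82-85% sits between the 83% bandwidth ratio and the 85% clock ratio, and well above the 77% compute ratio, which is the gap this comment is pointing at.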
Overclocking the RAM is one idea and adding more RAM is another; however, it remains to be seen whether ATI will introduce a wider bus for any higher-spec models.
The situation is a little different from the 4830 vs. 4850 comparison, where the 4830 had slightly lower clocks but only 640 SPs enabled instead of the full 800. In the end the performance difference wasn't very large, so the missing shaders didn't cripple the 4830 too much.
I'm not convinced about that; many games run on three 30" LCDs with no issues.
More likely the games aren't pushing the cards to their maximums, and that is why you aren't getting the full effect. We will find out for sure when the 2GB version is released.
If you can't see it, that's your problem. Saying the games aren't pushing the cards is naive; 1GB is still plenty for today's games, and the memory buffer was never an issue as long as it's 1GB. In the end you will see faster RAM outperforming your 2GB version. It's amazing that so many people still can't figure out the main reason for the 5870's underperformance.
It is still a decent card, offers many features, and is definitely a better performance-per-price card than the GTX 285. But it is underperforming; it's not living up to its next-gen architecture's prowess. Unless the GT300 screws up, it can easily outperform the 5870 when it's out. If AMD comes out quickly with a 5890, they would be wise to significantly bump up the GDDR5 speed, as it is unlikely they will go wider than a 256-bit design due to their "sweet spot" small-die strategy.
Wait... you really think that you have it figured out and ATI didn't realise it? You truly believe that ATI would lower the performance of the card instead of just strapping on a 384-bit bus?
No. Any bandwidth issues exist only in your head. Didn't you say that different games have different bandwidth requirements?
I would really love to know how these games run with this added extra.
One of my main reasons for upgrading would be to play WoW on three screens (most likely in windowed mode).
Would it be possible to add this benchmark in the future, with the most obvious config being three screens at 2560/1920/1680?
I'm looking to pick one of these up relatively quickly. My question: is there really a difference in which vendor I purchase from (HIS, PowerColor, Diamond, XFX, etc.)? I know many offer varying warranties, but if they offer the same clock speeds, what else is there? I guess I'm looking for the most reputable brand, since I won't be waiting for too many specific reviews before purchasing one. Any help is appreciated.
Where, and under what conditions/server load, do you test the frame rate in WoW? I've played for years, and with my 4850 I can get 100fps in the game if I am in the right spot when no one else is in the zone. Knowing when and where you do your frame rate tests for WoW would help to put them into context.
"...our test isn't representative of worst case performance - it uses a very, very light server load, unfortunately in my testing I found it nearly impossible to get a repeatable worst case test scenario while testing multiple graphics cards.
I've also found that frame rate on WoW is actually more a function of server load than GPU load, it doesn't have to do with the number of people on the screen, rather the number of people on the server :)
What our test does is simply measures which GPU (or CPU) is going to be best for WoW performance. The overall performance in the game is going to be determined by a number of factors and when it comes to WoW, server load is a huge component."
A good repeatable test would be to have a raid group in an instance and have them all cast a set of spells at once. The instance server is separate from the rest of the server load, which allows for somewhat better testing. It's true that the game is generally more CPU/RAM-limited than GPU-limited, especially if you have a lot of add-ons doing post-processing on all the information flying around. However, having been in raids with and without add-ons, I can tell you that I can get 45-50fps while we are just standing there waiting to attack, and then as soon as the spell effects start going off my frame rate drops like a rock. The spell effects are particle effects that overlap and mix and are all transparent to one degree or another. All those effects going off on a single target create a lot of overlap that the GPU has to sort out in order to render correctly.
What you might try is to see if you can get Blizzard to put a target dummy in an instance, to isolate it from the rest of the masses and allow for sustained testing with spell effects going off in a predictable manner (not having every tester go balls to the wall, but simply repeating a set rotation in a timed manner so that you can get an accurate gauge).
I second your question
And I also just want to say that even with a very heavily volt-modded and overclocked 8800GTS 512MB, WoW at maximum settings with 2xAA totally kills my card.
For example, in heavily populated areas it will use more than 512MB of video RAM (confirmed using RivaTuner).
And in heavily populated areas I get like 20FPS, for example in Dalaran at peak hours (like, when I play :P).
The numbers you provide for WoW are welcome; very few sites do these tests.
But more realistic numbers would be nice, representing what a big guild would see in a 20- or 40-man raid...
Perhaps you could set up a more realistic test with private servers, or if you are unwilling to go that route, ask Blizzard if they could set up a test server for you so you can get reproducible tests?
I can't wait till we find out if those extra SIMD engines can be unlocked like the good ol' ati2mtag softmod for the 9500 -> 9700 :) Even if not, this looks like the card for my new HTPC :)
The charts are a bit confusing. My main focus is on 2560x1600, and the review references the 5850 CF and GTX 285 SLI results, but they are not to be found in any of the charts:
"With and without ambient occlusion, the 5850 comes in right where we expect it. The 5850 Crossfire on the other hand loses once again to the GTX 285 SLI in spite of beating the GTX 285 in a single card matchup."
I understand you want a consistent platform to test all the video cards, but is there any possibility of testing the 5850 on a more realistic platform?
Maybe something like a dual core 2.8GHz machine? I have to think the bulk of the potential buyers for this card won't have a machine anywhere near as powerful as the one you are testing on.
Ryan, I know it's not the focus of this article, but it would be great to get a small paragraph (or a blog post or whatever) on what ATI has said in reference to the lower-spec 40nm DX11 parts. I simply don't need 4850 power in a SOHO-box, but the low 40nm idle power consumption and DX11 future-proofing are tempting me away from a 4870/90 card. What kind of prices and performance scaling are we likely to see before the end of 2009? Thanks for any info!
You should have done HAWX in DX10.1 mode; then the HD 5850 > GTX 285 sweep would have been complete. Or, to flip it around, enable PhysX (lol) in some games.
I have to point this out because it's something I've now seen on two websites, and it irks me a little bit, just like "solid state capacitors" does. In the last sentence on page one: the plural of die, in this case, is dies, not dice. Someone didn't edit this carefully!
By the way, this card is powerless against "The Way It's Meant to be Played".
Nvidia keeps bribing developers left and right; ATI does nothing
(except boring sideshow penis wars). Meanwhile, poor ATI users can't seem to play NFS: Shift at 640x480 with everything set to low (my rebranded 9800 does it great, btw). Plus, there is no in-game selective AA available to any ATI Radeon user in Batman (another TWIMTBP game), and it looks like empty crap with all the stuff Nvidia removed (yes, really: no smoke? No papers? No flags? Not even static flags? What about GRAW? That used to work fine on regular CPUs...).
I suppose we probably have to wait for the consumer driver release to know for sure, but how is the stability of these? The only two AMD cards I have direct experience with have both had driver issues, so that is the one factor that would keep me from considering one of the lower-powered versions of this architecture once they are released.
AMD's graphics forums and the number of bugfixes and known issues posted for each driver release say otherwise.
NVIDIA is not doing so well lately with their drivers either though, especially where Vista/7 is concerned.
Both companies can't seem to get proper fixed-aspect-ratio GPU scaling working in Vista/7. This has been broken since ForceWare 169.04, and my friend tells me it's broken in a recent Catalyst release too. What the hell is going on?
I remember a few years back when Green was on top and Red was dying. Looks like the tables have turned.
For the gamer/consumer, IMHO it's a win-win.
My new PC next year just might have an AMD card. I couldn't care less about brand loyalty. I buy whatever gives me the most bang for the buck at the time I build.
Instead of just marginally reducing performance by dropping clockspeeds and available bandwidth, they artificially neutered their parts by cutting out a few SIMD clusters, similar to Nvidia's MO of cutting TPC units.
Your conclusion doesn't draw this parallel: the cut SIMDs probably don't factor much into the overall performance, because the 1440 SPs that are left are enough and the full 1600 aren't being fully utilized in most games today. So instead, the 5850 scales more closely to the 15% decrease in clockspeeds than to the combined 23% deficit from clockspeeds and SIMD units together.
The 5870 soft launch followed by today's 5850 paper launch also says quite a bit about 40nm yields, in light of this artificial die-neutering approach. Reports of AMD shipping FOUR 5870s for every ONE 5850 indicate 40nm yields are quite good. Given the high demand and apparently inadequate supply, it makes no sense whatsoever for AMD to ship these perfectly capable dies at a $100 discount when they can sell them for that much more as 5870s.
If it can beat Nvidia's fastest single-GPU card even on a beta driver, this card should be awesome down the road, without breaking the bank in either cost or power consumption.
I'm falling in love with this new babe already. :D
I think the current beta drivers are holding back the performance of these cards, and it could improve by another 10-15%. Maybe ATI is holding it back just in case nVidia brings in some surprise (I really doubt it).
Strangely, the temps mentioned for both cards are inconsistent with other reviews on the web, with AnandTech's being the lowest. Maybe it would be better to post ambient-to-idle/load temperature deltas in all your reviews.
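Normalizing to a delta over ambient is trivial for readers comparing across sites; a sketch with made-up readings:

def delta_over_ambient(gpu_temp_c, ambient_c):
    # Removes room temperature from the comparison between test benches.
    return gpu_temp_c - ambient_c

print(delta_over_ambient(75.0, 22.0))  # 53.0 C over ambient (illustrative numbers)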
Nvidia's tech is soon to be so outdated that their cards will not be a deal at any price. They cannot even do the full DX10 spec, let alone any DX11, while I believe ATI has been able to do some DX11-style functions since the X1900. I hope Nvidia gets their act together and survives, but unlike when 3D was new and Nvidia pushed the tech envelope, lately they have been holding progress at a standstill. Nvidia should put up and play the game, or get out of the game and make PhysX cards.
I do hope they create a 5850X2. These new RV870 GPUs look like they will work well in a 2GB version. I've heard the 5870X2 will be a 4GB card; let's just hope. I know I would pay $600-$700 for that baby without a thought.
I am with you, I think NVIDIA needs to go out of business. I think they will.
They are at a huge disadvantage without a CPU. Intel is moving to integrated CPU/GPU soon, and AMD has had this planned for a long time. With Intel already precluding NVIDIA from making chipsets for Nehalem-based computers, and ATI making far better GPUs, NVIDIA is running on momentum now, and that runs out over time.
NVIDIA might defy Intel and make a chipset for Nehalem anyway. While most of us wouldn't even consider a crappy NVIDIA chipset, the general market has no idea how problematic they are. They buy from HP and Dell, and those use NVIDIA. I am surprised at how many of these I see, so it's a good business for NVIDIA.
Right now, Lynnfield is essentially irrelevant and Bloomfield is a niche product. Neither is a particularly important product as far as the market is concerned, so NVIDIA isn't really paying a price. Core 2, or an AMD platform, is still the most attractive option for mainstream America. Clarkdale, with all its flaws, should sell especially well, and even if NVIDIA does decide they want to make a chipset for it, it won't sell. No one who knows much about computers will buy an NVIDIA chipset, so they sell mainly through HP and Dell or similar companies. HP and Dell are not going to want to pay extra for an NVIDIA GPU, since the processor comes with one, and really it's only the southbridge that's up for grabs now. That would make a much smaller contribution to NVIDIA's bottom line. It's all bad for them.
Yes, they can sell into the Bloomfield space if they come up with a good discrete card. But how big is this market? Lynnfield should be even smaller, being brain-damaged and second-best, but not particularly cheap like Clarkdale. Also, it's unlikely someone will want a high-priced video card, or two, and pair it with anything but the best platform.
So where does NVIDIA sell into? Core 2 will go away, Bloomfield and Lynnfield will have relatively small market shares, and Clarkdale should sell especially well in the markets where NVIDIA chipsets sell well now.
Anand said Clarkdale was the replacement for Conroe, which caught a lot of flak because he worded it poorly. But in a way he's right, with respect to Clarkdale replacing the Core 2 as the platform for the mainstream market. Clarkdale is better in almost all respects: it's dual core, but runs four threads. Sure, they put the MMU in the wrong place, but that's still better than having it outside the processor, and the GPU is better than the G45. On top of this, it should be cheaper. Core 2 duals, along with Pentiums, still sell the best; Clarkdale is better and should be cheaper, so it's going to dominate the market in the same way. Bloomfield is the king of performance and will have a place, though not a big one. Lynnfield is a good combination of power and decent performance; it's also not a big space, although the i5 750 might do well and shouldn't be discounted. The big space will be Clarkdale. NVIDIA is going to be hurt by it. Hopefully, fatally.
I disagree with you on ALL points. I buy 2 videocards per year and I own an almost equal number of ATI/Nvidia cards. I just bought a 4890 and next will be a 5850.
Nvidia should definitely NOT go out of business. Competition drives creativity and reduces prices for consumers. I would hardly say Nvidia is doing badly at the moment. The bulk of aftermarket video cards still come from Nvidia, and they are still ahead of ATI in market share. They are also a marketing juggernaut; "The Way It's Meant to be Played" is a very powerful marketing tool. That being said, I expect a firm advantage for ATI over the next six months.
I have owned several Nvidia chipset motherboards, and they have all been exceptionally reliable and great overclockers. I've never had driver issues with them. I find Clarkdale underwhelming. G45 didn't live up to all its promises (bitstreaming), and I seriously doubt its successor will either. G45 has the fewest features and the least customizability of any onboard solution I have used yet. Intel has a long way to go on integrated video if they ever want to capture the enthusiast market.
ATI has been doing a stellar job lately. The 5850 is every home theater guy's dream: inexpensive, bitstreams HD content, and it will fit in most HTPC cases. The latest GTX cards and the 5870 are too long. Video cards should be less than 10 inches long!
nVidia shouldn't "get out of the game" at all. True, ATI may just have 3 or 4 months of technical superiority, but nVidia's next cards may be superior as well as offering plenty of revolutionary features.
nVidia can also chop prices for their current lineup but not too much, otherwise they may undercut their new cards.
I'm impressed by the 5850's frugal (as compared to the 5870) power requirements. Coupled with a relatively low price, it should sell very nicely indeed (and spawn some overclocked versions very quickly).
Did you have the August DirectX Redist installed on your test system? I think I've read somewhere that this is the update that brings 'full' DirectX11 functionality to Windows 7, and perhaps this is the reason you didn't see the results you were expecting.
Well, since it's a centrifugal fan, it sucks air in through the center and exhausts it in a radial direction; it's the shroud's job to redirect the air afterwards. Now, looking at how restricted the openings appear on the 5850, and at its cooling performance, I'm curious if there's a connection: fresh air may be escaping through those openings back into the case instead of passing through the heatsink and out the back of the case.
With the 5870 being basically a doubled-up 4870 architecture (and still SIMD), I am interested to see how Nvidia's new MIMD architecture will compete, especially with the ridiculous memory bandwidth it will have with GDDR5 on a 512-bit bus (if the 512-bit bus isn't just a rumor, and hopefully they are not plagued by driver issues). I am glad AMD/ATI is doing better; the competition is great. But I feel the new NV cards are going to be good (at least if any of the rumors are true). I am still trying to find a reason to replace my 9800GTX SLI setup; those cards burn through about any game as long as you stay away from more than 4X AA, due to the 512MB frame buffer.
BTW, not an NV fanboi here, and I hope I don't sound like one; it's late, I just dropped my friend off at the ER, and my brain is tired. My fiancee's PC has an ATI card (an HD 4850 512MB) and it's great; no complaints other than a few driver issues, and nothing I could really complain about.
Keep up the competition; we have AMD to thank for fast cards under $400.
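For a sense of scale on that rumor: peak memory bandwidth is just the per-pin data rate times the bus width. A sketch that hypothetically assumes the 5870's 4.8Gbps GDDR5 data rate on the rumored 512-bit bus:

def bandwidth_gbs(data_rate_gbps, bus_bits):
    # Peak bandwidth in GB/s = data rate per pin x bus width in bytes.
    return data_rate_gbps * bus_bits / 8

print(bandwidth_gbs(4.8, 512))  # 307.2 GB/s on the rumored bus
print(bandwidth_gbs(4.8, 256))  # 153.6 GB/s, the 5870's actual figure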
I wonder how close the 5850 will be to the 5870 once they have similar memory speeds, and thus bandwidth. Are those extra shaders and that extra core frequency wasted due to the limited memory bandwidth?
An excellent question! This isn't something we had a chance to put in the article, but I'm working on something else for later this week to take a look at exactly that. The 5850 gives us more of an ability to test that, since Overdrive isn't capped as low on a percentage basis.
You could run some raw shader tests that don't depend on memory bandwidth to see whether the GPU's internal bandwidth is somehow limited or the external bandwidth is. And maybe try out some older games (Quake 3 or 3DMark2001).
In DX11, games will use more shader power for other things which have little impact on bandwidth. Maybe they tested those heavy DX11 scenarios and ended up with the much less costly 256-bit interface as a compromise.
Up to 80W lower power consumption under load
$120 less
Quieter
Cooler
Shorter
A performance hit of only around 10-15% against the 5870 (that means far better perf/watt and perf/$; see the quick math after this list)
~12% more performance than the GTX 285
Overclocks to 5870 performance easily
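The quick math behind that perf/$ point, assuming the launch prices ($379 for the 5870, $120 less for the 5850) and taking the midpoint of the quoted 10-15% performance hit:

price_5870 = 379.0
price_5850 = price_5870 - 120.0   # $259, per the "$120 less" item above
perf_5850 = 0.875                 # 5870 = 1.0; midpoint of a 10-15% deficit

gain = perf_5850 / (price_5850 / price_5870) - 1.0
print(f"{gain:.0%}")              # ~28% better performance per dollar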
OK, this is an absolute killer for the lower performance market segment. It's 4870-vs-4850 all over again; only this time, they get the performance crown for single cards too.
Another thing to remember is that Nvidia does not currently have a countermeasure for this card. The GT380 will be priced for the enthusiast segment, and we can only hope the architecture is flexible enough to provide a 360 for the upper performance segment without killing profits due to die-size constraints. Things will get even messier as soon as Juniper lands; the greens have to act now (that's in our interest as consumers too)! And I don't think GT200 respins will cut it.
My guess is the GT300 won't compare to the 5850 or 5870.
It will compare with the 5870X2 and sit in that price bracket (too much for most of us).
When the GT300 eventually gets released, that is... Then, a few months later, Nvidia will bring out the scaled-down versions in the same price brackets as the 5850/5870, and those will probably compete pretty well.
The only question is: can you wait?
You could wait for the 6870 as well :P
I really think an enthusiast who spends hundreds on the motherboard alone isn't the regular enthusiast, so price wouldn't be an issue. I love building PCs and testing them, but I'm not going to spend $200+ on a motherboard knowing that I will be building another system in a few months with better-performing and better-priced parts. If I were really keeping the system for a long time, then I'd pour my hard-earned money into the high-end parts. But then, if you're doing that, I don't think you're really an enthusiast, as it's really a one-shot deal?
Yeah, it already sounds like the 5870X2 and 5850X2 are being positioned in the media to compete with just a single GT300, with rumors of $500 price points. I think the combination of poorer scaling compared to RV770/RV790, in addition to some of the 5850/5870 CF scaling problems seen in today's review, are major contributing factors. It really makes you wonder how much of these scaling issues are driver problems, CPU/platform limitations, or RV870 design limitations.
My best guess for GT300 pricing:
$500-$550 for a GTX 380 (full GT300 die), including OC variants
$380-$420 for a GTX 360 (cut-down GT300), including OC variants
$250 and lower for the GTX 285, followed by a GT210 40nm GT200 refresh with DX10.1
So you'd have the 5870X2 competing with the GTX 380 in the $500-600 range, and maybe the 5850X2 in the $400-$500 range competing with the GTX 360. The 5870 already looks poised for a price cut given the X2 price leaks; maybe they introduce a 2GB part, keep it in the $380 range, and drop the 1GB part. Then at some point I'd expect Nvidia to roll out their GT300 GX2 part as needed, somewhere in the $650-700+ range.
Nah. They won't get enough sales at those prices. They need to slot in under $399 and $299 unless they put out 50% more performance than the 5870 and 5850 respectively.
Or the heck with them, I'll just wait six months for the refresh on a smaller die, better board layout with better cooling, lower power, and a better price tag.
It's not like I NEED DX11 now, and I certainly don't need more GPU performance than I already have.
Why would it need to be 50% faster? It'd only need to be ~33% faster when comparing the GTX 380 to the 5870, or the GTX 360 to the 5850. That would put the 5870 and the 360 in direct competition in both price and performance, which is right in line with past market segments. The 380 would then be competing with the 5870X2 at the high end, which would be just about right if the 5870X2 scales to ~30% over the 5870, similar to 5870 CF performance in reviews.
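The arithmetic behind the ~33% figure: for performance-per-dollar parity, a card's performance premium has to match its price premium. A sketch using the thread's speculative prices, all of which are hypothetical:

def required_speedup(price_new, price_ref):
    # Performance must rise in proportion to price for perf/$ parity.
    return price_new / price_ref - 1.0

print(required_speedup(400.0, 299.0))  # ~0.34: a $400 GTX 360 needs ~34% over a $299 5850
print(required_speedup(525.0, 379.0))  # ~0.39: a $525 GTX 380 needs ~39% over a $379 5870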
"It's not like i NEED DX11 now, and i certainly don't need more GPU performance than I already have. "
As of today I am limping along on a GTX275 (LOL) and I really cannot tell any differences between the cards at 1920x1080. Considering the majority of PC games coming for the next year are console ports with a few DX10/11 highlights thrown in for marketing purposes, I am really wondering what is going to happen to the high-end GPU market. That said, I bought a 5850 anyway. ;)
I'm running GTX 280 SLI right now and have found most modern games run extremely well with at least 4xTrMSAA enabled. But that's starting to change somewhat, especially once you throw in peripheral features like Ambient Occlusion, PhysX, Eyefinity, 3D Vision, 120Hz monitors or whatever else is next on the checkbox horizon.
While some people may think these features are useless, it really only takes one killer app to make what you thought was plenty good enough completely insufficient. For me right now, it's Batman: Arkham Asylum with PhysX. Parts of the game still crawl with AA + PhysX enabled.
Same for anyone looking at Eyefinity as a viable gaming option. Increasing GPU load three-fold is going to quickly eat into the 5850/5870's increase over last-gen parts to the point a single card isn't suitable.
And with Win7's launch and the rollout of DX11 and DirectCompute, we may finally start to see developers embrace GPU accelerated physics, which will again, raise the bar in terms of performance requirements.
There's no doubt the IHVs are looking at peripheral features to justify additional hardware costs, but I think the high-end GPU market will be safe at least through this round even without them. Maybe next round, as some of these features take hold, they'll help justify the next round of high-end GPUs.
With PC gaming seemingly heading towards MMOs like WoW/Aion/Warhammer (and later on Diablo 3) and putting far less emphasis on other genres (besides FPS, which is more or less the same every year), and, as you said, most new games being console ports, I really doubt we'll need anything more powerful than the 4890, let alone a 5850 or 5870, for the coming couple of years. Maybe we've entered the era where PC games will forever be just console ports plus MMOs, or just MMOs, and there'd be little incentive to buy any card that costs $100+.
I was told by a Microcenter employee that the current pre-order retail price for the top-end GT300 card was $579 (an EVGA card, btw). And reportedly the next model down is the GT350. Dunno if this is fact or not, but he didn't have any reason to lie.
The GT300 will need 512-bit GDDR5 to make its memory faster than the GT200's, and it will have even more massive GPGPU bloat than last gen. So in Folding it will surely be much faster, but in graphics it will cost much more for the same performance (at least for Nvidia, depending on how close they want to bring its price to the Radeon 5000 series). And of course they can sell the same GT300 in Tesla cards for several thousand dollars (like they did with the GT200).
Selling the 5850 with disabled units is still a win for ATI; otherwise they wouldn't be able to sell those defective GPUs at all.
Well, you meant YOUR HTPC case; not all HTPCs are limited to half-length cards. Although I think these blower-style cooling solutions are horrible for HTPC applications, with their horrible whine. It may be a while until aftermarket solutions are out for this new line.
I'm surprised it performs so close to the 5870 yet runs cooler and doesn't demand so much from your power supply. I think this is my next card come Christmas time, unless Nvidia releases some details about their next-generation GPUs, along with expected prices, before then.
I've read rumors that we will not see any Nvidia cards for sale until next year... ouch. But I'm betting they release before Christmas; missing the holiday buying season would be a really stupid move for Nvidia.
merin - Monday, October 12, 2009 - link
Any idea what producer I should choose if I'm getting a 5850?Is there even any difference between them?
ambientmf - Sunday, December 6, 2009 - link
At this point, I doubt there's much difference between vendors. Since the card is still relatively new and supply still low, I would think most vendors are sticking to the reference design.Warranties and support aside, they're all identical.
I'd investigate the warranties offered, any trade-up programs (not that they'll be much to trade up from but still good to know) and any info like that.
hsew - Sunday, October 11, 2009 - link
I find it rather peculiar that even though Crysis Warhead runs better on ATI single GPUs (4890 faster than GTX275, 4870 faster than GTX260, 4850 almost as fast as GTX 260) it scales well enough in SLI to take the performance lead from ATI, as shown by the 5850 CF losing to GTX285 SLI (SLI scaling almost 100%!!). It makes one wonder just what the heck all the hype is about ATI releasing a new driver once a month if they don't seem to do make much of a difference in the performance department.maomao0000 - Sunday, October 11, 2009 - link
http://www.myyshop.com">http://www.myyshop.comQuality is our Dignity; Service is our Lift.
Myyshop.com commodity is credit guarantee, you can rest assured of purchase, myyshop will
provide service for you all, welcome to myyshop.com
Air Jordan 7 Retro Size 10 Blk/Red Raptor - $34
100% Authentic Brand New in Box DS Air Jordan 7 Retro Raptor colorway
Never Worn, only been tried on the day I bought them back in 2002
$35Firm; no trades
http://www.myyshop.com/productlist.asp?id=s14">http://www.myyshop.com/productlist.asp?id=s14 (Jordan)
http://www.myyshop.com/productlist.asp?id=s29">http://www.myyshop.com/productlist.asp?id=s29 (Nike shox)
cosminliteanu - Tuesday, October 6, 2009 - link
Thanks for this kind of article, I would also to see some Fallout 3
testing.....
Thank you ,again :)
bhougha10 - Monday, October 5, 2009 - link
sorry the f-bomb quote was in the previous artical. But kinda adds to the point. Please no silly fan boy comments as I have stock in amd and nvidia :) (ya, in know probably silly to have stock in these two companies)bhougha10 - Monday, October 5, 2009 - link
Probably get locked out of the site for saying this. But it seems like there is always an agression toward NVIDIA. Kinda like ATI gives free samples and NVIDIA does not, or not in a timely manner. I mean why would an F-Bomb be quoted on a professional site. It's not just this one statement and not just this one artical that makes me wonder.otherwise, as usual, great artical, with great content.
Malachite - Monday, October 5, 2009 - link
I wish you guys would put together Company of Heroes, World in Conflict and Supreme Commander so we could see clearly the differences between the cards. Today we see only reviews based on FPS, I like FPS, but my main games are all RTS.... I bought the HD4890 based on reviews but it didnt run as good as my GTX 275 for RTS titles....merin - Monday, October 5, 2009 - link
I'm wondering what the text that says "Floor @ 40.6dB" on the noise diagram means. Is it the background noise or something?I'm wondering if it can compete with the Sapphire HD4890 Vapor-X in the noise-department...
"The dual slot Vapor-Chamber Cooler has a noise level under 20 dbA in 2D operation and is still under 30 dbA in 3D operation before it reaches 85 degrees Celsius." ( http://www.legitreviews.com/article/1056/1/">http://www.legitreviews.com/article/1056/1/ )
Pastuch - Monday, October 5, 2009 - link
From what I've heard at other sites the Vapor-x 4890 is significantly quieter than the ATI cooler on the 5850. Thats not a knock on the 5850, it's just that the Vapor-x cooler on the 4890 is dead quiet. I love mine. It even has HDMI, VGA, Displayport and DVI on the back of the card!merin - Tuesday, October 6, 2009 - link
Thanks Pastuch :)I'm in the process of getting a new graphics card, and I'm trying to find the most silent one that doesn't nerf performance from the top cards that much. The 3 cards that I'm considering is:
This 5850 from ATI (probably Sapphire or Asus)
Sapphire HD4890 Vapor-X
MSI N275GTX Twin Frozr OC
Any advice?
Pastuch - Wednesday, October 7, 2009 - link
I was looking at the the exact same cards. Both the Twin Forzr GTX275 and the Radeon 4890 vapor-x are VERY QUIET. I would pick based on which games you play. If you have a home theater and audio passthrough is a concern I would lean towards the ATI card. Both of them come pre-overclocked and both have stellar performance. The Twin Frozr is slightly longer in length; however it's power connectors face up (Not out the back of the card like usual).Ryan Smith - Monday, October 5, 2009 - link
Yes. That's the noise of our test rig that we measure at idle when using an entirely passive video card; we can get our rigs down to about 36dB, but that requires removing all the fans other than the CPU fan. Bear in mind that a dead quiet room is 30dB and that we're measuring roughly 6" away from the card, so our methodology may differ from how other people do it.merin - Tuesday, October 6, 2009 - link
Thank you Ryan!So what do you think is the most silent? This 5850 with a reference cooler or the 4890 vapor-x? I find them very hard to compare because of different methodology...
Right now, they only thing in my case that is making noise is the fan on my old 8800GTS, and I'd like my new card to make it completely silent if possible.
Ryan Smith - Tuesday, October 6, 2009 - link
Since we haven't tested the Vapor-X, I have no way of knowing.MamiyaOtaru - Friday, October 2, 2009 - link
260 core 216 is using more power yet running cooler with a quieter fan.. would love to see a 5850 with a more efficient coolerAnnonymousCoward - Thursday, October 1, 2009 - link
DisplayPort 1.1 vs 1.2 is the difference between PCIe 1.0 vs 2.0. The bandwidth is doubled. Please start stating the version.coconutboy - Thursday, October 1, 2009 - link
I didn't read all the comments yet so perhaps this has been mentioned, but where are all the 5850 Xfire results? Hopefully the Crossfire benches weren't done exclusively at 2560x1600? From some of Ryan's text in the article, it seems there were Crossfire results that are accidently missing or got pulled. Stuff like this quote from the Battleforge page 6 text:Yet there are no Xfire results on any of the Battleforge graphs. Hopefully they'll be added later. In the meantime guru3d and HardOCP have Xfire results for those interested.
coconutboy - Thursday, October 1, 2009 - link
Had a chance to go through all the comments now, seems TheOne pointed out the same thing on page 6 of the comments and Ryan attempted to fix it. The fix still ain't showing up in my browser though.Ryan Smith - Thursday, October 1, 2009 - link
It's really fixed this time. I had two sets of graphs, one with the CF/SLI cards and one without. I put up the wrong one.Seramics - Thursday, October 1, 2009 - link
HD5800 series is bandwidth limited. The 5850 being less severe than 5870. 5850 has about 77% computing power and about 83% of memory bandwidth of 5870. So normally, 5850 should perform about 77% as fast as 5870 but it wasnt the case here. If you calculate all the benchmarks performance at all resolution, its surprisingly consistent, 5850 is always around 82-85% performance of 5870. Never did it drop to below 80% performance level, let alone coming close to 77% which is where it should be. Different game has different bandwidth requirement and there's fluctuation in percentage improvement from 4800 series to 5800 series. It unstable but 5870 for eg rarely doubles 4870, let alone 4890. So in the end, its not parallelization or scaling problems, nor was it geometry or vertex limitation (possible but less likely), it is indeed the 5800 series being limited in performance due to restricted memory bandwidth. Those of you who has 5800 cards, overclock the memory and check the scaling and you'll see wht i mean.silverblue - Thursday, October 1, 2009 - link
Overclocking the RAM is one idea, adding more RAM is another, however it remains to be seen whether ATI will introduce a wider bus for any higher spec models.The situation is a little different to the 4830 - 4850 comparison whereby the 4830 had slightly lower clocks but only 640 SPs enabled instead of the full 800, however in the end the performance difference wasn't very large so the lack of shaders didn't cripple the 4830 too much.
Jamahl - Thursday, October 1, 2009 - link
I'm not convinced about that, many games run on 3 30" lcd's with no issues.More likely is the games aren't pushing the cards to their maximums, that is why you aren't getting the full effect. We will find out for sure when the 2gb version is released.
Seramics - Thursday, October 1, 2009 - link
If u cant see it thats ur problem, to say games arent pushing it is noobish, 1gb is still plentiful for today's games, memory buffer was nv an issue as long as its 1gb. well in the end u will see faster ram outperforming ur 2gb version. Very amazing to see many people still cant figure out the main reasons of 5870's underperformance.It is still a decent card and offer many features and definitely a better performance to price ratio card than GTX 285. But it is underperforming. Not living up to its nex gen architecture prowess. Unless GT300 screw up, it can easily outperform 5870 when its out. If AMD came out quickly with 5890, they will be wise to significantly bump up the GDDR5 speed as it is unlikely they will go with higher than 256bit design due to their "sweet spot" small die strategy.
Jamahl - Thursday, October 1, 2009 - link
Wait...you really think that you have it figured and ATI didn't realise it? You truly believe that ATI would lower the performance of the card instead of just strapping on a 384 bus?No. Any bandwidth issues only exist in your head. Didn't you say that different games have different bandwith requirements?
Jamahl - Thursday, October 1, 2009 - link
Any ideas what is going on here with that?loverboy - Thursday, October 1, 2009 - link
I would really love to know how these games run with this added xtra.One of my main reasons for upgrading would be to play WOW on three screens (most likely in window mode).
Would it be possible to add this benchmark in the future, with the most obvious config being 3 screens in 2560/1920/1680
yolt - Wednesday, September 30, 2009 - link
I'm looking to pick one of these up relatively quickly. My question is there really a difference in which vendor I purchase from (HIS, Powercolor, Diamond, XFX, etc). I know many offer varying warranties, but if they offer the same clock speeds, what else is there? I guess I'm looking for the most reputable brand since I won't be waiting for too many specific reviews before purchasing one. Any help is appreciated.ThePooBurner - Wednesday, September 30, 2009 - link
Where and under what conditions/server load do you test the frame rate in WoW? I've played for years and with my 4850 i can get 100fps in the game if i am in the right spot when no one else is in the zone. Knowing when and where you do you frame rate tests for WoW would help to put it into context.Ryan Smith - Thursday, October 1, 2009 - link
This is from Anand:"...our test isn't representative of worst case performance - it uses a very, very light server load, unfortunately in my testing I found it nearly impossible to get a repeatable worst case test scenario while testing multiple graphics cards.
I've also found that frame rate on WoW is actually more a function of server load than GPU load, it doesn't have to do with the number of people on the screen, rather the number of people on the server :)
What our test does is simply measures which GPU (or CPU) is going to be best for WoW performance. The overall performance in the game is going to be determined by a number of factors and when it comes to WoW, server load is a huge component."
ThePooBurner - Wednesday, October 7, 2009 - link
A good repeatable test would be to have a RAID group in an instance and have them all cast a set of spells at once. the instance server separates from the rest of the server load and allows for a bit better testing. While it's true that the game is generally more CPU/RAM limited than GPU limited, especially if you have a lot of add-ons doing post processing on all the information that is shooting around. However, having been in raids with and without add-ons and such, i can tell you that i can get 45-50fps when we are just standing there waiting to attack, and then as soon as the spell effects start going off my frame rate drops like a rock. The spell effects are particle effects that overlap and mix and are all transparent to one degree or another. All those effects going off on a single target creates a lot of overlap that the GPU has to sort out in order to render correctly.What you might try is to see if you can get Blizz to put a target dummy in an instance to isolate it from the rest of the masses, and allow for sustained testing with spell effects going off in a predictable manner. (not having every testing go balls to the wall, but simply repeat a set rotation in a timed manner so that you can get an accurate gauge.
Per Hansson - Thursday, October 1, 2009 - link
I second your questionAnd also just want to say that with a very heavily volt modded and overclocked 8800GTS 512MB the performance in WoW at maximum settings with 2xAA will totally kill my card
For example in heavily populated areas it will use more than 512MB video ram (confirmed using rivatuner)
And in heavily populated areas I get like 20FPS, for example in Dalaran at peak hours (like, when I play :P)
The numbers you provide for WoW are welcome, very few sites do these tests
But more realistic numbers would be nice, representing what a big guild would see in a 20 or 40 man RAID...
Perhaps you could setup a more realistic test with private servers, or if you are unwilling to go that route ask Blizzard if they could setup a testserver for you to use so you can get reproducible tests?
biigfoot - Wednesday, September 30, 2009 - link
I can't wait till we find out if those extra SIMD engines can be unlocked like the good ol' ati2mtag softmod for the 9500 -> 9700 :) Even if not, this looks like the card for my new HTPC :)The0ne - Wednesday, September 30, 2009 - link
The charts are a bit confusing. My main focus is at the 2560x1600 and the review references 5850 CF and 285 CF but they are not to be found in any of the charts. Same for 285 SLI"With and without ambient occlusion, the 5850 comes in right where we expect it. The 5850 Crossfire on the other hand loses once again to the GTX 285 SLI in spite of beating the GTX 285 in a single card matchup."
Ryan Smith - Wednesday, September 30, 2009 - link
Whoops. The full 2560 w/o SSAO chart went AWOL. Fixed.giantpandaman2 - Wednesday, September 30, 2009 - link
Should be heels, not heals. Unless you're referring to some MMORPG priest. :)KeithP - Wednesday, September 30, 2009 - link
I understand you want a consistent platform to test all the video cards, but is there any possibility of testing the 5850 on a more realistic platform?Maybe something like a dual core 2.8GHz machine? I have to think the bulk of the potential buyers for this card won't have a machine anywhere near as powerful as the one you are testing on.
-KeithP
v1001 - Wednesday, September 30, 2009 - link
I wonder if we'll see a single slot card. I think this is the card for me to get. I need a lower watt single slot card to work in my tiny case.RDaneel - Wednesday, September 30, 2009 - link
Ryan, I know it's not the focus of this article, but it would be great to get a small paragraph (or a blog post or whatever) on what ATI has said in reference to the lower-spec 40nm DX11 parts. I simply don't need 4850 power in a SOHO-box, but the low 40nm idle power consumption and DX11 future-proofing are tempting me away from a 4870/90 card. What kind of prices and performance scaling are we likely to see before the end of 2009? Thanks for any info!Ryan Smith - Wednesday, September 30, 2009 - link
We covered this as much as we can in our 5870 article. If it's not there, it's either not something we know, or not something we can comment on.http://www.anandtech.com/video/showdoc.aspx?i=3643...">http://www.anandtech.com/video/showdoc.aspx?i=3643...
MadMan007 - Wednesday, September 30, 2009 - link
Should have done HAWX in DX 10.1 mode then the HD5850 > GTX 285 sweep would have been complete. Or to flip it around, enable PhysX (lol) on some games.MadMan007 - Wednesday, September 30, 2009 - link
I have to point this out because it's something I've now seen on two websites and it irks me a little bit just like 'solid state capacitors' does. In the last sentence on page one the plural of die in this case is dies not dice. Someone didn't edit this carefully!Ryan Smith - Wednesday, September 30, 2009 - link
If you search our archives, Anand has discussed this in depth. It's dice.papapapapapapapababy - Wednesday, September 30, 2009 - link
btw, this card is powerless against "the way is meant to be played"nvidia keeps bribing developers left and right, ATI does nothing
(except boring sideshow penis wars), meanwhile the poor ATI users cant seem to play NFS SHIFT 640 x 480, all set to low, ( my -rebranded- 9800 does it great btw) + there is no in-game selective AA available to any ATI Radeon user in Batman, (another TWIMTBP game) + it looks like empty crap with all the shit nvidia - removed ( yes, really no smoke? no papers? no flags? not even static flags? what about GRAW? it used to work fine on reg cpus...)
papapapapapapapababy - Wednesday, September 30, 2009 - link
my "prototype" RV740 @ AC-S1 is better...
Core Clock: 875
Idle temp: 29C
Noise: 0db
TDP: 80W(+)
+ it doesn't look like the 60's Batmobile
AnnonymousCoward - Thursday, October 1, 2009 - link
What, like this one?http://student.dcu.ie/~lawlesc4/Batmobile%20%28TV%...">http://student.dcu.ie/~lawlesc4/Batmobile%20%28TV%...
ipay - Wednesday, September 30, 2009 - link
Except it doesn't hav DX11, so it's not better. Go troll somewhere else.strikeback03 - Wednesday, September 30, 2009 - link
I suppose we probably have to wait for the consumer driver release to know for sure, but how is the stability of these? The only two AMD cards I have direct experience with have both had driver issues, so that is the one factor that would keep me from considering one of the lower-powered versions of this architecture once they are released.Jamahl - Wednesday, September 30, 2009 - link
It is you who had the issue not the drivers. That is why millions of others have ati's without driver issues.strikeback03 - Wednesday, September 30, 2009 - link
Yeah, I definitely think closing my laptop lid and expecting it to have the same resolution set when I reopen is my fault *rolls eyes*.Mills - Wednesday, September 30, 2009 - link
AMD's graphics forums and the number of bugfixes and known issues posted for each driver release say otherwise.NVIDIA is not doing so well lately with their drivers either though, especially where Vista/7 is concerned.
Both companies can't seem to get proper fixed aspect ratio GPU scaling working in Vista/7. This has been broken since Forceware 169.04, and my friend tells me broken in a recent Catalyst release. What the hell is going on?
michal1980 - Wednesday, September 30, 2009 - link
I remeber a few years back when Green was on top and Red was dieing. Looks like tables have turned.For the gamer/consumer, IMHO its a win-win.
My new PC next year, just might have an AMD card. I could care less about brand loyality. I buy what ever gives me the most bang for buck at the time I build.
Lavacon - Wednesday, September 30, 2009 - link
I would love to see this card added to the X58 vs P55 GPU article from yesterday. Although I suspect the results would be much the same.vailr - Wednesday, September 30, 2009 - link
Please consider adding a Radeon 4770 card to your comparison chart. I believe it's also "TSMC 40nm".chizow - Wednesday, September 30, 2009 - link
Instead of just marginally reducing performance by dropping clockspeeds and available bandwidth, they artificially neutered their parts by cutting out a few SIMD clusters similar to Nvidia's MO of cutting TPC units.Your conclusion doesn't seem to draw this parallel, that the cut SIMD probably don't factor much into the overall performance because 1580 or whatever is left is enough and the full 1600 aren't being fully utilized in most games today. So instead the 5850 scales more closely to the 15% decrease in clockspeeds compared to the combined 23% for clockspeeds and SIMD units.
The 5870 soft launch followed by today's 5850 paper launch also says quite a bit about 40nm yields in light of their artificial die neutering approach. Reports of AMD shipping *FOUR* 5870 for every *ONE* 5850 for a 4:1 ratio indicates 40nm yields are quite good. Given the high demand and apparently inadquate supply, it makes absolutely no sense whatsoever for AMD to ship these perfectly capable die for a $100 discount when they can sell them for that much more on the 5870.
chrone - Wednesday, September 30, 2009 - link
with beta driver, it can beat nvidia fastest single gpu single card, this card would be an awesome at some moment in the future without breaking the bank for its cost and power consumption.i'm falling in love with this new babe already. :D
Razer2911 - Wednesday, September 30, 2009 - link
I think the current beta drivers are holding the performance of these cards and it can improve by another 10-15%. Maybe ATI is holding it back just in case nVidia brings in some surprise (really doubt it).Strangely the temps mentioned for both the cards are inconsistent with other reviews on the web with Anandtech's being the lowest. Maybe it would be better to post ambient-to-idle/load temps in all your reviews.
LeadSled - Wednesday, September 30, 2009 - link
Nvidia's tech is soon to be so out dated that they will not be a deal at any price. They cannot even do all DX10 spec let alone any DX11 which I do believe ATI has been able to do some DX11 functions since the X1900. I hope Nvidia gets their act together and survives but unlike when 3D was new and Nvidia pushed new tech envolpe they are have been holding progress to a stand still. Nvidia should put up and play the game or get out of the game and make PhysX cards.I do hope they create a 5850X2. These new RV870 gpu's look like they will work well in a 2GB version. Ive heard the 5870X2 will be a 4GB card, lets just hope. I know I would pay $600-$700 for that baby without a thought.
TA152H - Wednesday, September 30, 2009 - link
I am with you, I think NVIDIA needs to go out of business. I think they will.They are at a huge disadvantage without a CPU. Intel is moving CPU/GPU soon, and AMD had this planned for a long time. With Intel already precluding NVIDIA from making chipsets for Nehalem based computers, and ATI making far better GPUs, NVIDIA is running on momentum now, and that runs out over time.
NVIDIA might shirk Intel and make a chipset for Nehalem. While most us wouldn't even consider a crappy NVIDIA chipset, the general market has no idea how problematic they are. They buy from HP and Dell, and they use NVIDIA. I am surprised at how many of these that I see, so it's a good business for NVIDIA.
Right now, the Lynnfield is essentially irrelevant, and the Bloomfield is a niche product. Neither are particularly important products as far as the market is concerned, so NVIDIA isn't really paying a price. Core 2 is still the most attractive platform for mainstream America, or an AMD platform. Clarkdale, with all its flaws, should sell especially well, and even if NVIDIA does decide they want to make a chipset, it won't sell. No one who knows much about computers will buy an NVIDIA chipset, so they sell mainly through HP and Dell, or similar companies. HP and Dell are not going to want to pay extra for an NVIDIA GPU, since the processor comes with one, and really it's only the southbridge that's up for grabs now. This would make a much smaller contribution to their bottom line. It's all bad for them.
Yes, they can sell into the Bloomfield space, if they come up with a good discrete card. But, how big is this market? Lynnfield should be even smaller, being brain-damaged and second-best, but, not particularly cheap like the Clarksdale. Also, it's unlikely someone will want a high priced video card, or two, and pair it with anything but the best platform.
So, where does NVIDIA sell into? Core 2 will go away, Bloomfield and Lynnfield will have relatively small market shares, and Clarksdale should sell especially well in the markets where NVIDIA chipsets sell well now.
Anand said Clarkdale was the replacement for Conroe, which caught a lot of flak because he worded it poorly. But in a way he's right, in the sense of Clarkdale replacing Core 2 as the platform for the mainstream market. In that role, Clarkdale is better in almost all respects. It's dual core but runs four threads. Sure, they put the MMU in the wrong place, but that's still better than leaving it outside the processor, and the GPU is better than the G45. On top of this, it should be cheaper. Core 2 duals, along with Pentiums, still sell the best; Clarkdale is better and should be cheaper, so it's going to dominate the market in the same way. Bloomfield is the king of performance and will have a place, though not a big one. Lynnfield is a good combination of power and decent performance; it's also not a big space, although the i5 750 might do well and shouldn't be discounted. The big space will be Clarkdale, and NVIDIA is going to be hurt by it. Hopefully, fatally.
Pastuch - Monday, October 5, 2009 - link
I disagree with you on ALL points. I buy two video cards per year and own an almost equal number of ATI and Nvidia cards; I just bought a 4890 and my next will be a 5850.
Nvidia should definitely NOT go out of business. Competition drives creativity and reduces prices for consumers. I would hardly say Nvidia is doing badly at the moment. The bulk of aftermarket video cards still come from Nvidia, they are still ahead of ATI in market share, and they are a marketing juggernaut; "The Way It's Meant to Be Played" is a very powerful marketing tool. That said, I expect a firm advantage for ATI over the next six months.
I have owned several Nvidia chipset motherboards and they have all been exceptionally reliable and great overclockers; I've never had driver issues with them. I find Clarkdale underwhelming. G45 didn't live up to all its promises (bitstreaming), and I seriously doubt its successor will either. G45 has the fewest features and the least customizability of any onboard solution I have used yet. Intel has a long way to go on integrated video if they ever want to capture the enthusiast market.
ATI has been doing a stellar job lately. The 5850 is every home-theater guy's dream: inexpensive, bitstreams HD content, and it will fit in most HTPC cases. The latest GTX cards and the 5870 are too long. Video cards should be less than 10 inches long!
silverblue - Wednesday, September 30, 2009 - link
nVidia shouldn't "get out of the game" at all. True, ATI may have just 3 or 4 months of technical superiority, but nVidia's next cards may be superior as well as offering plenty of revolutionary features.
nVidia can also cut prices on their current lineup, but not too much, otherwise they may undercut their new cards.
I'm impressed by the 5850's frugal power requirements (as compared to the 5870). Coupled with a relatively low price, it should sell very nicely indeed (and spawn some overclocked versions very quickly).
SJD - Wednesday, September 30, 2009 - link
Did you have the August DirectX redist installed on your test system? I think I've read somewhere that this is the update that brings 'full' DirectX 11 functionality to Windows 7, and perhaps this is the reason you didn't see the results you were expecting.
Otherwise, awesome card!
Ryan Smith - Wednesday, September 30, 2009 - link
Based on AMD's internal data I suspect it's just the fact that we have everything cranked up. We're taking a look at it ASAP.
Totally - Wednesday, September 30, 2009 - link
Ryan, could you retest the 5870, this time with a piece of tape running across the two 'vents' on the back of the card?
Ryan Smith - Wednesday, September 30, 2009 - link
I'm curious what makes you think that will have any impact.
Totally - Wednesday, September 30, 2009 - link
Well, since it's a centrifugal fan, it sucks air in through the center and exhausts it in a radial direction; the shroud's job is to redirect the air afterwards. Looking at how restricted the openings appear on the 5850, and at its cooling performance, I'm curious if there is a connection: say, fresh air escaping through those openings back into the case instead of passing through the heatsink and out the back of the case.
Ryan Smith - Wednesday, September 30, 2009 - link
I don't have a good front shot, but it's a sealed shroud. There's no air escaping.
toast70 - Wednesday, September 30, 2009 - link
With the 5870 being basically a doubled-up 4870 architecture (and still SIMD), I am interested to see how Nvidia's new MIMD architecture will compete, especially with the ridiculous memory bandwidth it will have with GDDR5 on a 512-bit bus (if the 512-bit bus isn't just a rumor, and hopefully they aren't plagued by driver issues). I am glad AMD/ATI is doing better; the competition is great, but I feel the new NV cards are going to be good (at least if any of the rumors are true). I am still trying to find a reason to replace my 9800GTX SLI cards; they burn through about any game as long as you stay below 4X AA, due to the 512MB frame buffer.
BTW, not an NV fanboi here, hope I don't sound like one; it's late and I just dropped my friend off at the ER, so my brain is tired. The fiancee's PC has an ATI card (HD4850 512MB) and it's great, no complaints other than a few driver issues, and nothing I could really complain about.
Keep up the competition; we have AMD to thank for fast cards under $400.
poohbear - Wednesday, September 30, 2009 - link
Very well done review, and the conclusion was spot on! Gonna get one of these this fall. Nvidia better hurry up before the shiat really hits the fan. :0
Roland00 - Wednesday, September 30, 2009 - link
I wonder how close the 5850 will be to the 5870 once they have similar memory speeds and thus bandwidth. Are those extra shaders and the higher core frequency wasted due to the limited memory bandwidth?
Ryan Smith - Wednesday, September 30, 2009 - link
An excellent question! This isn't something we had a chance to put in the article, but I'm working on something else for later this week to take a look at exactly that. The 5850 gives us more of an ability to test that, since Overdrive isn't capped as low on a percentage basis.
Zool - Wednesday, September 30, 2009 - link
You could run some raw shader tests that don't depend on memory bandwidth, to see whether it's the GPU's internal bandwidth or the external bandwidth that is somehow limited. And maybe try out some older games (Quake 3 or 3DMark2001). DX11 games will use more shader power for things that have little impact on bandwidth; maybe AMD tested those shader-heavy DX11 scenarios and settled on the much less costly 256-bit interface as a compromise.
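Back-of-the-envelope, using the public paper specs (launch numbers, so treat this as a rough sketch, not a benchmark):

```python
# Paper specs at launch: stream processors, core clock (MHz),
# effective memory data rate (Gbps per pin), memory bus width (bits).
cards = {
    "HD 5870": dict(sp=1600, core_mhz=850, mem_gbps=4.8, bus_bits=256),
    "HD 5850": dict(sp=1440, core_mhz=725, mem_gbps=4.0, bus_bits=256),
}

for name, c in cards.items():
    # Each stream processor can issue one MAD (2 FLOPs) per clock.
    tflops = c["sp"] * 2 * c["core_mhz"] * 1e6 / 1e12
    gbs = c["mem_gbps"] * c["bus_bits"] / 8  # bandwidth in GB/s
    print(f"{name}: {tflops:.2f} TFLOPS, {gbs:.1f} GB/s, "
          f"{tflops * 1000 / gbs:.1f} GFLOPS per GB/s")
```

On paper the 5870 has about 30% more ALU throughput but only 20% more bandwidth than the 5850, so if anything the 5870 should be the more bandwidth-starved of the two; that would fit the theory that the extra shaders are partly going to waste.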
Dante80 - Wednesday, September 30, 2009 - link
Up to 80 watts lower power consumption under load
$120 less
Quieter
Cooler
Shorter
Performance hit of around 10-15% against the 5870 (that means far better perf/watt and perf/$; quick numbers at the end of this post)
~12% more performance than the GTX 285
Overclocks to 5870 perf easily
OK, this is an absolute killer for the lower performance market segment. It's 4870 vs. 4850 all over again, only this time they get the single-card performance crown too.
Another thing to remember is that nvidia does not currently have a countermeasure for this card. The GT380 will be priced for the enthusiast segment, and we can only hope the architecture is flexible enough to provide a 360 for the upper performance segment without killing profits due to die-size constraints. Things will get even messier as soon as Juniper lands; the greens have to act now (that's in our interest as consumers too)! And I don't think GT200 respins will cut it.
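As promised, the quick perf/$ numbers, using only the figures in this thread ($379 vs. $259 MSRPs and the 10-15% performance gap), so treat it as a sketch:

```python
# Sanity check of the perf/$ claim using the numbers in this thread:
# 5870 at $379, 5850 at $259 ($120 less), and the 5850 landing
# 10-15% behind the 5870 in performance.
price_5870, price_5850 = 379.0, 259.0

for perf_hit in (0.10, 0.15):
    rel_perf = 1.0 - perf_hit            # 5850 performance vs. 5870
    rel_price = price_5850 / price_5870  # 5850 price vs. 5870
    print(f"{perf_hit:.0%} slower -> "
          f"{rel_perf / rel_price:.2f}x the performance per dollar")
```

Even at the pessimistic end, the 5850 delivers roughly a quarter more performance per dollar than the 5870.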
the zorro - Wednesday, September 30, 2009 - link
Maybe if intel heavily overclocks a GMA 4500 it can compete with AMD?
haplo602 - Wednesday, September 30, 2009 - link
Hmm... my next system should feature a GTS 250, unless ATI releases a 5670 and finally adds OpenGL 3.2 and OpenCL support to their Linux drivers.
Anyway, the 5850 will kill a lot of Nvidia cards.
san1s - Wednesday, September 30, 2009 - link
interesting results, can't wait to see how gt300 will compare
palladium - Thursday, October 1, 2009 - link
"interesting results, can't wait to see how gt300 will compare "SiliconDoc: WTF?! 5870< GTX295, top end GT300>>295 because it has 384bit GDDR5 ( 5870 only 256 bit), so naturally GT300 will KICK RV8xx's A**!!!!
That's my prediction anyway (hopefully he decides not to troll here).
Dobs - Wednesday, September 30, 2009 - link
My guess is that GT300 won't compare with the 5850 or 5870; it will compare with the 5870X2 and be in that price bracket (too much for most of us).
When the GT300 eventually gets released, that is... Then a few months later nvidia will bring out the scaled-down versions, in the same price brackets as the 5850/5870, and those will probably compete pretty well.
The only question is: can you wait?
You could wait for the 6870 as well. :P
Vinas - Wednesday, September 30, 2009 - link
No, I don't have to wait, because I have a 5870 :-)
The0ne - Wednesday, September 30, 2009 - link
I really think an enthusiast who spends hundreds on the motherboard alone isn't the typical enthusiast, so price wouldn't be an issue for them. I love building PCs and testing them, but I'm not going to spend $200+ on a motherboard knowing that I'll be building another system in a few months with better-performing, better-priced parts. If I were really keeping the system for a long time, then I'd pour my hard-earned money into high-end parts. But then, if you're doing that, I don't think you're really an enthusiast, since it's a one-shot deal?
chizow - Wednesday, September 30, 2009 - link
Ya, it already sounds like the 5870X2 and 5850X2 are being positioned in the media to compete with just a single GT300, with rumors of $500 price points. I think the combination of poorer scaling compared to RV770/RV790, in addition to some of the 5850/5870 CF scaling problems seen in today's review, are major contributing factors. It really makes you wonder how much of these scaling issues are driver problems, CPU/platform limitations, or RV870 design limitations.
My best guess for GT300 pricing will be:
$500-$550 for a GTX 380 (full GT300 die) including OC variants
$380-$420 for a GTX 360 (cut down GT300) including OC variants
$250 and lower for the GTX 285, followed by a GT210 40nm GT200 refresh with DX10.1
So you'd have the 5870X2 competing with the GTX 380 in the $500-600 range, and maybe the 5850X2 in the $400-$500 range competing with the GTX 360. The 5870 already looks poised for a price cut given the X2 price leaks; maybe they introduce a 2GB part, keep it at the $380 range, and drop the 1GB part. Then at some point I'd expect Nvidia to roll out their GT300 GX2 part as needed, somewhere in the $650-700+ range...
yacoub - Wednesday, September 30, 2009 - link
Nah, they won't get enough sales at those prices. They need to slot in under $399 and $299 unless they put out 50% more performance than the 5870 and 5850 respectively.
Or the heck with them; I'll just wait six months for the refresh on a smaller die, with a better board layout, better cooling, lower power, and a better price tag.
It's not like I NEED DX11 now, and I certainly don't need more GPU performance than I already have.
chizow - Thursday, October 1, 2009 - link
How would it need to be 50% faster? It'd only need to be ~33% faster when comparing the GTX 380 to the 5870, or the GTX 360 to the 5850. That would put the 5870 and the 360 in direct competition in both price and performance, which is right in line with past market segments. The 380 would then be competing with the 5870X2 at the high end, which would be just about right if the 5870X2 scales to ~30% over the 5870, similar to 5870 CF performance in reviews. One way to sanity-check that figure is sketched below.
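One way to ground the ~33%: perf/$ parity at the rumored prices (a sketch, taking the rumored GT300 price points in this thread and the 5870's $379 MSRP as givens):

```python
# Sketch: how much faster a card must be to merely match a cheaper
# card's perf/$. Prices are the rumored GT300 price points from this
# thread against the 5870's $379 MSRP, not confirmed figures.
ref_price = 379.0  # HD 5870

for label, new_price in [("GTX 380 @ $500", 500.0),
                         ("GTX 380 @ $550", 550.0)]:
    needed = new_price / ref_price - 1.0
    print(f"{label}: needs to be {needed:.0%} faster for perf/$ parity")
```

The low end of the rumored range works out to ~32%, which is where the ~33% figure lands.
Gary Key - Wednesday, September 30, 2009 - link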
"It's not like i NEED DX11 now, and i certainly don't need more GPU performance than I already have. "As of today I am limping along on a GTX275 (LOL) and I really cannot tell any differences between the cards at 1920x1080. Considering the majority of PC games coming for the next year are console ports with a few DX10/11 highlights thrown in for marketing purposes, I am really wondering what is going to happen to the high-end GPU market. That said, I bought a 5850 anyway. ;)
chizow - Thursday, October 1, 2009 - link
I'm running GTX 280 SLI right now and have found most modern games run extremely well with at least 4x TrMSAA enabled. But that's starting to change somewhat, especially once you throw in peripheral features like Ambient Occlusion, PhysX, Eyefinity, 3D Vision, 120Hz monitors, or whatever else is next on the checkbox horizon.
While some people may think these features are useless, it really only takes one killer app to make what you thought was plenty good enough completely insufficient. For me right now, it's Batman: Arkham Asylum with PhysX; parts of the game still crawl with AA + PhysX enabled.
Same for anyone looking at Eyefinity as a viable gaming option: increasing GPU load three-fold is going to quickly eat into the 5850/5870's advantage over last-gen parts, to the point where a single card isn't suitable.
And with Win7's launch and the rollout of DX11 and DirectCompute, we may finally start to see developers embrace GPU-accelerated physics, which will again raise the bar in terms of performance requirements.
There's no doubt the IHVs are looking at peripheral features to justify additional hardware costs, but I think the high-end GPU market will be safe at least through this round even without them. Maybe next round, as some of these features take hold, they'll help justify the next generation of high-end GPUs.
chrnochime - Wednesday, September 30, 2009 - link
With PC gaming seemingly heading towards MMOs like WoW/Aion/Warhammer (and later on Diablo 3), with far less emphasis on other genres (besides FPS, which is more or less the same every year), and, as you said, most new games being console ports, I really doubt we'll need anything more powerful than the 4890, let alone a 5850 or 5870, for the coming couple of years. Maybe we've entered the era where PC games will forever be just console ports + MMOs, or just MMOs, and there'll be little incentive to buy any card that costs $100+.
Just my take, of course.
C'DaleRider - Wednesday, September 30, 2009 - link
I was told by a Microcenter employee that the current pre-order retail price for the top-end GT300 card was $579, for an EVGA card, btw. And reportedly the next model down is the GT350. Dunno if this is fact or not, but he didn't have any reason to lie.
Zool - Wednesday, September 30, 2009 - link
The GT300 will need 512-bit GDDR5 to make memory faster than GT200, and it will have even more massive GPGPU bloat than last gen. So in Folding it will surely be much faster, but in graphics it will cost much more for the same performance (at least for nvidia, depending on how close they want to bring it to the Radeon 5K series). And of course they can sell the same GT300 in Tesla cards for several thousand (like they did with GT200).
The 5850's price, with disabled units, is still a win for ATI, or else they wouldn't sell the defective GPUs at all.
Genx87 - Friday, October 2, 2009 - link
GDDR5 provides double the bandwidth of GDDR3 at the same bus width, so there's no need for a 512-bit memory bus. This was covered in another story on the front page of this site.
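A quick sketch of the arithmetic (the GTX 285 and HD 5870 numbers are shipping specs; the 384-bit GDDR5 config is the rumor from this thread):

```python
# Memory bandwidth = effective data rate per pin * bus width / 8.
# GDDR5 transfers 4 bits per pin per command clock vs. GDDR3's 2,
# so at the same clock and bus width it doubles bandwidth.
def bandwidth_gbs(data_rate_gbps, bus_bits):
    return data_rate_gbps * bus_bits / 8

configs = [
    ("GTX 285: 512-bit GDDR3 @ 2.48 Gbps",      2.48, 512),
    ("HD 5870: 256-bit GDDR5 @ 4.8 Gbps",        4.8, 256),
    ("Rumored GT300: 384-bit GDDR5 @ 4.8 Gbps",  4.8, 384),
]
for label, rate, bits in configs:
    print(f"{label} -> {bandwidth_gbs(rate, bits):.0f} GB/s")
```

So a 384-bit GDDR5 bus already clears the GTX 285's 512-bit GDDR3 setup by roughly 45%, which is why a 512-bit GDDR5 bus isn't necessary.
dagamer34 - Wednesday, September 30, 2009 - link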
As great as these cards are, my system only supports low-profile cards since it's an HTPC. Bring on the Radeon HD 5650 & 5670!!!!
bigboxes - Wednesday, September 30, 2009 - link
Well, you meant YOUR HTPC case; not all HTPCs are limited to half-height cards. Although I think these blower-style cooling solutions are horrible for HTPC applications, with their horrible whine. It may be a while until aftermarket solutions are out for this new line.
NA1NSXR - Wednesday, September 30, 2009 - link
...that's what he said.
gigahertz20 - Wednesday, September 30, 2009 - link
I'm surprised it performs so close to the 5870 while running cooler and demanding so much less from your power supply. I think this is my next card come Christmas time, unless Nvidia releases some details about their next-generation GPUs, along with expected prices, before then. I've read rumors that we won't see any Nvidia cards for sale until next year... ouch, but I'm betting they release before Christmas. Missing the holiday buying season would be a really stupid move for Nvidia.
melgross - Wednesday, September 30, 2009 - link
It isn't a matter of stupid or not; it's a matter of what they can do. If they can do it, we know they will, but if they can't, well, that's a problem.
blanarahul - Wednesday, December 21, 2011 - link
These younger brothers of the fastest parts are always great. The GTX 570, 470, 260, and 8800 GTS were all superb value cards with high performance!