Barcelona is about as fast as Harpertown in AS3AP. OK.
In your article you write:
"The Scalable Hardware benchmark measures relational database systems. This benchmark is a subset of the AS3AP benchmark and tests the following: ..."
Now you choose a subset of this test in which Harpertown is much faster. Obviously AS3AP consists of several subtests, and you could just as well choose one where Barcelona is much faster. But what's the use of this? You tested all the subtests together in your AS3AP test.
It's the same as testing a game where both CPUs get the same score, then choosing a subtest (e.g. AI only) where Harpertown is faster and concluding it's faster overall.
So what did I miss here? From what I read, Barcelona is as fast in AS3AP as Harpi (and should be faster in some subtests and slower in others), while you conclude:
"Intel has made some successful changes to the quad-core Xeon that have helped it achieve as much as a 56% lead in performance over the 2.0GHz Barcelona part."
Did anyone here notice the huge metal bar across the FB-DIMM slots? It must be for extra FB-DIMM cooling. Without looking at the server first hand, you can't tell how the metal bar is attached to the memory.
My question is this: where can you buy the bar if you were to build a server-class PC yourself? And can someone describe the mounting mechanism?
quote: Update: For those that are looking for more details and wondering why certain other chips aren't included, at the time testing was conducted we did not have any of the faster 2.5GHz Barcelona chips (or the slower Harpertowns). That situation has been remedied in terms of AMD's CPUs, and we will have some update articles looking at how the faster Barcelona compares with other processors. Stay tuned...
One other piece of data is missing from the article, and it's looking like it might be important...
Kris Kubicki wrote in his blog
"The 2.0 GHz samples we saw on Monday were of AMD's B1 stepping of Barcelona. But these processors are not the ones we'll see on Newegg's shelves"
"Production Barcelona samples come with the BA revision designator"
"One AMD developer, who wished to remain anonymous for non-disclosure purposes, stated, "B1 versus BA should be at least a 5%, if not more, gain in stream, integer and FPU performance.""
"An AMD engineer, when confronted with the claim, stated that 5% gains when moving from B1 to BA processors "seem conservative.""
Given that, when you guys do the update, could you let us know which stepping it is that you're using? It appears that it may make a significant difference...
quote: Remember: 5% performance gains in synthetic benchmarks that stress specific aspects of a CPU don't mean 5% real-world gains.
I agree... but that's exactly why I am looking forward to some real-world benches on the production steppings. We still have no idea how shipping Barcelonas perform yet.
Further on that...supposedly the reason for the better performance is fixing some major errata. It's quite possible that the performance boost is across the board and not just in synthetic benches.
You run two benchmarks, you run closed software, you run software that might be optimized only for the market leader's processors, you run software that can't be optimized for the new architecture, and you don't benchmark any alpha software that uses rapid virtualization.
Maybe we have some benchmark numbers but the real performance of Barcelona is still speculation.
AMD is always the underdog. They need superior product to gain market share. That was the case of Athlon vs Netburst. If Barcelona is just competitive, it is not good enough for them to regain the crown. They will stay as underdog.
From what I understand, these new (Harpertown) Xeons will not be released until November (12th?). Yet the article makes no mention of it, and by reading it, you would assume you can buy them right now.
Intel systems are power mongers... they generate enough heat to replace a room heater. Check out any dual-socket system: they're using all kinds of cooling to cool the FB-DIMMs, which are the worst part of Intel builds.
That's really foul. Even the area between the Tick and Tock looks like the urethra. It's so wrong. Is that really the only way they could have presented the information? I mean, if they wanted to get pornographic, couldn't they have used a woman's breasts? Right one for Tick, left one for Tock? It's much more attractive than this.
Marketing geniuses. Intel at its best. A better product, with a bigger...
In all due seriousness, it's no surprise AMD can't compete with an architecture that's been out for over a year. AMD needs more tweaks and more clock speed. I just hope they don't disappoint again like they did with the K8: 4-5 years of stagnation.
I think it comes down to Intel being wiser than AMD. They were always smarter, as evidenced by their much more advanced processors like the P7 and Itanium. But AMD was wiser, and chose an easier path that also performed better. Intel had all the great technology, super-advanced trail-blazing stuff that just didn't work that well. AMD made the same mistake by going native quad-core before they were ready. Consequently, they have a poorly performing part compared to what Intel has today and promises for tomorrow. Obviously, the extent of their failure isn't as deep-rooted as the Pentium 4 was, and at least Barcelona can be improved (mainly by clock speed) more quickly, but the big problem is that Barcelona is getting crushed by Intel processors even though they use FB-DIMMs. You add clock speed to Barcelona, and the power goes up (everything else being equal). You swap the FB-DIMMs out, and you get better performance and lower power. So the future doesn't look that bright for AMD, despite the fact that they should gain clock speed pretty quickly. It's unlikely to help their power/performance much, while Intel moving to more appropriate memory will help a great deal. Also, if AMD does manage to get close to Intel in performance, Intel will just release a higher-performing part. They can hit much higher than 3.2GHz with their G0 stepping, so it's really a matter of whether it makes marketing sense.
But, it sure sounds good to have native quad-core, and they sure were smart to do it. Right? Just like Intel was to come out with trace-cache, double-pumped ALUs, and super-pipelining and unheard of clock speeds.
But all that aside, if they can get the clock speeds up to a reasonable level, increase the size of the pathetic caches (yes, I know the IMC limits how much cache they can fit, but still, 512K????), and in a release or two get full memory disambiguation, they will have a really good product. It will at least be competitive.
Any reason why the AMD system had 16 GB of RAM (8x2GB) while the Intel system had only 8GB (4x2GB)?
Also, any reason for the big differences in cooling (AMD system had 7 fans, Intel system had 3)? If the Barcelona system actually uses <i>less power</i>, as your numbers show, surely it can't dissipate <i>more</i> heat.
When you're measuring the power consumption of the whole system (and extrapolating that to the power efficiency of each CPU), you should try to make the configurations match as closely as possible, no? Not to mention that the amount of RAM can have an influence on the actual system performance.
I could understand different configurations if you were testing systems at a specific price point (and couldn't "afford" more RAM for the Intel system due to the more expensive CPUs, for example), but that wasn't the case here.
I would really like to see updated benchmark scores as well! It only seems fair to add more RAM to the Xeon; it might improve the benchmark scores and would also increase energy usage (which would be beneficial to the Barcelona).
Add the unusual choice of benchmark and the fact that Harpertown isn't actually due to be launched until November, and I think this is one (more) article we can file under the "iNandtel" section.
Speaking of that, anyone know what happened to GamePC's "Labs" section? Along with the Tech Report they were probably one of the last sites with a steady output of meaningful, objective reviews of PC hardware.
There was a typo/error in the original config. We apologize for the confusion - I should have verified with Jason/Ross earlier. The Opteron setup was running 8x1GB, not 8x2GB. Sorry to pop all the conspiracy theories (again), but the systems are a lot more similar than you would apparently like to believe.
Note also the update at the end: 2.5GHz Barcelona is on its way and will be tested shortly. We'll see how that compares with the higher clocked Harpertown.
With the last Quad Core Comes to Play article, and now this, I've completely lost faith in Anandtech's benchmarks.
These guys are too clever for them to make a mistake like that, and if they did I'm sure they would see the mistake and rebenchmark.
No, I think these benchmarks were just paid for by Intel, in anticipation of its November launch to steal AMD's thunder. I'm not accusing the entire site of constant bias towards Intel, but rather a bias towards advertising. AMD has probably done the same thing in the past, and I'm sure Anandtech has been happy to oblige.
Yes, Intel pays for an Intel Resource Center page - it includes all of our Intel-related articles and some other information. It's pretty clear that the page is sponsored by Intel. I have no idea how much they pay, however. Don't like that area? Then don't click on it - it still isn't Intel influenced articles as far as the AnandTech articles are concerned.
As for the RAM config, you seem to want us to intentionally handicap Intel just for your own benefit. Eight Registered ECC DIMMs came in the AMD config, and they are single sided DIMMs - meaning, a 2GB double sided DIMM would only consume marginally less power. The Intel setup came with 2GB DIMMs... obviously Intel knows that you pay a power penalty for every FB-DIMM, and you also pay a latency penalty. Ideally, we would have 4x2GB DIMMs on the AMD setup, because any business serious about the platforms is likely running 2GB DIMMs these days.
Taking it to the extreme, obviously running 16 2GB FB-DIMMs uses a lot more power than 16 2GB DDR2-667 DIMMs. I'm not sure how many businesses actually use that approach, though - not many in my experience. As always, we are testing specific facets of performance (and power and whatever else you care to name). Is there more to it? Of course. Is Intel always better or AMD always better? Of course not.
If I were in charge of a server purchase for a large company right now, I'd be looking at my specific needs to determine the best overall platform. For most companies, that's relatively low loads so the Opteron is perfectly acceptable. That means it's going to come down to features like manageability and support rather than performance. Very likely, I'd be looking at slightly more mature hardware anyway - bleeding edge and servers aren't generally a good mix.
Yes, Intel pays for an Intel Resource Center page - it includes all of our Intel-related articles and some other information.
All of them? Interesting. I can only seem to find articles about Core and Core 2, nothing about, oh, say, Prescott or Paxville. In fact, I can't find any article in your "Resource Center" that is even remotely unfavourable to Intel. Must be a temporary glitch. Or maybe those articles aren't deemed "resourceful" enough.
Also, it's interesting to see an anandtech.com address in my browser's location bar, the Anandtech banner at the top of an Anandtech page, and "This site is presented by Intel" below it. Well, at least "we have been warned".
I guess I'm just used to seeing manufacturer propaganda on the manufacturers' website or inside clearly identified ad boxes, not integrated into supposedly impartial hardware review sites. I know, that's so 20th century of me.
it still isn't Intel influenced articles as far as the AnandTech articles are concerned.
Of course not. And I'm sure that when large corporations make donations to political candidates, they're not expecting to influence their future decisions the least bit. They only do it so they can get a bit of exposure by appearing in the list of contributors. I'm sure Intel would be just as likely to sponsor your "resource center" if your articles pointed out the weaknesses in their products (such as, oh, I don't know, actually using all the FB-DIMM slots on their servers, instead of leaving 75% of them empty).
There wouldn't be anything wrong with an ad banner linking to Intel's (or any other) site. But when a hardware review site allows its own server (and logo, and page template) to be used for advertising, well... says a lot.
P.S. - Interesting how any posts that point out the differences in the test systems (and their consequences) get instantly voted down. Must be just some "regular users" who consider that objective facts are "off-topic" here, eh?
I'm not sure which of your nonsensical posts to respond to, so I'll just pick this one. A few points to think about if you're actually capable of such an act.
1) How much do you pay AnandTech for their articles? Yes, that's right, they're ad supported! Guess that means everything they publish is lies and paid for, eh? Or else, it's just a source of revenue, like it's always been. Looking at the "Intel Resource Center" I see pretty much every article that mentions Intel (CPU, chipset, tradeshow, motherboard, whatever) going back about a year. Why are there no NetBurst articles? Probably because NetBurst hasn't been worth discussing for over a year, and Anandtech hasn't reviewed any in that time. Did Intel do this on purpose? Maybe - but wouldn't you if you were in their position? Now, I don't see any omission of articles in the past year where Intel got less-than-glowing commentary, and I see some links on the right to some Intel site stuff. It's not a big deal... and if you don't like it DON'T CLICK THE LINK! Moron....
2) Memory configs. Ever tested any memory? Apparently not, because you clearly don't know Jack or Squat about the topic. Let's see: try doing a power draw test with 4x512MB DIMMs and 2x1GB DIMMs. Or 4x1GB vs. 2x2GB. Tell me how much of a power difference there is, because I've looked at it and I see less than a 2W difference. So in terms of power draw the AMD system is penalized a few watts at most. Or if you prefer: get off your stupid soapbox and get the hell out if you can't contribute anything useful! Don't worry, we won't miss you.
3) Flaws with the article's methodology. For one, the only thing I'm really sure this testing shows is how these two servers perform in an AS3AP test. It doesn't tell me how it will work in the servers my datacenter uses. You know what? Short of getting the hardware and testing it I doubt anything will show that information. Different apps, drives, networks, and who knows what else yield different results. So this is just a rough estimate, and anyone that takes it as more than that is already a fool.
The RAM configurations are also somewhat questionable, depending on the use. Some places will only use 4x1GB RAM; others will load all the slots with 2GB modules. (That should SERIOUSLY hurt the FB-DIMM setup!) I wonder why they didn't test that way? Oh that's right: they probably don't have 16 2GB FB-DIMMs available and Intel didn't want to help them out. That's only $2000-$2500 depending on the brand (or $6000+ if you get it straight from Dell! But at least then you know the RAM works properly and Dell supports the setup.) Why don't you send them the memory they need? While you're at it, can you get them some realistic benchmarks that will stress that much RAM? Yeah, didn't think so.
There are flaws in the article, true. There are flaws in every article out there. You don't honestly think the latest reviews showing performance in one specific area of a few games is the same as testing every game, right? Or that SLI and Crossfire work properly in new titles most of the time? Or that quad core on the desktop matters at all when it comes to gaming... or anything outside of video encoding and 3D rendering and a few other specific tests?
Now, a bunch of your posts just got one point higher because I commented instead of downrating. Why rate them down? How about because you're being an arrogant prick and a fanboy, complaining about stuff that is largely out of the hands of the reviewers? I'm done. Feel free to miss the point entirely and complain some more.
1.1) I (and all visitors to this website) pay Anandtech through the ads they have on their site. The various companies that advertise here only do so because of us. And the reason why people visit this site is to read (what they believe to be) honest and bias-free articles and hardware reviews, not editorial advertising. If Anandtech thinks it can survive on Intel's sponsorship alone, that's fine. But eventually even Intel will stop sponsoring a site that has no credibility (great as Johan's articles are, I don't think he can churn them out fast enough to make people come here on a daily basis).
1.2) I wasn't the one who said "The Intel Resource Center includes all our Intel-related articles". That statement was made by a member of Anandtech's staff.
1.3) You might as well say that if some site decided to write "AMD is great!" or "Sony is the best!" at the end of every paragraph, people should just "ignore it" and trust that the rest of the site was not biased in any way. There is a difference between clearly labelled banner ads and accepting money to turn your entire site into a big advert for company X or Z. "Moron".
2) Don't try to spin it. The point is that the Intel system was tested with half as many memory sticks as the AMD one, and (more importantly) with 75% of its memory banks empty. I find it quite telling that, when asked about this, the Anandtech employee posting above wrote that was because "Intel knows that you pay a power penalty for every FB-DIMM". So, because "Intel knows" that, Anandtech's system comparison is tweaked to make it less obvious to the readers...? o_O
3.1) Yes, I'm sure poor Anandtech didn't have the resources to buy or borrow another four FB-DIMMs for this review (so they could use at least half the slots in the board). It's not as if they have contact with any manufacturers or retailers that would be happy to send them the RAM in exchange for a mention and a link in the article, eh...? They did manage to get a 12-drive 15k SAS array, though. I bet they give those away on street corners.
3.2) Learn some maths... "moron". 1GB FB-DIMMs cost $70 each. In other words, it would have cost them $840 to fill the remaining 12 slots in the board, $560 to match the configuration in the Opteron system (8x1GB), or $280 to test power consumption with half the board's slots loaded. This is assuming they didn't have any more FB-DIMMs available and couldn't get or borrow them for free. All of these values are a far cry from your suggested "$6000" or even "$2500".
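To spell that arithmetic out (the $70 per 1GB module is a ballpark street price at the time, an assumption rather than a quote from any vendor):

```python
# FB-DIMM cost arithmetic, assuming $70 per 1GB module (a ballpark street
# price, not a quoted figure from any vendor).
PRICE_PER_DIMM = 70

fill_all_empty = PRICE_PER_DIMM * 12  # fill the 12 empty slots (16 total, 4 used)
match_opteron = PRICE_PER_DIMM * 8    # buy 8x1GB to mirror the Opteron's config
half_loaded = PRICE_PER_DIMM * 4      # add 4 sticks so half the 16 slots are used

print(fill_all_empty, match_opteron, half_loaded)  # 840 560 280
```

Even the worst case is under a thousand dollars, nowhere near the thousands you're claiming.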
Don't read too much into my comments, Justin - I don't live anywhere near Jason/Ross. Or Anand, Derek, Wes, Gary, or Johan for that matter. In fact, other than editing I have pretty much nothing to do with the other articles. They are merely my opinion, and I'm speaking realistically: why would Intel send a system specifically equipped to handicap it? They won't. Should Jason and Ross go out of their way to do so? To what purpose? The two are relatively comparable, inasmuch as we can get similar hardware. And don't underestimate the difficulty of getting new hardware - especially very expensive server hardware. Beyond that, readers need to read between the lines a little - take their own needs into account. If an IT guy is looking at getting a new server and installing 32GB (or 64GB) of RAM, I hope they have more sense than to look at 8GB server configurations and assume everything will be the same, only with "more RAM".
I still don't get your whining about the sponsored Intel section - it's just a "site view" ad for Intel as far as I can see, with content from AnandTech and Intel that may be of interest. If you're looking for a quick collection of Intel information, I'd assume that's useful. It's on the home page on a smallish image, and at the end of any Intel specific (i.e. a new Intel CPU) articles. Heck, I wish AMD had a sponsored view as well. :) At any rate, we have huge ads from Gigabyte, Kingston, OCZ, Crucial, and many others splashed around. What makes the Intel "sponsored" version of the site different (which clearly states "This site sponsored by Intel")? Personally, I don't click the ads and I don't click on the little Intel Resource link either (except to see what it was). I'd assume 99.99% of you are the same. Hey - we're also "sponsored" by Verizon and T-Mobile I guess. What's that mean for our iPhone articles?
We have a separation of editorial and advertising staff for a reason - other than looking at the ads, I have no idea who is supporting us. I don't even know how much an ad costs on the site. I do know that at the end of the month, I get a pay check, and for that I'm grateful. There's also a fine line between tact and flaming that needs to be walked, particularly when writing an article. We've harped on FB-DIMM in the past, we ripped on Intel in the NetBurst era, and we've had ups and downs with pretty much every manufacturer out there.
The fact is, we appreciate good technology and products, and right now Intel has the upper hand in most areas. Barcelona isn't bad, but it's not K8 vs. NetBurst by any stretch. Still, if Intel got rid of FB-DIMMs - or at least made them optional for now - and got an integrated memory controller into their systems yesterday, you wouldn't see me complaining. I guess we need to wait for Nehalem in those areas.
I understand that AMD and Intel probably submitted demo systems, and you were either contractually unable to modify their configuration or felt it wouldn't be good science.
But then you should have said: "Take these power consumption and performance-per-watt metrics with a grain of salt, because in real life no one would handicap a server by putting in 8x1GB sticks rather than 4x2GB."
You've said yourself that a) putting in more memory sticks, irrespective of size, increases power consumption, and b) anyone needing such an AMD server wouldn't bother with 8x1GB.
So I don't know why you are baffled when we cry foul - here we have a badly configured AMD server vs. a well-configured Intel server. Which do you think is going to draw more power? It's like saying, "Our AMD server came with two HD2900 XT cards in CrossFire, so in our SQL Server tests its performance-per-watt metric is extremely bad." Duh.
Tell your readers you don't think it's a fair comparison, then. I don't care if AMD screwed up the config, or if you couldn't be bothered about correctness. You've said yourself it would be a handicap for the Intel server to have the same memory configuration as the AMD server. Why not make sure the readers know that?
Again this has nothing to do with fanboyism. It has everything to do with poor benchmarking, and as I see it, benchmarketing.
quote: They are merely my opinion, and I'm speaking realistically: why would Intel send a system specifically equipped to handicap it? They won't. Should Jason and Ross go out of our way to do so? To what purpose?
All we're saying is that the configurations of the AMD and Intel systems should be as close as possible for the test results to have meaning. You obviously don't agree.
You still missed the point: the configs are close. They're close in every area except the motherboard and CPU, which you're not going to be able to change.
4x2GB DIMMs (on AMD) would use very nearly the same amount of power as 8x1GB DIMMs. It's not an issue there. On the other hand, 8x1GB FB-DIMMs gets a nice 5-7W penalty per FB-DIMM, because of the AMB. With regular DDR2, the power draw is determined by the number of memory banks on the PCB; an FB-DIMM is that, plus another ~5W for the AMB. If they ran the AMD system with 4x2GB, I expect it would draw within 4W of the same power.
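A back-of-the-envelope model of those numbers (the per-module and per-AMB wattages below are my own rough assumptions, just to illustrate why the tested configs come out close):

```python
# Rough DIMM power model. Both wattages are assumed ballpark figures,
# not measured values.
DDR2_MODULE_W = 5.0  # approximate draw of one registered DDR2 DIMM
AMB_EXTRA_W = 5.0    # extra draw of the Advanced Memory Buffer on an FB-DIMM

def ddr2_power(n_dimms: int) -> float:
    return n_dimms * DDR2_MODULE_W

def fbdimm_power(n_dimms: int) -> float:
    return n_dimms * (DDR2_MODULE_W + AMB_EXTRA_W)

print(ddr2_power(8))    # 40.0 W: 8x1GB DDR2, as the Opteron box was tested
print(fbdimm_power(4))  # 40.0 W: 4x2GB FB-DIMM, as the Xeon box was tested
print(fbdimm_power(8))  # 80.0 W: what filling 8 FB-DIMM slots would cost
```

With these assumptions the two tested configs draw roughly the same memory power, while every additional FB-DIMM adds the full module-plus-AMB penalty.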
FB-DIMMs are bad for power consumption. How do power requirements change with 32GB loads? I can guess that AMD will have an advantage, but without actual testing I'm not going to publish a guess. At the same time, I don't think everyone out there runs 16 DIMMs in a server. I think most companies buying a new server would buy 2GB DIMMs (or FB-DIMMs), but if a company knows that they really only need 8GB of RAM they're not going to install twice that much let alone four times as much memory.
What would be ideal? I'd like to see scaling numbers showing performance and power with 2x2GB up through 16x2GB on both systems. I'd also like to see more benchmarks, and benchmarks that can leverage the availability of more RAM. I'd like to see a truly repeatable benchmark showing how the servers behave in virtualized environments (which is where these high performance quad-core CPUs are truly important). There are always more tests people would like to see, but realistically no one can provide tests of everything that might be done.
And you know what? I haven't a clue as to how to do even the tests Jason and Ross are running, let alone something like virtualized environment testing, because I'm not involved with anything like that. :)
The point is not whether Barcelona is bad or not (I think it's a huge disappointment in terms of performance, mainly due to the low clock speed, and I'm not convinced by the L3), and I expect the Xeons to beat the crap out of it in terms of peak performance. In fact, since I work mainly in effects and image processing, the Xeon is by far the best choice for me (for the workstations, at least; render nodes are a different matter, and cost becomes more relevant).
The point is that the Xeons do have a very big weakness for power-critical server environments: the consumption of FB-DIMMs. And leaving 75% of the Xeon's memory slots empty is just not something that people will do, when running a server under high loads.
If a system can take up to 16 DIMMs, and if you cannot or do not want to test multiple configurations (e.g. 4, 8, or 16), which would have been useful in an article about performance-per-watt, then the logical choice is to fill half the memory banks. You (or whoever supplied it - we still don't know who that was, AMD?) did that with the Opteron system. It doesn't cover every possible case, but it covers the "average" (and I daresay most common) configuration.
I can certainly understand Intel asking you to test it with less FB-DIMMs. I cannot understand you complying with their request, when your main obligation is (or should be) to your readers.
I wonder if you would also accept not running any 3D rendering benchmarks on Opteron systems if AMD asked you to, because they know they're at a disadvantage there?
And if you don't see the problem with having a "site view" that turns your entire website into a giant ad for one manufacturer, I guess I can't explain it to you. It's not labelled as an ad, it looks just like another Anandtech "section", and there are even links to it at the end of some of your articles (again, not labelled as advertising). At the very least change the banner at the top to "Intel", not "Anandtech". Your wish for "more sponsored site views" is worrying, to say the least, for the state of web IT journalism, and Anandtech in particular. But hey, I guess "sponsored world views" work well for Fox News, etc. Why get an objective picture of things when you can just dive straight into the channel that confirms and reinforces your preconceptions, wrapped in an aura of "journalism"?
Maybe the people who wrote this article are just very naive or very distracted, and somehow overlooked the excessive number of fans in the (cooler) Opteron system and the reduced number of (power-hungry) FB-DIMMs in the Xeon system. That's within the realm of possibility. Or maybe they noticed that but thought it wouldn't affect the results of a performance-per-watt comparison (which is stretching it a bit). But when you couple that with other recent articles and the "ad disguised as information" that is the "resource center", your credibility is at stake. Or, as far as I'm concerned, and for some of your writers, it's not even "at stake" anymore.
PS. Where exactly are your Verizon or T-Mobile or Gigabyte "site view" ads...? Last time I checked, when I click on those (clearly identifiable) ads I'm taken to the manufacturers' websites, I don't get a "morphed" version of Anandtech.
PPS. Hans Maulwurf's question below is also an interesting one.
AS3AP is a complete benchmark suite, right? Just like SYSmark2007 in a sense (although that's sort of stretching it). There are different factors that go into the composite score. Without knowing more about the benchmark, I can't tell you how the overall scores relate to the Scalable scores. I would think there's a possibility that the overall score is heavily I/O limited (RAID arrays and such), in which case the scores of Intel and AMD in those tests might be a tie. In that case, cutting out those results to show the actual CPU/RAM scaling is a reasonable decision. Intel is 27% faster in the overall result, and up to 55% faster (something like that) in the scalable tests. If, as an example, there are three scalable tests and three tests that are I/O bound, the overall advantage should be about half the scalable advantage. Again, however, I don't know enough about the benchmark to answer - feel free to email Jason/Ross.
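The dilution argument in numbers (the three/three subtest split and the simple averaging are hypothetical, as noted above - I don't know how AS3AP actually weights its subtests):

```python
# Hypothetical composite-score dilution: if half the subtests are I/O-bound
# and tie on identical disk arrays, a CPU-bound lead shrinks in the composite.
# Assumes subtests are weighted equally via a simple average.
cpu_bound_gain = 0.55  # Intel's lead in the scalable (CPU/RAM-bound) subtests
io_bound_gain = 0.0    # I/O-bound subtests assumed to tie
n_cpu, n_io = 3, 3     # assumed split of subtests

overall_gain = (n_cpu * cpu_bound_gain + n_io * io_bound_gain) / (n_cpu + n_io)
print(round(overall_gain, 3))  # 0.275, i.e. ~27.5%, close to the observed 27%
```

So the two published figures are at least mutually consistent with the "I/O-bound subtests tie" explanation.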
--------
Anyone that looks at the Intel Resource Center and doesn't recognize that it's basically a different form of advertising is beyond my help. They - and you - have probably already decided that we're bought out, and I doubt there's anything I can say that will change their mind. The fact is that we ripped on Intel for three years with NetBurst and praised AMD. Now the tables have turned, AMD is having problems and we're pointing them out, and Intel has some great CPUs and we're praising them for their performance; suddenly we're "bought out". (Funny thing: I seem to recall a lot of AMD adverts back when they were on top and not as many from Intel; now the situation has reversed.)
The articles in the Intel section are not written for the intent of marketing, though they can be used that way. They are honest opinions on the state of hardware at the time of the articles - anyone actually saying that Intel isn't the faster CPU on the desktop these days would have to be smoking something pretty potent. Are some of the opinions wrong? Probably to varying degrees, and all opinions are biased in some fashion - I'm biased towards price/performance and overclocking, for example, so I think stuff like Core 2 Extreme and Athlon FX is just silly.
When AMD was on top, we did far fewer Intel mobo reviews - even though the market was still 80%+ Intel boards (though not the enthusiast market). Now Intel is on top, and personally I couldn't care less about what the best AM2 board is, because I don't intend to buy one until AMD can become competitive again. (But then, I overclock most of my systems.)
It sucks for AMD that they are the smaller company *and* they have a lower performing part. It sucks for consumers that if there's no competition, R&D tends to stagnate. I'd love to see a competitive AMD again, and the Barcelona 2.5GHz chips might even make it to that stage. I'm more interested in Phenom X4 vs. Core 2 Quad, though, both running with DDR2 RAM and doing the type of work I'm likely to do. More likely than not, however, my next upgrade will be to Core 2 Quad Q6600 and X38, with some overclocking thrown in to get me up to around 3.3-3.5 GHz.
----------
For what it's worth, I'm writing this from my Opteron 165 setup, which is still my primary computer. The X1900 XT is getting a bit sluggish, so I have to go elsewhere for gaming at times (Core 2 E4300 @ 3.3GHz and an 8800 GTX), but for all non-gaming, non-encoding tasks this system is still excellent. It's also a lot cooler/quieter than some of the high-end setups I have access to, though with winter coming on I might want to bring a quad-core over to my desk for use as a space heater. Oh yeah - and I'm still running XP, which is one more reason to save gaming for another setup... I can try out DX10 without actually having to fubar my work computer. :)
Again, you're trying to turn this into Intel vs. AMD and talk about all sorts of unrelated things while avoiding the issue. The issue isn't who makes the fastest CPUs. The issue is Anandtech's testing methodology and system setup options. If Anandtech had chosen to put 8 FB-DIMMs on the Xeon system and just 4 DIMMs on the Opteron system, and stick 16 fans into the Xeon, we would complain in exactly the same way. The issue isn't who wins. The issue is whether we can trust your methods, results and conclusions.
From your posts here, it seems that Intel supplied you the Xeon system, and decided to install just 4 FB-DIMMs, is that correct?
Who supplied the AMD system, and who decided what its configuration should be? AMD? Intel? Neither?
PS. - I recognize the "Intel resource center" exactly for what it is. I'm sure Microsoft would "sponsor" an Anandtech "site view" in a blink if you wrote a couple of Vista vs. OSX articles based on creatively prepared systems and benchmarks. But of course, then you have to keep being nice and creative, or they might decide not to sponsor you any more (bummer). After all, a banner ad is a banner ad; the manufacturer controls what's in it. But if you want them to pay you to use your articles for marketing (as you said above), obviously you know what the conclusions of those articles must be. I wasn't born yesterday and I'm sure you weren't, either.
I'm bringing up all the other issues because you brought them up. This isn't an answer to a single post.
If we were to do what you suggest with the articles, we would lose readership in a large way. I can control what I write, but not so much for other articles. No article is perfect, and since I didn't write this article and I didn't perform the testing, I can't say for sure how flawed the comparisons are. Yes, we used faster Intel CPUs than AMD CPUs, but that's because Intel sent Harpertown and AMD sent Barcelona and with both being new, we basically used what we got. There is an update with 2.5GHz Barcelona coming.
For the fans, you're an IT guy so obviously you know what sort of fans go into a server. The answer is: the fans that the server supplier uses. And no, I don't know who specifically makes these servers... I think it was mentioned in a previous article, perhaps? I also don't know what the amperage is on the Intel fans or the AMD fans. Eight lower RPM fans can actually use less power than four higher RPM fans... or they can use more. Ask Jason/Ross for details if you want, but the fact is that's how these servers are configured. I worked in a datacenter for a while, and let me tell you, the thought of removing/disabling fans in any system never crossed my mind. So just like we're stuck with different motherboards, we're stuck with different fan configs based on what the server manufacturer chooses. Considering that the Intel setup *does* use more power in many situations (particularly with more FB-DIMMs), I don't believe that the Intel fans are low RPM models. The real problem may be the internal layout and design decisions of the AMD server - the Intel system seems to have been better engineered in regards to ducting and heat sinks.
Who sent the systems? I don't know. It seems the Intel setup changed from previous articles while the AMD remained the same. Is that because Intel said, "we don't like the configuration you used - here's a better alternative"? I don't know that either. I'm guessing Intel worked with a third party to configure a server that they feel shows them in the best light. AMD probably did the same with the original setup, and AMD is welcome to change chassis/server as well. If they don't it's either because they don't care enough or because it wouldn't make enough of a difference. I'm inclined to think it's the latter: that these tests are still only a look at a small subset of performance, and what they show is enough useful information for people in the know to make decisions.
What *do* these tests show? To me, they show that at lower loads Intel is now a lot closer to AMD thanks to Harpertown, and that AMD per-socket performance has increased thanks to Barcelona. However, at higher loads Intel offers clearly superior performance - even if you stick with Clovertown. Performance/watt is influenced by a lot of things, so I personally take those results with a grain of salt. If a business is really concerned with performance per watt and power density, they'd likely be looking at blade servers instead. The results in this article may or may not apply to blade configurations, so I'm not willing to make that jump.
And as previously discussed, the amount of RAM a company intends to install is a consideration. If a company is going to load all DIMM slots, the Intel servers look like their power requirements will jump close to 50W relative to Barcelona... which means that Harpertown would be more like Clovertown on the graphs. I'd imagine 2.5GHz Barcelona will also require more power, but until I see results I don't know for sure. Companies looking at loading all RAM sockets are probably very concerned with overall performance as well, in which case Intel seems to have the lead... except that in a virtualized server environment, the results here may not show up.
So many factors need to be considered, that I'd be very concerned with anyone looking at this one article and then trying to come to a solid conclusion for all their IT needs. This is just one look at a couple configurations. Then again, a lot of IT departments just go with whatever they've used in the past (Dell, IBM, HP...) and take the advice of the server provider.
Again, you're replying to "complaints" that no one made.
No one is "complaining" about the fact that the Xeon system is faster than the Opteron. I (and anyone else with a clue) would be extremely surprised if it wasn't. If we were going to "complain" about that to anyone, it would be to AMD.
You say that "companies looking at loading all RAM sockets are probably very concerned with overall performance as well, in which case Intel seems to have the lead". You're 100% right about the "seems". That is indeed the idea one gets from this article (where the Xeons seem to be the best choice both in terms of peak performance and in power efficiency). However, as soon as you load more than 1/4th of the memory banks on the Xeon, the tables are turned in power consumption. And when you load all of them, the difference is huge (10 watts per FB-DIMM adds up to 120 watts above your numbers). So, anyone who needs a lot of RAM but also needs to keep power consumption within certain limits (which is the norm rather than the exception in dense server environments) cannot really go with the Xeons. And companies that need all the CPU power they can get (e.g., 3D render farms) will naturally go with the Xeons and either reduce the amount of RAM or find some way of dealing with the extra power consumption and heat.
In other words, the Xeons only "seem" like the best choice for people planning to fill all RAM banks because your performance per watt calculations were made with 75% of those banks empty. That is what we (or I, at least) are complaining about.
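The arithmetic behind this complaint is easy to check. A rough sketch (the ~10 watts per FB-DIMM figure is the estimate used in this thread, not a measured value):

```python
# Rough sketch of the FB-DIMM power penalty discussed above.
# Assumes ~10 W per FB-DIMM (the thread's estimate) and the 4-DIMM
# configuration that was actually benchmarked.
WATTS_PER_FBDIMM = 10
TESTED_DIMMS = 4

def extra_power(installed_dimms):
    """Additional draw versus the tested 4-FB-DIMM configuration."""
    return (installed_dimms - TESTED_DIMMS) * WATTS_PER_FBDIMM

print(extra_power(8))   # half of the 16 slots filled -> 40 W extra
print(extra_power(16))  # all slots filled -> 120 W extra
```

With all 16 slots filled, that 120 W would land on top of the published power numbers.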
Testing a 2 GHz Barcelona is fine. Hell, even testing a 1.6 GHz would be fine, if that was all you had. As long as you're honest about it. But tweaking one system's configuration (using just 25% of the memory slots) to hide its main weakness for server use (which is precisely what this article is supposed to be about) shows either incompetence or bias.
I thought that by posting here I already was "asking Jason / Ross about it". For some reason they don't seem to want to answer (publicly, at least).
Don't you think that the source of the servers (and the people or companies responsible for deciding their configuration, and how much they knew about the benchmark that was going to be used) should be mentioned in the article?
You talk about things that "maybe" and "probably" AMD did, but apparently you don't even know who supplied the AMD system (it seems to be in a desktop tower case), or how much memory the Intel system actually came with (wouldn't it be interesting if Intel had in fact supplied it with 16 FB-DIMMs?). I can also make conjectures about what may have happened and who may have configured the systems. But if you (as a member of Anandtech's staff) are also limited to guessing, maybe you're not the right person to reply.
Sorry for posting here, but I think you didn't notice my post at the bottom of this page. No problem, I know this article is rather old now ;) An answer here would be nice. Thanks.
"Barcelona is about as fast as Harpertown in AS3AP. OK.
In your article you write:
"The Scalable Hardware benchmark measures relational database systems. This benchmark is a subset of the AS3AP benchmark and tests the following: ..."
Now you choose a subset of this test in which Harpertown is much faster. Obviously AS3AP consists of several subtests, and you could just as well choose one where Barcelona is much faster. But what's the use of this? You tested all subtests together with your AS3AP test.
It's the same as testing a game where both CPUs get the same score, then choosing a subtest (e.g. AI only) where Harpertown is faster and concluding it's faster overall.
So what did I miss here? From what I read, Barcelona is as fast in AS3AP as Harpi (and should be faster in some subtests and slower in others), while you conclude:
"Intel has made some successful changes to the quad-core Xeon that have helped it achieve as much as a 56% lead in performance over the 2.0GHz Barcelona part."
I think these tests are full of holes, and it's a pity.
I was genuinely curious to see how both new chips performed.
Instead, we get this sponsored advertising.
In the full AS3AP benchmark, the AMD punches above its weight, while in the subset, it doesn't.
Now, if the results in the scalable CPU benchmarks were a subset of the AS3AP benchmarks (which we are told they are), and a 3 GHz Harpertown was able to lead a 2 GHz Barcelona by 27%, then we can say that the average difference between Harpertown and Barcelona is 27%.
Now, if the chosen subset in question was 59%, that would mean the remaining subtests dragged the average down quite a bit. So what are the rest of the numbers?
There is no point in singling out one particular subset if you test the whole thing.
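To illustrate the averaging argument with made-up numbers (the subtest count and equal weighting are hypothetical; only the 27% and 59% figures come from this discussion):

```python
# If the overall lead is an equal-weighted average of subtest leads,
# a 59% lead in the chosen subset forces the remaining subtests to
# average well below the 27% overall figure.
def remaining_average(overall, subset, n_total, n_subset):
    """Average lead (%) across the subtests NOT in the chosen subset."""
    return (overall * n_total - subset * n_subset) / (n_total - n_subset)

# Hypothetical: 10 equal-weight subtests, 2 of them in the 59% subset.
print(remaining_average(27, 59, 10, 2))  # -> 19.0% in the other eight
```

The exact numbers depend on the real subtest weighting, but the direction of the argument doesn't.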
Taking it to the extreme, obviously running 16 2GB FB-DIMMs uses a lot more power than 16 2GB DDR2-667 DIMMs. I'm not sure how many businesses actually use that approach, though.
Well, the general idea when you buy a motherboard with 16 RAM slots is that you're going to fill it with 16 DIMMs, not leave 75% of the slots empty. Especially in the case of high-load servers, maxing out the RAM is pretty much the norm. But if you decide that's "too much", at least fill half the memory slots with RAM. That means 8 FB-DIMMs and 8 DIMMs, respectively (both boards have 16 slots).
As mentioned by others above, it's hard to believe that people working for one of the biggest hardware review sites in the world would overlook this kind of thing involuntarily. I'd risk saying that Anand or Johan never would.
The purpose of a product review is to make an objective comparison between products, and not to avoid or minimize anything that would expose the weaknesses of one of them.
Reducing the number of FB-DIMMs in the Intel system when testing for power consumption is comparable to limiting the total load on the AMD server when testing for peak performance.
Although the review gives the idea that both systems were configured and assembled by Anandtech, your post above suggests that the Xeon system was in fact configured by Intel ("The Intel setup came with 2GB DIMMs... obviously Intel knows that you pay a power penalty for every FB-DIMM"). Yes, Intel knows that... so they're allowed to leave 75% of the RAM slots in your test system empty, to minimize that penalty? And who, exactly, configured the Opteron system?
quote: you seem to want us to intentionally handicap Intel just for your own benefit.
Handicap? By actually using (at least) 50% of the available RAM slots? And... "our own benefit"? Hell, yeah. It's to the consumer's benefit to have objective reviews that point out both the strengths and weaknesses of all products. Or is that such an alien concept? Or maybe you think your readers take advertising money from AMD...? Last time I checked, there was no paid "AMD resource center" in my back yard.
8 sticks of RAM (any size) = lots of power
4 sticks of RAM (any size) = less power
7 fans = lots of power
3 fans = less power
You were testing the CPU and the platform in general, not the case or RAM. So using differing amounts of RAM and fans means your power consumption results are meaningless.
quote: Ideally, we would have 4x2GB DIMMs on the AMD setup, because any business serious about the platforms is likely running 2GB DIMMs these days.
Okay, read what you said again slowly. Why didn't you have 4x2GB in the AMD setup? You say yourself any business in need of such a platform won't bother with 8x1GB.
quote: As for the RAM config, you seem to want us to intentionally handicap Intel just for your own benefit.
The only thing that I would benefit from is an unbiased test. If you say that switching the Intel to 1GB sticks would unfairly penalize it, doesn't the same hold true of AMD? How can what is unfair to Intel not be unfair to AMD? I'll tell you: a little thing called marketing.
Read the first post about the RAM; it's not the total amount, it's the configuration. Give each platform 4 sticks of 2GB each, then we will see.
I don't think it's that difficult to understand: you guys either made an error, which makes your results meaningless, or were paid, which still makes them meaningless.
You bring up very valid points! And thanks to the originator of this discussion!
But let me spice things a little.
I think you and Anandtech are wrong!
Correct testing would be loading ALL THE MEMORY BANKS WITH RAM!!!
That would be a more realistic scenario.
I see Intel praising the technology edge of FB-DIMMs as allowing more RAM on the system, so let's load the Intel system with the maximum RAM it can handle.
Otherwise it seems like a biased test.
Showing how Intel systems:
-are energy efficient = use less RAM on them and add more to the AMD system
-can handle much more RAM than AMD = Show how Intel system have lots of memory banks
Although you are correct when you say there are small errors in the setup, I can't agree with the part about them being paid by Intel to do it...
This is an assault which they cannot defend themselves against.
Either way, this review would be much more interesting when a 2.5GHz release and low-power Barcelonas are available. But that is dependent on AMD itself.
quote: Either way, this review would be much more interesting when a 2.5GHz release and low-power Barcelonas are available. But that is dependent on AMD itself.
Which is probably one of the reasons why CPUs in some reviews overclock so well, and the ones you buy from retail overclock so poorly.
I don't trust any review where the item was supplied by the manufacturer; chances are they cherry-picked the best one they had, to get the best possible review. If the sites can't afford to buy the items they're reviewing, they should simply strike a deal with a retailer, where they get to test the stuff (and return it) in exchange for a sponsored link or something. That way the chances of getting an above-average (or below-average) part are the same as for anyone else.
In other words, what you're saying is that the Opteron did not have more RAM than the Xeon, so it did not get any benefit from the different memory configuration.
Well, that's the "pro-AMD" conspiracy put to rest, no doubt. Thanks.
But you still have 8 DDR2 DIMMs on the Opteron versus 4 FB-DIMMs on the Xeon. As pointed out above, using the same configuration would either reduce the Barcelona system's power consumption (by about 18 watts, if both used 4 DIMMs) or increase the Harpertown system's consumption (by about 40 watts, if both used 8 DIMMs).
In the latter case (which is the likely scenario on a server under high loads - fill it with as much RAM as possible), that would put the Xeon's "performance per watt" below that of the Barcelona system in most of your tests.
And there's still the mystery of why a system that dissipates less heat needs more than twice as many fans. Or was there also a typo on the number of fans in each system? Maybe the number of fans is different but the total number of fan blades is the same, so that's alright? :)
With 8 FB-DIMMs the Xeon may consume ~42 watts more!
A standard fan may consume anywhere from 1.6 to 6.0 watts.
Try to use only 4 fans (1 middle-front, 1 top-rear, 2 CPU) with the AMD system.
It will work perfectly and you will save ~15 watts.
1. Add 4 FBDIMM in the Xeon system.
2. Remove three 3.5" fans in the AMD system.
3. Rebench.
4. Update your power consumption and performance/watt graphs.
5. Thank you very much.
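The net effect of those two changes on the measured power gap can be sketched from the estimates above (~42 W for the extra FB-DIMMs, ~15 W for the three removed fans; both are this thread's ballpark figures, not measurements):

```python
# Net swing in the measured power gap, in AMD's favor, if the Xeon
# box got 4 more FB-DIMMs and the AMD box dropped three fans.
XEON_EXTRA_FBDIMM_W = 42  # thread's estimate for going 4 -> 8 FB-DIMMs
AMD_FAN_SAVINGS_W = 15    # thread's estimate for going 7 -> 4 fans

swing = XEON_EXTRA_FBDIMM_W + AMD_FAN_SAVINGS_W
print(swing)  # -> 57 W total swing
```

That is in the same ballpark as the ~50 W difference mentioned elsewhere in this thread as enough to flip the performance-per-watt results.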
Okay, I also think it's fishy, but I'm playing devil's advocate here.
If you had to run a Netburst server, part of your power goes to cooling; that's part of your total energy requirement. If the AMD system requires more cooling, for whatever reason (no matter how strange that may seem), then like it or not it's part of your server and energy expense.
As for the differing amount of ram, that makes no sense at all. Why halve the amount of memory on the Intel system?
Except in this particular case, based on the available data, this does not make sense at all. Power requirements of the AMD system are already lower than those of the Xeons (including the extra fans and RAM), so these extra fans should not be required.
A difference of 50 watts would be enough to push the efficiency (performance per watt) of the Barcelona system above that of the Harpertown system in most of the benchmarks used in the article.
Wow, if these numbers are representative, then Barcelona is killing Intel, even at 45nm, on a $/performance basis, and has great perf/watt too. A 2.5GHz Barcelona will match anything Intel has until 2008, and a 3GHz Barcelona will obliterate them, period.
Looks like Harpertown isn't enough to match AMD if they can get it scaled quickly. I think AMD will be making large server marketshare gains going forward until Nehalem is introduced. Great news for buyers!
Yeah, right. Because the 3GHz Xeon has a 40-55% lead against the 2GHz Barcelona, you think that a 2.5GHz (+20% clockspeed) Barcelona will overtake the 3.2GHz Xeon?
It's quite funny: two years ago, when Intel was selling Netburst dual cores for $150-200 while AMD charged over $300 for the cheapest dual-core CPU, nobody cared about performance/$ benchmarks :)
But now some fanbois are making up "performance/$", "performance/$/watt/clock", "performance/watt/Ruiz's IQ" metrics just to artificially boost AMD's poor CPU. This is an enthusiast site; most people also care about which product is simply faster, which is why omitting expensive or 120W CPUs from the reviews is a bit silly. Fastest CPU from manufacturer A vs. fastest CPU from manufacturer B is always a fair game.
quote: also, there's the fact that 1 AMD MHz is not equal to 1 Intel MHz.
Here is your mistake: I was talking about percentage point increases, not MHz increases.
Ok, let's look back at example:
2GHz Barcelona performance: 1
3GHz Xeon performance: 1.4-1.55 (40-55% faster)
assuming perfect scaling:
2.5GHz Barcelona performance: 1.25 (in reality it will be less, since scaling is not perfect)
As you can see, even 2.5GHz Barcelona will not be as fast as current Xeons.
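Spelled out as a quick calculation (perfect linear scaling with clock speed is an optimistic assumption, as noted above):

```python
# Best-case projection: performance scales linearly with clock speed,
# normalized to the 2.0 GHz Barcelona = 1.0, per the example above.
BASE_CLOCK_GHZ = 2.0
XEON_3GHZ_RELATIVE = (1.40, 1.55)  # 40-55% faster than 2 GHz Barcelona

def projected_barcelona(clock_ghz):
    """Relative performance assuming perfect clock scaling."""
    return clock_ghz / BASE_CLOCK_GHZ

best_case = projected_barcelona(2.5)
print(best_case)                          # -> 1.25
print(best_case < XEON_3GHZ_RELATIVE[0])  # -> True: still behind the Xeon
```

Even in this best case, 1.25 falls short of the low end of the Xeon's 1.40-1.55 range.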
quote: it looks like Barcelona is a very good performer.
Why? Even the future 2.5GHz parts will be slow compared to competition.
Barcelona loses every real-world test; in many tests it's significantly behind Xeon. Even when taking FB-DIMMs into account, the Xeon has lower power under load in the POV-Ray test.
And the 3GHz Xeon isn't even a top-speed part; in November Intel will introduce a 3.16GHz quad-core Xeon, with faster parts coming later.
The fact is that AMD needs >3GHz Barcelonas in November just to achieve parity with Xeon.
On the desktop, Barcelona will have a) a faster memory controller, b) faster clocks, c) faster memory (DDR2-667 on the server, DDR2-800 and DDR2-1066 on the desktop).
Yes, for desktop apps K10 needs more or less clock parity, but the original poster alluded to the target market for the Opterons.
OK, I see what you're saying about relative performance.
In most cases Barcelona seems to perform equal to an approximately equivalently clocked Harpertown.
So when a Harpertown chip has a 50% clockspeed advantage, it's going to beat any AMD chip until said AMD chip gets up to equal clockspeeds (approximately).
Nonetheless, I think its performance-per-watt figures should be pretty interesting, and I'm glad it generally outperforms a Clovertown at equivalent speeds. If it couldn't do that, it would be a dead duck.
You are correct when you are talking about performance-only related stuff. However, Barcelona won't be that far behind. Performance/watt will be better (it's already better if you consider that the more memory you add, the worse the situation gets for Intel).
quote: Why? Even the future 2.5GHz parts will be slow compared to competition
That isn't true. They will be very competitive when we are talking about performance/W.
And I think that AMD will be very performance-competitive vs. Intel when they can reach the 2.8GHz-3GHz range, which is due at the start of the coming year (especially when faster registered RAM is available).
Another comment I would like to make is that you can hardly call TechReport's numbers server-based benchmarks, although they do point out that AMD will need frequency parity to be competitive in the enthusiast sector.
I don't wanna argue with you over AMD vs. Intel, because you know: Doing so on the Internet is like running in the Special Olympics - even if you win, you are still retarded...
Nevertheless, you can't even calculate.
If 2.0 GHz is the basis and you lift it to 2.5 GHz, it is NOT a 20% improvement, but RATHER a 25% one. Learn to calculate percentages; it helps, man, it helps...
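For anyone who wants to double-check: a percentage increase is measured against the starting value, so the correction holds:

```python
# Percentage increase is relative to the starting value, not the end value.
def pct_increase(old, new):
    return (new - old) / old * 100

print(pct_increase(2.0, 2.5))  # -> 25.0, not 20
```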
Barcelona only kills on $/performance because it's being compared to the higher-end Intel SKUs. There are much cheaper 2.33GHz Clovertowns and comparable Harpertowns. Meanwhile, Harpertown pretty much guarantees Intel performance superiority for the foreseeable future.
Beyond a certain price difference, it's cheaper to buy two systems than to buy a faster system. Most (granted, not all) software that can run efficiently on multi-core CPUs can also run efficiently on multiple nodes. A single, more expensive system can still be preferable if you have space constraints, of course, but I suspect Intel will lower its prices a bit as a response to Barcelona.
Personally I would like to see a comparison between two systems with a similar price.
quote: Most (granted, not all) software that can run efficiently on multi-core CPUs can also run efficiently on multiple nodes.
I wouldn't say most, more like a few. Plus, a lot of expensive software is licensed by the socket, so any savings on the CPU are minor compared to the overall costs of software.
Two systems also take up more space and consume more power, and the trend of virtualization also leads to fewer, bigger servers.
Just out of curiosity, what software are you thinking about that scales well to multiple cores but cannot run on multiple nodes?
Rendering can be done in render farms, most servers can run in multi-node load-balancing configurations, etc. The only field that comes to mind where multiple nodes really aren't doable is scientific/HPC, which needs very fast access to a shared memory pool. But the days of glory of the monolithic supercomputer are kind of past.
At work (extremely large telecom company) we don't run each box at more than 50% load, for failover reasons. So when I'm looking at these numbers, it seems to me like AMD is doing pretty well with 2.0GHz CPUs vs. 3.0GHz Intel CPUs.
I'd really like to see how the scaling goes with Barcelona from say 2.0, 2.5, and 3.0GHz.
I think 2.5-2.7GHz is when we're really going to see Barcelona start to come into its own...
I think this shows the same trend from the previous generation, in terms of performance per watt: AMD rules for servers (low / medium CPU loads), Intel rules for workstations and render nodes (high CPU loads).
And both are complete overkill for desktop systems, but I'm sure Microsoft will find a way to make Windows crawl on them. :)
HPC / FEA / etc. is also high CPU load but it also needs low memory latency and high bandwidth (where AMD has an advantage), so these benchmarks don't really tell us much. My guess is Intel will have a small advantage (despite the slower memory access), at least until Barcelona hits 2.5 GHz or so.
This review seems biased. If you want to run only the 2GHz part, at least calculate the performance per clock, because it looks like Barcelona has Intel beat in a lot of the benchmarks, meaning 2.5GHz would be much more competitive. And you also should have run the 3.2GHz K8.
As for 3.2GHz, our reasoning was that it was a high-wattage part, and it didn't make sense to include it. At the wattage it runs at, perf/watt was not pretty.
Well, it looks like AMD wins. Money is everything. Oh sure, there will be some geek who will say, "Money is no object, only getting the workload done as fast as possible." That geek would be wrong. Amazing how this WOULDN'T start a price war. 400 vs... 2.5-3x that much? You could put four on a board and start rocking in the free world.
Makes me happy about Phenom. Imagine a 190-dollar quad that isn't Intel? Something to buy, finally.
Hans Maulwurf - Thursday, September 20, 2007 - link
Barcelona is about as fast as Harpertown in AS3AP. OK. In your article you write:
"The Scalable Hardware benchmark measures relational database systems. This benchmark is a subset of the AS3AP benchmark and tests the following: ..."
Now you choose a subset of this test in which Harpertown is much faster. Obviously AS3AP consists of several subtests, and you could just as well choose one where Barcelona is much faster. But what's the use of this? You tested all subtests together with your AS3AP test.
It's the same as testing a game where both CPUs get the same score, then choosing a subtest (e.g. AI only) where Harpertown is faster and concluding it's faster overall.
So what did I miss here? From what I read, Barcelona is as fast in AS3AP as Harpi (and should be faster in some subtests and slower in others), while you conclude:
"Intel has made some successful changes to the quad-core Xeon that have helped it achieve as much as a 56% lead in performance over the 2.0GHz Barcelona part."
I don't understand this.
tshen83 - Thursday, September 20, 2007 - link
Did anyone here notice the huge metal bar across the FB-DIMM slots? It must be for more FB-DIMM cooling. Without looking at the server first hand, you can't tell how the metal bar is attached to the memory.
My question is this: where can you buy the bar if you were to build a server-class PC yourself? And can someone tell me the mounting mechanism?
Viditor - Thursday, September 20, 2007 - link
One other piece of data is missing from the article, and it's looking like it might be important...
Kris Kubicki wrote in his blog
"The 2.0 GHz samples we saw on Monday were of AMD's B1 stepping of Barcelona. But these processors are not the ones we'll see on Newegg's shelves"
"Production Barcelona samples come with the BA revision designator"
"One AMD developer, who wished to remain anonymous for non-disclosure purposes, stated, "B1 versus BA should be at least a 5%, if not more, gain in stream, integer and FPU performance.""
"An AMD engineer, when confronted with the claim, stated that 5% gains when moving from B1 to BA processors "seem conservative.""
Given that, when you guys do the update, could you let us know which stepping it is that you're using? It appears that it may make a significant difference...
JarredWalton - Thursday, September 20, 2007 - link
Remember: 5% performance gains in synthetic benchmarks that stress specific aspects of a CPU don't mean 5% real-world gains.
Viditor - Thursday, September 20, 2007 - link
I agree... but that's exactly why I am looking forward to some real-world benches on the production steppings. We still have no idea how shipping Barcelonas perform yet.
Viditor - Thursday, September 20, 2007 - link
Further on that... supposedly the reason for the better performance is fixing some major errata. It's quite possible that the performance boost is across the board and not just in synthetic benches.
Schugy - Wednesday, September 19, 2007 - link
You run two benchmarks, you run closed software, you run software that might be optimized for the market leader's processors only, you run software that can't be optimized for the new architecture, and you don't benchmark any alpha software that uses rapid virtualization. Maybe we have some benchmark numbers, but the real performance of Barcelona is still speculation.
clnee55 - Wednesday, September 19, 2007 - link
AMD is always the underdog. They need a superior product to gain market share. That was the case with Athlon vs. Netburst. If Barcelona is just competitive, it is not good enough for them to regain the crown. They will stay the underdog.
randomname - Wednesday, September 19, 2007 - link
From what I understand, these new (Harpertown) Xeons will not be released until November (12th?). Yet the article makes no mention of it, and by reading it, you would assume you can buy them right now. Or have I understood something wrong?
mutambo - Wednesday, September 19, 2007 - link
Intel systems are power mongers... they generate enough heat to replace a room heater. Check out any dual-socket system: they are using all kinds of cooling to cool the FB-DIMMs, which are the worst part of Intel builds.
Justin Case - Tuesday, September 18, 2007 - link
Anyone else feel that the first image... http://images.anandtech.com/reviews/it/2007/barcel...
...looks somewhat... er... phallic?
TA152H - Tuesday, September 18, 2007 - link
Oh my, you're absolutely right. That's really foul. Even the area between the Tick and Tock looks like the urethra. It's so wrong. Is that really the only way they could have presented the information? I mean, if they wanted to get pornographic, couldn't they have used a woman's breasts? Right one for Tick, left one for Tock? It's much more attractive than this.
Regs - Tuesday, September 18, 2007 - link
Marketing geniuses. Intel at its best. A better product, with a bigger...
In all due seriousness, it's no surprise AMD can't compete with an architecture that's been out for over a year. AMD needs more tweaks and more clock speed. I just hope they don't disappoint again like they did with the K8. 4-5 years of stagnation.
TA152H - Wednesday, September 19, 2007 - link
I think it comes down to Intel being wiser than AMD. They were always smarter, as evidenced by their much more advanced processors like the P7 and Itanium. But AMD was wiser, and chose an easier path that also performed better. Intel had all the great technology, super-advanced trail blazing stuff that just didn't work that well. AMD made the same mistake by going native quad-core before they were ready. Consequently, they have a poor performing part compared to what Intel has, today, and promises for tomorrow. Obviously, the extent of their failure isn't as deep-rooted as the Pentium 4 was and at least the Barcelona can be improved (mainly by clock speed) more quickly, but the big problem is that the Barcelona is getting raped by Intel processors using FB-DIMMS. You add clock speed to the Barcelona, and the power goes up (everything else being equal). You change FB-DIMMS out, and you get better performance and lower power. So, the future doesn't look that bright for AMD, despite the fact they should gain clock speed pretty quickly. It's unlikely to help their power/performance much. Intel using more appropriate memory will to a great extent. Also, if AMD does manage to get close to Intel in performance, Intel will just release a higher performing part. They can hit much higher than 3.2 with their G0 stepping, so it's really a matter of whether it makes marketing sense.But, it sure sounds good to have native quad-core, and they sure were smart to do it. Right? Just like Intel was to come out with trace-cache, double-pumped ALUs, and super-pipelining and unheard of clock speeds.
But all that aside, if they can get the clock speeds up to a reasonable level, increase the size of the pathetic caches (yes, I know the IMC limits them, but still, 512K?), and in a release or two add full memory disambiguation, they will have a really good product. It will at least be competitive.
Justin Case - Tuesday, September 18, 2007 - link
Any reason why the AMD system had 16GB of RAM (8x2GB) while the Intel system had only 8GB (4x2GB)? Also, any reason for the big differences in cooling (the AMD system had 7 fans, the Intel system 3)? If the Barcelona system actually uses <i>less power</i>, as your numbers show, surely it can't dissipate <i>more</i> heat.
When you're measuring the power consumption of the whole system (and extrapolating that to the power efficiency of each CPU), you should try to make the configurations match as closely as possible, no? Not to mention that the amount of RAM can have an influence on the actual system performance.
I could understand different configurations if you were testing systems at a specific price point (and couldn't "afford" more RAM for the Intel system due to the more expensive CPUs, for example), but that wasn't the case here.
Xspringe - Wednesday, September 19, 2007 - link
I would really like to see updated benchmark scores as well! It only seems fair to add more RAM to the Xeon; it might improve the benchmark scores and would also increase energy usage (which would be beneficial to the Barcelona).
Final Hamlet - Wednesday, September 19, 2007 - link
Yuk! I really would like to see an explanation from an editor on this critique...
Justin Case - Wednesday, September 19, 2007 - link
Add the unusual choice of benchmark and the fact that Harpertown isn't actually due to launch until November, and I think this is one (more) article we can file under the "iNandtel" section. Speaking of that, does anyone know what happened to GamePC's "Labs" section? Along with the Tech Report, they were probably one of the last sites with a steady output of meaningful, objective reviews of PC hardware.
JarredWalton - Wednesday, September 19, 2007 - link
IMPORTANT UPDATE INFORMATION: There was a typo/error in the original config. We apologize for the confusion - I should have verified with Jason/Ross earlier. The Opteron setup was running 8x1GB, not 8x2GB. Sorry to pop all the conspiracy theories (again), but the systems are a lot more similar than you would apparently like to believe.
Note also the update at the end: 2.5GHz Barcelona is on its way and will be tested shortly. We'll see how that compares with the higher clocked Harpertown.
Proteusza - Thursday, September 20, 2007 - link
With the last Quad Core Comes to Play article, and now this, I've completely lost faith in Anandtech's benchmarks. These guys are too clever to make a mistake like that, and if they did, I'm sure they would spot it and re-benchmark.
No, I think these benchmarks were just paid for by Intel, in anticipation of its November launch to steal AMD's thunder. I'm not accusing the entire site of constant bias towards Intel, but rather a bias towards advertising. AMD has probably done the same thing in the past, and I'm sure Anandtech has been happy to oblige.
Proteusza - Thursday, September 20, 2007 - link
In fact, on the front page of Anandtech there is an Intel Resource Center link. And the URL looks like it makes sure that Intel knows who the referrer is - typical advertising.
JarredWalton - Thursday, September 20, 2007 - link
Yes, Intel pays for an Intel Resource Center page - it includes all of our Intel-related articles and some other information. It's pretty clear that the page is sponsored by Intel. I have no idea how much they pay, however. Don't like that area? Then don't click on it - the AnandTech articles themselves still aren't Intel-influenced.
As for the RAM config, you seem to want us to intentionally handicap Intel just for your own benefit. Eight Registered ECC DIMMs came in the AMD config, and they are single-sided DIMMs - meaning a 4x2GB double-sided configuration would only consume marginally less power. The Intel setup came with 2GB DIMMs... obviously Intel knows that you pay a power penalty for every FB-DIMM, and you also pay a latency penalty. Ideally, we would have 4x2GB DIMMs on the AMD setup, because any business serious about these platforms is likely running 2GB DIMMs these days.
Taking it to the extreme, obviously running 16 2GB FB-DIMMs uses a lot more power than 16 2GB DDR2-667 DIMMs. I'm not sure how many businesses actually use that approach, though - not many in my experience. As always, we are testing specific facets of performance (and power and whatever else you care to name). Is there more to it? Of course. Is Intel always better or AMD always better? Of course not.
If I were in charge of a server purchase for a large company right now, I'd be looking at my specific needs to determine the best overall platform. For most companies, that's relatively low loads so the Opteron is perfectly acceptable. That means it's going to come down to features like manageability and support rather than performance. Very likely, I'd be looking at slightly more mature hardware anyway - bleeding edge and servers aren't generally a good mix.
Justin Case - Thursday, September 20, 2007 - link
"Yes, Intel pays for an Intel Resource Center page - it includes all of our Intel-related articles and some other information." All of them? Interesting - I can only seem to find articles about Core and Core 2; nothing about, oh, say, Prescott or Paxville. In fact, I can't find any article in your "Resource Center" that is even remotely unfavourable to Intel. Must be a temporary glitch. Or maybe those articles aren't deemed "resourceful" enough.
Also, it's interesting to see an anandtech.com address in my browser's location bar, the Anandtech banner at the top of an Anandtech page, and "This site is presented by Intel" below it. Well, at least "we have been warned".
I guess I'm just used to seeing manufacturer propaganda on the manufacturers' website or inside clearly identified ad boxes, not integrated into supposedly impartial hardware review sites. I know, that's so 20th century of me.
it still isn't Intel influenced articles as far as the AnandTech articles are concerned.
Of course not. And I'm sure that when large corporations make donations to political candidates, they're not expecting to influence their future decisions the least bit. They only do it so they can get a bit of exposure by appearing in the list of contributors. I'm sure Intel would be just as likely to sponsor your "resource center" if your articles pointed out the weaknesses in their products (such as, oh, I don't know, actually using all the FB-DIMM slots on their servers, instead of leaving 75% of them empty).
Proteusza - Thursday, September 20, 2007 - link
Wow, I hadn't actually looked at the Intel Resource Center before; I thought it was a site on Intel's server that Anandtech simply linked to. Now I know better and shan't be visiting this site anymore.
It's a shame that you threw away your journalistic integrity for money.
Justin Case - Thursday, September 20, 2007 - link
There wouldn't be anything wrong with an ad banner linking to Intel's (or any other) site. But when a hardware review site allows its own server (and logo, and page template) to be used for advertising, well... that says a lot.
P.S. - Interesting how any posts that point out the differences in the test systems (and their consequences) get instantly voted down. Must be just some "regular users" who consider that objective facts are "off-topic" here, eh?
FrankThoughts - Thursday, September 20, 2007 - link
I'm not sure which of your nonsensical posts to respond to, so I'll just pick this one. A few points to think about, if you're actually capable of such an act.
1) How much do you pay AnandTech for their articles? That's right - they're ad supported! Guess that means everything they publish is lies and paid for, eh? Or else it's just a source of revenue, like it's always been. Looking at the "Intel Resource Center" I see pretty much every article that mentions Intel (CPU, chipset, tradeshow, motherboard, whatever) going back about a year. Why are there no NetBurst articles? Probably because NetBurst hasn't been worth discussing for over a year, and Anandtech hasn't reviewed any in that time. Did Intel do this on purpose? Maybe - but wouldn't you, in their position? Now, I don't see any omission of articles in the past year where Intel got less-than-glowing commentary, and I see some links on the right to some Intel site stuff. It's not a big deal... and if you don't like it, DON'T CLICK THE LINK! Moron....
2) Memory configs. Ever tested any memory? Apparently not, because you clearly don't know Jack or Squat about the topic. Let's see: try doing a power draw test with 4x512MB DIMMs and 2x1GB DIMMs, or 4x1GB vs. 2x2GB. Tell me how much of a power difference there is, because I've looked at it and I see less than a 2W difference. So in terms of power draw the AMD system is penalized a few watts at most. Or, if you prefer: get off your stupid soapbox and get the hell out if you can't contribute anything useful! Don't worry, we won't miss you.
3) Flaws with the article's methodology. For one, the only thing I'm really sure this testing shows is how these two servers perform in an AS3AP test. It doesn't tell me how they will work in the servers my datacenter uses. You know what? Short of getting the hardware and testing it, I doubt anything will show that information. Different apps, drives, networks, and who knows what else yield different results. So this is just a rough estimate, and anyone who takes it as more than that is already a fool.
The RAM configurations are also somewhat questionable, depending on the use. Some places will only use 4x1GB RAM; others will load all the slots with 2GB modules. (That should SERIOUSLY hurt the FB-DIMM setup!) I wonder why they didn't test that way? Oh that's right: they probably don't have 16 2GB FB-DIMMs available and Intel didn't want to help them out. That's only $2000-$2500 depending on the brand (or $6000+ if you get it straight from Dell! But at least then you know the RAM works properly and Dell supports the setup.) Why don't you send them the memory they need? While you're at it, can you get them some realistic benchmarks that will stress that much RAM? Yeah, didn't think so.
There are flaws in the article, true. There are flaws in every article out there. You don't honestly think the latest reviews showing performance in one specific area of a few games is the same as testing every game, right? Or that SLI and Crossfire work properly in new titles most of the time? Or that quad core on the desktop matters at all when it comes to gaming... or anything outside of video encoding and 3D rendering and a few other specific tests?
Now, a bunch of your posts just got one point higher because I commented instead of downrating. Why rate them down? How about because you're being an arrogant prick and a fanboy, complaining about stuff that is largely out of the hands of the reviewers? I'm done. Feel free to miss the point entirely and complain some more.
Justin Case - Thursday, September 20, 2007 - link
1.1) I (and all visitors to this website) pay Anandtech through the ads they have on their site. The various companies that advertise here only do so because of us. And the reason people visit this site is to read (what they believe to be) honest and bias-free articles and hardware reviews, not editorial advertising. If Anandtech thinks it can survive on Intel's sponsorship alone, that's fine. But eventually even Intel will stop sponsoring a site that has no credibility (great as Johan's articles are, I don't think he can churn them out fast enough to make people come here on a daily basis).
1.2) I wasn't the one who said "The Intel Resource Center includes all our Intel-related articles". That statement was made by a member of Anandtech's staff.
1.3) You might as well say that if some site decided to write "AMD is great!" or "Sony is the best!" at the end of every paragraph, people should just "ignore it" and trust that the rest of the site was not biased in any way. There is a difference between clearly labelled banner ads and accepting money to turn your entire site into a big advert for company X or Z. "Moron".
2) Don't try to spin it. The point is that the Intel system was tested with half as many memory sticks as the AMD one, and (more importantly) with 75% of its memory banks empty. I find it quite telling that, when asked about this, the Anandtech employee posting above wrote that was because "Intel knows that you pay a power penalty for every FB-DIMM". So, because "Intel knows" that, Anandtech's system comparison is tweaked to make it less obvious to the readers...? o_O
3.1) Yes, I'm sure poor Anandtech didn't have the resources to buy or borrow another four FB-DIMMs for this review (so they could use at least half the slots in the board). It's not as if they have contact with any manufacturers or retailers that would be happy to send them the RAM in exchange for a mention and a link in the article, eh...? They did manage to get a 12-drive 15k SAS array, though. I bet they give those away on street corners.
3.2) Learn some maths... "moron". 1GB FB-DIMMs cost $70 each. In other words, it would have cost them $840 to fill the remaining 12 slots in the board, $560 to match the configuration in the Opteron system (8x1GB), or $280 to test power consumption with half the board's slots loaded. This is assuming they didn't have any more FB-DIMMs available and couldn't borrow them for free. All of these values are a far cry from your suggested "$6000" or even "$2500".
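To make the arithmetic explicit, a quick sketch - the $70-per-module price and slot counts are the figures quoted in this thread, not verified 2007 street prices:

```python
# Cost of adding N more 1GB FB-DIMMs at the $70 price quoted above.
price_per_dimm = 70  # USD per 1GB FB-DIMM (figure from this thread)

scenarios = {
    "fill the 12 remaining slots": 12,
    "buy a complete 8x1GB set to match the Opteron": 8,
    "add 4 modules to load half the board's 16 slots": 4,
}
for label, extra_dimms in scenarios.items():
    print(f"{label}: ${extra_dimms * price_per_dimm}")
# fill the 12 remaining slots: $840
# buy a complete 8x1GB set to match the Opteron: $560
# add 4 modules to load half the board's 16 slots: $280
```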
JarredWalton - Friday, September 21, 2007 - link
Don't read too much into my comments, Justin - I don't live anywhere near Jason/Ross. Or Anand, Derek, Wes, Gary, or Johan for that matter. In fact, other than editing, I have pretty much nothing to do with the other articles. These are merely my opinions, and I'm speaking realistically: why would Intel send a system specifically equipped to handicap it? They won't. Should Jason and Ross go out of their way to do so? To what purpose? The two are relatively comparable, inasmuch as we can get similar hardware. And don't underestimate the difficulty of getting new hardware - especially very expensive server hardware. Beyond that, readers need to read between the lines a little and take their own needs into account. If an IT guy is looking at getting a new server and installing 32GB (or 64GB) of RAM, I hope he has more sense than to look at 8GB server configurations and assume everything will be the same, only with "more RAM".
I still don't get your whining about the sponsored Intel section - it's just a "site view" ad for Intel as far as I can see, with content from AnandTech and Intel that may be of interest. If you're looking for a quick collection of Intel information, I'd assume that's useful. It's on the home page as a smallish image, and at the end of any Intel-specific (i.e. a new Intel CPU) articles. Heck, I wish AMD had a sponsored view as well. :) At any rate, we have huge ads from Gigabyte, Kingston, OCZ, Crucial, and many others splashed around. What makes the Intel "sponsored" version of the site different (which clearly states "This site sponsored by Intel")? Personally, I don't click the ads, and I don't click on the little Intel Resource link either (except to see what it was). I'd assume 99.99% of you are the same. Hey - we're also "sponsored" by Verizon and T-Mobile, I guess. What does that mean for our iPhone articles?
We have a separation of editorial and advertising staff for a reason - other than looking at the ads, I have no idea who is supporting us. I don't even know how much an ad costs on the site. I do know that at the end of the month, I get a pay check, and for that I'm grateful. There's also a fine line between tact and flaming that needs to be walked, particularly when writing an article. We've harped on FB-DIMM in the past, we ripped on Intel in the NetBurst era, and we've had ups and downs with pretty much every manufacturer out there.
The fact is, we appreciate good technology and products, and right now Intel has the upper hand in most areas. Barcelona isn't bad, but it's not K8 vs. NetBurst by any stretch. Still, if Intel got rid of FB-DIMMs - or at least made them optional for now - and got an integrated memory controller into their systems yesterday, you wouldn't see me complaining. I guess we need to wait for Nehalem in those areas.
Take care,
Jarred
Proteusza - Friday, September 21, 2007 - link
I understand that AMD and Intel probably submitted test systems, and you were either contractually unable to modify their configuration or felt it wouldn't be good science.
But then you should have said, "Take these power consumption and performance-per-watt metrics with a grain of salt, because in real life no one would handicap a server by putting in 8x1GB sticks rather than 4x2GB."
You've said yourself that a) putting in more memory sticks, irrespective of size, increases power consumption, and b) anyone needing such an AMD server wouldn't bother with 8x1GB.
So I don't know why you are baffled when we cry foul - here we have a badly configured AMD server vs. a well-configured Intel server. Which do you think is going to draw more power? It's like saying, "Our AMD server came with two HD 2900 XT cards in CrossFire, so in our SQL Server tests its performance-per-watt metric is extremely bad." Duh.
Tell your readers you don't think it's a fair comparison, then. I don't care if AMD screwed up the config, or if you couldn't be bothered about correctness. You've said yourself it would be a handicap for the Intel server to have the same memory configuration as the AMD server. Why not make sure the readers know that?
Again, this has nothing to do with fanboyism. It has everything to do with poor benchmarking and, as I see it, benchmarketing.
All we're saying is that the configurations of the AMD and Intel systems should be as close as possible for the test results to have meaning. You obviously don't agree.
JarredWalton - Friday, September 21, 2007 - link
You still missed the point: the configs are close. They're close in every area except the motherboard and CPU, which you're not going to be able to change. 4x2GB DIMMs (on AMD) would use very nearly the same amount of power as 8x1GB DIMMs, so it's not an issue there. On the other hand, 8x1GB FB-DIMMs get a nice 5-7W penalty per FB-DIMM because of the AMB. With regular DDR2, the power draw is determined by the number of memory banks on the PCB; an FB-DIMM is that, plus another ~5W for the AMB. If they ran the AMD system with 4x2GB, I expect it would draw within 4W of the same power.
FB-DIMMs are bad for power consumption. How do power requirements change with 32GB loads? I can guess that AMD will have an advantage, but without actual testing I'm not going to publish a guess. At the same time, I don't think everyone out there runs 16 DIMMs in a server. I think most companies buying a new server would buy 2GB DIMMs (or FB-DIMMs), but if a company knows that they really only need 8GB of RAM, they're not going to install twice that much, let alone four times as much memory.
What would be ideal? I'd like to see scaling numbers showing performance and power with 2x2GB up through 16x2GB on both systems. I'd also like to see more benchmarks, and benchmarks that can leverage the availability of more RAM. I'd like to see a truly repeatable benchmark showing how the servers behave in virtualized environments (which is where these high-performance quad-core CPUs are truly important). There are always more tests people would like to see, but realistically no one can provide tests of everything that might be done.
And you know what? I haven't a clue as to how to do even the tests Jason and Ross are running, let alone something like virtualized environment testing, because I'm not involved with anything like that. :)
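For what it's worth, the per-module numbers above can be turned into a rough back-of-the-envelope model. A sketch, assuming ~6W for the AMB (midpoint of the 5-7W range cited above) and a ballpark ~3W per registered DDR2 DIMM (an assumption, not a measured figure):

```python
# Rough memory-subsystem power: regular DDR2 vs. FB-DIMM (AMB overhead).
DDR2_WATTS = 3.0  # assumed per registered DDR2 DIMM (ballpark, not measured)
AMB_WATTS = 6.0   # midpoint of the 5-7W per-module AMB penalty cited above

def ddr2_power(n):
    """Total draw for n registered DDR2 DIMMs."""
    return n * DDR2_WATTS

def fbdimm_power(n):
    """Total draw for n FB-DIMMs: same DRAM draw plus the AMB."""
    return n * (DDR2_WATTS + AMB_WATTS)

print(ddr2_power(8))    # 8x1GB DDR2 (Opteron, as tested): 24.0 W
print(fbdimm_power(4))  # 4x2GB FB-DIMM (Xeon, as tested): 36.0 W
print(fbdimm_power(16) - ddr2_power(16))  # fully loaded AMB overhead: 96.0 W
```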
Justin Case - Friday, September 21, 2007 - link
The point is not whether Barcelona is bad or not (I think it's a huge disappointment in terms of performance, mainly due to the low clock speed, and I'm not convinced by the L3), and I expect the Xeons to beat the crap out of it in terms of peak performance. In fact, since I work mainly in effects and image processing, the Xeon is by far the best choice for me (for the workstations, at least; render nodes are a different matter, and cost becomes more relevant).
The point is that the Xeons do have a very big weakness for power-critical server environments: the consumption of FB-DIMMs. And leaving 75% of the Xeon's memory slots empty is just not something that people will do when running a server under high loads.
If a system can take up to 16 DIMMs, and you cannot or do not want to test multiple configurations (e.g., 4, 8, 16) - which would have been useful in an article about performance per watt - then the logical choice is to fill half the memory banks. You (or whoever supplied it; we still don't know who that was - AMD?) did that with the Opteron system. It doesn't cover every possible case, but it covers the "average" (and, I daresay, most common) configuration.
I can certainly understand Intel asking you to test it with fewer FB-DIMMs. I cannot understand you complying with their request, when your main obligation is (or should be) to your readers.
I wonder if you would also accept not running any 3D rendering benchmarks on Opteron systems if AMD asked you to, because they know they're at a disadvantage there?
And if you don't see the problem with having a "site view" that turns your entire website into a giant ad for one manufacturer, I guess I can't explain it to you. It's not labelled as an ad, it looks just like another Anandtech "section", and there are even links to it at the end of some of your articles (again, not labelled as advertising). At the very least, change the banner at the top to "Intel", not "Anandtech". Your wish for "more sponsored site views" says something worrying, to say the least, about the state of web IT journalism, and Anandtech in particular. But hey, I guess "sponsored world views" work well for Fox News, etc. Why get an objective picture of things when you can just dive straight into the channel that confirms and reinforces your preconceptions, wrapped in an aura of "journalism"?
Maybe the people who wrote this article are just very naive or very distracted, and somehow overlooked the excessive number of fans in the (cooler) Opteron system and the reduced number of (power-hungry) FB-DIMMs in the Xeon system. That's within the realm of possibility. Or maybe they noticed that but thought it wouldn't affect the results of a performance-per-watt comparison (which is stretching it a bit). But when you couple that with other recent articles and the "ad disguised as information" that is the "resource center", your credibility is at stake. Or, as far as I'm concerned, and for some of your writers, it's not even "at stake" anymore.
PS. Where exactly are your Verizon or T-Mobile or Gigabyte "site view" ads...? Last time I checked, when I click on those (clearly identifiable) ads I'm taken to the manufacturers' websites, I don't get a "morphed" version of Anandtech.
PPS. Hans Maulwurf's question below is also an interesting one.
JarredWalton - Friday, September 21, 2007 - link
Re: Hans' question - AS3AP is a complete benchmark suite, right? Just like SYSmark 2007, in a sense (although that's sort of stretching it). Different factors go into the composite score. Without knowing more about the benchmark, I can't tell you how the overall scores relate to the Scalable Hardware scores. I would think there's a possibility that the overall score is heavily I/O limited (RAID arrays and such), in which case the scores of Intel and AMD in those tests might be a tie. In that case, cutting out those results to show the actual CPU/RAM scaling is a reasonable decision. Intel is 27% faster in the overall result, and up to 55% faster (something like that) in the scalable tests. If, as an example, there are three scalable tests and three tests that are I/O bound, the overall lead should be about half the scalable lead. Again, however, I don't know enough about the benchmark to answer - feel free to email Jason/Ross.
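That rough reasoning can be sketched numerically. A minimal sketch, assuming an even split between CPU/RAM-bound ("scalable") subtests and I/O-bound subtests - the split is hypothetical; only the 27% and ~55% figures come from the discussion:

```python
# If half the subtests show the ~55% scalable-test lead and the other
# half are an I/O-bound tie, an evenly weighted composite lead lands
# near the middle - in line with the 27% overall result reported.
scalable_lead = 0.55  # Intel's lead in the scalable subtests (from the thread)
io_lead = 0.00        # I/O-bound subtests assumed to be a wash

composite_lead = (scalable_lead + io_lead) / 2
print(composite_lead)  # ~0.275, close to the 27% overall figure
```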
--------
Anyone who looks at the Intel Resource Center and doesn't recognize that it's basically a different form of advertising is beyond my help. They - and you - have probably already decided that we're bought out, and I doubt there's anything I can say that will change their minds. The fact is that we ripped on Intel for three years over NetBurst and praised AMD. Now the tables have turned: AMD is having problems and we're pointing them out, and Intel has some great CPUs and we're praising them for their performance - and suddenly we're "bought out". (Funny thing: I seem to recall a lot of AMD adverts back when they were on top and not as many from Intel; now the situation has reversed.)
The articles in the Intel section are not written for the intent of marketing, though they can be used that way. They are honest opinions on the state of hardware at the time of the articles - anyone actually saying that Intel isn't the faster CPU on the desktop these days would have to be smoking something pretty potent. Are some of the opinions wrong? Probably to varying degrees, and all opinions are biased in some fashion - I'm biased towards price/performance and overclocking, for example, so I think stuff like Core 2 Extreme and Athlon FX is just silly.
When AMD was on top, we did far fewer Intel mobo reviews - even though the market was still 80%+ Intel boards (though not the enthusiast market). Now Intel is on top, and personally I couldn't care less about what the best AM2 board is, because I don't intend to buy one until AMD can become competitive again. (But then, I overclock most of my systems.)
It sucks for AMD that they are the smaller company *and* they have a lower performing part. It sucks for consumers that if there's no competition, R&D tends to stagnate. I'd love to see a competitive AMD again, and the Barcelona 2.5GHz chips might even make it to that stage. I'm more interested in Phenom X4 vs. Core 2 Quad, though, both running with DDR2 RAM and doing the type of work I'm likely to do. More likely than not, however, my next upgrade will be to Core 2 Quad Q6600 and X38, with some overclocking thrown in to get me up to around 3.3-3.5 GHz.
----------
For what it's worth, I'm writing this from my Opteron 165 setup, which is still my primary computer. The X1900 XT is getting a bit sluggish, so I have to go elsewhere for gaming at times (Core 2 E4300 @ 3.3GHz and an 8800 GTX), but for all non-gaming, non-encoding tasks this system is still excellent. It's also a lot cooler/quieter than some of the high-end setups I have access to, though with winter coming on I might want to bring a quad-core over to my desk for use as a space heater. Oh yeah - and I'm still running XP, which is one more reason to save gaming for another setup... I can try out DX10 without actually having to fubar my work computer. :)
Justin Case - Friday, September 21, 2007 - link
Again, you're trying to turn this into Intel vs. AMD and talk about all sorts of unrelated things while avoiding the issue. The issue isn't who makes the fastest CPUs. The issue is Anandtech's testing methodology and system setup choices. If Anandtech had chosen to put 8 FB-DIMMs in the Xeon system and just 4 DIMMs in the Opteron system, and stuck 16 fans into the Xeon, we would complain in exactly the same way. The issue isn't who wins. The issue is whether we can trust your methods, results, and conclusions.
From your posts here, it seems that Intel supplied you the Xeon system and decided to install just 4 FB-DIMMs. Is that correct?
Who supplied the AMD system, and who decided what its configuration should be? AMD? Intel? Neither?
PS. - I recognize the "Intel Resource Center" exactly for what it is. I'm sure Microsoft would "sponsor" an Anandtech "site view" in a blink if you wrote a couple of Vista vs. OS X articles based on creatively prepared systems and benchmarks. But of course, then you have to keep being nice and creative, or they might decide not to sponsor you anymore (bummer). After all, a banner ad is a banner ad; the manufacturer controls what's in it. But if you want them to pay you to use your articles for marketing (as you said above), obviously you know what the conclusions of those articles must be. I wasn't born yesterday, and I'm sure you weren't either.
JarredWalton - Saturday, September 22, 2007 - link
I'm bringing up all the other issues because you brought them up; this isn't an answer to a single post.
If we were to do what you suggest with the articles, we would lose readership in a large way. I can control what I write, but not so much the other articles. No article is perfect, and since I didn't write this article and didn't perform the testing, I can't say for sure how flawed the comparisons are. Yes, we used faster Intel CPUs than AMD CPUs, but that's because Intel sent Harpertown and AMD sent Barcelona, and with both being new, we basically used what we got. An update with 2.5GHz Barcelona is coming.
As for the fans: you're an IT guy, so obviously you know what sort of fans go into a server. The answer is: the fans that the server supplier uses. And no, I don't know who specifically makes these servers... I think it was mentioned in a previous article, perhaps? I also don't know what the amperage is on the Intel fans or the AMD fans. Eight lower-RPM fans can actually use less power than four higher-RPM fans... or they can use more. Ask Jason/Ross for details if you want, but the fact is that's how these servers are configured. I worked in a datacenter for a while, and let me tell you, the thought of removing/disabling fans in any system never crossed my mind. So just as we're stuck with different motherboards, we're stuck with different fan configs based on what the server manufacturer chooses. Considering that the Intel setup *does* use more power in many situations (particularly with more FB-DIMMs), I don't believe that the Intel fans are low-RPM models. The real problem may be the internal layout and design decisions of the AMD server - the Intel system seems to have been better engineered in regards to ducting and heat sinks.
Who sent the systems? I don't know. It seems the Intel setup changed from previous articles while the AMD remained the same. Is that because Intel said, "we don't like the configuration you used - here's a better alternative"? I don't know that either. I'm guessing Intel worked with a third party to configure a server that they feel shows them in the best light. AMD probably did the same with the original setup, and AMD is welcome to change chassis/server as well. If they don't it's either because they don't care enough or because it wouldn't make enough of a difference. I'm inclined to think it's the latter: that these tests are still only a look at a small subset of performance, and what they show is enough useful information for people in the know to make decisions.
What *do* these tests show? To me, they show that at lower loads Intel is now a lot closer to AMD thanks to Harpertown, and that AMD per-socket performance has increased thanks to Barcelona. However, at higher loads Intel offers clearly superior performance - even if you stick with Clovertown. Performance/watt is influenced by a lot of things, so I personally take those results with a grain of salt. If a business is really concerned with performance per watt and power density, they'd likely be looking at blade servers instead. The results in this article may or may not apply to blade configurations, so I'm not willing to make that jump.
And as previously discussed, the amount of RAM a company intends to install is a consideration. If a company is going to load all DIMM slots, the Intel servers look like their power requirements will jump close to 50W relative to Barcelona... which means that Harpertown would be more like Clovertown on the graphs. I'd imagine 2.5GHz Barcelona will also require more power, but until I see results I don't know for sure. Companies looking at loading all RAM sockets are probably very concerned with overall performance as well, in which case Intel seems to have the lead... except that in a virtualized server environment, the results here may not show up.
So many factors need to be considered, that I'd be very concerned with anyone looking at this one article and then trying to come to a solid conclusion for all their IT needs. This is just one look at a couple configurations. Then again, a lot of IT departments just go with whatever they've used in the past (Dell, IBM, HP...) and take the advice of the server provider.
Justin Case - Saturday, September 22, 2007 - link
Again, you're replying to "complaints" that no one made. No one is "complaining" about the fact that the Xeon system is faster than the Opteron. I (and anyone else with a clue) would be extremely surprised if it wasn't. If we were going to "complain" about that to anyone, it would be to AMD.
You say that "companies looking at loading all RAM sockets are probably very concerned with overall performance as well, in which case Intel seems to have the lead". You're 100% right about the "seems". That is indeed the idea one gets from this article (where the Xeons seem to be the best choice both in terms of peak performance and in power efficiency). However, as soon as you load more than 1/4th of the memory banks on the Xeon, the tables are turned in power consumption. And when you load all of them, the difference is huge (10 watts per FB-DIMM adds up to 120 watts above your numbers). So, anyone who needs a lot of RAM but also needs to keep power consumption within certain limits (which is the norm rather than the exception, in dense server environemnts) cannot really go with the Xeons. And companies that need all the CPU power they can get (ex., 3D render farms) will naturally go with the Xeons and either reduce the amount of RAM or find some way of dealing with the extra power consumption and heat.
In other words, the Xeons only "seem" like the best choice for people planning to fill all RAM banks because your performance per watt calculations were made with 75% of those banks empty. That is what we (or I, at least) are complaining about.
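To put rough numbers on the argument above (all figures here are assumptions taken from this thread, not measurements):

```python
# Effect of filling all 16 slots on the Xeon's measured power draw.
# The ~10 W per extra FB-DIMM and the 4-of-16 tested configuration are
# assumed figures from this discussion, not measured values.
FBDIMM_EXTRA_W = 10      # assumed extra watts per additional FB-DIMM
TOTAL_SLOTS = 16
TESTED_FBDIMMS = 4       # slots actually populated in the review

def xeon_power_fully_loaded(measured_w):
    """Estimate Xeon system draw with every memory slot filled."""
    return measured_w + (TOTAL_SLOTS - TESTED_FBDIMMS) * FBDIMM_EXTRA_W

# A hypothetical 300 W measurement grows by 120 W when fully loaded:
print(xeon_power_fully_loaded(300))  # -> 420
```

Any performance-per-watt figure computed from the 4-DIMM measurement would shrink accordingly once the denominator grows by that ~120 W.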
Testing a 2 GHz Barcelona is fine. Hell, even testing a 1.6 GHz part would be fine, if that was all you had. As long as you're honest about it. But tweaking one system's configuration (using just 25% of the memory slots) to hide its main weakness for server use (which is precisely what this article is supposed to be about) shows either incompetence or bias.
I thought that by posting here I already was "asking Jason / Ross about it". For some reason they don't seem to want to answer (publicly, at least).
Don't you think that the source of the servers (and the people or companies responsible for deciding their configuration, and how much they knew about the benchmark that was going to be used) should be mentioned in the article?
You talk about things that "maybe" and "probably" AMD did, but apparently you don't even know who supplied the AMD system (it seems to be in a desktop tower case), or how much memory the Intel system actually came with (wouldn't it be interesting if Intel had in fact supplied it with 16 FB-DIMMs?). I can also make conjectures about what may have happened and who may have configured the systems. But if you (as a member of Anandtech's staff) are also limited to guessing, maybe you're not the right person to reply.
Hans Maulwurf - Friday, September 21, 2007 - link
Sorry for posting here, but I think you didn't notice my post at the bottom of this page. No problem, I know this article is rather old now ;) An answer here would be nice. Thanks.
"Barcelona is about as fast as Harpertown in AS3AP. OK.
In your article you write:
"The Scalable Hardware benchmark measures relational database systems. This benchmark is a subset of the AS3AP benchmark and tests the following: ..."
Now you choose a subset of this test in which Harpertown is much faster. Obviously AS3AP consists of several subtests and you could just as well choose one where Barcelona is much faster. But what's the use of this? You tested all subtests together with your AS3AP test.
It's the same as testing a game where both CPUs get the same score, then choosing a subtest (e.g. AI only) where Harpertown is faster and concluding it's faster overall.
So what did I miss here? From what I read, Barcelona is as fast in AS3AP as Harpi (and should be faster in some subtests and slower in others), while you conclude:
"Intel has made some successful changes to the quad-core Xeon that have helped it achieve as much as a 56% lead in performance over the 2.0GHz Barcelona part."
I don't understand this."
Proteusza - Friday, September 21, 2007 - link
I think these tests are full of holes, and it's a pity; I was genuinely curious to see how both new chips performed.
Instead, we get this sponsored advertising.
In the full AS3AP benchmark, the AMD punches above its weight, while in the subset, it doesn't.
Now, if the results in the scalable CPU benchmarks were a subset of the AS3AP benchmarks (which we are told they are) and a 3GHz Harpertown was able to lead a 2GHz Barcelona by 27%, then we can say that the average difference between Harpertown and Barcelona is 27%.
Now, if the chosen subset in question showed a 59% lead, the remaining subtests must have dragged the average down quite a bit. So what are the rest of the numbers?
There is no point in singling out one particular subset if you test the whole thing.
Justin Case - Thursday, September 20, 2007 - link
Taking it to the extreme, obviously running 16 2GB FB-DIMMs uses a lot more power than 16 2GB DDR2-667 DIMMs. I'm not sure how many businesses actually use that approach, though
Well, the general idea when you buy a motherboard with 16 RAM slots is that you're going to fill it with 16 DIMMs, not leave 75% of the slots empty. Especially in the case of high-load servers, maxing out the RAM is pretty much the norm. But if you decide that's "too much", at least fill half the memory slots with RAM. That means 8 FB-DIMMs and 8 DIMMs, respectively (both boards have 16 slots).
As mentioned by others above, it's hard to believe that people working for one of the biggest hardware review sites in the world would overlook this kind of thing involuntarily. I'd risk saying that Anand or Johan never would.
The purpose of a product review is to make an objective comparison between products, and not to avoid or minimize anything that would expose the weaknesses of one of them.
Reducing the number of FB-DIMMs in the Intel system when testing for power consumption is comparable to limiting the total load on the AMD server when testing for peak performance.
Although the review gives the idea that both systems were configured and assembled by Anandtech, your post above suggests that the Xeon system was in fact configured by Intel ("The Intel setup came with 2GB DIMMs... obviously Intel knows that you pay a power penalty for every FB-DIMM"). Yes, Intel knows that... so they're allowed to leave 75% of the RAM slots in your test system empty, to minimize that penalty? And who, exactly, configured the Opteron system?
you seem to want us to intentionally handicap Intel just for your own benefit.
Handicap? By actually using (at least) 50% of the available RAM slots? And... "our own benefit"? Hell, yeah. It's to the consumer's benefit to have objective reviews that point out both the strengths and weaknesses of all products. Or is that such an alien concept? Or maybe you think your readers take advertising money from AMD...? Last time I checked, there was no paid "AMD resource center" in my back yard.
Proteusza - Thursday, September 20, 2007 - link
I agree wholeheartedly, and sadly I don't think we will get a meaningful response from AT. It appears everyone has their price.
Proteusza - Thursday, September 20, 2007 - link
You don't seem to get it.
8 sticks of RAM (any size) = lots of power
4 sticks of RAM (any size) = less power
7 fans = lots of power
3 fans = less power
You were testing the CPU and the platform in general, not the case or RAM. So using differing amounts of RAM and fans means your power consumption results are meaningless.
Okay, read what you said again slowly. Why didn't you have 4x2GB in the AMD setup? You say yourself any business in need of such a platform won't bother with 8x1GB.
The only thing that I would benefit from is an unbiased test. If you say that switching the Intel to 1GB sticks would unfairly penalize it, doesn't the same hold true of AMD? How can what is unfair to Intel not be unfair to AMD? I'll tell you: a little thing called marketing.
Read the first post about the RAM; it's not the total amount, it's the configuration. Give each platform 4 sticks of 2GB each, then we will see.
I don't think it's that difficult to understand: you guys either made an error, which makes your results meaningless, or were paid, which still makes them meaningless.
MrKaz - Thursday, September 20, 2007 - link
You bring up very valid points! And thanks to the originator of this discussion!
But let me spice things up a little.
I think you and Anandtech are wrong!
Correct testing would be loading ALL THE MEMORY BANKS WITH RAM!!!
That would be a more realistic scenario.
I see Intel praising the technology edge of FB-DIMM in allowing more RAM on the system, so let's load the Intel system with the maximum RAM it can handle.
Otherwise it seems like a slightly biased test,
showing how Intel systems:
- are energy efficient = use less RAM on them and add more to the AMD system
- can handle much more RAM than AMD = show how the Intel system has lots of memory banks
flyck - Thursday, September 20, 2007 - link
Although you are correct when you say there are small errors in the setup, I can't agree with the part about them being paid by Intel to do this... This is an accusation they cannot defend themselves against.
Either way, this review would be much more interesting when 2.5GHz and low-power Barcelonas become available. But that is dependent on AMD itself.
Viditor - Thursday, September 20, 2007 - link
As to that, the low-power Barcelonas are available... NewEgg has them in stock already.
http://www.newegg.com/Product/Product.aspx?Item=N8... (NewEgg)
flyck - Thursday, September 20, 2007 - link
Most hardware sites rely on hardware that has been given to them for test purposes. They won't buy them.
Justin Case - Thursday, September 20, 2007 - link
Which is probably one of the reasons why CPUs in some reviews overclock so well, and the ones you buy from retail overclock so poorly.
I don't trust any review where the item was supplied by the manufacturer; chances are they cherry-picked the best one they had, to get the best possible review. If the sites can't afford to buy the items they're reviewing, they should simply strike a deal with a retailer, where they get to test the stuff (and return it) in exchange for a sponsored link or something. That way the chances of getting an above-average (or below-average) part are the same as for anyone else.
Justin Case - Wednesday, September 19, 2007 - link
In other words, what you're saying is that the Opteron did not have more RAM than the Xeon, so it did not get any benefit from the different memory configuration.
Well, that's the "pro-AMD" conspiracy put to rest, no doubt. Thanks.
But you still have 8 DDR2 DIMMs on the Opteron versus 4 FB-DIMMs on the Xeon. As pointed out above, using the same configuration would either reduce the Barcelona system's power consumption (by about 18 watts, if both used 4 DIMMs) or increase the Harpertown system's consumption (by about 40 watts, if both used 8 DIMMs).
In the latter case (which is the likely scenario on a server under high loads - fill it with as much RAM as possible), that would put the Xeon's "performance per watt" below that of the Barcelona system in most of your tests.
And there's still the mystery of why a system that dissipates less heat needs more than twice as many fans. Or was there also a typo on the number of fans in each system? Maybe the number of fans is different but the total number of fan blades is the same, so that's alright? :)
Wirmish - Wednesday, September 19, 2007 - link
The problem is not the number of GB, it's the number of DIMMs.
Do you try to convince us that one 8GB DIMM uses the same power as eight 1GB DIMMs?
This is just plain stupid.
Wirmish - Tuesday, September 18, 2007 - link
Same question...
AMD: 8 DIMMs (16 GB) + 7 fans
Intel: 4 DIMMs (8 GB) + 3 fans
http://www.interfacebus.com/Memory_Module_DDR2_FB_... (LINK)
With 8 FB-DIMMs the Xeon may consume ~42 watts more!
A standard fan may consume anywhere from 1.6 to 6.0 watts.
Try using only 4 fans (1 middle-front, 1 top-rear, 2 CPU) with the AMD system.
It will work perfectly and you will save ~15 watts.
1. Add 4 FBDIMM in the Xeon system.
2. Remove three 3.5" fans in the AMD system.
3. Rebench.
4. Update your power consumption and performance/watt graphs.
5. Thank you very much.
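As a sketch, steps 1-2 above would shift the power numbers roughly like this (the ~42 W and ~15 W deltas are this thread's estimates; the measured baselines are made-up placeholders, not real data):

```python
# Hypothetical measured system draws (W); placeholders, not real data.
xeon_measured = 350.0
amd_measured = 330.0

xeon_adjusted = xeon_measured + 42.0  # step 1: 8 FB-DIMMs instead of 4
amd_adjusted = amd_measured - 15.0    # step 2: 4 fans instead of 7

# The configuration change alone swings the comparison by 42 + 15 = 57 W.
print(xeon_adjusted - amd_adjusted)
```

Whatever the real baselines are, a 57 W swing is more than enough to reorder most of the performance/watt graphs.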
Proteusza - Wednesday, September 19, 2007 - link
Okay, I also think it's fishy, but I'm playing devil's advocate here.
If you had to run a Netburst server, part of your power goes to cooling; that's part of your total energy requirement. If the AMD system requires more cooling, for whatever reason (no matter how strange that may seem), then like it or not it's a part of your server and energy expense.
As for the differing amount of RAM, that makes no sense at all. Why halve the amount of memory on the Intel system?
Justin Case - Wednesday, September 19, 2007 - link
Maybe AMD uses inferior knock-off photons, so despite dissipating less heat, it needs more cooling. ;)
Xspringe - Wednesday, September 19, 2007 - link
Except in this particular case, based on the available data, this does not make sense at all. Power requirements of the AMD system are already lower than those of the Xeons (including the extra fans and RAM), so these extra fans should not be required.
Proteusza - Wednesday, September 19, 2007 - link
Hello? Anandtech? Can we have some justification for the difference in test beds and the fact that performance per watt is now completely meaningless? Or are you just going to let this one slide?
Justin Case - Wednesday, September 19, 2007 - link
A difference of 50 watts would be enough to push the efficiency (performance per watt) of the Barcelona system above that of the Harpertown system in most of the benchmarks used in the article.
DeepThought86 - Tuesday, September 18, 2007 - link
Wow, if these numbers are representative then Barcelona is killing Intel, even at 45nm, on a $/performance basis, and has great perf/watt too. A 2.5GHz Barcelona will match anything Intel has until 2008 and a 3GHz Barcelona will obliterate them, period.
Looks like Harpertown isn't enough to match AMD if they can get it scaled quickly. I think AMD will be making large server market share gains going forward until Nehalem is introduced. Great news for buyers!
defter - Wednesday, September 19, 2007 - link
Yeah right: because the 3GHz Xeon has a 40-55% lead against the 2GHz Barcelona, you think that a 2.5GHz (+20% clockspeed) Barcelona will overtake a 3.2GHz Xeon?
It's quite funny: two years ago, when Intel was selling Netburst dual cores for $150-200 while AMD charged over $300 for its cheapest dual-core CPU, nobody cared about performance/$ benchmarks :)
But now some fanbois are making up "performance/$", "performance/$/watt/clock", "performance/watt/Ruiz's IQ" metrics just to artificially boost AMD's poor CPU. This is an enthusiast site; most people also care about which product is simply faster, which is why omitting expensive or 120W CPUs from the reviews is a bit silly. Fastest CPU from manufacturer A vs. fastest CPU from manufacturer B is always a fair game.
Proteusza - Wednesday, September 19, 2007 - link
As the other guy said, it's 25%. Also, there's the fact that 1 AMD MHz is not equal to 1 Intel MHz.
This may seem like utter fanboy crap, until you consider that a 1.8GHz Intel Core 2 Duo generally outperforms a 3GHz Intel Pentium 4.
It's a similar thing with how a K8 beat the pants off an equivalently clocked P4, and it looks like Barcelona is a very good performer.
You need to study computer architecture to understand why, but until then, keep your ignorance to yourself.
defter - Wednesday, September 19, 2007 - link
This is true, my mistake.
Here is your mistake: I was talking about percentage point increases, not MHz increases.
Ok, let's look back at example:
2GHz Barcelona performance: 1
3GHz Xeon performance: 1.4-1.55 (40-55% faster)
assuming perfect scaling:
2.5GHz Barcelona performance: 1.25 (in reality it will be less, since scaling is not perfect)
As you can see, even 2.5GHz Barcelona will not be as fast as current Xeons.
Why? Even the future 2.5GHz parts will be slow compared to competition.
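The scaling example above as a quick sketch (perfect linear scaling with clock is an optimistic assumption, so the real number would be lower):

```python
barcelona_2_0 = 1.0                        # 2.0 GHz Barcelona as the baseline
xeon_3_0_low, xeon_3_0_high = 1.40, 1.55   # 3.0 GHz Xeon, 40-55% faster

# Best case for AMD: performance scales 1:1 with clock speed.
barcelona_2_5 = barcelona_2_0 * (2.5 / 2.0)

print(barcelona_2_5)                 # 1.25
print(barcelona_2_5 < xeon_3_0_low)  # True: still behind even the lower bound
```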
You can look here for benchmarks comparing a future 2.5GHz Barcelona and a 3GHz 45nm Xeon: http://www.techreport.com/articles.x/13224/1
Barcelona loses every real world test; in many tests it's significantly behind the Xeon. Even when taking FB-DIMMs into account, the Xeon has lower power under load in the Povray test.
And the 3GHz Xeon isn't even a top-speed part; in November Intel will introduce a 3.16GHz quad-core Xeon, with faster parts coming later.
The fact is that AMD needs >3GHz Barcelonas in November just to achieve parity with Xeon.
Spoelie - Wednesday, September 19, 2007 - link
server vs desktop
On the desktop, Barcelona will have a) a faster memory controller, b) faster clocks, c) faster memory (DDR2-667 on the server; DDR2-800 and DDR2-1066 on the desktop).
Yes, for desktop apps K10 needs more or less clock parity, but the original poster alluded to the target market for the Opterons.
Proteusza - Wednesday, September 19, 2007 - link
OK, I see what you're saying about relative performance.
In most cases Barcelona seems to perform equal to an approximately equivalently clocked Harpertown.
So when a Harpertown chip has a 50% clockspeed advantage, it's going to beat any AMD chip until said AMD chip gets up to (approximately) equal clockspeeds.
Nonetheless, I think its performance per watt figures should be pretty interesting, and I'm glad it generally outperforms a Clovertown at equivalent speeds. If it couldn't do that, it would be a dead duck.
flyck - Wednesday, September 19, 2007 - link
You are correct when you are talking about performance-only related stuff. However, Barcelona won't be that far behind. Performance/watt will be better (it is already better if you consider that the more memory, the worse the situation gets for Intel).
"Why? Even the future 2.5GHz parts will be slow compared to competition"
That isn't true. They will be very competitive when we are talking about performance/W.
And I think that AMD will be very performance-competitive vs. Intel when they can reach the 2.8GHz-3GHz range, which is due at the start of the coming year (especially when faster registered RAM is available).
Another comment I would like to make is that you can hardly call the TechReport numbers server-based benchmarks, although they do point out that AMD will need frequency parity to be competitive in the enthusiast sector.
Final Hamlet - Wednesday, September 19, 2007 - link
I don't wanna argue with you over AMD vs. Intel, because you know: doing so on the Internet is like running in the Special Olympics... even if you win, you are still retarded.
Nevertheless, you can't even calculate.
If 2.0 GHz is the basis and you lift it to 2.5 GHz, it is NOT a 20% improvement, but rather 25%. Learn to calculate percentages; it helps, man, it helps...
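For the record, the arithmetic (measured against the 2.0 GHz base):

```python
base, lifted = 2.0, 2.5
increase = (lifted - base) / base
print(f"{increase:.0%}")  # 25%, not 20%
```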
DigitalFreak - Tuesday, September 18, 2007 - link
Dude, you just don't have a clue....
Accord99 - Tuesday, September 18, 2007 - link
Barcelona only kills on $/performance because it's being compared to the higher-end Intel SKUs. There are much cheaper 2.33GHz Clovertowns and comparable Harpertowns. Meanwhile, Harpertown pretty much guarantees Intel performance superiority for the foreseeable future.
Justin Case - Tuesday, September 18, 2007 - link
Beyond a certain price difference, it's cheaper to buy two systems than to buy a faster system. Most (granted, not all) software that can run efficiently on multi-core CPUs can also run efficiently on multiple nodes. A single, more expensive system can still be preferable if you have space constraints, of course, but I suspect Intel will lower its prices a bit as a response to Barcelona.
Personally, I would like to see a comparison between two systems with a similar price.
Accord99 - Tuesday, September 18, 2007 - link
"Most (granted, not all) software that can run efficiently on multi-core CPUs can also run efficiently on multiple nodes."
I wouldn't say most; more like a few. Plus, a lot of expensive software is licensed by the socket, so any savings on the CPU are minor compared to the overall cost of software.
Two systems also take up more space and consume more power, and the trend of virtualization also leads to fewer, bigger servers.
Justin Case - Tuesday, September 18, 2007 - link
Just out of curiosity, what software are you thinking about that scales well to multiple cores but cannot run on multiple nodes?
Rendering can be done in render farms, most servers can run in multi-node load-balancing configurations, etc. The only field that comes to mind where multiple nodes really aren't doable is scientific/HPC, which needs very fast access to a shared memory pool. But the days of glory of the monolithic supercomputer are kind of past.
chucky2 - Tuesday, September 18, 2007 - link
At work (an extremely large telecom company) we don't run each box at more than 50% load, for failover reasons. So when I'm looking at these numbers, it seems to me like AMD is doing pretty well with 2.0GHz CPUs vs. 3.0GHz Intel CPUs.
I'd really like to see how the scaling goes with Barcelona at, say, 2.0, 2.5, and 3.0GHz.
I think at 2.5-2.7GHz is when we're really going to see Barcelona start to come into its own...
Chuck
Justin Case - Tuesday, September 18, 2007 - link
I think this shows the same trend as the previous generation in terms of performance per watt: AMD rules for servers (low/medium CPU loads), Intel rules for workstations and render nodes (high CPU loads).
And both are complete overkill for desktop systems, but I'm sure Microsoft will find a way to make Windows crawl on them. :)
HPC/FEA/etc. is also high CPU load, but it also needs low memory latency and high bandwidth (where AMD has an advantage), so these benchmarks don't really tell us much. My guess is Intel will have a small advantage (despite the slower memory access), at least until Barcelona hits 2.5 GHz or so.
firewolfsm - Tuesday, September 18, 2007 - link
This review seems biased. If you want to run only the 2GHz part, at least calculate the performance per clock, because it looks like Barcelona has Intel beat in a lot of the benchmarks, meaning 2.5GHz would be much more competitive. And you also should have run the 3.2GHz K8.
Jason Clark - Tuesday, September 18, 2007 - link
As for the 3.2GHz, our reasoning was that it was a high-wattage part, and it didn't make sense to include it. At the wattage it runs at, perf/watt was not pretty.
Jason Clark - Tuesday, September 18, 2007 - link
Ross and I did not have a 2.5GHz part; it was nearly impossible just getting hold of the 2.0GHz... We'd run it if we had it :)
Regs - Wednesday, September 19, 2007 - link
Hi Derek, Johan De Gelas mentioned that he had a 2.5GHz part in your tech labs. Can we expect a preview of that soon?
Regs - Wednesday, September 19, 2007 - link
Whoops, I confused you, Jason, with Derek Wilson.
Which stepping of Barcelona were you using? (It wasn't in the test setup and has become an issue of late.)
Cheers
firewolfsm - Tuesday, September 18, 2007 - link
This review seems biased. If you want to run only the 2GHz part, at least calculate the performance per clock, because it looks like Barcelona has Intel beat in a lot of the benchmarks, meaning 2.5GHz would be much more competitive.
firewolfsm - Tuesday, September 18, 2007 - link
Sorry for the double...
GlassHouse69 - Tuesday, September 18, 2007 - link
Well, it looks like AMD wins. Money is everything. Oh sure, there will be some geek who will say, "Money is no object, only getting the workload done as fast as possible." That geek would be wrong. Amazing how this wouldn't start a price war. $400 vs... 2.5-3x that much? You could put four on a board and start rocking in the free world.
Makes me happy about Phenom. Imagine a $190 quad that isn't Intel? Something to buy, finally.