The million dollar question: how do you upgrade your datacenter
by Johan De Gelas on April 7, 2009 12:00 AM EST - Posted in IT Computing
In our last article about server CPUs, I wrote:
"the challenge for AMD and Intel is to convince the rest of the market - that is 95% or so - that the new platforms provide a compelling ROI (Return On Investment). The most productive or intensively used servers in general get replaced every 3 to 5 years. Based on Intel's own inquiries, Intel estimates that the current installed base consists of 40% dual-core CPU servers and 40% servers with single-core CPUs."
At the end of that presentation, Pat Gelsinger (Intel) makes the point that replacing nine servers based on the old single-core Xeons with one Xeon X5570 based server will result in a quick payback: according to Intel, your lower energy bill will pay back your investment in eight months.
Why these calculations are quite optimistic is beyond the scope of this blog post, but suffice it to say that SPECjbb is a pretty bad benchmark to base ROI calculations on (it can be "inflated" too easily) and that Intel did not consider the amount of work it takes to install and configure those servers. However, Intel does have a point that replacing the old power-hungry Xeons (irony...) will deliver a good return on investment.
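To make the payback reasoning concrete, here is a back-of-the-envelope sketch of how such a calculation works. All the numbers below (wattages, electricity price, server cost, cooling overhead) are illustrative assumptions of mine, not Intel's actual inputs:

```python
# Back-of-the-envelope payback sketch for a 9:1 server consolidation.
# Every figure here is a hypothetical assumption, not Intel's data.

def payback_months(old_servers, old_watts, new_watts,
                   kwh_price, new_server_cost, pue=2.0):
    """Months until the energy savings cover the new server's price.

    pue models cooling/distribution overhead: every watt at the plug
    costs roughly pue watts at the meter.
    """
    saved_watts = old_servers * old_watts - new_watts
    kwh_per_month = saved_watts * pue * 24 * 30 / 1000
    savings_per_month = kwh_per_month * kwh_price
    return new_server_cost / savings_per_month

# Nine 400 W single-core Xeon boxes replaced by one 350 W X5570
# server, at $0.10/kWh and a $6,000 server price (all hypothetical):
print(round(payback_months(9, 400, 350, 0.10, 6000), 1))
```

Even this rough sketch shows how sensitive the result is to the assumed power draw and electricity price, which is exactly why vendor payback figures deserve scrutiny.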
In contrast, John Fruehe (AMD) points out that you could upgrade dual-core Opteron based servers (the ones with four-digit model numbers and DDR-2) with hex-core AMD "Istanbul" CPUs. I must say that I have encountered few companies who would actually bother upgrading CPUs, but his argument makes some sense, as the new CPU still uses the same kind of memory: DDR-2. As long as your motherboard supports it, you might just as well upgrade the BIOS, pull out your server, replace the 1 GB DIMMs with 4 GB DIMMs, and swap the dual-cores for hex-cores instead of replacing everything. That seems more cost effective than redoing the cabling, reconfiguring a new server, and so on...
There were two reasons why few professional IT people bothered with CPU upgrades:
- You could only upgrade to a slightly faster CPU. Upgrading a CPU to a higher clocked, but similar CPU rarely gave any decent performance increase that was worth the time. For example, the Opteron was launched at 1.8 GHz, and most servers you could buy at the end of 2003 were not upgradeable beyond 2.4 GHz.
- You could not make use of more CPU performance. With the exception of the HPC people, higher CPU performance rarely delivered anything more than even lower CPU percentage usage. So why bother?
AMD also has a point that both things have changed. The first reason may no longer be valid if hex-cores do indeed work in a dual-core motherboard. The second reason is no longer valid, as virtualization allows you to use the extra CPU horsepower to consolidate more virtual servers on one physical machine - on the condition, of course, that the older server allows you to replace those old 1 GB DIMMs with a lot of 4 GB ones. I checked the HP DL585 G2, for example, and it does allow up to 128 GB of DDR-2.
So what is your opinion? Will replacing CPUs and adding memory to extend the lifetime of servers become more common? Or should we stick to replacing servers anyway?
23 Comments
jtleon - Friday, May 1, 2009 - link
I appreciate the diligence in these comments... thanx to those contributing. I am new to the serving world - I have only been hosting for about a year now. I appreciate Intel's efforts to reduce physical server count, but I wanted to share my own insecurities about such a reduction.
My hosting endeavors only use low power legacy hardware, thanks to MS WinFLP, I continue to keep these boxes out of the landfill. Pentium III is still a diligent CPU and meets the needs of my relatively few clients (<10000).
Given the age of my hardware, I take comfort in having more, rather than less units available - my operations are 24/7, and if a unit goes down for any reason, I have spares in waiting to fill in.
Yes Intel's philosophy that less is more - is good to see, but with less, each piece is more critical to the success of the whole. With more units, a single unit failure has less impact overall.
Additionally, I do believe that the weakest link in hosting/serving remains the OS/software deployed. It really doesn't matter if you can reduce unit count 10 to 1 if that 1 unit crashes more often due to OS/software failure. It is inevitable that crashes will be more frequent, as you are running 10X the processes on that single CPU/OS. In my field, uptime is GOLD and downtime is death. Having more independent units, each with a lower workload, is a much more secure picture in my mind - until we truly have a robust operating system - like DOS 5.0, lol!
Thanx again to those contributing.
jtleon
BritishBulldog - Thursday, April 16, 2009 - link
I mean, ok, so you renew your servers every 3 years. So what do you do with the old ones? Where do they go, apart from eBay or the skip? Machines that are perfectly good and may last another 5 years, dumped? Ok, so now I tell you that in future it's going to cost you to get rid of this kit! Britain is a small country and here we are overrun with old electronic junk and we are running out of places to put it, so what do we do with it? It's called recycling, and the company that bought the kit (and the manufacturer) will have to pay to get rid of it!! (The EU loves these sorts of initiatives!!)
So let's get innovative. Space is cheap, and as most people have said, they tend to over-spec when they buy servers!! Now we have a great technology called virtualization... see where I'm going... Why not virtualise your desktop solution and run it on the 3 year old servers that you were going to chuck out? The lifetime of your servers will be doubled!! (Working as a pool with shared storage, so what if one breaks!!) And next time you refresh your desktops, replace them with thin clients, which will last much longer than your normal desktops and will be cheaper to recycle!! Since most big desktop projects are going virtual anyway, this makes the most sense.
alpha754293 - Sunday, April 12, 2009 - link
Both AMD and Intel have valid points. For AMD's point to be true, you have to already be running the latest generation socket in order to be able to do the drop-in replacements. If you don't, then you have to replace the server anyway to make that even a remote possibility.
Intel's point holds too, because while you can virtualize systems - and perhaps, to some extent, servers - there's a big push to consolidate multiple systems into a single system.
Even Sun (with their UltraSPARC T-series) pushes that.
But there are "conditions" and environmental factors that need to be in place for both. And I think that that's also important and worthy of mention.
ssampier - Sunday, April 12, 2009 - link
I work in a government environment. We keep our servers until they die; no upgrades or anything new. It can be frustrating to keep adding duct tape and baling wire, but the bosses say: if it works, why replace it? Since our revenue is consistent and not profit related, it's hard to justify anything else. I just wish I had some newer and more reliable hardware to work with, especially with the advances in CPUs and virtualization.
mlambert - Saturday, April 11, 2009 - link
I'm really surprised by some of these articles, because they don't have anything to do with the actual business world. Companies don't care about upgrading "old" hardware. They replace hardware as soon as it has depreciated and is financially off the books. Most companies do a 3 year cycle, some do 5. Some do it based on the original maintenance agreements. That's all that matters.
The cost of a maintenance re-up is generally more expensive than buying new hardware. Especially with big ticket items like storage arrays. EMC/NetApp/etc will give you millions in hardware for free these days as long as you buy the software licenses (which you can get for 70%+ off list and even cheaper with "vendor displacement").
I like the idea of enterprise articles at AnandTech, but they really need to be valid for the real world enterprise to be worth anyone's time.
JohanAnandtech - Saturday, April 11, 2009 - link
"Companies don't care about upgrading "old" hardware." Well, I admit I provide a "too techie" point of view. But I don't believe in CTOs that completely detach themselves from the tech side of things to "align" with the business goals. There are quite a lot of IT sites that write books about the CTO and the business, to the point where you ask yourself what the "T" stands for. Understand that the IT side of AnandTech will focus on the tech side of enterprise IT. I strongly believe we have too few good tech IT sites.
There must be balance between "using technology as best as you can" and trying to understand the business needs.
"EMC/NetApp/etc will give you millions in hardware for free these days as long as you buy the software licenses "
IMHO, that is just marketing. They give you hardware for free because they want a steady income and they want to bind you to their product line. At the end of the contract, you are probably paying much more in total (and for software features that you never used) than with the classic way of buying hardware. But I agree that it might make a lot of sense in some situations.
Still, that doesn't make comparing hardware and thinking about lowering hardware costs useless.
Ninevah - Friday, May 22, 2009 - link
I've done server support for over 10 years now, so I feel I should chime in. It _IS_ worth it to renew support/warranties on servers 'cuz the cost is usually pretty reasonable for the 4th and sometimes even the 5th year. After that, however, the cost skyrockets, because the manufacturers know that the cost to them of replacing components is typically a LOT higher.
It is highly debatable whether renewing support/warranties is worth it for such things as storage arrays. Companies like EMC (whom I'm most experienced with) purposely include 3 years of support in the initial price because they don't want the customer to know how much that actually costs compared to the hardware and software itself. Then, after that 3 years is up, they can come to the customer and show them what it would cost to continue the coverage. Most of the time it is just as expensive as upgrading or buying a totally new storage array. This is exactly their intent all along. This drives sales, their commissions, makes the company appear to sell more products, and makes their bottom line look good. And they know that they'll have the exact same discussion in another 3 years.
The problem for companies like EMC that have operated like this for years is that their competitors who sell MUCH cheaper products have been improving their quality and performance enough that customers have trouble justifying the expensive EMC purchases versus HP, Equallogic, etc. In fact, I daresay that the biggest thing driving EMC's business nowadays is the fact that they're already in a LOT of companies, and larger organizations like to have preferred vendors selected. They often have established industry heavyweights like EMC already chosen from years back, and so alternatives just aren't up for discussion.
Loknar - Monday, April 27, 2009 - link
I agree, IT matters. Maybe some companies outsource IT completely and neglect it, or just throw a bit of money at the problem... but web businesses should invest themselves and integrate everything if they can afford it.
I think most web companies like mine appreciate how technology is an integral part of their business and are ready to inject millions if it helps productivity by a mere 10%.
Power savings are a new concept to me; although I read about them often on AnandTech, I never felt it until now. Intel says "upgrade now and recoup your investment in 8 months"? Well, that gives me something to think about - but we'll probably replace the whole servers anyway, because that's the IT philosophy: replacing the whole thing reduces the frequency of upgrades (versus smaller, more frequent upgrades).
vol7ron - Thursday, April 9, 2009 - link
A computer after 3 years is kind of like a used car. Even if you replace the engine, you might have to replace the transmission. I don't know about you, but I prefer the new. Those old transistors and copper/gold moldings break down with extreme heat and use. If I were to just replace the CPU, what happens in a year or so when the mobo goes? Now I have to find old mobos, only to find there aren't any that meet my specifications. So I have to buy new parts in addition to the sh1tty processor/memory I originally bought.
Servers are a fixed cost; keep them low and manageable. But if you wanna make a big change, control the variable costs. In addition to CPU power usage, people hardly seem to mention the AC bill. Using lower-heat generators will reduce this as well - I'm not sure by how much - but there are many little factors like that which add up, such as more space in the room. Less heat may also mean it can dissipate into the floor/walls, causing a two-fold reduction in the AC bill.
----
This being said, I think it's also safe to say that parts seem to last a lot longer than they used to. Most of the parts I deal with seem to last beyond 3x the life of the warranty with mild overclocking. While CPU utilization is a good argument, processing speed is also a good one. I'd like to see a study on server processing performance (not CPU utilization) to determine whether there is any noticeable difference to the end user, and then do a SWOT analysis.
has407 - Friday, April 10, 2009 - link
Good points, but in commercial IT shops there are a few other factors in play.
1. Depreciation. That allows us to write off the equipment. That typically occurs over a 3 year period in the US and is based on IRS rules. That's a hit to the "net" line on most financials (i.e., it's amortized over the equipment's life). CFOs generally prefer that.
2. When you're buying/leasing equipment, virtually all companies will do it for at least 3 years, including the service contract (which, if you're leasing, is typically required anyway). All the costs, including maintenance, can then be amortized. CFOs like that.
3. If you junk or sell the equipment before it's depreciated, you typically take an immediate financial hit (unless you sold it for more than the remaining depreciation). CFOs don't like that.
Which means that for most companies (at least in the US), 3 years is pretty much the minimum refresh cycle, and vendors cater to that. After that, it depends...
4. After the equipment is depreciated, a service contract becomes a hit to the "expense" line on most financials, which means it's a hit to EBITA. CFOs generally don't like that.
5. OTOH, if cash/cashflow is more of an issue than how good the financial statement looks that quarter, that same CFO will want to know why you want to spend more cash to replace something that's working, especially if they're looking at a datacenter with fixed costs and much longer amortization period. Unless you're a Really Big shop, those numbers are baked, and power savings are a drop in the bucket compared to the cost.
E.g., You can reduce the rack count by 50%? So what. We're in a 5-year contract for the space; we built out the datacenter and it's going to take 20 years to amortize. You can reduce power consumption by 50%? So what. That'll cost us more in monthly cashflow after the lease upgrades. Not to mention that we won't see that unless the equipment is in service far beyond the point it can be depreciated. Etc., etc., etc. ...
6. That same CFO, and anyone responsible for a P&L, also values predictability. That's why most companies buy their systems with service contracts, may keep those systems in service longer than otherwise, and ultimately pay a higher price than if they replaced them with the latest-and-greatest: they're paying a premium for predictability.
In short, what makes sense is highly variable and depends on a lot of factors. It would do the IT profession well if more people invested time in learning to read a financial statement and understanding the business parameters, rather than simply focusing on speeds and feeds.
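The depreciation logic in points 1-3 above can be sketched in a few lines. This is a simplified straight-line view with made-up numbers (real US tax depreciation follows MACRS schedules, not straight-line), just to show why retiring a server early creates the write-off hit described in point 3:

```python
# Simplified straight-line depreciation: what the books still carry
# on a server partway through its life. Real US tax depreciation
# uses MACRS schedules; this only illustrates the intuition, and
# the dollar figures below are hypothetical.

def remaining_book_value(purchase_price, life_years, months_in_service):
    """Book value left after straight-line depreciation."""
    monthly = purchase_price / (life_years * 12)
    return max(0.0, purchase_price - monthly * months_in_service)

# A $10,000 server on a 3-year schedule, retired after only 2 years:
# junking it then means writing off the remaining value all at once,
# which is the immediate hit to the financials that CFOs dislike.
print(remaining_book_value(10000, 3, 24))
```

After the full 3 years the remaining value is zero, which is why the 3-year mark is the natural earliest point for a refresh in this model.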