Server Guide Part 1: Introduction to the Server World
by Johan De Gelas on August 17, 2006 1:45 PM EST - Posted in IT Computing
Chassis format: why rack servers have taken over the market
About five years ago, two thirds of the servers shipped were still towers and only one third were rack servers. Today, 7 out of 10 servers shipped are rack servers, a bit less than 1 out of 10 are blade servers, and the remaining ~20% are towers.
That doesn't mean that towers are completely out of the picture; most companies expect towers to live on. The reason is that if your company only needs a few small servers, rack servers are simply a bit more expensive, and so is the necessary rack switch and other rack gear. So towers might still be an interesting option for a small office server that runs the domain server, a version of MS Small Business Server and so on, all on one machine. After all, a good way to keep TCO low is running everything from one solid machine.
However, from the moment you need more than a few servers, you will want a KVM (Keyboard, Video, Mouse) switch to save some space and to quickly switch between your servers while configuring and installing software. You also want to be able to reboot those servers when something goes wrong while you are not at the office, so your servers are equipped with remote control management. To make them accessible via the internet, you install a gateway and firewall running VPN software. (Virtual Private Networking software allows you to access your LAN via a secure connection over the internet.)
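What "remote control management" looks like in practice varies by vendor; as a minimal, hedged sketch, assume the servers have a baseboard management controller that speaks IPMI over the management LAN and that the open-source ipmitool utility is installed (the address and credentials below are made up):

```python
# Hedged sketch: power-cycling a hung server out-of-band over its management
# LAN using IPMI via the ipmitool CLI. Host, user and password are fictitious.
import subprocess

def power_cycle(bmc_host, user, password):
    # "chassis power cycle" asks the baseboard management controller (BMC)
    # to cut and restore power, independently of the (possibly hung) OS.
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", user, "-P", password, "chassis", "power", "cycle"],
        check=True,
    )

if __name__ == "__main__":
    power_cycle("192.168.100.21", "admin", "secret")
```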
With "normal LAN" and remote management cables hooked up to your Ethernet switch, KVM cables going to your KVM switch, and two power cables per server (redundant power), cable management quickly becomes a concern. You also need a better way to store your servers than on dusty desks. Therefore you build your rack servers into a rack cabinet. Cable management, upgrading and repairing servers is a lot easier thanks to special cable management arms and the rack rails. This significantly lowers the costs of maintenance and operation. Rack servers also take much less space than towers, so the facility management costs go down too. The only disadvantage is that you have to buy everything in 19inch wide format: switches, routers, KVM switch and so on.
Cable management arms and rack rails make upgrading a server pretty easy
Racks can normally contain 42 "units". A unit is 1.75 inch (4.4 cm) high. Rack servers are usually 1 to 4 units (1U to 4U) high.
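A quick sketch of that arithmetic (the 42U capacity and 1.75 inch unit height come from the text above; the example server mix is purely illustrative):

```python
# Rack unit arithmetic: 1U = 1.75 inch (~4.45 cm); a full-height rack holds 42U.
U_INCH = 1.75
RACK_UNITS = 42

def rack_height_cm(units=RACK_UNITS):
    """Usable rack height in centimeters."""
    return units * U_INCH * 2.54          # inches to centimeters

def units_used(server_heights):
    """Total rack units consumed by a list of server heights (in U)."""
    return sum(server_heights)

# Illustrative mix: 20x 1U + 7x 2U + 2x 4U = 42U, i.e. a full rack.
mix = [1] * 20 + [2] * 7 + [4] * 2
print(round(rack_height_cm(), 1), "cm of usable height")   # ~186.7 cm
print(units_used(mix), "of", RACK_UNITS, "U occupied")
```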
HP DL145, a 1U solution
1U servers (or "pizza box" servers) focus on density: processing power per U. Some 1U models offer up to four processor sockets and eight cores, such as Supermicro's SC818S+-1000 and Iwill's H4103 server. These servers are excellent for HPC (High Performance Computing) applications, but "storage intensive" applications require an extra investment in external storage. The primary disadvantage of 1U servers is the limited expansion possibilities: you'll have one or two horizontal PCI-X/PCI-e slots (via a riser card) at most. The very flat but powerful power supplies are of course also more expensive than normal power supplies, and the number of hard drives is limited to two to four. 1U servers also use very small 7000-10000 rpm fans.
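To put the density argument in numbers, here is a small illustrative calculation; the four-socket, eight-core-per-1U figure comes from the text, while filling an entire 42U rack with such machines is our own assumption:

```python
# Density of a rack filled with the four-socket, dual-core 1U servers
# mentioned above (4 sockets x 2 cores = 8 cores per U of rack space).
RACK_UNITS = 42
SOCKETS_PER_1U = 4
CORES_PER_SOCKET = 2

servers = RACK_UNITS                  # one 1U server per rack unit
cores_per_server = SOCKETS_PER_1U * CORES_PER_SOCKET
total_cores = servers * cores_per_server
print(servers, "servers and", total_cores, "cores in one 42U rack")   # 42, 336
```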
Sun T2000, a 2U server
2U servers can use more "normal" power supplies and fans, and 2U barebones therefore tend to be a bit cheaper than 1U ones. Some 2U servers, such as the Sun T2000, use only half-height 2U vertical expansion slots, which might limit your options for third party PCI-X/e cards.
The 4U HP DL585
3U and 4U servers have the advantage that they can place the PCI-X/e cards vertically, which allows many more expansion slots. Disks can also be placed vertically, which gives you a decent number of local disks: space for eight drives, and sometimes more, is possible.
Comments
AtaStrumf - Sunday, October 22, 2006 - link
Interesting stuff! Keep up the good work!
LoneWolf15 - Thursday, October 19, 2006 - link
I'm guessing this is possible, but I've never tried it... Wouldn't it be possible to use a blade server, and just have the OS on each blade, but have a large, high-bandwidth (read: gig ethernet) NAS box? That way, each blade would have, say (for example), two small hard disks in RAID-1 with the boot OS for ensuring uptime, but any file storage would be redirected to RAID-5 volumes created on the NAS box(es). Sounds like the best of both worlds to me.
dropadrop - Friday, December 22, 2006 - link
This is what we've had in all of the places I've worked at during the last 5-6 years. The term used is SAN, not NAS, and servers have traditionally been connected to it via fiber optics. It's not exactly cheap storage; actually it's really damn expensive. To give you a picture, we just got a 22TB SAN at my new employer, and it cost way over $100,000. If you start counting the price per gigabyte, it's not cheap at all. Of course this does not take into consideration the price of the fiber connections (cards on the server, fiber switches, cables etc.). Now a growing trend is to use iSCSI instead of fiber. iSCSI is SCSI over Ethernet and ends up being a lot cheaper (though not quite as fast).
Apart from having central storage with higher redundancy, one advantage is performance. A SAN can stripe the data over all the disks in it; for example, we have a RAID stripe consisting of over 70 disks...
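For perspective, the rough price-per-gigabyte arithmetic behind that remark, treating "way over $100,000" as simply $100,000 and 22TB as 22 x 1024GB:

```python
# Back-of-the-envelope cost per gigabyte for the 22TB SAN mentioned above.
san_cost_usd = 100000        # "way over $100,000", taken here as a lower bound
capacity_gb = 22 * 1024      # 22TB expressed in gigabytes

print(round(san_cost_usd / capacity_gb, 2), "USD per GB")   # ~4.44
```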
LoneWolf15 - Thursday, October 19, 2006 - link
(Since I can't edit) I forgot to add that it even looks like Dell has some boxes like these that can be attached directly to their servers with cables (I don't remember, but it might be a SAS setup). Support for a large number of drives, and multiple RAID volumes if necessary.
Pandamonium - Thursday, October 19, 2006 - link
I decided to give myself the project of creating a server for use in my apartment, and this article (along with its subsequent editions) should help me greatly in this endeavor. Thanks AT!
Chaotic42 - Sunday, August 20, 2006 - link
This is a really interesting article. I just started working in a fairly large data center a couple of months ago, and this stuff really interests me. Power is indeed expensive for these places, but given the cost of the equipment and maintenance, it's not too bad. Cooling is a big issue though, as we have pockets of hot and cold air throughout the DC. I still can't get over just how expensive 9GB WORM media is and how insanely expensive good tape drives are. It's a whole different world of computing, and even our 8 CPU Sun system is too damned slow. ;)
at80eighty - Sunday, August 20, 2006 - link
Target Reader here - SMB owner contemplating my options in the server route
again - thank you
you guys fucking \m/
peternelson - Friday, August 18, 2006 - link
Blades are expensive but not so bad on eBay (as regular server gear is also affordable second-hand).
Blades can mix architectures, e.g. IBM Cell processor blades could mix with Pentium or maybe Opteron blades.
How important U size is depends on whether it's YOUR rack or a datacentre rack. Cost per square foot is higher in a datacentre.
Power is not just the cents per kWh paid to the utility supplier.
It is also the cost of cabling and PDUs,
the cost (and efficiency overhead) of the UPS,
the cost of remote boot (APC Masterswitch),
the cost of a transfer switch to let you swap out UPS batteries,
and the cost of having generator power waiting just in case.
Some of these scale with capacity, so they cost more if you use more.
Yes, virtualisation is important.
IBM have been advertising server consolidation (i.e. stopping the invasion of beige boxes).
But also see STORAGE consolidation, e.g. an EMC array on a SAN. You have virtual storage across all platforms, adding disks as needed or moving free space virtually onto a different volume as needed. Unused data can migrate to slower drives or tape.
Tujan - Friday, August 18, 2006 - link
"[(o)]/..\[(o)]"Zaitsev - Thursday, August 17, 2006 - link
Fourth paragraph of intro.
Haven't finished the article yet, but I'm looking forward to the series.