Original Link: http://www.anandtech.com/show/754



Introduction

Can you believe it?  It's been almost four years since AnandTech first went up and running online.  It has always been overwhelming to think about where we came from and where you all, our readers, have helped us go.  You've taken what started out as an idea on a Geocities page to an online magazine with a circulation of just under 3 million people.  Saying thanks is the absolute least we can do, but keeping AnandTech and the Forums a place you can continue to come to for entertainment, enjoyment and education is definitely something you all deserve. 

We have been experiencing tremendous growth at AnandTech over the past few months.  Traffic has more than doubled since our major server upgrade in August of last year, yet until recently we were running on a relatively unchanged server setup.  In order to cope with the increase in traffic we had to plan yet another server upgrade, but this time around the situation was much more complicated.  The software load balancing solution we used to make sure you would always be sent to the least loaded server was unfortunately showing its flaws, resulting in quite a few undesirable side effects.  At the same time we ran into an unfortunate bug with a few of our server boards, proving that long-term real world use is the best test of reliability we can offer. 

Without further ado, let's take a look at the issues we encountered, how we dealt with them, and what the new AnandTech Server Setup looks like. 



Mayday, mayday, we're going down

As you'll remember from our first article on our server upgrade, we built our first five Athlon servers on MSI K7T Pro (KT133) Socket-A motherboards.  These boards had been working perfectly for months on end, until one day we got a call from our datacenter informing us that our Mail/Images Server (the server that hosts staff mail and all of the images you see in reviews) was down. 

While the Mail/Images Server problem ended up being related to a bad set of redundant power supplies, that unfortunate incident was the start of a number of problems with our K7T Pro based Athlon servers. 


Click to Enlarge

Randomly we'd get calls from the datacenter saying that one of the AnandTech Web Servers had locked up, and connecting a console to the machine would reveal nothing more than a black screen: no video, nothing.  Resetting the machine would almost never work; oftentimes resetting the BIOS would be necessary in order to get the machine to POST again. 

Quick searches through Deja News and our own AnandTech Forums revealed that quite a few users had had similar experiences with MSI boards, particularly the MSI K7T Pro2A.  Most recently, we actually duplicated the problem in the lab with the MSI K7T266 Pro.  It was clear that this wasn't an incident isolated to our servers, but rather a much larger problem. 

Quite a few readers have written to us about the problem and hypothesized as to its cause.  The most plausible explanation seems to be an issue with delivering the correct voltage to the CPU; we are currently working with MSI on getting to the bottom of the issue. 


Click to Enlarge

Unable to wait on MSI for a fix (BIOS updates did not work), we had to find replacements for the K7T Pro boards we had up in Pittsburgh (where the Stargate datacenter is located).  We decided to go with the tried and true ASUS A7V as a replacement board, since we have been using it in the lab for months now without any problems.  We will keep you all updated on how these hold up as well.



Software Clustering

If you'll remember back to our first upgrade article, you'll know that we went with a software clustering solution to balance the load between our (at the time) five AnandTech Web Servers.  The clustering package we used was Allaire's ClusterCATS, which is bundled with ColdFusion Enterprise Edition (ColdFusion is our Web front end software).

The way ClusterCATS worked with our server setup was simple; we used something known as Round-Robin DNS, meaning that whenever you typed in www.anandtech.com the domain would resolve (point) to one of the five IP addresses of our five AnandTech Web Servers.  The first person to type in www.anandtech.com would get pointed at the first AT Web Server, the second person would get the second AT Web Server and so on…
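Conceptually, the rotation works like this; a minimal Python sketch (the IP addresses here are made up for illustration, not our real ones):

```python
from itertools import cycle

# Hypothetical A records for www.anandtech.com (illustrative IPs only).
A_RECORDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"]
rotation = cycle(A_RECORDS)

def resolve():
    """Each DNS lookup hands out the next address in the rotation,
    so successive visitors land on successive web servers."""
    return next(rotation)

first_visitor = resolve()   # pointed at AT Web Server 1
second_visitor = resolve()  # pointed at AT Web Server 2
```

Once the list is exhausted, the rotation simply wraps around to the first server again.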

After you arrived at one of these servers, ClusterCATS would check whether that server was loaded too heavily (we set variables in software to define what "too heavy" a load was).  If the server was loaded too heavily, you would be seamlessly transported to one of the other web servers; more specifically, the one with the least load on it. 
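In Python terms, the redirect decision amounts to something like the sketch below. This is not ClusterCATS' actual code; the server names and the threshold value are hypothetical stand-ins for the variables we set in software.

```python
MAX_LOAD = 0.75  # hypothetical "too heavy" threshold, defined in software

def handle_visitor(landed_on, cluster_loads):
    """If the server the visitor landed on is over the threshold,
    bounce them to the least-loaded member of the cluster."""
    if cluster_loads[landed_on] <= MAX_LOAD:
        return landed_on  # load is acceptable, stay put
    return min(cluster_loads, key=cluster_loads.get)

loads = {"web1": 0.90, "web2": 0.40, "web3": 0.60}
redirected = handle_visitor("web1", loads)  # web1 is overloaded, so bounce
```

Note that the check happens only after the visitor has already reached a server, which is exactly where the overhead discussed next comes from.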

As you can probably guess, this creates a decent amount of overhead, because you are being bounced around once you're already on one server.  Another side effect was that the software wasn't exactly the best-made package we had used; oftentimes it would do odd things such as causing certain images not to load properly, and we even had problems with servers being randomly thrown out of the cluster.  It was clear that this wasn't a good long-term solution, and other software packages would still have the same overhead problems as ClusterCATS; we needed a dedicated piece of hardware that would handle the task of load balancing.

Cisco happened to make such a device, the LocalDirector.  Just recently, Cisco's acquisition of ArrowPoint Communications allowed them to replace the LocalDirector with ArrowPoint's line of hardware load balancing devices.  These devices are relatively simple; they consist of a RISC processor, a decent amount of memory (usually around 128MB), and a few network interface ports for connectivity.

With a hardware based solution such as Cisco's units, instead of using a Round-Robin DNS setup, www.anandtech.com would resolve to the IP address of the load balancer, and this device would direct you to the server with the least load on it.  The premise is simple, and the hardware is really nothing more than a dedicated computer; however, Cisco charges around $25,000 for their mid-range ArrowPoint solutions.  There had to be a better way.



A Better Alternative

With the specs of the Cisco-ArrowPoint load balancers publicly available, it wasn't too far of a stretch to think that we could put together a PC that would offer performance similar to, if not greater than, a dedicated hardware load balancer.  The only problem would be finding software that offered the same load balancing features as the Cisco-ArrowPoint systems.

While the Linux community is often put down with the claim that the solutions it advocates couldn't possibly be used in a corporate environment because of a lack of support, the same argument could be made about building and maintaining your own servers.  It turns out that there is quite an interesting project online known as the Linux Virtual Server Project.  This project essentially centers on the creation of a Linux based hardware load balancer that performs all of the duties of a more expensive Cisco unit, but without the excessive cost; after all, Linux is Open Source and thus definitely on the affordable side ;)

We were introduced to the Linux Virtual Server Project and it struck us as an option that just might work for our needs.  We took a risk eight months ago on Athlon servers and we have yet to regret it, so we felt that we might as well take a chance on this project too. 

The system actually works quite well; taking www.anandtech.com as an example (the same applies to forums.anandtech.com, just with a different IP address), the domain resolves to the IP address of the load balancer machine, and the software directs you to the server with the least load. 

The load balancer determines which box has the least load by looking at the number of connections it has sent to each box and comparing that to a weight variable we define, which tells the load balancer how powerful one server is relative to another.  For example, one of our 1.3GHz Athlon servers would have a greater weight than a 1.0GHz Athlon server because it can handle more traffic.
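The scheduling rule can be sketched in a few lines of Python. The server names and weight values here are hypothetical, and the real Linux Virtual Server scheduler works on live connection counts inside the kernel; this just shows the comparison it performs.

```python
# Each server's weight reflects its relative horsepower.
servers = {
    "web1": {"weight": 10, "conns": 0},  # e.g. a 1.0GHz Athlon
    "web4": {"weight": 13, "conns": 0},  # e.g. a 1.3GHz Athlon
}

def pick_server(pool):
    """Weighted least-connections: send the next visitor to the server
    with the lowest connections-to-weight ratio, i.e. the box that is
    least loaded relative to its power."""
    return min(pool, key=lambda name: pool[name]["conns"] / pool[name]["weight"])

choice = pick_server(servers)
servers[choice]["conns"] += 1  # the chosen box now carries one more connection
```

Over many visitors, traffic settles out in proportion to the weights, so the 1.3GHz box ends up carrying about 30% more connections than the 1.0GHz box.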

The load balancer also does not require that all of the traffic go through it; it simply acts as a traffic director and sends you to the appropriate server (although you can configure it otherwise).  After you're sent to a server, all communication occurs directly between you and that particular server.  So when an AnandTech Web Server prepares Page 5 of our AMD 760MP Review and sends it off to you, it sends it through its own network cards directly to you, not through the load balancer.

With another load balancer in the setup, we also have built-in failover support, because it's never good to have your entire setup rely on one machine.  If anything happens to the first load balancer, the second one takes over immediately.

Speaking of failover support, one of the benefits of this setup vs. a software solution working with Round-Robin DNS is that there is seamless failover should a web server go down.  With the software/round-robin solution we had to remove the failed box from the Round-Robin DNS setup, otherwise people could be redirected to a machine that wasn't up.  In this case, as soon as a box fails, the load balancer takes it out of the cluster. 
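The behavior described above boils down to a periodic health check on each real server; a minimal sketch, where check() stands in for a real TCP or HTTP probe and the server names are hypothetical:

```python
def prune_pool(pool, check):
    """Keep only the servers that pass their health probe; a box that
    fails the probe is dropped from the cluster so no visitor is ever
    directed to it."""
    return [server for server in pool if check(server)]

pool = ["web1", "web2", "web3", "web4", "web5"]
# Pretend web3 just went down: its probe fails, everyone else's succeeds.
alive = prune_pool(pool, check=lambda s: s != "web3")
```

With Round-Robin DNS there is no equivalent step: cached DNS answers keep sending visitors to the dead box until the record is manually pulled and the change propagates.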



Finding the right hardware

These two load balancer boxes would obviously have to use hardware that is well supported and solid under Linux.  They didn't necessarily have to be the most powerful boxes in the world; they just had to be robust enough to handle the thousands upon thousands of simultaneous comparisons executed in deciding which server to redirect visitors to.  Each box would also need at least two network cards: one for incoming traffic and one for outgoing traffic to our SMC switch that connects the rest of the web servers and database boxes.  At the same time we wanted these machines to be relatively low profile, in 1U cases, so the amount of rack space lost to them would be minimal; there was no need for these machines to be in a 4 or 5U chassis.


Click to Enlarge

For their experience in building just these types of systems, we turned to PogoLinux to help us build our first Linux based servers for AnandTech.  Pogo actually had the perfect bare bones system: their WebWare 1100 line.  Pogo supplied us with a slightly modified WebWare 1100 configuration: an Intel i815 EAL motherboard, a Pentium III 866, 256MB of PC133 SDRAM, two NICs and an IBM IDE drive, all mounted in a very low profile 1U chassis that was ideal for our load balancer system. 


Click to Enlarge

The requirements for a motherboard in a 1U chassis are a bit different than in other, larger cases.  The biggest issue is that the memory slots must sit at a 45-degree angle, because there isn't enough vertical room for them to remain upright.  This unfortunately kept us from using any Socket-A platforms; however, in the future we will definitely see some 1U capable Socket-A motherboards.

For the software configuration we turned to the man behind the project, Wensong Zhang, the current maintainer of the Linux Virtual Server Project website.  Wensong proved to be a guru of the software setup and worked with us in structuring the load balancers to work properly with our hardware setup.  We'd like to take this time to thank him for his incredible work and show our appreciation for what he has done for us.  Thanks Wensong.



DDR Athlons as servers?

Eight months ago we asked whether or not Athlons could be used as servers, and the answer was a resounding yes.  Recently we asked ourselves if the AMD 760 DDR platform was capable of being a reliable host for the second generation of Athlon servers to host AnandTech, and it seems we've got another yes on our hands.

For motherboards we decided to use the Gigabyte GA-7DX, which is actually the board AMD chose to use in all of their DDR review sample systems.  The board is rated for operation with the newest 1.33GHz Athlons, and we haven't had any stability issues with it in the lab.  We purchased our boards from NewEgg, who promptly delivered them to us in Pittsburgh; it seems these guys have been helping us out quite a bit lately.

We populated both DIMM slots on the 7DX with two 256MB PC2100 DDR modules from Mushkin, giving each web server a total of 512MB of PC2100 DDR SDRAM.  Since the majority of the AnandTech site is cached, pages are generally served directly out of memory, meaning the higher bandwidth, higher performance memory solution should actually come in handy. 


Click to Enlarge

In terms of CPUs, we continued to use 1.0GHz Athlons and added 1.1GHz, 1.2GHz and 1.3GHz parts to the array. 

In terms of server count, we added two more servers to the AnandTech Web Server setup and three more to the AnandTech Forums Server setup.  The Forums have been at a disadvantage, since they have been running on dual 500MHz Xeons while the main AnandTech site has been running off 1.0GHz+ Athlons.  The Forums also aren't able to take advantage of the same caching techniques as the main site and thus place a much higher load on individual web servers, as we explained in our last article. 



Our Senior Developer and Webmaster, Jason Clark, explains the software side of things here at AnandTech:

AnandTech Web Architecture

Evolution exists in all areas of technology, and one of the main areas of evolution has been the delivery of web content. AnandTech originally served content from static HTML pages. While this is the least demanding form of content delivery, and the most cost effective, it has some fundamental drawbacks.

Static Content

As sites grow, so do their maintenance and complexity levels. Posting content to the web in the static HTML delivery format is simple for small infrastructures, where there aren't many dependencies. A site like AnandTech, however, requires that links be created to point to each document, and that the necessary HTML for the site's look and feel be included. Once you have hundreds of documents published, and thousands of news posts per year, this method of content delivery begins to show its weakness. When the time comes, and it will, to give the web site a new look and feel, the site maintainer is left with the task of editing thousands of HTML pages, a very tedious job. As site popularity increases, so does the need for user interactivity; searching content and filtering it by type are not easy tasks in the static HTML delivery format.

Dynamic Content

Database driven web sites have been around for quite some time; their popularity, however, has dramatically increased over the last few years. AnandTech saw the need to move to a more efficient manner of delivering content to end-users. The first iteration of the dynamic web site ran in a Sun Solaris environment, with the Allaire ColdFusion Web Application Server as the front-end and Oracle 8i as the back-end database. This system was touch-and-go, due to some problems in the Solaris port of ColdFusion. Oracle was a good platform for the back-end, but unkind in terms of cost of ownership: the more time people have to spend administering a system, the higher the cost of ownership becomes. As the days are most certainly not getting longer, this raised some issues for us.

Why ColdFusion?

Many people asked why ColdFusion was implemented rather than one of the various other available software platforms. The main reason is time to market; ColdFusion's ease of HTML integration is its main strength. Every web application platform has its strengths and its weaknesses, but the main weakness in any web application is the actual source code. All of the mainstream web application platforms are similar in performance; it's simply a matter of how well the code is written, and that is almost always the determining factor.

Moving to the NT Platform

After learning a few hard lessons, AnandTech moved on to the platform on which ColdFusion was most mature: NT. AnandTech also switched from Oracle 8i to a SQL Server 7/2000 environment, which greatly reduced administration time. There was no performance drop from Oracle 8i to SQL Server 7; in fact, in some cases performance improved.



AnandTech Today

AnandTech went through a few different look-and-feel revisions and finally settled on what you see today. Today's AnandTech is completely dynamic: documents, news, polls, etc. are all published through an Administration system. To keep speed to a maximum, caching routines are used for most of the main queries. This allows the system to run a query, such as the latest web news, and store the result in server memory for a specified amount of time; the system can then retrieve the data from memory when the page is read, which keeps ColdFusion from contacting the database server. Searching AnandTech is accomplished using the Microsoft Full-Text search engine built into SQL Server: collections are created on the tables that require searching, and the system runs ColdFusion queries against those collections. The majority of the site layout is held in include files (i.e. the top, bottom, left, and right parts of the page). This allows look-and-feel changes to be made with ease, since only the outside templates need to be modified. And because the actual content for the web site is stored in the database, changing fonts, logos, or site design is a fairly easy task.
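The caching routine described here is essentially a timed (TTL) query cache. A rough Python equivalent of the idea follows; AnandTech's actual implementation is in ColdFusion, so the class and names below are only an illustrative sketch.

```python
import time

class QueryCache:
    """Hold the result of an expensive query in memory for `ttl` seconds,
    so repeat page views are served from memory instead of the database."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (expiry_timestamp, cached_result)

    def get(self, key, run_query):
        expiry, result = self.store.get(key, (0.0, None))
        if time.time() < expiry:
            return result               # cache hit: no database contact
        result = run_query()            # cache miss: query the database
        self.store[key] = (time.time() + self.ttl, result)
        return result

cache = QueryCache(ttl=300)  # e.g. keep the latest-news query for 5 minutes
news = cache.get("latest_news", run_query=lambda: ["story 1", "story 2"])
```

The trade-off is freshness: within the TTL window, readers may see a result that is a few minutes old, which is perfectly acceptable for content like a news list.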

AnandTech Administration

This is the heart of the AnandTech website, where content is published and reports are generated. The Administration area is built solely for Internet Explorer, due to some of the scripting and advanced controls used. There are numerous modules in the Administration area that allow each area of the site to be administered. When an author logs in to the Administration area, they are presented with the "engines" they are permitted to use; the system administrator sets which engines each user has access to.


Click to Enlarge
 

Click to Enlarge

Documents are published using a document engine. This engine allows the author to add a document of any number of pages using simple HTML forms. Documents can be updated and deleted, and pages added to or removed from them, through this engine. When documents are read from the site, the number of times each page was read is stored, so that authors can see how widely their documents were viewed. AnandTech's Web News authors use a similar engine to add news to the site, only with a WYSIWYG editor for posting. This allows a news post to be written in HTML and viewed as rendered HTML as the author writes. The editor is very similar to some of the thick-client WYSIWYG editors, like Microsoft FrontPage.



Behind AnandTech - The Server Pictures

Here are updated pictures of our servers and even some pictures of the "mobile office" (read: Hotel room):

These cabinets are what hold our servers. Each one is 45U high, meaning it can hold 45 x 1U cases, 9 x 5U cases, etc. The AnandTech servers are in 1U, 2U, 4U and 5U cases; the higher the rating (e.g. 5U), the taller the case.


Click to Enlarge

Below we have two pictures of our main cabinet, which holds Forums 1, 2 and 4 as well as AnandTech 1 and 2. This cabinet houses our database servers as well. The missing server at the bottom was being worked on at the time; it is present in the second picture.

This is our newest cabinet. The arm in the back belongs to Jason Clark, who is working on a console back there. This cabinet houses AnandTech Web 3 - 5 and Forums 3, 5 and 6. This rack will also hold the two load balancers, which are currently in the rack to the left of our primary rack.

The cases in this picture are all new 4U units and are actually quite nice; we'll take a look at them next.



The Server Pictures - New Cases

Below is a picture of the front of the new 4U cases that we brought with us on this trip to Pittsburgh. The top opens via a single thumbscrew, which is actually quite useful. The cases were outfitted with Seventeam 300BLP power supplies rated for operation with 1.33GHz Athlon parts.

Here is another picture of the case except with the front door closed.

This was our work area in the datacenter. Luckily we were doing the upgrade over a holiday weekend, so we went relatively unbothered and no one complained about our mess ;)


Click to Enlarge

Here are the five new boxes ready to be rackmounted.


Click to Enlarge



The Server Pictures - Behind the Racks

This is our SMC 24-port switch that handles all of the internal traffic between the AnandTech Web and Forums servers as well as the DBs. When you hit either the main site or the Forums, the load balancer box tosses you over to this switch, and from here you go to the server with the least load on it.


Click to Enlarge

Another picture of the switch and the back of our primary rack.


Click to Enlarge

The same rack, just tilted towards the bottom so you can see the rest of the servers.


Click to Enlarge



The Server Pictures - The AnandTech Hotel Room

Static bags aplenty in our hotel room the day before we brought the servers to the datacenter.

We were courteous, though; we laid down a towel to make sure we didn't scratch their desks.

Five brand new cases ready to be installed.



Behind AnandTech - The Main Site Server Configs

Here is the updated list of configs for the 17 servers behind AnandTech, starting with the main site web servers:

AnandTech - Web Server 1

Processor(s):
AMD Athlon (Thunderbird) 1GHz
Motherboard(s):
Microstar K7T Pro
RAM:
3 x 256MB Corsair PC133 SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686A
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
4U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - Web Server 2

Processor(s):
AMD Athlon (Thunderbird) 1GHz
Motherboard(s):
Microstar K7T Pro
RAM:
3 x 256MB Corsair PC133 SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686A
Network Card(s):
2 - AMD 10/100 PCI Adapters
Case:
4U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - Web Server 3

Processor(s):
AMD Athlon (Thunderbird) 1GHz
Motherboard(s):
ASUS A7V VIA KT133 Motherboard
RAM:
2 x 256MB Mushkin PC133 SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686A
Network Card(s):
2 - AMD 10/100 PCI Adapters
Case:
4U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - Web Server 4

Processor(s):
AMD Athlon (Thunderbird) 1.2GHz
Motherboard(s):
Gigabyte GA-7DX AMD 760 Motherboard
RAM:
2 x 256MB Mushkin PC2100 DDR SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686B
Network Card(s):
2 - AMD 10/100 PCI Adapters
Case:
4U 19" Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - Web Server 5

Processor(s):
AMD Athlon (Thunderbird) 1.1GHz
Motherboard(s):
Gigabyte GA-7DX AMD 760 Motherboard
RAM:
2 x 256MB Mushkin PC2100 DDR SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686B
Network Card(s):
2 - AMD 10/100 PCI Adapters
Case:
4U 19" Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - Database Server 1

Processor(s):
2 - Intel Pentium III Xeon 500/1MB
Motherboard(s):
Tyan Thunder X Server Board
RAM:
4 x 256MB Corsair PC100 ECC SDRAM
Hard Drive(s):
3 - IBM Ultrastar 9LZX 4.5GB 10,020RPM RAID 5
1 - Quantum Atlas 10K II - Boot Drive
Storage Controller:
Adaptec AAA-133U2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
5U 19" Mushkin Rackmount
Operating System:
Windows NT

 



Behind AnandTech - The Forums Server Configs

AnandTech Forums - Web Server 1

Processor(s):
2 - Intel Pentium III Xeon 550/1MB
Motherboard(s):
Intel C440GX+ Server Board
RAM:
4 x 256MB Corsair PC100 ECC SDRAM
Hard Drive(s):
2 - IBM Ultrastar 9LZX 4.5GB 10,020RPM - RAID 1
Storage Controller:
AMI MegaRAID 1400
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
5U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech Forums - Web Server 2

Processor(s):
2 - Intel Pentium III Xeon 550/1MB
Motherboard(s):
Intel C440GX+ Server Board
RAM:
4 x 256MB Corsair PC100 ECC SDRAM
Hard Drive(s):
2 - IBM Ultrastar 9LZX 4.5GB 10,020RPM - RAID 1
Storage Controller:
Adaptec AAA-131U2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
5U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech Forums - Web Server 3

Processor(s):
AMD Athlon (Thunderbird) 1.1GHz
Motherboard(s):
Gigabyte GA-7DX AMD 760 Motherboard
RAM:
2 x 256MB Mushkin PC2100 DDR SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686B
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
4U 19" Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech Forums - Web Server 4

Processor(s):
AMD Athlon (Thunderbird) 1GHz
Motherboard(s):
ASUS A7V VIA KT133 Motherboard
RAM:
2 x 256MB Corsair PC133 SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686A
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
4U 19" Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech Forums - Web Server 5

Processor(s):
AMD Athlon (Thunderbird) 1.3GHz
Motherboard(s):
Gigabyte GA-7DX AMD 760 Motherboard
RAM:
2 x 256MB Mushkin PC2100 DDR SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
Adaptec AAA-133U2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
4U 19" Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech Forums - Web Server 6

Processor(s):
AMD Athlon (Thunderbird) 1.0GHz
Motherboard(s):
Gigabyte GA-7DX AMD 760 Motherboard
RAM:
2 x 256MB Mushkin PC2100 DDR SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
Adaptec AAA-133U2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
4U 19" Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech Forums - Database Server 1

Processor(s):
2 - Intel Pentium III 800EB
Motherboard(s):
ASUS CUR-DLS
RAM:
3 x 512MB Mushkin PC133 Registered ECC SDRAM
Hard Drive(s):
4 - Quantum Atlas 10K II 9.2GB 10,000RPM RAID 0+1
1 - Seagate Barracuda 18XL 9.2GB - Text-Catalog Drive
1 - Western Digital 20.4GB Ultra ATA/66 - Boot Drive
Storage Controller:
Adaptec 2100S RAID & on-board LSI Ultra160 SCSI
Network Card(s):
2 - Intel Pro/100 Server Adapters (one on-board)
Case:
2U 19" BoomRack BOOM2U300XA
Operating System:
Windows 2000 SP1


Behind AnandTech - The Admin Server Configs

AnandTech - Mail Server 1

Processor(s):
AMD Athlon (Thunderbird) 1GHz
Motherboard(s):
Microstar K7T Pro
RAM:
3 x 256MB Mushkin PC133 SDRAM
Hard Drive(s):
Western Digital 20.4GB Ultra ATA/66
Storage Controller:
On-board VIA 686A
Network Card(s):
2 - AMD 10/100 PCI Adapters
Case:
4U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - AD Database Server 1

Processor(s):
2 - Intel Pentium III Xeon 500/1MB
Motherboard(s):
Tyan Thunder X Server Board
RAM:
4 x 256MB Corsair PC100 ECC SDRAM
Hard Drive(s):
2 - IBM Ultrastar 9LZX 4.5GB 10,020RPM RAID 5
1 - Quantum Atlas 10K II - Boot Drive
Storage Controller:
Adaptec AAA-133U2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
5U 19" Mushkin Rackmount
Operating System:
Windows 2000 SP1

 

AnandTech - Load Balancer 1

Processor(s):
Intel Pentium III 866MHz
Motherboard(s):
Intel i815 EAL
RAM:
2 x 128MB PC133 SDRAM
Hard Drive(s):
IBM 30GB Ultra ATA/66
Storage Controller:
On-board ICH2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
1U 19" Rackmount
Operating System:
RedHat Linux 6.2

 

AnandTech - Load Balancer 2

Processor(s):
Intel Pentium III 866MHz
Motherboard(s):
Intel i815 EAL
RAM:
2 x 128MB PC133 SDRAM
Hard Drive(s):
IBM 30GB Ultra ATA/66
Storage Controller:
On-board ICH2
Network Card(s):
2 - Intel Pro/100 Server Adapters
Case:
1U 19" Rackmount
Operating System:
RedHat Linux 6.2

We'll keep adding boxes to the server farm as needs grow, but for now we're definitely happy being powered by both AMD and Intel based servers; how's that for the best of both worlds? And of course, all of the servers were assembled by AnandTech. The servers are connected on a private network, courtesy of one Ethernet card in each system and a 24-port 100Mbit SMC switch.

None of this would have been possible had it not been for our excellent host, Elite Internet Communications, and their colocation datacenter with Stargate. If you're looking for a host, Elite is the best we have ever had, and those of you who have been long-time visitors of AT will know that we've seen them all.

A very special thanks goes out to the following companies that have helped us construct these servers:

AMD - http://www.amd.com/
Azzo - http://www.azzo.com/
Corsair - http://www.corsairmicro.com/
Intel - http://www.intel.com/
Linux Virtual Server Project - http://www.linuxvirtualserver.org/
Memman - http://www.memman.com/
MSI - http://www.msi.com.tw/
Mushkin - http://www.mushkin.com/
NewEgg - http://www.newegg.com/
PogoLinux - http://www.pogolinux.com/
SMC Networks - http://www.smc.com/
TC Computers - http://www.tccomputers.com/
Tyan Computers - http://www.tyan.com/

And, of course, a huge thanks to the readers that make AnandTech what it is on a daily basis. Thanks guys.
