Original Link: https://www.anandtech.com/show/1555




Ever since the inception of AnandTech, we've given our readers an inside look at what makes AnandTech.com tick. We think that by exposing what we did, both right and wrong, we might help similar organizations make better decisions about their hardware and software infrastructures. Despite what the commercials on TV and in some IT publications would have you believe, architecting and operating a site of this size is not as easy as installing some software and flipping a switch.

Our evolution has included both successes and some failures, no different from most IT organizations. From those failures came the experience to architect a site with better up-time and a more scalable back-end that also costs less to operate. The latest shift to an ASP.NET-based AnandTech has been extremely successful for us, so we thought that we'd share what we've learned after running the new platform for over six months. We'll also talk a bit about the software side of our architecture to keep you up to date.




Architecting www.anandtech.com

In a previous article about our migration, we discussed briefly what we did to move to ASP.NET. In this edition, we'll go into a bit more detail specifically on the development side.

AnandTech consists of a few different applications that make the entire site operate. The main website (www.anandtech.com) serves the articles that we write, and it was the first application that we migrated. The migration was fairly straightforward; we simply ported our ColdFusion code to ASP.NET, learning the new syntax as we went. We made use of the code-behind techniques for which ASP.NET is known as we ported parts of the application, and we wrote a web API similar to our old ColdFusion site's to handle back-end logic. Each time a request begins, an init class instantiates the framework for the page request. The page inherits the web-based API and can then execute its methods for the various operations in the page. This is not much different from most web-based applications of this size.
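The per-request pattern described above can be sketched roughly as follows. This is written in Python for brevity (the production code is C#/ASP.NET), and every name here - WebApi, Page, get_article - is an invented placeholder, not the site's actual API:

```python
# Hypothetical sketch of the per-request pattern: an init step sets up
# the framework for the request, and the page inherits the web API so
# it can call back-end methods directly. All names are illustrative.

import time


class WebApi:
    """Back-end API that pages inherit; wraps data access and helpers."""

    def get_article(self, article_id):
        # In production this would query the database; stubbed here.
        return {"id": article_id, "title": "Sample article"}


class Page(WebApi):
    """A page inherits the web API and calls its methods directly."""

    def __init__(self, request):
        # The "init" step: instantiate the framework for this request.
        self.request = request
        self.started = time.monotonic()  # lets us time the request later

    def render(self):
        article = self.get_article(self.request["article_id"])
        return f"<h1>{article['title']}</h1>"


page = Page({"article_id": 1555})
print(page.render())  # <h1>Sample article</h1>
```

Inheriting the API rather than passing it around keeps page code short: every method is available as `self.whatever()` with no extra plumbing per call.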



One thing that we took for granted in our old ColdFusion website was debugging output. ColdFusion has a distinctive feature that lets you expose debugging output for a page request. This output includes the various web-based variables - form, URL, and cookie contents - along with the SQL submitted to the RDBMS and the method calls made to any functions in the application. This kind of debugging output is extremely helpful when troubleshooting a problem or tuning an application, as you can see clearly what the page is doing and how long each individual method call or database query takes.

So, during development of the API, we wrote a debug class, which tied into our database class and the OnRequestEnd event from Global.asax in the .NET framework. Essentially, it tracked query calls and variables, and wrote the collected data out in the OnRequestEnd event of the page request.
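The shape of that debug class looks roughly like this. Again, this is a Python stand-in for the C# original, and the class and method names (DebugLog, record_query, on_request_end) are invented for illustration:

```python
# Sketch of a debug class that collects query timings during a request
# and dumps them when the request ends - analogous to hooking the
# OnRequestEnd event in Global.asax. Names are illustrative only.

import time


class DebugLog:
    """Accumulates per-request query info for end-of-request output."""

    def __init__(self):
        self.entries = []

    def record_query(self, sql, elapsed_ms):
        # The database class would call this for every query it runs.
        self.entries.append((sql, elapsed_ms))

    def on_request_end(self):
        # Render one line per query, mirroring ColdFusion's debug output.
        return "\n".join(f"{sql} ({ms:.1f} ms)" for sql, ms in self.entries)


log = DebugLog()
start = time.monotonic()
# ... the database class would execute the query here ...
log.record_query("SELECT * FROM articles WHERE id = 1555",
                 (time.monotonic() - start) * 1000)
print(log.on_request_end())
```

Because the database class owns the timing, every query on the page shows up in the dump automatically - no per-page instrumentation required.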






Our bread and butter is up next

We're an online publication. It's no secret that advertising is what keeps AnandTech running, so this was the next critical piece of our migration. For years, we've been using FuseAds, which is a ColdFusion based ad serving software package. Since we were moving to a completely ASP.NET backend, we now had to be able to track our advertising within our new architecture.

We decided to split our ad tracking routines into essentially "ad display" and "ad track" applications. The display functionality selects the right ad for the portion of the site that the user is on, and the tracking software keeps count of how many times an ad was viewed and clicked. It might seem simple at first glance, but we built these applications for enterprise load. The tracking software runs in memory and dumps to the RDBMS every x minutes (configurable), and it dynamically adjusts the weight of each banner when a banner is being weighted over a certain period of time. We had some help on our ad tracking system from a good friend of ours, Dominic Plouffe. He was the lead engineer on the old ColdFusion-based FuseAds.
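The "track in memory, flush every x minutes" idea, plus weighted banner selection, can be sketched like this. It's a Python illustration, not the production C# code; the class names, the flush interval, and the banner IDs are all placeholders:

```python
# Sketch of in-memory ad tracking with a periodic flush to the RDBMS,
# plus weighted banner selection. All names and values are illustrative.

import random


class AdTracker:
    def __init__(self, flush_interval_min=5):
        self.flush_interval_min = flush_interval_min  # the configurable "x"
        self.views = {}   # banner_id -> views since last flush
        self.clicks = {}  # banner_id -> clicks since last flush

    def record_view(self, banner_id):
        self.views[banner_id] = self.views.get(banner_id, 0) + 1

    def record_click(self, banner_id):
        self.clicks[banner_id] = self.clicks.get(banner_id, 0) + 1

    def flush(self):
        # In production this batch is written to the RDBMS on a timer;
        # here we just hand the counts back and reset the in-memory state.
        batch = {"views": self.views, "clicks": self.clicks}
        self.views, self.clicks = {}, {}
        return batch


def pick_banner(weighted_banners):
    """Pick one banner ID, honoring per-banner weights."""
    ids = list(weighted_banners)
    weights = [weighted_banners[b] for b in ids]
    return random.choices(ids, weights=weights, k=1)[0]


tracker = AdTracker()
banner = pick_banner({"cpu-review": 3, "gpu-review": 1})
tracker.record_view(banner)
print(tracker.flush())
```

Keeping the counters in memory means one dictionary update per page view instead of one database write, which is what makes the design hold up under enterprise load.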

Next up was some new functionality that the sales folks required. We were asked to add the ability to geo-target ads, which allows different ads to be delivered to different geographic regions. To solve this, we used MaxMind for our geo-targeting needs, accessing their COM object to deliver these ads.
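Conceptually, geo-targeting slots into ad selection as an IP-to-region lookup followed by a region-keyed pick. The sketch below uses an invented lookup table in place of MaxMind's COM object, and the function names and campaign IDs are hypothetical:

```python
# How geo-targeting fits into ad selection: resolve the visitor's IP
# to a region, then pick the ad keyed to that region. The real site
# uses MaxMind's COM object for the lookup; this table is a stand-in.

def region_for_ip(ip, lookup):
    """Stand-in for the geo lookup: map an IP to a region code."""
    return lookup.get(ip, "default")


def select_ad(ip, ads_by_region, lookup):
    region = region_for_ip(ip, lookup)
    # Fall back to the default campaign when no regional ad exists.
    return ads_by_region.get(region, ads_by_region["default"])


lookup = {"203.0.113.7": "EU", "198.51.100.9": "US"}
ads = {"US": "us-retailer", "EU": "eu-retailer", "default": "house-ad"}
print(select_ad("203.0.113.7", ads, lookup))  # eu-retailer
```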

At the same time that we wrote the new ad framework, we updated the admin interface to our ad software using ColdFusion. Yes, we still use ColdFusion here. ColdFusion's strength is in building form-based interfaces to data, and reporting on that data. Nothing that we've seen compares to ColdFusion in this aspect of web development. ColdFusion still runs the form based administration of the AnandTech website, and probably will for some time to come.




The Forums get migrated

We've used FuseTalk for our discussion forums for a long time now, and it was music to our ears that they decided to port their product to ASP.NET. Implementing the new FuseTalk platform happened a few months ago and it went very well.

To implement the new .NET version of FuseTalk, we had to integrate our ad tracking software into FuseTalk so that we could serve and track ads within the forums. We accomplished this using FuseTalk's include code, which allows you to insert your own code at various places within the FuseTalk application. Essentially, FuseTalk references our ad tracking DLL, and we are able to serve ads within the FuseTalk look and feel and track them.

Performance-wise, the new .NET port of FuseTalk has worked extremely well for us. Our average page execution time is 20-40ms with approximately 1000 users online. We used to require 2 servers running at all times to get near that kind of page execution time, and now, we could run the entire forums off of one server easily.




Scalability of the website

Since the site went live with the new ASP.NET framework, we've constantly been monitoring the servers and tracking statistics, and to say that we've been impressed is an understatement. We've had near 99% up-time and have seen dramatic increases in performance and scalability of the site infrastructure. Each of our clustered servers uses an average of 12% CPU, and each server has at least 450 users on it at any given time. Pages execute in an average of 15ms, which is 60-80ms better than what we used to do.

To track our statistics, we wrote a performance counter monitor that resides on each server. It essentially writes the output of various performance counters to an aspx page, which we then feed into a database and run reports, view trends and monitor new versions of the site code as they are put into play.
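The counter monitor boils down to sampling a handful of counters and emitting one timestamped row per sample for the reporting database. The sketch below is Python with a fake counter reader; on the servers, the real monitor reads Windows performance counters and exposes the values through an aspx page:

```python
# Sketch of the counter-monitor idea: sample a few performance
# counters and build one timestamped record that a reporting database
# can ingest. The counter names and reader here are placeholders.

import datetime


def sample_counters(read_counter):
    """Build one timestamped record from a counter-reading callable."""
    names = ["cpu_percent", "requests_per_sec", "avg_request_ms"]
    record = {"timestamp": datetime.datetime.utcnow().isoformat()}
    for name in names:
        record[name] = read_counter(name)
    return record


# Fake reader for illustration; production reads the OS counters.
fake = {"cpu_percent": 12, "requests_per_sec": 12, "avg_request_ms": 15}
print(sample_counters(fake.get))
```

Polling each server's output page on a schedule and appending the rows to a database is enough to run reports, view trends, and compare code versions, which is exactly the workflow described above.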

Last week, we performed a scalability test to see how much load a single server in the cluster could handle. We monitored one server for ten minutes while all five servers were in the web cluster, then shut off four of them, leaving one to serve the entire site. Below is a before-and-after table from this test - impressive results, in our opinion.

Performance Counter          Before (5 servers in cluster)   After (1 server in cluster)
CPU Usage                    5%                              30%
Concurrent Users             450                             2,400
Requests per second          12                              80
Average request time (ms)    15                              18

As you can see, ASP.NET is very efficient. CPU usage climbed to only 30%, not much considering that nearly 2,000 more users were on the server. Requests per second were nearly seven times higher, and our average page request times stayed approximately the same, showing the efficiency of the ASP.NET runtime and the scalability of the website itself.
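The numbers in the table are easy to sanity-check:

```python
# Quick arithmetic check of the before/after table: how per-server
# load changed when the cluster shrank from five servers to one.

before = {"cpu": 5, "users": 450, "rps": 12, "ms": 15}
after = {"cpu": 30, "users": 2400, "rps": 80, "ms": 18}

print(after["users"] - before["users"])         # 1950 extra users on one box
print(round(after["rps"] / before["rps"], 1))   # 6.7x the request rate
print(round(after["cpu"] / after["users"], 4))  # 0.0125% CPU per user
```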

The migration has been successful, but there is always something else to do with a site of this size. We have a meeting at CES in January to talk about future plans. There will be, more than likely, new functionality to plan for and architect (what I live for). We're also going to take a look at a possible firewall upgrade next year. We've managed to hit our 18,000 simultaneous sessions limit twice in the past few months; once, we sustained 65Mbit/second of throughput for a few hours.
