Cloud = x86 and open source

From a high-level perspective, the basic architecture of Facebook is not that different from that of other high-performance web services.

However, Facebook is the poster child of the new generation of Cloud applications. It's hugely popular and very interactive, and as such it requires much more scalability and availability than your average website that mostly serves up information.

The "Cloud Application" generation did not turn to the classic high-end redundant platforms with heavy Relational Database Management Systems. A combination of x86 scale-out clusters, open source websoftware, and "no SQL" is the foundation that Facebook, Twitter, Google and others build upon.

However, Facebook has improved several pieces of the open source software puzzle to make them more suited for extreme scalability. Facebook chose PHP as its presentation layer because it is simple to learn, write, and read. The downside is that PHP is very CPU and memory intensive.

According to Facebook's own numbers, PHP is about 39 times slower than C++ code, so it was clear that Facebook had to solve this problem first. The traditional approach is to rewrite the most performance-critical parts in C++ as PHP extensions, but Facebook tried a different solution: its engineers developed HipHop, a source code transformer. HipHop transforms the PHP source code into faster C++ code and compiles it with g++.
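
To make the idea concrete, here is a rough sketch of the kind of statically compiled C++ a PHP-to-C++ transformer could emit for a trivial dynamic function such as function add($a, $b) { return $a + $b; }. This is an illustration only, not HipHop's actual output; the Variant type is a hypothetical stand-in for the runtime value type such a tool needs.

```cpp
// Rough sketch of the kind of C++ a PHP-to-C++ source transformer could emit
// for: function add($a, $b) { return $a + $b; }
// "Variant" is an illustrative stand-in for the runtime value type such a
// tool would need, not HipHop's actual implementation.
#include <iostream>
#include <string>
#include <variant>

// PHP variables are dynamically typed, so the generated C++ carries a tagged union.
using Variant = std::variant<long, double, std::string>;

Variant add(const Variant& a, const Variant& b) {
    // Numeric addition only, to keep the sketch short; real PHP semantics
    // (string coercion, arrays, overflow to float, ...) need far more code.
    if (std::holds_alternative<double>(a) || std::holds_alternative<double>(b)) {
        double x = std::holds_alternative<double>(a) ? std::get<double>(a)
                                                     : static_cast<double>(std::get<long>(a));
        double y = std::holds_alternative<double>(b) ? std::get<double>(b)
                                                     : static_cast<double>(std::get<long>(b));
        return x + y;
    }
    return std::get<long>(a) + std::get<long>(b);
}

int main() {
    Variant result = add(Variant(40L), Variant(2L));
    std::cout << std::get<long>(result) << std::endl;  // prints 42
    return 0;
}
```

Because the generated program is compiled ahead of time (with g++, as the article notes), the cost of re-interpreting the PHP on every request goes away, which is where most of the speedup comes from.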

The next piece in the Facebook puzzle is memcached, an in-RAM object caching system with some very cool features. Memcached is a distributed caching system, which means a memcached cache can span many servers; the "cache" is thus in fact a collection of smaller caches. It basically reclaims unused RAM that your operating system would probably waste on less efficient file system caching. These "cache nodes" do not sync or broadcast, and as a result the memory cache is very scalable.
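
As a minimal sketch of how that "collection of smaller caches" works (my illustration, not Facebook's or memcached's client code), a client library simply maps each key to the one node that owns it. Plain modulo hashing is used here for brevity; real memcached clients typically use consistent hashing so that adding or removing a node remaps as few keys as possible.

```cpp
// Minimal sketch of client-side key-to-node mapping in a distributed cache.
// The node names and the simple modulo scheme are illustrative; production
// memcached clients usually use consistent hashing (a ketama-style hash ring).
#include <functional>
#include <iostream>
#include <string>
#include <vector>

const std::string& pick_node(const std::string& key,
                             const std::vector<std::string>& nodes) {
    // Each key is owned by exactly one node, so the nodes never have to sync
    // or broadcast to each other; that is what keeps the cache scalable.
    std::size_t h = std::hash<std::string>{}(key);
    return nodes[h % nodes.size()];
}

int main() {
    const std::vector<std::string> nodes = {"cache01:11211", "cache02:11211",
                                            "cache03:11211"};
    for (const std::string& key : {"user:1234", "user:5678", "page:home"}) {
        std::cout << key << " -> " << pick_node(key, nodes) << "\n";
    }
    return 0;
}
```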

Facebook quickly became the world's largest user of memcached and vastly improved it: the engineers ported it to 64-bit, lowered TCP memory usage, distributed network processing over multiple cores (instead of one), and so on. Facebook mostly uses memcached to alleviate database load.
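
The usual way a cache takes load off the database is the "cache-aside" pattern sketched below. This is a generic illustration, not Facebook's code: the in-memory map and the load_from_database() stub are hypothetical stand-ins for a memcached cluster and a real SQL query.

```cpp
// Cache-aside sketch: answer repeat reads from the cache so the database only
// sees misses. The unordered_map stands in for a memcached cluster and
// load_from_database() for a real SQL query; both are hypothetical.
#include <iostream>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, std::string> cache;  // stand-in for memcached

std::string load_from_database(const std::string& key) {
    std::cout << "  (database queried for " << key << ")\n";
    return "profile-data-for-" + key;  // pretend this came from the database
}

std::string get(const std::string& key) {
    // 1. Try the cache first.
    auto it = cache.find(key);
    if (it != cache.end()) {
        return it->second;  // cache hit: the database is never touched
    }
    // 2. On a miss, fall through to the database and populate the cache.
    std::string value = load_from_database(key);
    cache[key] = value;
    return value;
}

int main() {
    get("user:1234");  // first read: goes to the database
    get("user:1234");  // second read: served from RAM, no database load
    return 0;
}
```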


62 Comments

  • setzer - Thursday, November 03, 2011 - link

    I'm guessing they are comparing their algorithms, and I hope they are good programmers in all the languages they tested; otherwise the tests don't mean anything.
  • Taft12 - Thursday, November 03, 2011 - link

    I'm not surprised that part of the article would lead to programming language holy wars, but general benchmarks are utterly useless for Facebook. They should (and surely do) care only about performance of the compiled code and hardware platforms that run the site.
  • bji - Thursday, November 03, 2011 - link

    It's illogical to suggest that an interpreted language like Java or C# could ever approach C++ in speed when the same level of optimization is applied to each.

    In my experience, the least optimized C++ code can sometimes be approximated in performance by the best optimized Java code, depending on the task in question.

    Of course, once you spend time optimizing the C++ code then there is no way for Java to keep up.

    I have never used C# but I expect the result for it would be very similar to Java due to the similar mechanics of the language implementation.

    That being said, in many situations raw speed is not the most important factor, and Java and C# can have significant advantages in terms of mechanism of deployment, programmer productivity, etc., that can make those languages very much the best choice, which is why they are, in fact, used in the situations where their advantages are best exploited and their weaknesses are least important.

    I think that Ruby takes the last paragraph even further; Ruby is so ungodly slow that it has to make up for it by allowing extreme productivity gains, and I expect that it must (I've never programmed in it to any significant extent), otherwise it wouldn't have any niche at all.
  • data003 - Thursday, November 03, 2011 - link

    While I've lurked this site for many years I just created an account to correct this erroneous bit of fail above.

    1. C# and Java are not interpreted languages. They are compiled at runtime into machine code.

    2. The C# JIT compiler can actually produce more efficient machine code than a compiled C++ binary.

    Since you have never used C# and clearly don't understand how it works, I'd suggest you refrain from commenting on it.
  • Jaybus - Friday, November 04, 2011 - link

    I agree that in some cases a JIT compiler can produce more efficient code, particularly when the application lends itself to runtime optimizations; however, that is far from typical. Usually, for a single process, the JIT code, once compiled, will be reasonably close, though the static C/C++ code has the edge.

    But that is for the typical case, and Facebook is not a typical case. Each web server is constantly starting many, many short-lived processes, and each process must start up its own copy of the code. This is where JIT compares badly to ahead-of-time compilation. The problem isn't the execution speed of the code after the JIT gets it compiled; it is the startup delay. Even with caching, the bytecode still must be compiled at least once for each new process, which in Facebook's case is millions of times. There is no such delay with ahead-of-time compilation. Therefore, Java and C# have no chance of competing in Facebook's environment.
  • erwinerwinerwin - Thursday, November 03, 2011 - link

    I wonder whether the power savings justify creating new hardware with a green power architecture, plus the cost of having a custom-built power supply running on 270 volts, if it only saves about 10-20 percent of power consumption on average, rather than, say, making a corporate deal with the best power/performance server maker on the market and modifying their servers with water cooling (for example)?
  • Menetlaus - Thursday, November 03, 2011 - link

    Power savings absolutely justify the work they did in customizing.

    20 W less power consumption x 8,760 hours of 24/7 operation = 175 kWh (per server per year)
    175 kWh x $0.10/kWh = $17.50 in power savings per year

    Just looking at the final image in the article, there are easily 30 racks of 30 servers visible (30 x 30 x $17.50 =) $15,750/year in power savings.

    Since most power going into a computer ends up as waste heat, if the 900 servers (from above) were consuming the additional 20 W, this would be ~18 kW of additional heat being produced that needs to be cooled. This offers additional operational and capital cost savings due to the smaller cooling requirements.

    Water cooling may be a more efficient way of pulling heat out of the server rack, but the additional parts to move the water around the facility and to cool it add to the total cost. Water is more efficient because it carries more heat per volume than air, and with the piping the heat can be taken outside of the server room, while fans just heat the air around the servers, where another method of removing the heat is then required.

    The custom power supply at 270 V and the custom motherboard aren't really that difficult to get, as many makers of each part already do custom designs for major PC makers (Dell/HP/etc.). The difference between 208 V and 270 V from an electrical design standpoint isn't a big change; neither is removing parts from a motherboard.

    In short, it's economy of scale. You or I wouldn't be able to do this for a dozen personal systems, as the costs per system would be huge; on the other hand, for anyone managing thousands of servers, the 20 W per server adds up quickly.
  • iwod - Thursday, November 03, 2011 - link

    And I am guessing Facebook has at least 10 times more than what is shown in that image.
  • DanNeely - Thursday, November 03, 2011 - link

    Hundreds or thousands of times more is more likely. FB has grown to the point of building its own data centers instead of leasing space in other people's. Large data centers consume multiple megawatts of power. At ~100 W/box, that's 5-10k servers per MW (depending on cooling costs), so that's tens of thousands of servers per data center, and data centers scattered globally to minimize latency and traffic over longhaul trunks.
  • pandemonium - Friday, November 04, 2011 - link

    I'm so glad there are other people out there, other than myself, who see the big picture of where these 'minuscule savings' go. :)
