Introduction

Despite its incredible importance, independent hardware advice on database servers is difficult to find. The majority of TPC and other benchmark numbers are published by only a few major hardware and software vendors. Although a discussion of TPC benchmarks is beyond the scope of this article, it is clear that there is no substitute for independent benchmarking.

Benchmarks that vendors provide have a tendency to be too rosy, or perhaps even flawed. Vendors may use hardware setups or software configurations that are unlikely to exist in the real world, yet attain the highest score on a particular benchmark. The benchmarking done by Jason and Ross is, of course, a notable exception on the internet.

Because many of our readers are interested or engaged in this field, we started a new database server benchmarking project just a few months ago.

The primary objective of this project is to determine which hardware makes sense for the database servers of small and medium-sized organizations. We tested DB2 and MySQL on SUSE SLES 8 on many different systems, based on four different Xeon CPUs and two Opteron configurations.



"Servers are all about large caches and fast I/O." This is a generalization that is heard a lot in the IT community, and the cliché has proven, more or less, to be accurate in the high-end server market. But does this common wisdom also apply to the smaller dual-processor systems that act as database servers? Should you pay more for a Xeon with a healthy amount of L3 cache, or will a less expensive Intel CPU without L3 cache do just fine? Does 64-bit really matter? How important are memory latency and bandwidth? Is a Hyper-Threaded CPU better equipped when the database is accessed by many users simultaneously?

While we continue to improve the quality of our benchmarks, we decided to report our first impressions.

A gigantic market
46 Comments

  • blackbrrd - Friday, December 03, 2004 - link

    I think that it is quad-channel, as the board is NUMA-aware. Reply
  • Olaf van der Spek - Friday, December 03, 2004 - link

    > The result is that the Lindenhurst board can offer 4 DIMMs per channel while the other Xeon servers with DDR-I were limited to 4 DIMMs in total, or one per memory channel.

    Is that chipset quad-channel?
    Reply
  • Olaf van der Spek - Friday, December 03, 2004 - link

    > It is especially impressive if you consider the fact that the load on the address lines of DDR makes it very hard to use more than 4 DIMMs per memory channel. Most Xeon and Opteron systems with DDR-I are limited to 4 DIMMs per memory channel

    Isn't the Opteron limited to 3 or 4 DIMMs per channel too?
    After all, it's 6 to 8 DIMMs per CPU and each CPU is dual-channel.
    Reply
  • prd00 - Thursday, December 02, 2004 - link

    I am waiting for 64 bit Nocona vs 64 bit Opteron. Also, I think SLES9 would be interesting. Reply
  • mczak - Thursday, December 02, 2004 - link

    #16 OK, I didn't know 2.4.21 already supported NUMA. SuSE lists it as a new feature in SLES9.
    I agree it probably doesn't make much of a difference with a 2-CPU box, but I think there should be quite an advantage with a 4-CPU box. The HT links are speedy, but I would guess that you would end up using basically only one RAM channel for all RAM accesses way too often, bumping into bandwidth limitations.
    Reply
  • JohanAnandtech - Thursday, December 02, 2004 - link

    Lindy, you are probably right; I probably got carried away a little too much. However, you seem to swing the other way a little too far. For example, a PeopleSoft server is essentially a database server (or are you talking about the application server, working in 3 tiers?)

    A webserver is in many cases a database server too. I would even argue that an Exchange server is database-related, though I have never worked with that hard-to-configure, stubborn application. Many of those turnkey and homegrown apps are probably apps on top of a database server too...

    And I think it is clear that we are not talking about file servers. I agree fully that file servers are all about I/O, but I don't agree about database servers.

    To sum it up: yes, you are right, it is not the lion's share in quantities. However, it is probably still the biggest part when we look at costs, because I can probably buy 5 file servers for the price of one database server. And why even use file servers when you have NAS?

    Reply
  • dragonballgtz - Thursday, December 02, 2004 - link

    cliff notes :P Reply
  • lindy - Thursday, December 02, 2004 - link

    This statement...

    Up to $46 billion is spent in the Servers (hardware) market, and while a small portion of those servers is used for other things than running relational databases (about 20% for HPC applications), the lion's share of those servers are bought to ensure that a DB2, Oracle, MS SQL server or MySQL database can perform its SQL duties well.

    ...is so far off base, it's almost funny.

    I would reverse that statement: only a small portion of servers are database servers in most companies. I manage an IT department that takes care of about 160 servers for a company: a good mix of about 2/3 Windows servers and 1/3 UNIX/Linux. System administration/engineering is my trade.

    When I look at our servers, I see DNS, DHCP, WINS, Domain Controllers, Exchange, SMTP, Blackberry, proxy, file, print, web, backup, turnkey application, and database servers. Maybe 20 of the approximately 160 servers are database servers. Of those, 2 (clustered 8-CPU Sun 1280s running Sybase) are the busiest, containing our customer database of over 200,000 customers. Even at that, those servers are rarely over 50% CPU utilization.

    The other 18 database servers run a variety of databases (none DB2): Oracle, MySQL, and Microsoft SQL. The database servers serve up data for all kinds of applications, like Microsoft SMS 2003, Crystal Reports, an ID badge security application, PeopleSoft, Remedy, all kinds of turnkey applications based around our industry, homegrown apps, and the list goes on. There are times when some of these servers are really busy CPU-wise, about 5% of the time, and usually at night doing data uploads or re-indexes.

    My point is that most servers waste CPU power. Sure, you can find applications and uses for servers that eat CPU all day long... but that is the minority of the $46 billion spent on servers... a tiny minority. For most servers, network I/O and especially disk I/O are way more critical. Database servers set up with the wrong disk configurations have their CPUs sitting around doing not much. Servers like file, print, DHCP, DNS, and SMTP (some in every company) can get away with single CPUs. Heck, our print servers are running on Dell 1650s with 1.4 GHz P3 CPUs that are coasting, but the disks are spinning all the time, and the network cards are busy, busy.

    When you realize these things, Xeon CPUs vs. Opteron does not really matter 99% of the time; cost does. When a company like Dell has sold its soul to Intel for low prices, which they turn around and offer to people like me... I don't even consider what CPU is in the box most of the time.
    Reply
  • JohanAnandtech - Thursday, December 02, 2004 - link

    about MySQL:
    I don't think you can find a way to make the Xeon go faster than the Opteron.

    But I do agree that performance depends on the kind of application, the size of the database etc.

    "A database that fits entirely inside of RAM isn't very interesting"

    Well, I can understand that. But

    1) Do realize that for really performance-critical read applications, you are doomed if information has to come from your hard disks, no matter how fast RAID 50 is. Caching is the key to a speedy database application.

    2) The information that is being requested 99% of the time (in most applications) is relatively small compared to the total amount of data. So a test with a 1 GB database can be representative of a database that is 30 GB in total. Just look at AnandTech: how many of you are browsing the forum posts of 3 months ago? How interesting is it for AT to optimise for those few that do?

    3) I think we made it very clear that our focus was not on the huge OLTP databases, but on the ones behind other applications.
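
    Point 2 is the classic working-set argument: if a cache (the database buffer pool, or the OS file cache) holds the hot fraction of the rows, most requests never touch the disks. A minimal sketch in Python (the row counts and the 95%-of-traffic-to-5%-of-data access pattern are hypothetical illustrations, not numbers from our benchmarks):

```python
import random
from collections import OrderedDict

def lru_hit_rate(num_rows, cache_rows, num_requests,
                 hot_fraction=0.05, hot_weight=0.95, seed=42):
    """Simulate an LRU row cache under a skewed access pattern:
    hot_weight of the requests go to the hot_fraction of the rows."""
    rng = random.Random(seed)
    hot = int(num_rows * hot_fraction)
    cache = OrderedDict()  # keys ordered from least to most recently used
    hits = 0
    for _ in range(num_requests):
        if rng.random() < hot_weight:
            row = rng.randrange(hot)            # recent, frequently requested rows
        else:
            row = rng.randrange(hot, num_rows)  # the long tail of old data
        if row in cache:
            hits += 1
            cache.move_to_end(row)              # mark as most recently used
        else:
            cache[row] = True
            if len(cache) > cache_rows:
                cache.popitem(last=False)       # evict the least recently used row
    return hits / num_requests

# A cache holding only ~1/15 of the rows still absorbs the vast
# majority of requests when 95% of traffic targets 5% of the data.
print(f"hit rate: {lru_hit_rate(30_000, 2_000, 200_000):.2f}")
```

    Under these assumed numbers, a cache far smaller than the database serves the large majority of requests, which is the same reason a 1 GB test database can behave much like the hot core of a 30 GB one.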


    Reply
  • Slack3r78 - Thursday, December 02, 2004 - link

    I'd agree that using SuSE 8 was a poor choice. I like the "not using the latest and greatest" theme for servers, as that's a reality in the field, but SuSE 8 was released essentially alongside the first Opterons. The move to a 2.6 kernel and the time for developers to really play with the new architecture could mean even bigger performance numbers.

    Given that Nocona, or public knowledge of an Intel x86-64 chip at all, didn't exist when SuSE 8 was released, I'm not surprised that it wouldn't run in 64-bit mode. EM64T has proven to be rather quirky and less than perfect, from the reports I've read, anyway. See here:
    http://www.theinquirer.net/?article=16879

    Another test running a distribution that was more recently released would definitely be interesting, if possible.
    Reply
