Original Link: https://www.anandtech.com/show/1977
Sun’s T2000 “Coolthreads” Server: First Impressions and Experiences
by Johan De Gelas on March 24, 2006 12:05 AM EST - Posted in IT Computing
Introduction
The Sun T2000 server, based on the UltraSPARC T1 CPU, sparks our curiosity. For years, x86 servers have been gobbling up server market share fast, forcing the RISC vendors – who don't have the same economies of scale – to retreat to niche markets. First, the low-end market was completely overrun, and right now, Xeons and Opterons are on their way to dominating the mid-range market too.
It is clear from reading the T2000 server documentation, however, that Sun hopes that the T1 CPU and the T2000 server can turn the x86 tide. The documentation is full of references to the x86 competition, each intended to show how the T1 outshines it.
No, this is not just a server and SPARC CPU meant to keep Sun's current position among the RISC vendors safe. This is an ambitious effort to take back some of the lost server market.
Indeed, since the introduction of the new UltraSparc T1, Sun is bursting with ambition:
“The Sun Fire T2000 Server marks the dawn of a new era in network computing, by allowing customers to break through limitations of capacity, space and cooling.”
If Sun's own benchmarks are accurate, it is no exaggeration to call the T2000 a server with a revolutionary CPU. Sun claims that this 72 W, eight-core, 32-thread CPU can outperform the power-hungry (200-400 W) quad IBM POWER5, Intel Xeon and AMD Opteron machines in many server applications.
However, there is no substitute for independent benchmarks. So, we proudly present you our first experiences with Sun’s T2000. Note the phrase, “first experiences”, as there is still a lot of benchmarking going on in our labs. We are only scratching the surface in the first part, but rest assured that we’ll show you much more soon.
In this first part of our T2000 review, we look at the T2000 as a heavy Solaris, Apache, MySQL and PHP web server or SAMP web server. We also take a look at the performance of a single T1 Sparc core, to get an idea of how powerful each individual core is.
We are working on a JSP web server benchmark and there are several database benchmarks also in progress.
Introducing the T2000 server
The Sun Fire T2000 is more than just a server with the out-of-the-ordinary UltraSPARC T1 CPU. Sixteen DIMM slots support up to 32 GB of DDR2-533 memory. Internally, there is room for four 73 GB 2.5" SAS disks. Don't confuse these server-grade disks with your average notebook 2.5" disks: these are fast 10,000 RPM Serial Attached SCSI (SAS) hard disk drives. With SAS, you can also use SATA disks, but you will probably be limited to 7200 RPM disks.
Four Gigabit Ethernet ports, three PCI Express slots (supporting x8, x4 and x1 low-profile cards) and two 133 MHz/64-bit PCI-X low-profile slots allow you to add more storage via DAS, NAS or SAN racks.
The T2000 in practice
The T2000 is a headless server. To get it up and running, you first access the serial management port with HyperTerminal or a similar tool. The necessary RJ-45 serial cable was included with our server.
This way, you get access to Sun's Advanced Lights Out Manager (ALOM), which is a system controller that runs the necessary console software to administer the server. Running independently of the main server using standby power, the ALOM card is ready for operation as soon as power is applied to the server.
Once you have given the SC net management port an IP address, you can remotely manage and administer the server over a dedicated network, and you no longer need the serial connection.
The console software offers you plenty of different commands to administer the T2000 server.
From within the management console, you can attach to the system console and get access to your Solaris installation on the T2000. You can always get back to the management console simply by typing "#.".
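To give you an idea of what a first session looks like, the commands below configure the network management port and attach to the system console. This is a sketch based on our recollection of the ALOM CMT documentation - the IP addresses are obviously ours, and you should double-check the variable names with the 'help' command on your own firmware:
sc> setsc if_network true
sc> setsc netsc_ipaddr 192.168.1.50
sc> setsc netsc_ipnetmask 255.255.255.0
sc> setsc netsc_ipgateway 192.168.1.1
sc> resetsc        (restart the system controller so that the network settings take effect)
sc> poweron        (power on the host)
sc> console -f     (attach to the Solaris console; "#." returns to the sc> prompt)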
Using PuTTY or a similar SSH tool, combined with the fact that the T2000 is a headless design, might give you the impression that the T2000 is command-line only. Nevertheless, you can get access to the sleek GUI of Solaris. After browsing the file system through an SSH shell, we noticed that an Xorg server was installed. Since Solaris uses the Java Desktop System, which is based on GNOME, the GNOME Display Manager (gdm) was also installed. We reopened an SSH connection to the Sun with X11 forwarding enabled and ran 'gdmsetup' as root, which let us enable XDMCP. After enabling it, we were able to connect to the Sun using Xnest, or by running our local XDMCP chooser. Xnest showed Solaris in all its GUI glory...
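The whole procedure boils down to a handful of commands. A rough sketch of what we did - the host name is ours, and gdmsetup only needs to be run once to turn on XDMCP:
# from our workstation: open an SSH session with X11 forwarding
ssh -X root@t2000
# on the T2000: enable XDMCP in the GNOME Display Manager (GUI dialog)
gdmsetup &
# back on the workstation: connect to the remote login screen via XDMCP
Xnest :1 -query t2000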
The GUI offers more than pretty pictures: Solaris Management Console (SMC) is a very intuitive tool that gives you a very detailed overview of the installed hardware configuration.
Some hardcore administrators may feel that real administrators only use the command line. However, in our experience, the GUI version of the SMC is a very helpful tool.
RAS
Traditionally, Sun systems have been known for excellent RAS capabilities. Besides the obligatory redundant hot-swappable fans, the T2000 also incorporates dual redundant hot-swappable power supplies.
Sun also claims that the T1 CPU excels in RAS. See below for a comparison between the RAS capabilities of the current Xeon and UltraSparc T1 (source: Sun).
Sceptical as we are, we decided to give Intel a chance to criticize this table, but it seems that Sun was honest.
In fact, the current production Intel Xeon "Paxville" processors - which find a home in servers priced similarly to the T2000 - do not support parity checking on the L1 I-cache tags. Intel also pointed out that the upcoming Woodcrest CPU will have improved RAS features.
So, it seems that right now, the UltraSPARC T1 outshines its x86 competitors.
The T2000 also features Chipkill memory protection, which complements standard ECC; according to Sun, this provides twice the reliability of ECC alone. Chipkill detects a failed DRAM chip, after which DRAM sparing reconfigures the DRAM channel to map out the failed DIMM.
Each of the UltraSPARC T1's four memory controllers implements a background error scanner/scrubber to reduce multi-nibble errors, and the frequency of error scanning is programmable.
When it comes to RAS, the T2000 - especially the cheaper model with the 1 GHz CPU - has no equal at its price point in the server market.
First x86 competitor: MSI’s K2-102A2M and Opteron 275 HE
The MSI K2-102A2M was one of the first servers to arrive in the lab. It is not really a direct competitor to our T2000, but one of the main reasons we wanted the MSI server in this test is its support for the Opteron 275 HE. The recently launched 275 HE is a dual-core Opteron running at 2.2 GHz and consuming at most 55 W.
Along with the SuperMicro H8DCE with BIOS v1.0c (and later), the board inside the K2-102A2M is one of the few boards that exposes the proper power states and enables PowerNow! for the 275 HE, as it should.
So, while the MSI K2-102A2M aims at a lower-priced sector of the market than the T2000, it gives an idea of what the best x86 servers will be capable of in terms of performance/watt in the coming months. The MSI K2-102A2M allows us to answer the question of whether or not the T2000 can outperform the x86 competition by a large enough margin in performance/watt. The MSI K2-102A2M supports two 940-pin AMD Opterons, thanks to the ServerWorks HT2000 chipset. Eight 144-bit DDR DIMM slots allow up to 16 GB of registered ECC DIMMs. Upgrading is possible via one PCI Express x8 slot and one PCI-X 133 slot. The ServerWorks HT1000 Serial ATA host controller supports two SATA-II drives.
A slightly negative point is the use of a slim CD-ROM drive. Some current software is delivered on DVD, so we would like to see at least a DVD-ROM drive.
On the positive side, there are the excellent dual-ported BCM5780 controller and the integrated MSI Server Management IPMI 1.5 with the MSI-9549 BMC card. We’ll discuss remote management options in more detail in one of our upcoming server reviews.
The ACBEL power supply with active PFC delivers 411W max.
Words of thanks
A lot of people gave us assistance with this project, and we would like to thank them, of course.
Chhandomay Mandal, Sun US
Luojia Chen, Sun US
Peter A. Wilson, Sun US
Peter Hendrickx, Sun Belgium
(www.sun.com)
Colin Boroski, LPP (www.lpp.com)
Damon Muzny, AMD US (www.amd.com)
Ilona van Poppel, MSI Netherlands
Ruudt Swanen, MSI Netherlands
(www.msi-computer.nl)
Waseem Ahmad, Intel US
Matty Bakkeren, Intel Netherlands
Trevor E. Lawless, Intel US
(www.intel.com)
Bert Devriese, developer of the MySQL & PHP benchmark
Brecht Kets, development of the improved benchmark program
Tijl Deneut, Solaris support
Dieter Saeys, Linux support
Ben Motmans, .Net development
Sam Van Broeck, DB2 support
I would also like to thank Lode De Geyter, manager of the PIH, for letting us, once again, use the infrastructure of the Technical University of Kortrijk to test the servers.
Benchmark configuration
We used Solaris 10 for the Sun T2000, as the only supported OS for the T2000 right now is Solaris 10 3/05 HW2 (and upwards). The T1 is fully binary compatible with the existing SPARC binaries, but it needs this version of Solaris.
The Sun T2000 server was the only one that used 16 x 2 GB DIMMs, resulting in 32 GB of RAM. This gives the T2000 a small disadvantage in our first round of benchmarking (we do not use more than 4 GB in our web server test), as the Sun CPU has a bit more page-management overhead. However, Sun advised us to populate all DIMM slots, so we did.
All benchmarking was monitored from our laptop. CPU load, network and disk I/O were observed with CPU graph, top, vmstat and prstat. This way, we could see whether the CPU or another component was the bottleneck.
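For those who want to do the same, these are the kind of commands we kept running in a couple of terminals during each test (the 5-second interval is simply our habit):
vmstat 5          # CPU, memory, paging and I/O statistics every 5 seconds
prstat -mL 5      # per-thread microstate accounting on Solaris (user vs. system time)
top               # overall per-process CPU load on the Linux machines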
Our web server tests were performed with Apache 2.0.55, including the mod_deflate module for gzip compression, PHP 4.4.0-r9 and MySQL 4.0.24. This last MySQL version was chosen because it came standard with our Sun T2000, and all tests proved to be very reliable with this version.
Hardware configurations
Here is the list of the different configurations:
Sun T2000: Sun UltraSparc T1 1 GHz, 8 cores, 32 threads
Sun Solaris 10
32 GB (16x2048 MB) Crucial DDR-2 533
NIC: 1 Gb Intel RC82540EM - Intel E1000 driver
Intel Server 1: Dual Intel Xeon "Irwindale" 3.6 GHz 2 MB L2-cache, 800 MHz FSB - Lindenhurst
Gentoo Kernel 2.6.15-gentoo-r1
Intel® Server Board SE7520AF2
8 GB (8x1024 MB) Micron Registered DDR-II PC2-3200R, 400 MHz CAS 3, ECC enabled
NIC: Dual Intel® PRO/1000 Server NIC (Intel® 82546GB controller)
Opteron Server 1: Dual dual-core Opteron 275 and 275 HE (2.2 GHz - 4 cores total)
Gentoo Kernel 2.6.15-gentoo-r1
Solaris x86 10
MSI K8N Master2-FAR
4 GB (4 x 1024 MB) Crucial DDR400 (3-3-3-6)
NIC: Broadcom BCM5721 (PCI-E)
Opteron Server 2: MSI K2-102A2M, Dual Dual Core Opteron 275 and 275 HE
Gentoo Kernel 2.6.15-gentoo-r1
Solaris x86 10
4 GB (4 x 1024 MB) Crucial DDR400 (3-3-3-6)
NIC: Broadcom BCM5721 (PCI-E)
Client Configuration: Dual Opteron 850
MSI K8T Master1-FAR
4x512 MB Infineon PC2700 Registered, ECC
NIC: Broadcom 5705
Shared Components
1 Seagate Cheetah 36 GB (15,000 RPM, SCSI 320 MB/s)
1 Maxtor DiamondMax Plus 9 120 GB (7200 RPM, ATA-100/133, 8 MB cache)
Common Software
Apache 2.0.55 + mod_deflate module for gzip compression
PHP 4.4.0-r9
MySQL 4.0.24
The Slim T1 CPU
It is very unfair of us to compare one of the eight very slim T1 cores to mammoths like the Opteron or the Xeon, which have about 10 to 20 times more transistors. Still, we are curious. We know that Sun sacrificed single-threaded performance on the altar of power consumption, multi-threaded performance and die space. How far did they go? Let us find out with LMBench 3.0a. By the way, you can find much more information about the T1 CPU in our previous article.
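For those who want to reproduce these numbers: the figures below come from LMBench micro-benchmarks such as lat_mem_rd (memory hierarchy latency) and lat_ops (latency of basic integer and floating point operations). A typical invocation looks roughly like this; the array size and stride are our choice:
# memory read latency for working sets up to 256 MB, using a 512-byte stride
./lat_mem_rd 256 512
# latency of basic integer and floating point operations
./lat_ops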
First, we check the cache and RAM latency. For fat modern superscalar cores like the Opteron and Xeon, these numbers are extremely important. The T1 CPU is less sensitive to the latency of the memory subsystem as long as it has enough threads: it simply swaps out threads that are waiting on memory and runs threads that are ready.
CPU (LMBench) | OS | Clock speed (MHz) | L1 (ns) | L1 (cycles) | L2 (ns) | L2 (cycles) | RAM (ns) | RAM (cycles)
Opteron 275 | SunOS 5.10 | 2211 | 1.357 | 3 | 5.436 | 12 | 67.5 | 149
Pentium M 1.6 GHz | Linux 2.6.15 | 1593 | 1.880 | 3 | 6 | 10 | 92.1 | 147
Sun T1 1 GHz | SunOS 5.10 | 980 | 3.120 | 3 | 22.1 | 22 | 107.5 | 105
Opteron 275 | Linux 2.6.15 | 2209 | 1.357 | 3 | 5 | 12 | 73 | 161
Xeon Irwindale 3.6 GHz | Linux 2.6.15 | 3594 | 1.110 | 4 | 8 | 28 | 48.8 | 175
Sun has definitely favoured power consumption here. A 3-cycle L1 latency at 1 GHz on a 90 nm process is very conservative. A 22-cycle L2 cache latency is even a bit slow, but again, the thread Gatling gun takes care of that. The built-in memory controllers pay off: RAM latency is about 105 cycles, while even the Pentium M needs 147 cycles. This helps to keep the average latency (seen from the viewpoint of the CPU) low.
Let us see if there is some integer crunching power in the little Sparc core.
CPU (LMBench, latency in ns) | OS | Bit | Add | Mul | Div | Mod
Opteron 275 | SunOS 5.10 | 0.45 | 0.45 | 1.36 | 18.60 | 19.00
Pentium M 1.6 GHz | Linux 2.6.15 | 0.63 | 0.63 | 2.51 | 19.50 | 11.50
Sun T1 1 GHz | SunOS 5.10 | 1.01 | 1.00 | 29.10 | 104.00 | 114.00
Opteron 275 | Linux 2.6.15 | 0.45 | 0.45 | 1.36 | 18.60 | 19.00
Xeon Irwindale 3.6 GHz | Linux 2.6.15 | 0.28 | 0.28 | 2.79 | 17.30 | 23.30
The very common ADD instruction is executed in one cycle, but it takes no less than 29 cycles to multiply and 104 to divide. A faster multiplier and divider would have taken up much more die space and consumed much more power. Considering that those instructions are very rare in most server workloads, this is a pretty clever trade-off. Update: the Sun documentation tells us 7-11 cycles for multiply and 72 for division.
Let us check out what the lonely FPU of the T1 can do.
CPU (LMBench, latency in ns) | OS | FADD | FMUL | FDIV
Opteron 275 | SunOS 5.10 | 1.80 | 1.80 | 10.90
Pentium M 1.6 GHz | Linux 2.6.15 | 1.88 | 3.14 | 23.90
Sun T1 1 GHz | SunOS 5.10 | 26.50 | 29.30 | 54.20
Opteron 275 | Linux 2.6.15 | 1.81 | 1.81 | 9.58
Xeon Irwindale 3.6 GHz | Linux 2.6.15 | 1.39 | 1.95 | 12.60
FADD and FMUL are a little faster than what we first reported (40 cycles), and the main part of that latency might simply be getting the data to and from the T1's single shared FPU. It is clear that the Sun T1 doesn't like FP code at all.
PHP/MySQL: T2000 as a heavy SAMP web server
In this first part of our T2000 review, we look at the T2000 as a heavy Apache, MySQL and PHP web server (or SAMP web server). You do not buy a T2000 to offer some basic web services or to serve up some static HTML.
There are two ways in which the T2000 could be useful as a web server. The first one is to use Solaris zoning (a.k.a. "Solaris containers") to run a lot of light/medium web servers in parallel virtual zones. As virtualisation still requires quite a bit of expertise, and we didn't have much experience with Solaris Zones, we decided to test the second scenario.
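We did not benchmark that configuration, but for reference, creating and booting a basic zone on Solaris 10 looks roughly like this - the zone name and path are our own example values:
# describe the zone in a small command file and feed it to zonecfg
cat > /tmp/web01.cfg <<'EOF'
create
set zonepath=/zones/web01
set autoboot=true
commit
EOF
zonecfg -z web01 -f /tmp/web01.cfg
zoneadm -z web01 install     # copy the required packages into the zone
zoneadm -z web01 boot        # boot the freshly installed zone
zlogin -C web01              # attach to the zone console for first-boot configuration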
As a side note, the T1 has built-in hardware support for a hypervisor (a first for SPARC), which might make virtualisation quite a bit faster. It also makes OS support easier (once an OS can run on top of the hypervisor, there is little porting left to do), except for new features. Basically, the hypervisor virtualises the chipset, giving a consistent view to the OS. Sun is helping to make sure that Linux and BSD get ported to the T1.
Back to our web server testing. The second scenario is a heavy web server, which gets a lot of traffic on dynamically generated content that requires quite a bit of number crunching.
We have two real-world examples: one using JSP and Sybase, and the other using PHP/MySQL. In this article, we introduce you to the PHP/MySQL example.
The PHP test script retrieves hourly-stored weather information from a MySQL database, and the data can be viewed by month. An 'opening page' displays all months that are stored in the database, and when you open a 'detail page', the month you have selected is passed via query string parameters.
On that new opened page, you see the following information for that month:
- Overall Minimum, Maximum & Average Temperature for that month
- Minimum, Maximum & Average Temperature by day and by night, for that month
- Average temperatures for each day in that month
- Minimum, Maximum & Average wind speed
- Percentages for the wind direction (for example: 20% of the time, the wind direction was west, 10% southwest, etc.)
- The script execution time
"When the page is requested, first thing the script does is checking if all $_GET variables, such as 'm' and 'j' are set. If somebody requests the page without these $_GET variables, the script will not continue because it has insufficient parameters to continue.He included a nice diagram for us to make the process clearer.
The next thing that happens is the 'cache-file check'. I haven't used any PEAR class or other framework that supports caching in the weather script. I simply check whether the cache file exists (the cache file contains the regular HTML output that the client's browser would normally receive), and whether it's not older than one minute. If so, the file will be included, and the PHP 'exit' command will be executed so that the script thinks that it has 'ended' and the output will be sent to the browser.
If the cache file does not exist, or if it's older than one minute, the script simply continues, but first the ob_start() function is executed to start 'output buffering'. All regular code and several MySQL queries are executed, and at the end of the file, a new file is created with fopen($file,'w') (the 'w' makes sure that if the file already exists, it is truncated, so it seems as if you're writing to a 'new' file; if the file does not exist, it is created). When the script ends, the output buffer contents are retrieved with ob_get_contents(), and all this output is written to the new cache file with fwrite(). At the end, ob_end_flush() flushes the output buffer contents to the browser of the client..."
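For readers who prefer code over prose, the caching logic described above boils down to something like the following sketch. This is our own condensed illustration, not the actual benchmark script: the file names, database credentials, table layout and query are invented for the example, and we use the PHP 4 era mysql_* functions that match the PHP 4.4.0 install on our servers.
<?php
// Refuse to run without the required query string parameters.
if (!isset($_GET['m']) || !isset($_GET['j'])) {
    die('Insufficient parameters.');
}
$month = (int) $_GET['m'];
$year  = (int) $_GET['j'];
$cache = "cache/weather_{$year}_{$month}.html";

// Serve the cached HTML if it exists and is less than one minute old.
if (file_exists($cache) && (time() - filemtime($cache)) < 60) {
    include $cache;
    exit;
}

ob_start();   // buffer all output generated below

$db = mysql_connect('localhost', 'bench', 'secret') or die(mysql_error());
mysql_select_db('weather', $db);

// One of several queries: the average temperature per day for the selected month.
$result = mysql_query(
    "SELECT DAYOFMONTH(sampled_at) AS day, AVG(temperature) AS avg_temp
       FROM measurements
      WHERE YEAR(sampled_at) = $year AND MONTH(sampled_at) = $month
      GROUP BY day", $db);
while ($row = mysql_fetch_assoc($result)) {
    echo "Day {$row['day']}: average {$row['avg_temp']} C<br />\n";
}

// Write the generated page to the cache file, then flush it to the client.
$fp = fopen($cache, 'w');
fwrite($fp, ob_get_contents());
fclose($fp);
ob_end_flush();
?>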
For benchmarking, we used httperf in conjunction with autobench, a Perl script written by Julian T. J. Midgley that runs httperf against a server several times, increasing the number of requests per second at each iteration. The output enables us to see exactly how well the system being tested performs as the workload is gradually increased until it becomes saturated. In each case, the server was benchmarked with 5 requests per connection.
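To give an idea of what autobench generates under the hood, a single httperf run at one load point looks roughly like this; the host name, URI, rate and connection count are our example values:
httperf --server t2000.lab --port 80 --uri /weather/index.php \
        --rate 900 --num-conns 4500 --num-calls 5 --timeout 5
Here, --rate is the number of new connections opened per second and --num-calls 5 matches the 5 requests per connection mentioned above; autobench simply repeats such runs while stepping up the rate.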
The T2000 as a heavy SAMP web server: the results
To interpret the graphs below precisely, you must know that the x-axis gives you the demanded request rate, and the y-axis gives you the actual reply rate of the server. So, the first points all show the same performance for each server, as each server is capable of responding fast enough.
We tested the Opteron machines with both Linux and Solaris to get an idea of the impact of the OS.
The Sun T2000 isn't capable of beating our quad-core Opteron, but there are a few remarks that I should make.
First of all, we tested the 1 GHz T1, which is, of course, about 20% slower than the best T1 at 1.2 GHz. The T2000 peaked at 950 req/s, the quad-core Opteron at 1368 (Linux) and 1244 (Solaris) req/s. However, the T2000 was capable of delivering 935 req/s without any error (request timeout), while the quad Opteron delivered 1100 (Solaris) and 1250 (Linux) req/s without any errors. So, given the criterion that there can be no time-outs, the difference gets a little bit smaller.
In defense of the Opteron and Xeon: the average response time for one particular request was (most of the time) between 5 and 15 ms. Once the server came close to its saturation point, we noted a maximum of 70 ms. With the T2000, the response time was at least 20 ms, typically 40 ms, with peaks of up to 220 ms when we came close to the peak throughput.
Of course, this is the disadvantage of the lower single-threaded performance of this server CPU: the individual response times are higher. For OLTP and web serving, this is hardly a concern; for a decision support system, it might be.
There is a better way to run this test, of course: enable the mod_deflate module and get some decent gzip compression. Once we enabled compression, our network I/O, which had peaked at up to 12 MB/s, came down to a peak of 1.8 MB/s. The configuration change is tiny, as the sketch below shows; the next benchmark was measured with gzip compression on.
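For reference, this is roughly what enabling compression looks like in Apache 2.0's httpd.conf; the list of MIME types is our choice, and the module path may differ per installation:
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml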
The Sun T1 starts to show what it can do: performance has hardly decreased. Gzip compression is almost free on the T1; compression lowers performance by only 2%. The Opteron sees its performance lowered by 21% (977 vs 1244), and the Xeon by 19% (730 vs 899).
On Solaris, the T1 performs like a quad Opteron. Linux, which has probably slightly better drivers for our Opteron server, gives the quad Opteron the edge.
Let us analyse this a little further.
PHP/MySQL, no gzip (req/s)
Single Opteron 275 (2 cores) | 665 | 4-core T1 | 535
Dual Opteron 275 (4 cores) | 1244 | 8-core T1 | 949
Scaling from 2 to 4 Opteron cores | +87% | Scaling from 4 to 8 T1 cores | +77%
PHP/MySQL, gzip (req/s)
Single Opteron 275 (2 cores) | 538 | 4-core T1 | 477
Dual Opteron 275 (4 cores) | 977 | 8-core T1 | 933
Scaling from 2 to 4 Opteron cores | +82% | Scaling from 4 to 8 T1 cores | +96%
Gzip performance vs. no gzip
Opteron 275 | 79% | SPARC T1 | 98%
As you can see, our application should be a prime example of an application where a multi-core server CPU feels at home. With gzip compression enabled, scaling is still almost perfect: going from 4 to 8 T1 cores yields another 96% of performance.
So, why aren't we seeing the performance that Sun claims in, for example, SPECweb2005, where the T1 has no problem outperforming quad-core x86 CPUs? We are not completely sure. We measured that 97% of the processing was done in OS code (97% "system") and only 2-3% of the CPU load was spent in the actual application ("user"). We suspect that the relatively light load of FP operations might have lowered the T1's performance: depending on the tool that we used, we saw 0.66 to 1% of FP operations in our instruction mix, with peaks of 2.4%. Most likely, those FP operations are a result of our script calculating averages.
Power
Superb performance/watt is what Sun promises with its Coolthreads T2000 server. However, the current T2000 still consumes a bit more power than it strictly needs. For example, for reasons of logistical efficiency, Sun uses a 550 W power supply that is also used in Sun's Galaxy Opteron servers. That power supply will be replaced later on by a more efficient 400-450 W unit.
Also, it seems that the current T1 cannot disable entire cores or threads automatically, while future T1 CPUs will be able to do that. The on-chip thermal sensors and the throttling mechanism are already built in.
Take the results with a grain of salt, as it is impossible to make everything equal and perform a scientifically accurate power test. We tested all machines with only one power supply powered on, and we also tried to have a similar number and type of fans (excluding the CPU fan; the T1 didn't have one). But there are still differences between the motherboards, and the Sun used 2.5" disks.
System | Configuration | RAM | Max. power usage at 100% CPU load (W)
Dual Opteron 275 HE | 1 CPU (275 HE) | 4 GB | 149
Dual Opteron 275 | 1 CPU | 4 GB | 166
Sun T2000 | 1 CPU / 8 cores | 8 GB | 188
Dual Opteron 275 HE | 2 CPUs (275 HE) | 4 GB | 192
Dual Opteron 275 HE | 2 CPUs (275 HE) | 8 GB | 198
Sun T2000 | 1 CPU / 8 cores | 16 GB | 208
Sun T2000 | 1 CPU / 4 cores | 32 GB | 216
Sun T2000 | 1 CPU / 8 cores | 32 GB | 230
Dual Opteron 275 | 2 CPUs | 4 GB | 239
Dual Xeon 3.6 GHz | 2 CPUs | 8 GB | 374
The big loser here is, as expected, the Intel Xeon server. The T1 outperformed the dual Xeon 3.6 GHz by a decent, even significant margin while consuming about half the power of the Xeon machine. Add to this that a more efficient 450 W power supply should lower the T2000's power draw by another 30 to 40 W.
The other winner is, of course, the Opteron HE. This CPU could also be paired with a more efficient, lower-peak-power power supply. An Opteron HE is the best x86 alternative to the Sun UltraSPARC T1, but fortunately for Sun, this CPU has not yet been picked up by a big OEM like HP or IBM.
First impressions so far
Even if we assume that the exceptional SPECweb2005 and SPECjbb2005 numbers that Sun posted are too optimistic, our own power measurements confirm that the T2000 is much more than yet another Sun server.
At first sight, Sun has won the performance/watt battle for now, but it cannot rest on its laurels. Low-voltage versions of the Xeon "Woodcrest" (Core architecture) and the Opteron might be able to come very close to the performance/watt levels that the T1 offers. Our first impression is also that Sun still has a lot of room for improvement - better power supplies and power management - so it can continue to outperform the x86 servers by a decent to large margin when it comes to performance per watt.
We also can't shake the feeling that the number of applications that will really exhibit the kind of exceptional performance shown in Sun's own heavily optimised benchmarks will be quite limited. A slightly annoying issue is the fact that a relatively small number of FP instructions in your application may lower the performance of your T2000 significantly. We are not talking about heavy HPC FP-crunching applications, but server apps with a bit of FP calculation here and there. Unlike on other servers, 1.5-2% of FP instructions might be enough to make your application less suitable for the T2000, and profiling your applications in depth is not something that all administrators can or like to do. Sun seems to quietly agree, and is very busy with the T2 (Niagara 2), which has one FPU per core.
Last, but certainly not least, Sun's solid engineering has impressed us. Sun's meticulous attention to detail has resulted in a sturdy, well-polished machine. It gives the impression that it is made to run in a desert, with an impressive battery of noisy fans cooling down a CPU and a motherboard that hardly need much cooling. If the air-conditioning of the datacenter fails, the last servers I would worry about are the T2000s. The Xeons, however...
The price, about $13,000 for the tested server (with 8 GB), seems a bit higher than that of a typical x86 server, even one equipped with redundant power supplies, fans, etc. But as we said, we'll save our final judgment for our next article, when we have access to much more benchmarking data. So far, the Sun T2000 has been one of the best-made servers that we have seen, and while Sun's performance claims have not (yet?) materialised in our labs, it is definitely the most attractive Sun server that we, as "x86 server buyers", have seen in years.