If you read our last article, it is clear that when your applications are virtualized, you have a lot more options to choose from when building your server infrastructure. Let us know how you would build your "dynamic datacenter" and why!

48 Comments


  • idlehands - Thursday, November 12, 2009 - link

    Really? Another virtualization discussion without even mentioning z? I'd go with z/Linux on a z10 EC or z10 BC, depending on what I was going to do. Save some floor space, with an upgrade path to z/OS if need be. Reply
  • ultimatebob - Friday, October 30, 2009 - link

    If I got to rebuild the entire server room from scratch? I'd go with 2-socket blade servers loaded with quad-core processors and tons of memory. Then I'd connect those to a SAN running VMware ESX and vCenter. I could probably get at least 8 VMs running on each blade server that way, and I can squeeze 14 blade servers into 7U of rack space. That's 112 VMs in just 7U of space (not counting the SAN or UPS). Reply
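The density arithmetic in the comment above can be sketched as a quick back-of-the-envelope calculation (the blade and VM counts are the commenter's estimates, not benchmarks):

```python
# VM density estimate from the comment above:
# 14 two-socket blades in a 7U enclosure, ~8 VMs per blade.
blades_per_enclosure = 14
vms_per_blade = 8
rack_units = 7

total_vms = blades_per_enclosure * vms_per_blade   # 112 VMs per enclosure
density = total_vms / rack_units                   # VMs per rack unit
print(total_vms, density)
```

At 16 VMs per U, the figure is optimistic for heavy workloads; it assumes every VM fits in roughly an eighth of a blade's CPU and memory.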
  • RagingDragon - Wednesday, October 14, 2009 - link

    My apps mostly have low CPU utilization with IO or memory (quantity, not performance) bottlenecks. Considering that, and your results from "Expensive Quad Sockets vs. Ubiquitous Dual Sockets": a few 4S/24C Opteron servers with 128GB of RAM and lots of Fibre Channel cards would suit my loads well. Reply
  • Quarkhoernchen - Monday, October 12, 2009 - link

    "In the real-world, virtual environments fit a very specific niche, and in no way should dominate the datacenter. "

    That depends on the environment. I'm quite sure that virtual environments are able to dominate a datacenter in small, mid-sized, and even some big companies. In my company, 80 percent of all servers are virtual machines.

    The big, heavily loaded file, database, and mail applications are installed on their own hardware, and nearly all smaller applications, domain controllers, and web servers are completely virtualized in a VMware 3.5 cluster, with all the nice features that virtualization technology offers, especially easy image-level backup (snapshots) and recovery. For mass deployment of new servers, just create a template and deploy from it, or clone an existing virtual machine in minutes.

    By virtualizing existing servers I have already been able to reduce the power consumption of the whole datacenter by 25%, and by the end of next year it may be down to half.

    regards, Simon
    Reply
  • Robear - Monday, October 12, 2009 - link

    It's like asking, "If you could restart the auto industry from scratch, would you make all vehicles unleaded?"

    I chose multi-node servers only because I like their versatility. Blades are great for scaling out horizontally and have been really great for installing department or appliance servers, but unless you have a SAN infrastructure, they don't leave you many options for storage.

    You also have to consider the politics involved. IT isn't a monarchy. Different departments get different budgets, and most are none too thrilled about getting a virtual environment with their money, especially if the departmental decision-maker is tech-savvy.

    Lastly, there's a difference between requesting a server for a SQL cluster and requesting one for a file share or application server.

    Your data center needs to be versatile. Blades pair well with a SAN if that storage fits the application. Virtual environments are great for software R&D and lightweight apps (apps that will typically take < 50% server resources). When you get above that, you need to pack a lot in for a little. The dense servers we're seeing from HP and SuperMicro are nice because you don't need the blade housing; you can replace normal servers and increase the density AND you have a little more versatility with storage.

    In the real-world, virtual environments fit a very specific niche, and in no way should dominate the datacenter.

    Anyway, I think some would agree and others disagree with my statements, and I think HOW the IT operation runs and how the company runs is a big part of what the datacenter looks like. At the end of the day, I think a 1-node-per-U standard would be a great target.

    Reply
  • NewBlackDak - Tuesday, October 13, 2009 - link

    This depends entirely on your storage. We find that with our NetApp/NFS setup we have a sustained 95MBps of disk throughput. We haven't found a single real-world application that is bottlenecked by storage in our virtualized environment, and that includes the 5 different DBMSes we use or tested.

    The biggest factor is whether the virtualized datacenter was set up correctly, or whether it was put together with misinformation or slapped together on the cheap (or with existing parts).
    Reply
  • lynxinator - Sunday, October 11, 2009 - link

    A couple of months ago I built a VMware ESXi 3.5 host/server with the parts listed below. Each of the 4 Windows 2008 virtual machines uses a single NIC and a single core. All of the VMs share an 8-drive RAID 10 array. The 5th NIC is used for management. There are a few other VMs that I start and stop as needed.

    I had to move the case fan that is mounted at the top of the case towards the front of the case because the power supply is longer than normal. Sometimes the time on the virtual machines is incorrect even though the time on the ESXi host is correct; I have not spent a lot of time trying to fix the problem. I recently upgraded to ESXi 4.0, which took at least half an hour.
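The guest clock drift described above is a known ESX/ESXi behavior. One common fix (assuming VMware Tools is installed in each guest) is enabling periodic host-to-guest time synchronization in the VM's .vmx configuration file while the VM is powered off; this is a hedged sketch of the relevant option, not the only way to solve it (running NTP inside the guests is the usual alternative):

```
# In the virtual machine's .vmx file, with the VM powered off:
# periodically sync the guest clock to the ESXi host clock
# (requires VMware Tools in the guest; the host itself should
# be synced to an NTP source for this to help).
tools.syncTime = "TRUE"
```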

    Newegg.com prices:

    Rosewill 6" Molex 4pin Male to Two 15pin SATA Power Cable Model RC-6"-PW-4P-2SA - Retail
    Item #: N82E16812119238
    Price: $2.29 * 3 = $6.87

    Thermaltake 11.8" Y Cable with Blue LED Light Model A2369 - Retail
    Item #: N82E16812183147
    Price: $3.49 * 3 = $10.47

    Rosewill R901-P BK Triple 120mm Cooling Fan, Mesh Design Front Panel, ATX Mid Tower Computer Case - Retail
    Item #: N82E16811147125
    Price: $49.99

    Western Digital Caviar Blue WD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - OEM
    Item #: N82E16822136075
    $39.49 * 8 = $315.92

    BIOSTAR TFORCE TA790GX 128M AM3/AM2+/AM2 AMD 790GX HDMI ATX AMD Motherboard - Retail
    Item #: N82E16813138130
    Price: $109.99

    Intel EXPI9301CTBLK 10/ 100/ 1000Mbps PCI-Express Network Adapter - Retail
    Item #: N82E16833106033
    Price $29.99 * 3 = $89.97

    hec HP585D 585W ATX12V Power Supply - No Power Cord - OEM
    Item #: N82E16817339009
    Price: $26.99

    AMD Phenom II X4 940 Deneb 3.0GHz Socket AM2+ 125W Quad-Core Black Edition Processor Model HDZ940XCGIBOX - Retail
    Item #: N82E16819103471
    Price: $169.99

    Adaptec 2258100-R PCI-Express x8 SATA / SAS (Serial Attached SCSI) 5405 Kit Controller Card - Retail
    Item #: N82E16816103096

    MASSCOOL FD12025S1L3/4 120mm Case Fan - Retail
    Item #: N82E16835150070
    Price: $4.79 * 3 = $14.37

    WINTEC AMPX 2GB 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Desktop Memory Model 3AXT6400C5-2048 - Retail
    Item #: N82E16820161182
    Price: $30.99 * 4 = $123.96

    Intel PRO/1000 MT Gigabit NIC PWLA8490MT W1392
    Price: $19.00 x 2 = $38
    Ebay URL: http://cgi.ebay.com/Intel-PRO-1000-MT-Gigabit-NIC-...">http://cgi.ebay.com/Intel-PRO-1000-MT-G...LH_Defau...

    Shipping: $53.01

    Total: $1389.52
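The itemized prices above can be cross-checked; note that the Adaptec 5405 kit has no price line, so the snippet below backs its implied price out of the stated total (this assumes the $1389.52 total really does include the 5405 kit and the $53.01 shipping, which the list suggests but does not state):

```python
# Sum the itemized Newegg/eBay prices from the parts list above.
# The Adaptec 5405 kit's price line is missing from the list, so
# its implied price is derived from the stated $1389.52 total.
itemized = [6.87, 10.47, 49.99, 315.92, 109.99, 89.97, 26.99,
            169.99, 14.37, 123.96, 38.00, 53.01]  # incl. shipping
subtotal = round(sum(itemized), 2)
implied_5405_price = round(1389.52 - subtotal, 2)
print(subtotal, implied_5405_price)
```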
    Reply
  • lynxinator - Sunday, October 11, 2009 - link

    Both the Adaptec 5405 and the 5805 work well with ESXi.

    Adaptec 2244100-R PCI Express SATA / SAS (Serial Attached SCSI) 5805 Kit Controller Card - Retail
    Item #: N82E16816103098
    Price: $569.99


    Adaptec 5805 Total: $1,526.51
    Reply
  • joekraska - Sunday, October 11, 2009 - link

    Dell R710s with 72GB of RAM. Dual 10GbE, aggregating to Force10 10GbE switches at top of rack, with 10GbE Cisco line cards to the core.

    Dell EqualLogic tiered storage cluster for the VMDK files in Tier 1; in Tier 2, a NetApp NFS volume with deduplication turned on.

    Joe.

    Reply
