25 Comments

  • Tarrant64 - Thursday, July 21, 2011 - link

    With the growing use of cloud computing, what are ISPs doing to ensure adequate bandwidth for not only the provider but also the customers? With more ISPs testing and implementing data usage caps, I think it should be a growing concern for anyone who wants to work from the cloud from anywhere (including home) without worrying about increased data usage.
  • TypeS - Friday, July 22, 2011 - link

    Isn't the current answer to this from ISPs to throttle traffic like torrent data and other applications they deem not a "priority"? Can't have them damn pirates saturating their 7-50+ Mbit links, can we now?
  • Sapan - Thursday, July 21, 2011 - link

    How do the cloud servers give results so fast? I imagine there are literally petabytes of data in Google's servers, and it would be way too expensive to run it all on SSDs. Is there some sort of hierarchy that masks latency? Even then, wouldn't it still take a long time just to communicate with the server itself?
  • extide - Saturday, July 23, 2011 - link

    Look up BigTable and MapReduce.
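[For readers following that pointer: the core MapReduce idea fits in a few lines. This is a toy, illustrative sketch only -- the function names are mine, not Google's actual API -- showing a map phase that emits key/value pairs and a reduce phase that aggregates them.]

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce: sum the counts per word (the shuffle/sort step that a real
    MapReduce framework does across machines is implicit here)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cloud", "the cluster in the cloud"]
word_counts = reduce_phase(map_phase(docs))
```

The point is that both phases are trivially parallelizable: maps run independently per document, and reduces run independently per key, which is what lets the pattern scale across thousands of commodity servers.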
  • carnachion - Thursday, July 21, 2011 - link

    How do you see the future of cloud computing for high-performance computing applications, like scientific software that demands a lot of processing power? The alternatives that exist today, like Cyclone from SGI, don't seem very appealing. Do you think that will change in the future?
  • gamoniac - Thursday, July 21, 2011 - link

    Cloud is the buzzword, but while it makes sense to outsource hardware resources using cloud computing, does it really make sense to add extra layers of complexity to the applications? (I would imagine troubleshooting or replicating problems would be significantly more challenging compared to the same case in an internal environment.)

    Are the savings justifiable if they inadvertently create a single point of failure? And how can an enterprise spend so much on security, then turn around and entrust all its data to some data center in a sometimes unknown location?

    Lastly, does the government have any power or jurisdiction to subpoena data stored in the cloud in another country (or countries)? Or is that just a big loophole for corporations to get away with it?

    I know that was a lot of questions, but they hold me back from jumping on the bandwagon... Thanks, Johan.
  • Firekingdom - Friday, July 22, 2011 - link

    What I understand is that the cloud is nothing more than a group of machines (let's call them VM hosts) hosting VMs in a workgroup, sharing data and resources, while all the hosts balance the load. It is basically a cluster with nodes in it.

    The only benefit I see in it is relocating a VM to keep latency down. I think it is better to have one machine host all your servers in VMs. A 96-core VM server should handle most things.

    My question: say you have a server with 4 CPUs of 32 cores each, with VMs on it. VM1 is on CPU 1, VM2 is on CPU 1, VM3 is on CPU 2. VM1 asks for data from VM2. VM2 asks its backup, VM3, to see if it has it. VM2 and VM3 are like name servers, so they both trust each other, and VM1 trusts VM2 and VM3. Can VM3 send the data to VM1 without it leaving the machine and without involving the NIC?

    I know CPUs can talk to each other, but can their RAM talk to each other? It just seems like something a northbridge should do. Will the cloud market push for this? I think ARM may be good for this.

    Don't steal the idea; I thought it up.
  • HMTK - Friday, July 22, 2011 - link

    I don't mean to be rude but WTF are you talking about?

    A "cloud" is made up of hosts which have only a hypervisor installed. On top of that hypervisor run virtual machines. Depending on the hypervisor and management software of your choice, you can move running VMs from one host to another. This can also be done automatically for load balancing, or to keep VMs available in case the host they run on crashes. VMs can be put in VLANs just like physical machines. The main advantages of "the cloud" are availability, scalability, and the performance you can wring out of your physical servers. It's also quite easy to migrate a VM from an old physical server to a newer model, assign more RAM, or add disk space.

    VMs are not tied to a given CPU or CPU core; the hypervisor schedules CPU time for the VMs when they need it. You can easily have more than 4 VMs on a quad-core CPU, for example. One of our customers is starting with a virtual desktop environment: currently there are 80 VMs on a dual Xeon X5550 server, and each VM has 2 virtual CPUs. That's 160 virtual CPUs for only 8 physical cores.
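[The 160-vCPUs-on-8-cores figure above is what's usually called a CPU overcommit ratio. A trivial sketch of the arithmetic, using the numbers from that comment:]

```python
def overcommit_ratio(vms, vcpus_per_vm, physical_cores):
    """Ratio of virtual CPUs promised to guests vs. physical cores available.
    The hypervisor time-slices cores to cover the gap."""
    return (vms * vcpus_per_vm) / physical_cores

# The VDI example above: 80 VMs x 2 vCPUs on a dual Xeon X5550 (8 cores).
ratio = overcommit_ratio(80, 2, 8)
```

Whether a 20:1 ratio is sustainable depends entirely on how busy the guests actually are; VDI workloads are mostly idle, which is why such ratios work there at all.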
  • fzzzt - Saturday, July 23, 2011 - link

    I don't mean to be rude but WTF are YOU talking about?

    A "cloud" is, as the OP said, a buzzword. It's basically the same as a cluster, except maybe more abstracted and easier to use. You can have a cloud of physical hosts, VM hosts, VM guests, or any combination; indeed, many Google services don't use VMs at all. A cloud is really just an abstraction of resources so that the user doesn't have to deal with the underlying infrastructure, in the same way RAID is a "cloud" for storage. A user doesn't have to copy files to multiple disks--they just write to "the disk" and it magically gets replicated or striped. Cloud services can be configured to reduce latency or improve reliability, but not both (that's usually impossible).

    IMHO, "cloud" is equivalent to "a bunch of resources", or perhaps "a bunch of computers" (virtual or physical).
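[The RAID analogy above can be made concrete with a few lines of code. This is a toy sketch (the class and method names are illustrative, not any real storage API): the caller sees a single "disk", while replication happens invisibly behind the interface, which is exactly the kind of abstraction being described.]

```python
class ReplicatedDisk:
    """Toy abstraction: the user writes to one 'disk'; replication is hidden,
    much like RAID 1 mirrors writes behind a single block device."""

    def __init__(self, replicas=2):
        self.replicas = [dict() for _ in range(replicas)]

    def write(self, key, value):
        # Fan the write out to every replica behind the abstraction.
        for replica in self.replicas:
            replica[key] = value

    def read(self, key):
        # Any surviving replica can serve the read.
        for replica in self.replicas:
            if key in replica:
                return replica[key]
        raise KeyError(key)

disk = ReplicatedDisk(replicas=3)
disk.write("report.txt", b"quarterly numbers")
disk.replicas[0].clear()          # simulate losing one replica
data = disk.read("report.txt")    # the read still succeeds
```

The latency-vs-reliability tension the comment mentions shows up right here: more replicas mean more writes to wait on (or more risk of inconsistency if you don't wait).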
  • essdeeay - Tuesday, July 26, 2011 - link

    It's tempting to dwell too much on the hardware, but I've still not seen a better definition of cloud computing than this:

    http://www.zdnet.com/news/the-five-defining-charac...

    1. Dynamic computing infrastructure
    2. IT service-centric approach
    3. Self-service based usage model
    4. Minimally or self-managed platform
    5. Consumption-based billing
  • fzzzt - Saturday, July 23, 2011 - link

    That might seem like a good idea, but it isn't. Once an ant shorts a chip on the motherboard, or an intern pushes the wrong button, and your 200 VMs are down--possibly corrupted--you will likely revisit that opinion.

    Different VM software schedules workloads differently, but generally speaking, the VM hypervisor manages separation between VMs and controls which VM uses which CPU. RAM may be able to talk to each other, but I think the CPU or a bridge is the gatekeeper for that. Good VM hypervisors can share RAM and do other tricks like ballooning or over-committing to make it appear that you have more RAM (or disk) than you actually do.

    VMware (and probably others) has documentation on most of these technologies if you're interested.
  • fzzzt - Saturday, July 23, 2011 - link

    Actually, I read somewhere this week about Microsoft, I think, and the US government requiring that it be able to gain access to data. The US government can, of course, obtain data if it wants to (e.g. for security reasons), usually with a court order. The issue is that some of this cloud data is about European citizens...or something like that...so the EU was up in arms about privacy. The US government could subpoena Google for all the email of the British Prime Minister, if it were stored in Gmail, for example, since the data lives in the US. Complicating things is the fact that the data is increasingly replicated across countries.

    One can encrypt the data before it's put into the cloud, if security is important, though this will kill your throughput. There are safeguards on the service side to prevent cross-access, of course. It's just a matter of how paranoid you are. Do you trust the service provider? The system admin? The janitor? I suspect an enterprise that spends a large amount on security will either use a private cloud, or not store its data in a cloud.
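[The "encrypt before it goes into the cloud" advice above can be sketched in a few lines. This is a deliberately minimal one-time-pad toy -- illustration only, not production crypto; real deployments would use a vetted library and an authenticated cipher such as AES-GCM. The key never leaves the client, which is the whole point.]

```python
import secrets

def encrypt_before_upload(plaintext: bytes):
    """Toy sketch of client-side encryption: XOR with a random key of the
    same length (a one-time pad). The key stays on the local machine;
    only the ciphertext goes to the cloud provider."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = encrypt_before_upload(b"payroll data")
pt = decrypt_after_download(ct, key)
```

The throughput cost the comment mentions is real for bulk data, and the harder problem in practice is the one the code hides: managing and backing up the keys, since losing them means losing the data.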
  • iamabovetheclouds - Friday, July 22, 2011 - link

    Will Blu-Ray and USB become obsolete? Will people forgo optical drives?
  • HMTK - Friday, July 22, 2011 - link

    VMware just screwed their existing customers with their new licensing scheme.

    After introducing a new licensing model with vSphere 4, VMware has done it again, and personally I think they're going to lose a lot of customers over this. Nobody's talking about the new features in vSphere 5, only about being screwed for the second time in as many years. Only this time it's worse than missing the new high-end features you only get in vSphere Enterprise Plus. With the new licensing scheme, current investments in vSphere licenses become insufficient for companies that use more than 48 GB/CPU on average. Companies that use any version for VDI are also screwed, because VDI is typically memory hungry and 48 GB/CPU just doesn't cut it.

    Any thoughts about how this may change the virtualization landscape? People are scrambling to check out Hyper-V and Xen, and even if VMware changes its licensing for the better, they may not come back. After all, VMware has for the second time proven to be a bad partner. Having the best technology is NOT enough. VHS vs. Betamax, anyone?
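[The 48 GB/CPU complaint above is easiest to see as arithmetic. This is a hedged sketch: the 48 GB-per-license vRAM entitlement is the figure quoted in the comment, and the example host is hypothetical -- check VMware's actual license terms for the tier you buy.]

```python
import math

def vsphere5_licenses_needed(total_vram_gb, physical_cpus, entitlement_gb=48):
    """Under the vSphere 5 vRAM model you need at least one license per
    physical CPU, plus enough licenses to cover the pooled vRAM in use.
    Under the old model, the per-socket count alone was enough."""
    per_socket = physical_cpus
    per_vram = math.ceil(total_vram_gb / entitlement_gb)
    return max(per_socket, per_vram)

# Hypothetical memory-dense host: 2 sockets but 256 GB of vRAM in use.
licenses = vsphere5_licenses_needed(256, 2)   # old model would need only 2
```

That triple-jump from 2 to 6 licenses on the same hardware is exactly why memory-hungry VDI deployments felt singled out.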
  • ServiceChaperon - Friday, July 22, 2011 - link



    From an end business user service perspective, which cloud service is growing fastest? (Platform, Application, Data) Why?

    For businesses moving to cloud based services what order of migration makes the most sense? (Platform, Application, Data) Why?

    Any suggestions for finding and working with software vendors to support cloud-based platforms running their applications? (It is hard getting vendors interested in making their products scalable for cloud-platform (VM) based installations.)

    Top three biggest security risks with cloud based services for businesses using external cloud service providers?

    Internally developed cloud vs. external cloud provider, which to choose and when?

    How to maintain a healthy datacenter architecture with both physical and VM based systems?

    Cloud for global service distribution vs. for compute power distribution, which is the biggest driver, when does each make sense?
  • prophet001 - Sunday, July 24, 2011 - link

    To piggyback on the security question: that is my biggest concern. Why would people want to give the government and large corporations more power and information? It has already been shown that anything is hackable and can be compromised. Not only that, but the corporations themselves aren't necessarily the best place for your personal and private documents to land.

    Why should we give all of our email to Google or all of our Word docs to Microsoft? Why do we need to?
  • JohanAnandtech - Thursday, July 28, 2011 - link

    Your questions are very broad. Can you narrow them down a bit?
  • vignyan - Friday, July 22, 2011 - link

    With so many processors serving the cloud, it would be wise to improve processor efficiency by making a custom processor for the cloud. I am looking for feedback on developments on the hardware front to make developers' lives easier and/or make processors more efficient by effectively masking a processor's shortcomings with what the cloud expects of it.
  • policeman0077 - Saturday, July 23, 2011 - link

    Assigning such a large number of requests from customers across the hardware must be tough work.
  • smcguire6177 - Saturday, July 23, 2011 - link

    With VMware stating that vSphere 5 will bring new license terms, their costs have also gone up exponentially. The data center I work at is looking at virtualizing all its machines using VMware, and based on what I've been reading in articles like the one below, we would be severely affected.
    http://benincosa.org/blog/?p=400
    With costs going up astronomically for some machines (half the cost of some blades or more?!?), do you see resellers moving away from high-capacity machines to more mid-level configurations with lower power draw? Or is this the opportunity competing companies need to lessen the lead VMware holds in the virtualization space and offer better-value products?
  • jhh - Monday, July 25, 2011 - link

    Without SR-IOV virtual NICs, network-bound applications don't virtualize well, as the virtualization layer has to handle each packet individually. SR-IOV gives the guest access to virtual NICs without significant involvement from the virtualization layer. Those virtual NICs have difficulty with live migration of guests, because of the hardware association with the virtual NIC driver and the associated state information. I saw one company which transitioned to processing packets in the virtualization layer during the switch to avoid this problem. Related to this is access to storage: FCoE and iSCSI are a couple of options, with different host overhead.

    What do you see in the future for better support of virtual network devices and live migration, both on the hardware and software side? What direction do you see storage access going - FCoE or iSCSI or something else? What options are there for hardware acceleration capabilities, and will they support live migration? What impact will that have on data center networks?
  • ProDigit - Tuesday, July 26, 2011 - link

    How long until Intel Atom Z-series-powered servers (or ARM) become available for small businesses--mainly servers that pack several tens or perhaps hundreds of Atom cores?

    And how far are we from an (Intel) 100+-thread, low-power CPU (also for small servers, home project rendering farms, and video conversion)?

    I'm talking about a serious boost in the number of threads a CPU can process (to increase the performance of multi-threaded apps), while lowering clocks to 1.66 or 1 GHz per core to keep power draw low (or at tolerable levels)!
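[The many-slow-cores trade-off above can be framed as back-of-the-envelope arithmetic. This is a deliberately crude model under a strong assumption -- throughput scaling linearly with cores x clock, which real workloads rarely achieve (memory bandwidth, I/O, and Amdahl's law all intervene); the core counts and clocks are the hypothetical ones from the comment.]

```python
def relative_throughput(cores, ghz, perf_per_ghz=1.0):
    """Naive aggregate throughput: cores x clock x per-clock efficiency.
    Valid only for embarrassingly parallel work with no shared bottleneck."""
    return cores * ghz * perf_per_ghz

fast_quad = relative_throughput(4, 3.0)     # a conventional 4-core, 3 GHz chip
slow_many = relative_throughput(100, 1.0)   # 100 low-power cores at 1 GHz
```

On this naive model the 100-core part wins handily, which is the intuition behind Atom/ARM microservers; the catch is the per-clock efficiency term and, as the reply below notes, whether I/O can feed that many cores at all.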
  • bobbozzo - Wednesday, July 27, 2011 - link

    I doubt I/O would scale well with that many processors.
  • ProDigit - Tuesday, July 26, 2011 - link

    PS: the above comment is also meant for cloud serving--the more hardware-oriented side of cloud services.
    I'm sure many services could do perfectly fine with low-power, massively multithreaded machines, while others, mainly running games, would probably need some serious arrays of Xeon processors or something...

    I'm more interested in the smaller, simpler cloud services, at least, the hardware that they could be running!
  • casperb - Friday, July 29, 2011 - link

    Of the available and soon-to-be-released processors from the different manufacturers, which is likely to be most suitable for the most widely used cloud applications, and why?
