Workstations – Servers – Clouds – Comparing apples to apples
A little decision-making support for the undecided - An article by Wolfgang Gentzsch, The UberCloud, December 17, 2014
A recent discussion in 25 HPC and CAE LinkedIn Groups about engineers’ concerns about cloud computing revealed that many engineers tend to compare workstations, in-house servers, and clouds in a somewhat misinformed way, weighing the most positive aspects of their workstation against the (apparent, negative) roadblocks of the cloud. Forgive me: it’s almost like comparing your own bicycle with renting a Porsche. A more serious example is data transfer: there is obviously no data transfer necessary when you compute your task on your workstation, and yes, heavy data transfer is necessary from the cloud back to your workstation. But this is often simply due to the many more and much bigger computations (often run in parallel) in the cloud, which would be impossible on your workstation anyway.
First and foremost, we should answer this question: is my workstation big enough and fast enough for the kind of problems I want to solve? If my answer is YES, fine, then I don’t need an in-house HPC Server at all, and I don’t need an HPC Cloud; full stop.
But if my answer is NO, my workstation is not big enough and fast enough for the kind of problems I want to solve, then a reasonable way to look for viable alternatives is to compare the two more powerful solutions and check which one is most reasonable for me: in-house HPC Servers (Apples) versus remote HPC Cloud (Apples), and NOT versus my own workstation (Oranges), which has already proved useless for my more complex, more challenging tasks. Servers against Clouds! FYI: I am fully aware that such a comparison comes with generalizations, simplifications, and over- or under-emphasis of reality. And I am somewhat biased: in the last 2.5 years I have accompanied, hands-on, 165 engineering teams on their way to the cloud; I have learned the hard way how these teams struggle and succeed, and have tried to help them overcome the many roadblocks. And finally, with our UberCloud Software Containers, I have seen most of these roadblocks simply go away. Here are the results: I have picked the most common concerns and roadblocks recently discussed by our fellow engineers in the 25 LinkedIn Groups mentioned above, and herewith provide a summary of my findings:
And here’s a more detailed summary of all these features and functionalities:
Procurement is the act of buying expensive hardware, software, and services above a certain budget limit, and it is therefore subject to approval from upper management. The process includes preparation and processing of a demand as well as the final receipt and approval of payment. It often involves purchase planning, standards determination, specifications development, supplier research and selection, value analysis, financing, price negotiation, making the purchase, supply contract administration, inventory control, accepting delivery, installing and certifying the hardware, training people, and other related functions. This process can easily take several months. Cloud services, on the other hand, are usually short-term, on-demand or on-reservation.
Budget: Companies have to deal with two different kinds of budgets: CAPEX and OPEX. CAPEX (capital expenditure) is the amount spent to acquire or upgrade productive assets, such as compute servers, in order to increase the capacity or efficiency of a company for more than one accounting period. OPEX (operational expenditure) is the money a company spends on an ongoing, day-to-day basis. CAPEX-related assets often have to be approved by upper management, while OPEX usually falls under the responsibility of mid-management or even the individual employee.
Operations, maintenance: Company equipment can be complex and costly to operate and maintain. To run a compute server, for example, one needs specially trained people; regular upgrades of system and application software; hands-on fine-tuning of the system, workload, and resource management; attention to power consumption, cooling, and room temperature; care for downtime and user productivity; and much more. In the cloud, by contrast, none of these efforts falls to the user.
Flexibility: With your own server come many obligations, some of which are mentioned above. As long as your own system is not fully utilized, there is no easy justification for choosing other resources such as clouds, even if for some specific applications your system might not be optimal. Some software you would love to try might not even run on your system. Clouds are completely different: there is flexibility in the choice of hardware, software, related tools, timing, pricing, utilization, and so on.
Agility comes when users can self-serve from a large and flexible service catalog. They can pick up whatever they need, whenever they need it. And usually, the moment they need resources is when they are inspired and ready to get some real work done; long wait queues kill that inspiration, and it’s lost forever.
Reliability: Unless the company offers a choice among several compute servers, having only one system available results in a single point of failure, and in a complete outage during regular system maintenance. One way out could be making use of cloud services during such times. And cloud reliability itself can easily be improved by working with several cloud providers.
Average utilization: The higher the utilization of your compute server, the better the cost per core-hour, and thus the better the overall economics. However, especially in small and medium enterprises, utilization is unpredictable, because of different project deadlines, the engineers’ vacations or business trips, and weekends when these servers are often almost completely ‘jobless’. In fact, average server utilization numbers circulating in industry are just around 20% (pouring the other 80% down the drain). For more details, please see our other article, “How Cost Efficient is HPC in the Cloud? A Cost Model for In-House Versus In-Cloud High Performance Computing.” In clouds, on the contrary, prices per core-hour are calibrated assuming high utilization; cloud service providers with many different customers can obviously utilize their systems much more fully.
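To make the utilization argument concrete, here is a minimal back-of-the-envelope sketch in Python. The server size and annual cost below are illustrative assumptions, not quotes from any vendor; the point is only that the effective price per core-hour is the total annual cost divided by the core-hours actually consumed, so 20% utilization roughly quintuples the cost compared with 90%:

```python
# Back-of-the-envelope: effective cost per core-hour of an in-house
# server at a given utilization. All numbers are assumptions chosen
# only to illustrate the effect of utilization on economics.

def effective_cost_per_core_hour(annual_cost, cores, utilization):
    """Total annual cost divided by the core-hours actually consumed."""
    hours_per_year = 365 * 24
    used_core_hours = cores * hours_per_year * utilization
    return annual_cost / used_core_hours

# Assumed example: a 64-core server costing $50,000 per year all-in
# (hardware depreciation, power, cooling, administration).
annual_cost = 50_000.0
cores = 64

for utilization in (0.2, 0.5, 0.9):
    cost = effective_cost_per_core_hour(annual_cost, cores, utilization)
    print(f"utilization {utilization:>4.0%}: ${cost:.3f} per core-hour")
```

With these assumed numbers, the 20%-utilized server costs roughly 4.5 times as much per delivered core-hour as the 90%-utilized one, which is exactly the gap a highly utilized cloud provider can exploit in its pricing.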
Security: We all remember news about security breaches originating both inside and outside companies: stolen blueprints; CDs with personal customer data; employees copying and selling IP; and many more. This concerns all companies, large and small. We have to pay high salaries to security experts to secure our infrastructure and assets; often we can’t afford them, and thus remain vulnerable even with our standard and proven security software. As for clouds: any cloud provider today has integrated high levels of security to protect data and exchanges. Interconnections are covered by a secure protocol, and IP addresses are filtered (only the client’s own domain name is allowed). For security reasons, at many cloud providers, application installations are carried out by badged cloud experts only. Other options (VPN, encryption, …) are possible depending on the context and needs.
Technology: Today our systems and technologies are aging faster and faster, and new technology and products are coming to market at a fast pace. To keep up, we have to regularly upgrade our existing equipment and thus invest even more money. And then we have to stick with our existing systems at least throughout the depreciation period. Completely different with clouds: to stay competitive, cloud providers are regularly refreshing their infrastructure. Therefore, in the cloud, we can shop around for the fastest and best-suited hardware and services.
Data transfer: Many applications produce GBs of results. Obviously, on workstations with all functions in one box, this is not a problem; but forget workstations, because the tasks we consider here won’t fit on our workstations anyway. Already with in-house servers, data transfer depends on the network between them and the end-users’ workstations, which is under the company’s control. More challenging indeed is the transfer of GBs of data between clouds and the end-user’s workstation, often limited by the end-user’s last mile of network. Here we should differentiate between intermediate results and the final dataset. Intermediate data can often be stored in cloud storage services, such as Dropbox or Box.com, with very fast connections to the clouds, and for checking intermediate simulation results, high-res real-time remote visualization is the perfect means. For the final dataset, there are transfer technologies in the making or already available which compress and encrypt the data, send it in parallel, or stream it back to the user. And if all this doesn’t help today, there is still overnight FedEx as a backup method, delivering all the data at once early the next morning.
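The last-mile argument above is easy to quantify. The following Python sketch estimates transfer times; the dataset size, link speeds, and protocol-efficiency factor are illustrative assumptions, not measurements:

```python
# Rough transfer-time estimate for moving a result dataset over a
# network link. Sizes, bandwidths, and the efficiency factor are
# assumptions for illustration only.

def transfer_hours(size_gb, mbit_per_s, efficiency=0.8):
    """Hours needed to move size_gb (decimal gigabytes) at mbit_per_s,
    assuming a protocol efficiency factor for overhead/retransmits."""
    bits = size_gb * 8e9                          # GB -> bits
    seconds = bits / (mbit_per_s * 1e6 * efficiency)
    return seconds / 3600

# An assumed 200 GB final dataset over a 100 Mbit/s office uplink
# takes hours; the same data between a cloud and a nearby cloud
# storage service on an assumed 10 Gbit/s link takes minutes.
print(f"last mile:      {transfer_hours(200, 100):.1f} h")
print(f"cloud-internal: {transfer_hours(200, 10_000) * 60:.1f} min")
```

This is why the article suggests keeping intermediate data in cloud storage and using remote visualization, reserving the slow last mile (or FedEx) for the final dataset only.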
Full control over your assets: In the early days of cloud, there was no control over your assets in the cloud at all. However, under pressure from early cloud users, cloud providers have started offering more transparency to their customers. And with the advent of system container technology like the UberCloud Containers, additional functionalities such as granular usage data collection, logs, monitoring, alerting, reporting, and more are bringing back the control a user wants.
Software licensing: Independent software vendors (ISVs) are naturally concerned with maintaining their level of revenue, and it was unclear for a long time whether software licensing in the cloud would damage it or not. However, engineers continue to use their workstations for daily design and development, and might use clouds only for bursting capacity, i.e. for bigger, more complex simulation jobs. Therefore, and due to competitive forces, more and more ISVs are now adding flexible cloud-based licensing models, e.g. for monthly, weekly, daily, or even hourly usage.
Access: To ease access to high performance computing, we have done a lot in the past, like developing system and workload management software, portals, and other tools; these, however, require continuous training of system and user experts, along with other investments. In the cloud, on the other hand, all this is done by the cloud service provider’s experts, invisible to the end-user. Therefore, access to many clouds today can be considered seamless, and it is included in the bill ($ / core / hour).
Wait time: When you own your compute server, it is usually too big at times of low demand and way too small at times of high demand. When you have peak load, and ironically that is when you need your server the most, your jobs sit in the wait queue for hours on end. Clouds change that, simply because clouds offer “infinite” resources; and if the resources of one cloud provider are not “infinite” enough, you can move on to the next cloud provider. Clouds inherently have very short or no wait times at all.
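The peak-load queueing effect can be illustrated with the classic Erlang-C formula from queueing theory (a standard textbook model, not something specific to UberCloud; the server size and load values below are assumptions). It shows how the probability that a submitted job must wait climbs steeply as a fixed-size server approaches full load:

```python
from math import factorial

# Erlang-C: probability that an arriving job has to queue, for a
# farm of `c` identical nodes at offered load `a` (in node-equivalents).
# Standard textbook formula; the numbers used below are illustrative.

def erlang_c(c, a):
    if a >= c:
        return 1.0  # overloaded: every arriving job waits
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

# An assumed 16-node in-house server at moderate vs. peak demand:
for load in (8, 12, 15):
    print(f"offered load {load}/16: P(wait) = {erlang_c(16, load):.2f}")
```

At half load almost no job waits, while near saturation most jobs do; owning a fixed-size server means living at one point on this curve, whereas elastic cloud capacity keeps the effective load, and hence the wait probability, low.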