How cloud computing uses server virtualization

Virtualization in the cloud - the technology of the future?

David Wolski

Virtualization on the desktop computer is yesterday's news. Despite legitimate security concerns, virtualization is pushing into the cloud, where hardware and network resources can be rented quickly and at short notice.

Nowadays, the cloud can take over almost the entire classic network structure.
© Sam Johnston under Creative Commons

Since its inception, virtualization and the resource management it makes possible have grown beyond a single computer or a local network of servers and have now leapt into the cloud, where nothing of the actual hardware remains visible. From individual servers to entire computer networks, almost everything is available in the cloud as virtual systems running in a service provider's data center. Although many old-school administrators are uncomfortable with virtualization in the cloud, the trend continues because of the cost savings involved: in the cloud, resources can be rented and scaled quickly and easily as required.

The tech industry is competing for the big business of the future: the cloud. The metaphorical data cloud encompasses many companies: software vendors, providers of network storage and, not least, hardware manufacturers. With ever faster internet connections, more and more data is migrating to central server farms and storage facilities on the network. Making money with cloud services is not easy, however: the business is growing rapidly, but prices are under massive pressure in the face of fierce competition.

Reading tip: The greatest advantages of server virtualization

Virtual systems: from PC to server

In 1972, IBM introduced its System/370, a mainframe that could routinely start several hardware-assisted virtual machines with VM/CMS as the operating system. This was made possible by virtual memory management, available as a separate option. With the decline of the mainframe and the rise of the PC, virtualization went quiet for a long time.

The technology made its comeback on the PC, where it ran typical desktop operating systems in virtual machines without any elaborate hardware support, using just a software hypervisor. Connectix introduced its Virtual PC 1.0 software for the Mac in 1997. In 1999, VMware kicked off a boom with its Virtual Platform for x86 processors, which later became VMware Workstation and lives on to this day.

VMware quickly recognized the potential of server virtualization and released GSX Server 1.0 only a year later, already complete with a management console for several virtual machines on distributed servers in a network. From then on, every step in the development of virtualization environments and management tools expanded their field of application: the focus was no longer a single host, but a network of servers and entire data centers. Rising energy costs and the need for consolidation still drive this development today.

The IBM 370/165 was able to start hardware-supported virtual machines as early as 1972 and used virtual memory management for the first time.
Source: University of Cambridge Computer Laboratory

Cloud: tailor-made performance for virtual machines

With cloud computing ("the cloud" for short), nothing changes in the existing concepts of virtualization in computer networks. What cloud providers add is the option of maintaining virtual systems in their data centers. The big difference to the virtual private servers that hosting companies have offered for a long time is the comparatively free allocation of computing power and network throughput according to demand. This is handled via management consoles that run as web apps and are operated directly by the customer, which keeps turnaround times short. The customer orders computing power with a click, installs virtual systems from the Amazon Marketplace or adds resources to a running virtual machine - and the provider then charges the customer's credit card for the services ordered. This model is called "Infrastructure as a Service" (IaaS).

Amazon was the first to bring this model to market in 2008. The company has to maintain its own data centers for order processing and coordination anyway, and with the Elastic Compute Cloud (Amazon EC2 for short) it offers the option of using them. The virtual systems, which run under the Xen hypervisor, can be set up by customers themselves with Linux or Windows; the smallest available compute instance for a Linux machine costs around 15 dollars per month. IP addresses, load balancing and outgoing network traffic are billed on top, traffic by the gigabyte.
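
The same ordering process can also be scripted against the provider's API instead of clicking through the web console. Below is a minimal sketch using Amazon's boto3 SDK for Python; the region, AMI ID and key pair name are placeholders for illustration, not real resources, and it assumes valid AWS credentials are already configured on the machine.

```python
# Minimal sketch: launching and terminating an EC2 instance with boto3.
# Assumes AWS credentials are configured (e.g. via `aws configure`);
# the AMI ID and key pair name below are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Order a small Linux instance "with a click" - one API call.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Linux AMI
    InstanceType="t2.micro",           # a small general-purpose size
    KeyName="my-keypair",              # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# Shutting the machine down again is just as direct - and stops the metering.
ec2.terminate_instances(InstanceIds=[instance_id])
```

A single run_instances call stands in for the click in the management console; this is essentially what the web app does behind the scenes.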

In addition to Amazon, the major providers of computing power are Microsoft, IBM and Google. According to data from market researcher Synergy Research, the sales of Amazon's cloud division in the first quarter of 2015 were larger than those of the next four tech giants combined. Many startups host their data and apps in Amazon's server farms - a market that Microsoft wants to tap into, partly with free starter offers.

Microsoft Azure has meanwhile become a serious cloud platform and is developing at a rapid pace, with new features arriving practically non-stop. Perhaps the most important innovation of the past six months is the introduction of extremely large virtual machines, the G-series VMs: they are considered the largest and most powerful machines currently available in the public cloud, offering the most memory, the highest processing capacity and the largest local SSD storage of any public cloud VM size.
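
To see what such sizes look like from the outside, the Azure SDK can enumerate a region's VM size catalog. Here is a small sketch in Python, assuming the azure-identity and azure-mgmt-compute packages are installed, an authenticated Azure account, and the subscription ID in an environment variable:

```python
# Sketch: listing the G-series VM sizes available in an Azure region.
# Assumes azure-identity and azure-mgmt-compute are installed and
# AZURE_SUBSCRIPTION_ID is set for an account you have access to.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
)

# Filter the region's size catalog for the large G-series machines.
for size in client.virtual_machine_sizes.list(location="westeurope"):
    if size.name.startswith("Standard_G"):
        print(size.name, size.number_of_cores, "cores,",
              size.memory_in_mb // 1024, "GB RAM")
```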

With EC2 (Elastic Compute Cloud), Amazon offers the possibility of using the available resources for virtual machines on demand.

Data protection “Made in Europe”

The cloud boom is accompanied by doubts as to whether storing confidential data in the cloud is a good idea. For German service providers such as QSC, Lufthansa Systems Cloud Lounge and Nionex, which operate local data centers and offer contracts under German law, these doubts open up an opportunity. Despite all the resources of the American providers, German companies can win many customers through the clarity of European legislation. Here the law is not a hurdle, but a unique selling point for cloud brands “Made in Europe”.

Reading tip: The best free cloud storage from Germany

Furthermore, companies that demand legal certainty should not be afraid to negotiate with providers on this basis. Competition in a market as popular as the cloud is fierce, and vendors and resellers are quite willing to negotiate contract details with users and concede many points to the customer's benefit.

Glossary: IaaS, PaaS, SaaS

Hardly any IT hype can do without its own zoo of highly idiosyncratic abbreviations. The service models of cloud providers, which are largely based on virtualization, are divided into three levels.

Infrastructure as a Service (IaaS): The customer rents only basic resources such as computing power, storage and network capacity in the provider's data center. The customer controls what runs on them, but has no influence on the actual hardware in the data center. The abbreviation IaaS is often used synonymously with cloud virtualization.
Platform as a Service (PaaS): Instead of worrying about the underlying infrastructure of operating systems and networks, PaaS customers deal only with their applications. For this purpose, the cloud provider supplies a set of development tools on its fully configured virtual systems; the runtime environments are mostly Java, Python, Ruby or Node.js.
Software as a Service (SaaS): The cloud provider gives customers access to applications it operates itself and leaves them control only over the stored data and some settings. Almost everyone uses SaaS in everyday life: webmail services such as Google Mail, GMX or Web.de fall into this category, for example, because they replicate the functions of a mail program as a web app.