Opting for containers: do you still need (virtual) servers?

Have you opted for dedicated server hardware for reasons of security or performance? That is a perfectly valid choice, and it even used to be the standard in the hosting world. Nowadays, however, the cost and limited flexibility often count against it. Thanks to virtualisation, infrastructure became easier to manage, more scalable and more affordable. And a few years ago, technology took another step forward: with containers, you can scale up almost without limit, in a matter of seconds.

Containers provide the building blocks for the age of the cloud. They are changing the paradigm for anyone who works with virtual servers today. In the age of cloud computing, the physical infrastructure and the operating system disappear completely into the background. You no longer manage servers, but a cloud platform. Or you have it managed for you by a reliable partner such as Combell.

Take a look at our managed container services

 

Advantages of the cloud: robust and extremely scalable

Containers and their management platforms, such as Kubernetes, have developed from cutting-edge technology at Internet giants like Google into a popular ICT building block for businesses. Both on companies’ own infrastructure and in external hosting, the technology is swiftly gaining ground. While specialised knowledge is still the main hurdle, the advantages are numerous.

By running your application in multiple containers side by side, you can expand its resources almost without limit. Naturally, you do need to add hardware to the underlying platform as the actual load keeps rising. And the dozens, hundreds or thousands of incoming connections need to be distributed evenly across those containers.

Fortunately, the technology allows you to manage this largely automatically. Or you can entrust it to Combell. You can then scale up immediately in the event of an unexpected peak, and phase out unused resources again to keep everything affordable.
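To give a concrete idea: on a Kubernetes-based platform (more on Kubernetes below), this kind of scaling and load distribution comes down to a handful of commands. The deployment name ‘myapp’, the port numbers and the thresholds are purely illustrative:

    # Scale out manually to five identical containers
    kubectl scale deployment myapp --replicas=5

    # Or let the platform itself scale between 2 and 10 containers, based on CPU load
    kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80

    # Distribute incoming connections evenly across all running containers
    kubectl expose deployment myapp --port=80 --target-port=8080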

Another advantage of placing multiple identical containers next to each other is a fault-tolerant architecture. If one container gets stuck, the others take over its share of the traffic. And extra containers can be started within a few seconds to prevent overloading. The result is high availability without a hefty surcharge.

While building your application or website, you are continually thinking about the non-functional requirements: response times, number of simultaneous users, performance optimisation, security, and so on. In the age of the cloud, you define all this in advance, during the development phase. Which system resources and configuration parameters does this application need? Have you found the optimum configuration? Then you put everything into the containers and let the underlying platform take care of the rest.
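By way of illustration, here is roughly what recording such a configuration choice looks like on Kubernetes. The deployment name ‘myapp’ and the figures are assumptions, not recommendations:

    # Record the system resources this application needs, as decided during development
    kubectl set resources deployment myapp \
      --requests=cpu=250m,memory=256Mi \
      --limits=cpu=500m,memory=512Mi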

 

Compact and hyper-distributed

Containers are not virtual machines (VMs). In the classic form of virtualisation, each physical server hosts a number of ‘virtual’ servers until the limits of the underlying hardware are reached. In that setup, a hypervisor layer runs on the physical server, multiple VMs run on top of it, and each VM carries a fully-fledged operating system plus one or more software applications. Containers have much less overhead: with a classic VM approach, you need up to five times more computing power for the same workload. It is also difficult, in a classic VM approach, to protect virtual servers against a neighbouring application that consumes a disproportionate amount of computing power, memory or storage.

So, how do you switch over to containers? It is important to properly understand the consequences, particularly for the way you develop applications yourself, or have them developed by others. It genuinely means working differently, and even thinking and dividing responsibilities differently. Containers fit perfectly into a DevOps approach, in which the classic dichotomy between software development and infrastructure management disappears.

Each container holds the configuration details, libraries and all other components the application needs to start up immediately. No installation step is involved, which makes capacity expansion rapid and simple: starting up new, identical containers is fully automated. This is how Internet giants like Google work.
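A minimal sketch with the Docker command line makes this concrete; the image name ‘myapp’ and the port numbers are examples only:

    # Build the image once: code, libraries and configuration are baked in
    docker build -t myapp:1.0 .

    # Start identical containers from that image in seconds, with no installation step
    docker run -d --name myapp-1 -p 8081:8080 myapp:1.0
    docker run -d --name myapp-2 -p 8082:8080 myapp:1.0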

 

One large application or many small ones?

Incidentally, the recent trend for more complex applications is to break each application down into smaller functional elements, or microservices. This has important operational benefits. Large code changes can be packaged and rolled out as a new container, while the other elements remain unchanged, which avoids a complicated tangle of dependencies. Furthermore, you can scale up very selectively, deploying extra resources precisely where a slowdown or overload occurs.

In reality, you seldom set up and roll out just one or two containers. You run them in separate environments for testing, acceptance and production, and in a microservices architecture each of those environments contains several, or even several dozen, containers. To work in a performant and fault-tolerant way, you also duplicate each container, and scale up further wherever necessary. Even though it is not difficult to start new containers quickly from an existing image, it is worth automating the procedure as far as possible, certainly when dozens, hundreds or even thousands of containers are involved.
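Automating that procedure can be as simple as keeping your container definitions in files and applying the same set per environment. A sketch, assuming those definitions live in a hypothetical k8s/ directory:

    # One set of container definitions, rolled out identically per environment
    kubectl create namespace testing
    kubectl apply -f k8s/ --namespace=testing

    kubectl create namespace production
    kubectl apply -f k8s/ --namespace=production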

What typically does not sit inside the container is any form of persistent data storage. Your cloud application does receive and process data, but stores it in a back-end database, or passes it on via web services or an API platform to yet another system. You no longer set up your application(s) and your database on one and the same system, although for security reasons that was already a bad idea in the past.
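In practice, you point the container to that external storage through its configuration, for example via an environment variable. The variable name and database address below are hypothetical:

    # The container itself stores nothing; it talks to a separate back-end database
    docker run -d -e DATABASE_URL="postgres://db.example.com:5432/appdb" myapp:1.0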

It is precisely the unchanging, stateless nature of a container that means you can stop, duplicate or restart it at any time. Your application does, of course, need to be designed with this in mind. A typical older application is not well suited to containers unless it undergoes thorough re-engineering. Come and talk to Combell’s specialists about that.
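On a platform such as Kubernetes, you can see that disposability at work: delete a container and an identical replacement is started automatically. The pod name below is made up:

    # Stopping a stateless container is harmless...
    kubectl delete pod myapp-6d5f7c9b4-abcde

    # ...because the platform immediately starts an identical replacement
    kubectl get pods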

 

How can you administer and automate this?

Anyone who moves to a microservices architecture ends up with dozens, hundreds or even thousands of containers to manage. Too many to keep track of, perhaps? Fortunately, technology is once again your trusted ally. Instead of a system administrator playing a single instrument, automation lets you take on the role of the conductor, the orchestrator, from now on. Does this sound complicated? Combell can show you the way if necessary.

Docker is by far the most popular tool for building images and running containers. It works on standard hardware and with the most widely used operating systems, such as Linux, Windows and macOS. You put whatever you want into the image, the blueprint of your application: your code, in practically any language or technology, together with everything that goes with it. You then roll it out into containers as required, via ‘continuous integration’ platforms such as GitLab or Jenkins, or by other means. Each new container automatically acquires the very latest changes and parameters. That results in a colossal time saving and minimises the chance of human error.
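In its simplest form, the steps such a pipeline automates boil down to two commands per release; the registry address and version tag are illustrative:

    # Build the new image and publish it to your registry
    docker build -t registry.example.com/myapp:1.1 .
    docker push registry.example.com/myapp:1.1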

A further benefit is that containers run just as well on your server infrastructure, hosting or cloud infrastructure as they do on your laptop. Thanks to containers, software developers can roll out, test and improve new applications themselves with minimum difficulty. This greatly lowers the risks involved in the eventual go-live.

Kubernetes is the most popular orchestration platform for containers. Once you have packaged your application into images, using Docker for example, Kubernetes ensures that everything fits together and keeps running. Orchestration enables you to scale up effortlessly, even instantaneously and fully automatically, based on increased traffic or longer response times. User traffic is distributed uniformly, and securely, across the active containers and the back-end storage systems. And provided at least two containers are running, a new version of an active container can be rolled out without interruption.
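Such an uninterrupted roll-out can look as simple as this, assuming a hypothetical ‘myapp’ deployment with at least two running containers:

    # Point the deployment at the new image; containers are replaced one by one
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.1

    # Follow the roll-out; traffic keeps flowing to the containers still in service
    kubectl rollout status deployment/myapp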

To reap the full benefits of container orchestration, it is best to thoroughly consider all the options. There is no shortage of powerful new tools, and the consequences for your software architecture are profound. Without doubt, it is a very worthwhile process and Combell can help you progress down that route.

Using managed Kubernetes saves you a substantial part of the learning curve, and you can immediately achieve impressive results. With Combell at your side, you have an experienced partner at hand to assist in crucial design choices. Our specialists let you focus more on the applications and the needs of your business. They know the pitfalls and dare to ask the right questions.

Combell’s enterprise storage system can also provide your storage solution. Thanks to its integration with Kubernetes, it looks like local storage to the application, while the data itself is kept safe on a separate system. In this way, we combine reliability with performance.

Combell does not leave you to do all this on your own. Together, we can make the move toward software development for the age of the cloud.

Discover managed Kubernetes at Combell