The wonderful world of containers – LinuxCon recap

on Sep 13, 2016 • by Aldin Basic

Recapping the cool things people are doing with Linux - even after 25 years! In particular, we look at containers and why they are so popular...


Just over two weeks ago, my colleague, Justin Reock, and I had the opportunity to go to Toronto to attend and celebrate the 25th anniversary of Linux at LinuxCon North America.


It was August 25, 1991, when Linus Torvalds released his famous message announcing that the project was “just a hobby” and mentioning that it would not grow into anything big and professional like GNU. Yet here we are 25 years later, and Linux has grown far bigger, and become far more professional, than Linus ever imagined. Linux now powers huge portions of the internet's infrastructure, data centers, stock exchanges, and almost all of the world’s fastest supercomputers.

The event felt surreal to both of us, from meeting some of the Linux Foundation folks to attending keynotes like Cory Doctorow’s DRM presentation. We wanted to take a moment to share the cool things people are doing with Linux, and in particular the effect that containers are having now and will continue to have in the future.

Why containers are popular


It was no surprise that LinuxCon and ContainerCon were held together. Going back to Solaris Zones, fast forwarding a little to the LXC project, and looking at the present state of modern implementations such as Docker and Kubernetes, the flexibility and adaptability of UNIX technology has paved the way for the container revolution. The ability to build large-scale, elastic, and distributed systems requires a resilient system kernel capable of operating within volatile environments.

A container environment is usually the most minimal Linux install possible, contained in a filesystem that is compartmentalized into a subsection of the container host’s filesystem. There is no “Windows-lite” or “OSX-lite” — you get all or nothing. Since what we think of as a Linux OS is just the Linux kernel surrounded by a suite of supporting software, the layers can be stripped away trivially until you have only the basic functionality you need to host an application. A well-designed container starts at this minimal stage, and then only the components necessary to support the application are added into the environment. Coupled with the “everything is a file” UNIX philosophy, a design like this is simply not possible in any other modern operating system.
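This strip-down-then-add-back approach is easiest to see in a container image definition. The sketch below is a hypothetical Dockerfile (the application name and file paths are illustrative, not from the original post): it starts from a minimal base image and layers in only what one application needs.

```
# Hypothetical example: a minimal container image for a single Python app.
# Start from a tiny base image rather than a full distribution.
FROM alpine:3.4

# Add only the components this application actually needs.
RUN apk add --no-cache python3

# Copy in the application itself (app.py is an illustrative placeholder).
COPY app.py /app/app.py

# The container runs this one process on top of the shared host kernel.
CMD ["python3", "/app/app.py"]
```

There is no kernel inside the image at all — the container borrows the host's running Linux kernel, which is why the image can stay so small.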

We had the opportunity to see what container companies, both large and small, are doing with these new options for the deployment of applications. The land rush is all about container management and there are a lot of approaches to this. What is setting the container management and microservices trend apart from things like the virtualization shift is that the concepts really aren’t very new. Developers have known for decades that a modular approach to application design lends a lot of benefits. The ability to modify parts of an application independently from others, or give more resources to a more heavily-used portion of an application looks a lot like Service Oriented Architecture at first glance. When you peer beneath the surface, though, you see very new technology making it possible to implement modular development in a completely different way. Automation frameworks like Chef and Puppet, CI technologies like Jenkins and yes, management solutions like Kubernetes allow us to reap the benefits of modular development in a way we’ve never seen before.
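That ability to scale or resource one part of an application independently of the others is exactly what management solutions like Kubernetes expose as configuration. The fragment below is a hedged sketch of a Kubernetes Deployment (the service name, image, and numbers are hypothetical, using the Deployment API as it existed around 2016), showing one component of a modular application scaled and resourced on its own:

```
# Hypothetical Deployment for one service of a modular application.
apiVersion: extensions/v1beta1   # Deployment API group circa 2016
kind: Deployment
metadata:
  name: catalog-service
spec:
  replicas: 5                    # scale just this component independently
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example/catalog:1.2    # illustrative image name
        resources:
          requests:
            cpu: "500m"               # give more resources to a busier service
            memory: 256Mi
```

A heavily-used service gets more replicas and a bigger resource request; the rest of the application is left untouched — the modular benefit the SOA era promised, delivered through declarative tooling.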

One thing is clear: 25 years after Linus Torvalds gave us Linux, its freedom and flexibility continue to open new doors for creativity and innovation across almost every facet of software engineering.
