Have you heard of Container Wars? It’s an entertaining documentary series in which gangs of adventure-seekers bid on the contents of unclaimed storage containers. The fun part is that they have to place their bids before opening the container – so they never know in advance whether it’s a hidden treasure worth big cash or just a pile of junk.
Ironically, something quite similar is happening in the world of Linux containers these days.
The Docker ecosystem is buzzing with tension. Enterprise software vendors and technology thought leaders are expressing their dissatisfaction with Docker Inc. and calling for open container standards (whatever that may mean). There is even some talk of a fork… Part of this negativity stems from actual bugs and deficiencies in the latest Docker 1.12 release, although other opinions and grievances have a longer history.
I would argue that bugs per se aren’t the real issue. As we all know, there is no software without bugs. In the age of Continuous Delivery, the quality of software is measured not by the number of bugs but by the speed with which they get fixed. It’s OK to move fast and break things, as long as we listen to our customers and respond quickly when things do break.
As somebody correctly put it in a very emotional discussion thread on HackerNews: “don’t go for the bleeding edge if you have issues with the bleeding”. Enterprises have in fact had this attitude for as long as I can remember – always lagging a couple of versions behind the latest tech, just to stay on the safe side.
And, to be fair, Docker 1.12 didn’t really break anything for most mainstream use cases. The complaints were mostly born of very high expectations. After all, features like the integrated swarm mode and routing mesh are clearly exciting – which made many early adopters jump right in and try them out in their playgrounds. Well, a few things didn’t work as described and a few bugs were discovered. No big deal, right?
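For context, the swarm mode that drew those early adopters in really is just a couple of CLI calls away. A minimal sketch of what they were playing with (assumes Docker 1.12+ with a running daemon; the service name `web` and port numbers are arbitrary choices for illustration):

```shell
# Turn the current host into a single-node swarm manager
docker swarm init

# Create a replicated service; -p 8080:80 publishes the port via
# the new routing mesh, so it answers on every node in the swarm
docker service create --name web --replicas 3 -p 8080:80 nginx

# Inspect the service and where its tasks landed
docker service ls
docker service ps web
```

No separate Swarm cluster setup, no external key-value store – which is exactly why expectations ran so high.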
So even though engineers may complain and have heated (and somewhat justified) discussions around this or that bug, the real wars are fought over two things: ideology and money.
Let’s look at the ideological (or rather architectural) concepts first. When we say ‘Docker’ we actually mean two things: the Docker container image format, and the Docker engine that takes care of instantiating containers from those images and managing their lifecycle.
On the surface, this is a clean separation which allowed other infrastructure tools and frameworks to choose whether to support the image format only or also interface with the engine for running containers.
It’s just that Docker the company never quite endorsed the ‘image only’ model. Docker founder Solomon Hykes famously claimed (in a heated Twitter discussion) that all tools outside the Docker engine “have partial, broken support” for the image format. The message is clear: the image format is inseparable from the engine, changes to the two are coupled, and therefore no other container runtime will ever be as good as the Docker engine at running these containers. Commercial interests aside (we’ll deal with those in a moment), this approach can be seen as going against the guiding Unix philosophy of modular, decoupled software components. That alone caused quite a few software architecture purists to raise their eyebrows. The Docker 1.12 release then deepened the controversy by going further down the ‘batteries included’ path, integrating Swarm orchestration, DNS routing and other goodies into the engine itself.
All of this has led many professionals to feel that the architectural decisions of Docker Inc. are driven by business incentives rather than by technological correctness or the best interests of the open-source community. (Let’s not forget that Docker the software is still an open-source project, enjoying the contributions of a large number of independent engineers all over the world.)
And this brings us to the commercial considerations of the conflict:
In just a few years Docker has become a super-hot technology brand that every software vendor now wants a piece of. But the brand belongs to Docker Inc., and they rightfully want to preserve full ownership of both the name and the technology. After all, they are a commercial organization, and their main focus should be serving customers and earning money for their stakeholders – with the brand name as their main marketing vehicle.
On the other hand, enterprise software vendors have been building their offerings around this hot tech. The container orchestration/scheduling space is booming. Google’s Kubernetes (which can rightfully be considered the most mature container orchestration solution out there) is claiming dominance, with support from Red Hat, the OpenStack Foundation, and CoreOS, to name a few. And there are other contenders – Marathon, Nomad, Kontena. The built-in, easy-to-use swarm mode is a serious potential threat to all of them.
The CoreOS folks were probably the first to publicly criticize Docker’s architectural and strategic decisions when they launched rkt – a competing container runtime. Then the industry’s pull for standardization and control created the OCI – the Open Container Initiative. Paradoxically, Docker has been the main contributor to the project’s codebase, donating its container format and runtime – runC – to the OCI to serve as the cornerstone of the new effort.
Which, in my eyes, makes all this heated discussion about forking the Docker project quite meaningless. Whoever wants standardized, stable, bare-bones containers can just build their solution around runC – and let Docker Inc. continue to innovate and make money.
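And ‘bare-bones’ here is quite literal. A sketch of the standard OCI bundle workflow with runC (assumes `runc` is installed, root privileges, and some way to obtain a root filesystem – here I use a busybox image export purely as an example):

```shell
# An OCI bundle is just a directory containing a rootfs and a config.json
mkdir -p bundle/rootfs
cd bundle

# Populate the rootfs from any source, e.g. exporting a busybox container
docker export "$(docker create busybox)" | tar -C rootfs -xf -

# Generate a default OCI runtime spec (writes config.json)
runc spec

# Run the container; "demo" is an arbitrary container ID
sudo runc run demo
```

No daemon, no registry, no orchestration – just a process in namespaces, which is exactly the decoupled building block the standardization camp is asking for.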
Emotions aside, it’s clear that things will continue to change and evolve. The container saga is only beginning. New players will come and disrupt the market; new games will lead to new battles. Some of the Docker fork proponents have been saying that infrastructure should be boring and stable. Stable – yes. But boring – no way! Nobody wants boring anymore. We live in an age of technological idealism, where innovation at all levels is the ultimate goal. Especially so in IT. This requires us to be flexible, adaptive and resilient to chaos.
That’s the reason why at Codefresh we don’t enforce any particular container deployment system but instead strive to provide a flexible way to interact with whatever orchestration/scheduling mechanism you choose. Our recent addition of support for JFrog Artifactory/Bintray, Quay.io and private registries is a step in that direction. We sincerely believe in flexibility and freedom of choice. Use the tech you like, use the tech that feels right, experiment and innovate. And if you wanna fork – fork, don’t talk.