(Docker Compose + Docker Swarm) or Kubernetes

Recently the Docker team released Docker Swarm 1.0 as production ready, and being an enthusiast who loves to test and try new things, I gave it a try.

Some time ago I wrote a post about how to use Kubernetes as a development environment, using PHP and Symfony for the example. This time I will only talk about the pros and cons of these two container cluster solutions. So:

Kubernetes

Developed and supported by Google, and also released at version 1.0 (production ready) earlier this year, Kubernetes brings its own perspective on what a container cluster should be and how it should work. Its concepts (Pods, Controllers, Services, Labels) let you orchestrate your application infrastructure and the relationships between its parts, and deploy that topology to any environment you want just by pointing kubectl at the right cluster. Some other tutorials suggest using labels to differentiate environments if you prefer to manage just one cluster.
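As a rough sketch (the context names and manifest file here are made up), deploying the same topology to different environments is just a matter of switching the kubectl context:

```shell
# Deploy the topology to a hypothetical "staging" cluster
kubectl config use-context staging
kubectl create -f my-app-rc.yaml

# The exact same files deploy to production, just by switching context
kubectl config use-context production
kubectl create -f my-app-rc.yaml
```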

The Good:

It brings Google's experience in orchestrating and deploying applications on containers, and with it a truly production-ready set of options that eases the work of DevOps.

Keep Alive:

A really nice feature is that you can configure your Replication Controller to always keep alive the exact same number of containers for a specific app. That means that if for some reason a container stops, a new container is created with a fresh copy, keeping your system up through these disaster situations; it also means that if for some reason you have more than the expected number of containers, it will tear down exactly the extra ones, so you don't need to worry about spending on extra resources.
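A minimal Replication Controller manifest (names and image are hypothetical) shows how the desired count is declared; the controller kills or starts containers until reality matches `replicas`:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  replicas: 3              # always keep exactly 3 copies running
  selector:
    app: my-app            # pods matching this label are counted
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.0
        ports:
        - containerPort: 80
```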

Load Balancing:

This is another really great feature: a Service configures load balancing in front of all your app containers, so you use it as the entry point to them. It also complements the previous feature: if a container is recreated from scratch, it is automatically added back to the load-balancing pool.
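A Service manifest to match (again with hypothetical names): any pod carrying the `app: my-app` label, including freshly recreated ones, sits behind the same entry point:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app            # traffic is balanced across all pods with this label
  ports:
  - port: 80
    targetPort: 80
```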

Scale Up/Down:

Scaling your app up or down is really easy: you specify the total number of replicas you want running and Kubernetes converges to it. For me it is explicit enough, since you state the desired count in the command and can even assert the expected current count as a precondition, so there is no way to make a mistake by accident. Example:

if you have 3 running copies of your app and want 2 more, you ask for 5 replicas; if you want 2 fewer, you ask for 1. If the cluster's actual count does not match the current count you asserted, the command simply refuses to act.

This is really important to me, since in production you want to make zero mistakes.
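A sketch of that workflow with `kubectl scale` (the controller name is made up): the target is an absolute number, and the expected current count can be asserted as a safety check:

```shell
# Go from 3 replicas to 5; refuses to act if there are not currently 3
kubectl scale rc my-app --current-replicas=3 --replicas=5

# Scale back down, again guarded by the expected current count
kubectl scale rc my-app --current-replicas=5 --replicas=1
```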

Rolling Update:

And the feature I love the most; this one is really amazing: you can deliver updates to all the running instances of your app with a single command and zero downtime, and I think that is enough explanation on this point. You can check it out in this demo video:
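In the kubectl of this era that single command is `rolling-update`; a hedged sketch (controller name and image tag are assumed):

```shell
# Replace the pods of the my-app controller one at a time with the new image,
# pausing between replacements so the service never goes fully down
kubectl rolling-update my-app --image=my-registry/my-app:2.0 --update-period=10s
```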

The Bad:

API and Configuration:

K8s brings a whole new API and file definitions, so if you were used to working with containers before, using Docker and Docker Compose for example, you will need to learn a whole new set of commands and configuration files in order to create and orchestrate your cluster and topology. Depending on your topology, this can get a little complicated.

Cluster configuration:

K8s has a specific set of configurations for each kind of cloud (Google Cloud, Amazon EC2, etc.), which introduces a level of complexity that is not really needed or wanted on a first install. Another consequence is that you cannot, at least not easily, configure your k8s cluster across multiple providers, so you will be anchored to the one cloud solution you chose from the start.

Docker Compose and Docker Swarm

Directly from the Docker team we get Compose and Swarm, tools that, following the UNIX paradigm

Write programs that do one thing and do it well

allow us to design our application topology and deploy it into our cluster.

Docker Compose:

Exposes an easy API to design our application topology: creating all our services, configuring them, connecting them, and even scaling them.

The Good:

Compose follows the same API as Engine, giving us the same set of tools we are used to working with directly on the Docker CLI, and extends it with some additional commands. It is really easy to configure in the docker-compose.yml file, and you can even extend those files so you can have a set of configurations for each environment.
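For example (file and service names are illustrative), a per-environment file can build on a shared one through the `extends` mechanism available in Compose at the time:

```yaml
# common.yml: shared service definition
app:
  build: .
  ports:
    - "80"

# docker-compose.prod.yml: production overrides extending the common service
app:
  extends:
    file: common.yml
    service: app
  environment:
    - SYMFONY_ENV=prod
```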

The Bad:

Scale:

The scale command. This point may be a little controversial, since there is not really a reason to put it in the bad section; it does the job as expected. But from my perspective it could be designed a little more semantically. The command takes the total number of instances you want running: if you run scale=3 on the app service, you end up with 3 running instances, as expected. But what if you forgot how many instances of some service are running and you want to scale down by only 2? If you type scale=2 you will end up with 2 instances, not with 1 as intended; the same goes for scaling up. Yes, you can say: learn the tool, then use it. Fair. But it would be really great if the command explained itself and helped the ops; something like scale-down=2 would make your life easier.
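To make the point concrete (the service name is assumed): `scale` takes the desired total, not a delta:

```shell
# 3 instances of the app service are now running
docker-compose scale app=3

# Intending to "remove 2" but typing 2 leaves you with 2 instances, not 1
docker-compose scale app=2
```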

Keep Alive:

Another missing option is a supervisor that keeps track of how many instances of a service are supposed to be running at all times; this kind of disaster prevention would really help DevOps lives. Yes, you can say this can be accomplished using the restart policy, but what if it was one of your cluster nodes that melted down? Such a feature would start the missing instances on a new node, keeping our app running as expected.
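The restart policy mentioned above is set per container; a sketch (image name assumed). It restarts a crashed container on the same host, but does nothing if the host itself dies:

```shell
# Engine restarts this container whenever it exits, but only on this node
docker run -d --restart=always my-registry/my-app:1.0
```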

Rolling Update and Load Balance:

Having rolling-update functionality and, of course, a load-balancing feature would raise the Docker tools to the production-ready stage we all want them to reach.

Swarm:

The cluster solution from the Docker team. It does the job really well, and like the other tools it follows the Engine API.

The Good:

Using Docker Machine is the recommended way to create a Swarm, because Machine does a great job provisioning hosts with the Docker Engine. Machine lets us deploy Docker onto almost any host out there, physical or cloud. Swarm then lets us connect all the other nodes through the masters, and yes, it can be multi-provider. For me this is great: not being anchored to any cloud provider, and being able to spread my apps across multiple providers, is a dream.
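A sketch of that flow with Docker Machine (the driver and names are placeholders); each node can even live on a different provider, as long as it joins with the same discovery token:

```shell
# Generate a cluster discovery token
TOKEN=$(docker run --rm swarm create)

# Provision the Swarm master on one provider
docker-machine create -d virtualbox --swarm --swarm-master \
  --swarm-discovery "token://$TOKEN" swarm-master

# Provision an agent node, potentially with a different driver/provider
docker-machine create -d virtualbox --swarm \
  --swarm-discovery "token://$TOKEN" swarm-node-01
```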

Conclusion:

Maybe it is not fair to compare these sets of tools, since each of them is designed to accomplish one task, but in the DevOps world a set of tools that brings more to us will certainly have more to win.

From my perspective the Docker team is doing a marvelous job. Their tools are just young, but I am pretty sure that in the near future all of them will converge and stabilize, delivering a great set of tools that interact with each other and allow us to create anything we need. In the meantime I advise using k8s, since it is more production ready for all the tests you may encounter in the day-to-day DevOps world.

Kubernetes: a how-to with PHP

Kubernetes
Recently I have been reading a lot about containers, using Docker extensively as my default development environment and, for a few projects, as the deployment delivery method. About a year ago I also heard about this amazing project from Google called Kubernetes, which aims to be a pilot that helps us with the common DevOps tasks that bring us pain every day. If you are a developer or DevOps engineer who performs these tasks every day, you will love Kubernetes (k8s).

PHP has been my preferred development language for almost 10 years now. I have used just a few of the growing collection of frameworks, Symfony being the one I prefer and use in every project where I have the opportunity. So I will use these tools for the example I plan to explain and share in this post.
The aim of this post is to share an example I made to test k8s locally, so we can see it in action and plan strategies for including it in our current projects and deployment pipelines. The code and configuration explained in this post can be found in the k8s_php_test GitHub repository.

What to expect:

  • A working k8s cluster
  • A Symfony2 application that will display all the nodes on the k8s cluster running the same app
  • Scaling the app up and down
  • A rolling update

Requirements:

First we need to prepare our machine to run the k8s cluster; for that we will use Docker. Depending on which OS you are using (Linux/Ubuntu being the preferred one), you should:

Then we will follow the instructions on the k8s official page to set up our k8s cluster. Also ensure you have the kubectl binary in your $PATH.
Lastly, get the code from the GitHub repository and move into the directory:
git clone git@github.com:bitgandtter/k8s_php_test.git && cd k8s_php_test

And update our Symfony dependencies:

composer update

The fun begins:

Now we can start by building our image:
bash development_tools/cluster_config/app/build-image.sh

And start our replication controller and service. As the k8s official web site says about Replication Controllers and Services:

“A replication controller ensures that a specified number of pod “replicas” are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. “

“A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service.”

We will start our own using this command:
bash development_tools/cluster_config/app/create.sh

After this we should have one Replication Controller and one Service, both with the label k8s-php-test. The RC will bring up one container with our example application developed on Symfony, and the Service will create a load balancer in front of our app. We can see all this info by running this command:
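The actual manifests live in the repository under development_tools/cluster_config/app; as a rough sketch of what the RC part presumably looks like (image name and ports are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: k8s-php-test
  labels:
    name: k8s-php-test
spec:
  replicas: 1              # start with a single copy of the app
  selector:
    name: k8s-php-test
  template:
    metadata:
      labels:
        name: k8s-php-test
    spec:
      containers:
      - name: k8s-php-test
        image: k8s-php-test
        ports:
        - containerPort: 80
```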

bash development_tools/cluster_config/get_pods_and_services.sh

This should output something like this:
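The exact columns vary with the kubectl version, but for one pod and its service the listing looks roughly like this (pod hash and IPs are illustrative):

```
NAME                 READY     STATUS    RESTARTS   AGE
k8s-php-test-a1b2c   1/1       Running   0          2m

NAME           LABELS              SELECTOR            IP(S)       PORT(S)
k8s-php-test   name=k8s-php-test   name=k8s-php-test   10.0.0.42   80/TCP
```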

Now you can go to your browser, navigate to the IP of the service, and see a page with our friend Tux on the single running instance, also displaying the IP of the container.

What next:

Great, we have successfully created a k8s cluster and populated it with our app using Replication Controllers and Services. But we could do that using just Docker without much pain, so: what makes k8s a better solution for our common daily DevOps tasks?

Scale

This is one of the tasks that can be really hard to accomplish, and if you don't have a tool that provides this kind of strategy out of the box, you will end up writing a lot of scripts (bash, Puppet, Ansible, Python) to help with provisioning and deployment. So here comes k8s to save the day. Let's scale our app up to 6 instances.

bash development_tools/cluster_config/app/scale-up.sh

If you go to the browser you can see how the new instances are being started.
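The scale-up.sh script presumably boils down to a single kubectl call along these lines (the RC name is assumed from the label used earlier):

```shell
# Declare the desired total; k8s starts pods until 6 are running
kubectl scale rc k8s-php-test --replicas=6
```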

Rolling Update

Another common task is delivering updates to your app. Certain measures need to be taken into consideration: zero downtime, not compromising the user experience, and a gradual, incremental update of the service. Again, k8s makes this task easy for us. Let's test it:
  • First let's make a change to our app, for example the displayed picture; let's put another costume on our little friend Tux. Open the Twig template and replace image.png with image2.png:

before: framework/app/Resources/views/default/display.twig.html

{% image '@AppBundle/Resources/public/images/image.png' %}

after: framework/app/Resources/views/default/display.twig.html

{% image '@AppBundle/Resources/public/images/image2.png' %}
  • Build a new image
bash development_tools/cluster_config/app/build-image.sh
  • Roll the update
bash development_tools/cluster_config/rolling_release.sh

Again, go to your browser and you can see how the instances are being updated; a really nice process to watch.
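Under the hood, rolling_release.sh presumably drives kubectl's rolling-update; a hedged sketch, assuming the RC name from earlier and a new image tag:

```shell
# Swap the pods of the controller over to the freshly built image, one at a time
kubectl rolling-update k8s-php-test --image=k8s-php-test:v2
```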

Conclusion

K8s can be really helpful both for full-time DevOps engineers and for developers; I also plan to make a post and example on how to use k8s as a development environment. The repository also contains examples of how to scale down and how to delete the k8s cluster.
I really hope this is helpful and sparks curiosity about this new piece of technology if you are a beginner.