Rails 5 and Docker: Lessons Learnt

Rails 5.0 is now just around the corner, introducing cool new features for WebSockets and API-only applications. Building on the improvements of Rails 4, Rails has become a mature and thriving platform for web development.

With modern DevOps pipelines and the bleeding-edge container technologies built around Docker, it is relatively easy to scale and run Rails applications in production on any cloud: private, public or even hybrid. On paper. The fact is, Rails is not the easiest framework to run in containers:

  • The size of the official Rails Docker image is gigantic.
  • Deployment takes multiple steps, e.g. creating the database on the first run and running migrations on every deploy.
  • There is no built-in way to manage the secrets a Rails application requires, such as SECRET_KEY_BASE, database passwords and third-party service access tokens.

The Goal

In this article we'll explore how to create a simple Rails 5.0 application, package it as a Docker image and set up a platform for running it at virtually any scale on any infrastructure, while addressing the most common pain points along the way.

You can grab the full source code from GitHub or see the completed application running live. We hope you enjoy this article and find it useful. So, let's get started!

Getting Started

Today, most developer websites promote various container technologies. It's tempting to start exploring all these fancy technologies, but it is very time-consuming. Worse, it distracts you from the most important thing: creating your killer application!

Before thinking about any container technology, you should focus on implementing your application. It is relatively easy to containerize your stuff afterwards if you have followed Rails best practices and the twelve-factor methodology during development. So, describe your goals, plan what you want to achieve and enjoy the ease of development that Rails has to offer.

You can use your favorite development tools while developing your application. In our example, we've created a complete Rails 5 app backed by PostgreSQL, Redis and Sidekiq.

The app authenticates users with the GitHub OAuth API and stores each authenticated user's access token in the PostgreSQL database. Naturally, you can tweak our example or create something amazing of your own!

Running the Application Locally

Eventually, you'll reach the point where you want to run your app in a container. The first step here is to choose the right Docker base image. Previously we have shown how to build a minimal Docker image for Rails based on Alpine Linux. This is a very valid approach and we use it in our example application as well.

FROM ruby:alpine

# Add the Gemfiles first so the bundle install layer is cached between builds
ADD Gemfile /app/
ADD Gemfile.lock /app/

# Install build and runtime dependencies, then install the gems
RUN apk --update add --virtual build-dependencies build-base ruby-dev openssl-dev libxml2-dev libxslt-dev \
    postgresql-dev libc-dev linux-headers nodejs tzdata && \
    gem install bundler && \
    cd /app ; bundle config build.nokogiri --use-system-libraries && bundle install --without development test

# Add the application code and run it as an unprivileged user
ADD . /app
RUN chown -R nobody:nogroup /app
USER nobody

ENV RAILS_ENV production
WORKDIR /app

CMD ["bundle", "exec", "rails", "s", "-p", "8080"]

The easiest way to test the application with Docker locally is to use Docker Compose. The required services and configuration can be described in a single docker-compose.yml YAML file.

Our example application consists of db, redis, sidekiq and app containers. The database can be set up and migrated by running docker-compose run app rake db:create and docker-compose run app rake db:migrate db:seed, and the application can then be started with docker-compose up. A sketch of such a Compose file is shown below.
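
As an illustration, here is a minimal docker-compose.yml sketch for this kind of stack (the old Compose v1 file format of the time). The service names match the ones mentioned above, while image tags, passwords and URLs are placeholders you would adapt to your own setup:

db:
  image: postgres:9.5
  environment:
    # placeholder password for local testing only
    - POSTGRES_PASSWORD=secret

redis:
  image: redis:3.0-alpine

app:
  build: .
  command: bundle exec rails s -p 8080 -b 0.0.0.0
  ports:
    - "8080:8080"
  links:
    - db
    - redis
  environment:
    # placeholder credentials matching the db service above
    - DATABASE_URL=postgres://postgres:secret@db:5432/app_production
    - REDIS_URL=redis://redis:6379

sidekiq:
  build: .
  command: bundle exec sidekiq
  links:
    - db
    - redis
  environment:
    - DATABASE_URL=postgres://postgres:secret@db:5432/app_production
    - REDIS_URL=redis://redis:6379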

Rolling to Production

So far things have been quite simple. When rolling a containerized stack to production, including the application, databases and possibly some other related (micro)services, there are many questions to answer:

  • How big will this app be? How many users will it serve?
  • Do you want your application to be infrastructure agnostic or to lean heavily on a single cloud provider?
  • How do you run databases or store other persistent data?
  • How do you scale the application and handle load balancing?
  • How do you pass sensitive data to your application and where do you store that data?
  • How can the application be deployed and updated with zero downtime?

No need to worry! We will walk through how to address all these questions in the next few sections.

Setting the Goals

The first thing is to describe your goals and set targets that suit your application. Not all applications need hundreds of servers and a large orchestration platform. If you are hosting a single Rails-powered website with a single database instance, you don't necessarily need Docker at all. In most cases three servers are enough for a high-availability setup.

Infrastructure Agnostic vs. Vendor Lock-in

Sometimes the best option might be to rely heavily on a single cloud provider. If you really want to take the easy road, you can enjoy a wide array of managed services designed to make your life easier. The more managed services you use, though, the more deeply you are tied to that cloud provider.

Alternatively, if you design your application to be infrastructure agnostic, you can easily switch cloud providers, use your own infrastructure or use several cloud providers at the same time. In this model, your application must be designed so that it does not depend on any of the managed services offered by cloud providers.

Whichever route you take, it's a fundamental decision. With the new container technologies, you can also deploy and run many of the most common managed services yourself. Just remember: even if you can run those services on your own, the administration and maintenance tasks are still yours to handle.

When we started developing Kontena, we wanted to create an infrastructure-agnostic container platform. We wanted to empower developers to use whatever infrastructure they like: cloud, on-premises or even hybrid. With the built-in overlay network you can run applications across different availability zones or even different cloud providers, and everything still works.

Naturally, it's not always such a black-and-white situation. Sometimes it's fine to mix in some managed services when it simply does not make sense to create, manage and administer something similar on your own.

Running Databases

Over the past few years, running stateful services (such as databases) in containers has been very difficult. Therefore, many businesses have decided not to even try running their databases in containers and instead use some kind of managed database service.

By using these services you don't need to maintain your own database servers. On the other hand, you are at the mercy of the service provider: if something breaks, the only thing you can do is wait until it gets fixed.

Luckily, times are changing and there are now some real alternatives to such managed services. Several leading container platforms provide built-in support for running stateful services.

With Kontena, using stateful services is a piece of cake. You just define the service as stateful and Kontena takes care of the rest. The benefits are obvious: you have full control of your data, you can run it on any infrastructure and there is no need to be locked in to any managed service.
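
As a minimal sketch, declaring our PostgreSQL database as a stateful service in kontena.yml could look something like this (the image tag and password are placeholders; check the exact keys against the Kontena documentation):

db:
  image: postgres:9.5
  # tell Kontena to treat this service as stateful and keep its data volume
  stateful: true
  environment:
    # placeholder credentials for the example
    - POSTGRES_PASSWORD=secret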

It's important to note that when you run your own database service (in a container or on a traditional server), you are responsible for all the maintenance, administration and backup tasks!

Deploying to Production

Containerized Rails applications are not very developer friendly when it comes to deploying to production. For example, consider how you would use Docker Compose to set up the db containers first, then run migrations and seed the database before starting your application instances. Furthermore, how do you handle this once you have scaled your application? On which application instance do you run your migrations?

Let's assume we want to run rake db:setup after the first deploy, on the first application instance only. We also want to run rake db:migrate after every deploy, again only on the first application instance. From the developer's point of view, the deploy should be simple enough to run with just a single command.

There are a few options here. The first is to write a bunch of scripts that somehow automate all of this; for Rails developers, that is not something they are used to. The second option, which we prefer, is to use a container platform that is capable of automating all these nasty bits.

Kontena is capable of deploying services in multiple steps, making it an ideal platform for running Rails applications. You can define hooks, i.e. commands that are triggered at certain steps of the deployment process on selected instances. The whole deploy process is triggered with just a single command.

With Kontena, services are described in kontena.yml, a YAML file that defines a Kontena application with one or more services. It uses the same syntax and keys as Docker Compose. You can see our example application's kontena.yml here.
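
As an illustration, the database tasks described above could be expressed as post-start hooks on the app service, run on the first instance only. The snippet below is a sketch based on the Kontena documentation of the time; the image name is a placeholder and the exact keys may differ between Kontena versions:

app:
  image: registry.example.com/example-app:latest
  instances: 2
  hooks:
    post_start:
      # run rake db:setup once, on the first instance, after the first deploy
      - name: db_setup
        cmd: bundle exec rake db:setup
        instances: 1
        oneshot: true
      # run migrations on the first instance after every deploy
      - name: db_migrate
        cmd: bundle exec rake db:migrate
        instances: 1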

The deploy is triggered with the kontena app deploy command. It creates and/or updates the application configuration, executes the deploy hooks and gets the application instances up and running. Under the hood, Kontena replaces the old application containers with new versions. You can even define how many instances must stay running while containers are being replaced, to ensure zero downtime.

Scaling the Application

Running several containers on a single host is relatively easy. The challenges show up when dealing with a multi-host environment, scaling and routing traffic correctly. To make it happen, you'll need a good scheduler, service discovery, a load balancer and overlay networking, and that is just to get started. With plain Docker tools you will have a rocky road ahead, so it is highly recommended to use a fully integrated container platform to take care of this part.

Kontena is a fully featured container platform and has everything needed to run and scale applications in multi-host, high-availability environments. Just define the deployment strategy and the number of instances, with optional affinity rules, and Kontena will take care of the rest. Kontena also automatically rebalances instances when new nodes are provisioned or existing nodes disappear.
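
For example, scaling the app service to four instances spread across hosts could be declared in kontena.yml roughly like this; this is a sketch with placeholder names, and the available strategies and affinity filters are described in the Kontena documentation:

app:
  image: registry.example.com/example-app:latest
  # run four instances, spread across nodes for high availability
  instances: 4
  deploy:
    strategy: ha
  affinity:
    # only schedule onto nodes carrying this (hypothetical) label
    - label==app-node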

Once you have decided how to deal with scaling, it is important to understand how scaling works with containers. We have seen many people under the wrong assumption that with Docker they can scale their stuff to infinity. Naturally, this is not the case. As a rule of thumb, on a single host you can run roughly as many instances of a single application as there are CPU cores; beyond that, the instances start competing for CPU resources.

So, what does scaling mean for Rails applications? From the Rails point of view, try to make your web layer stateless and store your persistent data (file uploads etc.) somewhere else. That way you can scale your web layer very easily.

Load Balancing

One crucial component when scaling applications is a load balancer that routes traffic to your scaled application instances. When choosing a load balancer, users typically pick between HAProxy and Nginx.

Both solutions have pros and cons. With the official Docker HAProxy image, you can add your config file on top of the official image, then build and run it, or you can use volumes to mount a config file from the host into the container. Either way, you will end up updating the config file whenever your applications change and then restarting the HAProxy container.

To make it really easy for users, Kontena Load Balancer automates all these nasty parts; all the user needs to do is add some load balancer configuration to the service description, and Kontena will take care of the rest.

After linking application services to the load balancer, you can scale your application up and down, and Kontena Load Balancer will pick up the changes and route traffic correctly with zero downtime.
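
As a sketch, wiring the app service to a Kontena Load Balancer instance in kontena.yml might look something like the following; the virtual host is a placeholder and the exact environment variable names should be checked against the Kontena Load Balancer documentation:

internet-lb:
  image: kontena/lb:latest
  ports:
    - 80:80

app:
  image: registry.example.com/example-app:latest
  links:
    # attach this service to the load balancer defined above
    - internet-lb
  environment:
    # port the app listens on inside the container
    - KONTENA_LB_INTERNAL_PORT=8080
    # hypothetical public hostname for this example
    - KONTENA_LB_VIRTUAL_HOSTS=www.example.com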

Secrets Management

If you are following the twelve-factor app paradigm, it suggests storing application configuration in environment variables. The same goes for containerized applications, where environment variables are the de facto way to pass configuration to applications.

With Docker, environment variables can be described in docker-compose.yml or in separate environment files that are referenced from the YAML file. Conceptually, the docker-compose.yml file is a blueprint that people should be able to share, so sensitive config variables cannot be stored in version control or baked into Docker images.
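
For example, non-sensitive defaults can live directly in the Compose file while the rest is pulled from an environment file that stays out of version control (a minimal sketch; the file name is illustrative):

app:
  build: .
  environment:
    - RAILS_ENV=production
  env_file:
    # keep this file out of version control, e.g. via .gitignore
    - .env.production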

Kontena Vault provides a perfect solution to this challenge. Vault is a secure key/value store that can be used to manage secrets in Kontena. It secures, stores and tightly controls access to the sensitive data you want to use with your containerized applications. In practice, you write your sensitive data to Kontena Vault, reference it later in kontena.yml, and Kontena will attach it to your application.
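
As a sketch, a secret would first be written to the Vault with the Kontena CLI (for example kontena vault write SECRET_KEY_BASE <value>) and then referenced from the service definition; the keys below follow the Kontena documentation of the time and should be treated as illustrative:

app:
  image: registry.example.com/example-app:latest
  secrets:
    # expose the Vault entry SECRET_KEY_BASE to the app as an environment variable
    - secret: SECRET_KEY_BASE
      name: SECRET_KEY_BASE
      type: env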

About Kontena

Kontena is a new open-source Docker platform including orchestration, service discovery, overlay networking and all the tools required to run your containerized workloads. Kontena is built to maximize developer happiness. It works on any cloud, it's easy to set up and super simple to use. Give it a try! If you like it, please star it on GitHub and follow us on Twitter. We hope to see you again!

Image Credits: Rails at Night by Tristan Schmurr