Automated Deployments Using CircleCI

To continue my trials with different providers and tools for setting up a deployment pipeline, I'll be focusing on CircleCI today. CircleCI is available both as a SaaS solution and as a self-managed installation. Interestingly, the Enterprise version (the self-managed one) uses Nomad as its job scheduler.

As my sample projects are always open-source, I'll be using the SaaS version as it's free for open source projects.

The complete sample can be found at my GitHub repo here.

Why CircleCI?

With its 2.0 version, CircleCI continues the pattern of running build jobs in containers. This is becoming the de-facto standard way to run build jobs, which I think is a really great direction. It allows users to bring arbitrary containers, with the exact runtimes they need, into their build process without having to set up dedicated build agent servers with the proper tooling for each project. I think this is becoming more and more important now that companies are taking microservices seriously; building such an architecture can easily mean services in many different runtimes.

The workflow

The process I used in this example is a slightly simplified version of the one I set up in my previous blog post with GitLab CI. In this case I configured the application to be deployed into production simply by merging a pull request into the master branch, so the master branch is always what I have running in prod.

For setting up this kind of process, CircleCI offers a concept called a workflow.

A workflow is a set of rules for defining a collection of jobs and their run order. Workflows support complex job orchestration using a simple set of configuration keys to help you resolve failures sooner.

In this case, my workflow is visualized in the CircleCI UI as a simple graph: the test and build_image jobs run first (build_image only on master), and the deploy job runs once both of them have succeeded.

For the YAML configuration, it looks like this:

workflows:  
  version: 2
  build-and-deploy:
    jobs:
      - test
      - build_image:
          filters:
            branches:
              only: master
      - deploy:
          requires:
            - test
            - build_image
          filters:
            branches:
              only: master

So I've restricted the Docker image building and the actual deployment to commits on the master branch only.

Test Job

The first job to be executed, on every branch and commit, is a test job.

test:  
    docker:
      - image: circleci/ruby:2.3-jessie-node-browsers
      - image: circleci/mongo:3.2-ram

    working_directory: ~/repo

    steps:
      - checkout

      # Download and cache dependencies
      - restore_cache:
          keys:
          - v1-dependencies-{{ checksum "Gemfile.lock" }}
          # fallback to using the latest cache if no exact match is found
          - v1-dependencies-

      - run:
          name: install dependencies
          command: |
            bundle check || bundle install --jobs=4 --retry=3 --path vendor/bundle

      - save_cache:
          paths:
            - ./vendor/bundle
          key: v1-dependencies-{{ checksum "Gemfile.lock" }}

      # Database setup
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:27017 -timeout 1m

      # run tests
      - run:
          name: run tests
          command: |
            mkdir /tmp/test-results
            TEST_FILES="$(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)"

            bundle exec rspec --format progress \
                            --format RspecJunitFormatter \
                            --out /tmp/test-results/rspec.xml \
                            --format progress \
                            $TEST_FILES

The most interesting part is the docker section at the top. It defines that all the steps will run in a container spun up from the circleci/ruby:2.3-jessie-node-browsers image. That image has everything I need to run Ruby tests, so I don't have to set up Ruby environments on any build agents. The CircleCI folks have been kind enough to maintain a hefty list of ready-made runtime images for many different languages and runtimes; see the complete list here. The second image runs as a service container, in this case a MongoDB instance that is available during my tests.
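
As a minimal sketch of how that works (not part of my actual config; the environment values below are purely illustrative assumptions), the first image listed is the primary container where the steps execute and can be given environment variables, while any additional images become service containers reachable over localhost:

    docker:
      # Primary container: all the steps run in here
      - image: circleci/ruby:2.3-jessie-node-browsers
        environment:
          RAILS_ENV: test
          # Hypothetical connection string pointing at the service container below
          MONGODB_URI: mongodb://localhost:27017/todo_test
      # Service container: started alongside the primary one, listens on localhost
      - image: circleci/mongo:3.2-ram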

The other steps are pretty self-explanatory, I think. First we restore a possibly cached bundle and run bundle install to make sure all the dependencies are in place, then save the bundle state back into the cache so we don't need to download every dependency on every single build. After waiting for the database to accept connections, we finally run the actual tests.

Build Image Job

The job to build and push the application image is executed as part of the workflow only for commits on my master branch. Again, the job configuration is pretty straightforward:

build_image:  
    machine: true
    steps:
      - checkout
      - run: |
         docker login -u $DOCKER_USER -p $DOCKER_PASSWORD

      # build the application image
      - run: docker build -t jnummelin/todo-example:$(echo $CIRCLE_SHA1 | cut -c1-7) .

      # deploy the image
      - run: docker push jnummelin/todo-example:$(echo $CIRCLE_SHA1 | cut -c1-7)

I'm executing the build on a machine-type executor, which gives a bit more power and builds the images faster. :) You could also run the Docker build in a docker executor; see the docs at https://circleci.com/docs/2.0/building-docker-images/.
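
For reference, a hedged sketch of that alternative would use the setup_remote_docker step to provision a separate Docker engine for the build. The executor image below is my own pick, not something from the original config:

build_image:
    docker:
      # Any image with the Docker CLI and git would do; this tag is an assumption
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      # Provisions a remote Docker engine that the following docker commands talk to
      - setup_remote_docker
      - run: docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - run: docker build -t jnummelin/todo-example:$(echo $CIRCLE_SHA1 | cut -c1-7) .
      - run: docker push jnummelin/todo-example:$(echo $CIRCLE_SHA1 | cut -c1-7)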

I've inserted the needed Docker Hub credentials into the CircleCI project configuration as environment variables. I take it that they are secure enough to be used for secrets like these as well:

You can add sensitive data (e.g. API keys) here, rather than placing them in the repository.

I'm using part of the commit SHA to give the image a unique tag. That gives me two big advantages:
1. The image tag is unique, so no shenanigans with the Docker latest tag.
2. The image tag uniquely identifies the Git commit that was used to produce the image.

Deploy Job

To deploy the app into production, I'm using the containerized Kontena CLI:

deploy:  
    docker:
      - image: kontena/cli:1.3.4

    steps:
      - checkout
      - run:
          environment:
            # Variables for the stack
            VHOST: todo-app.kontena.works
            LOADBALANCER: ingress-lb/lb
          command: |
            export TAG=$(echo $CIRCLE_SHA1 | cut -c1-7)
            echo "Using tag: $TAG"
            kontena stack install || kontena stack upgrade todo
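
One thing the snippet above does not show is how the CLI authenticates with the Kontena Master. In my setup that comes in through CircleCI project-level environment variables, just like the Docker Hub credentials. The variable names below are from my recollection of the Kontena CLI docs, so treat them as assumptions to verify:

# Set in the CircleCI project settings, never in the repository:
#   KONTENA_URL    - URL of the Kontena Master
#   KONTENA_GRID   - name of the grid to deploy into
#   KONTENA_TOKEN  - access token for the master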

Now when we instruct Kontena to upgrade the stack, the CLI tool, running in a container, sends the stack definition to the Kontena Master as the new desired state. In most cases I'd only be changing the application code, which means that only the image tag changes in the stack definition. But of course we could change anything else in the stack as well.

When the Kontena Master rolls out the new desired state, it automatically makes a rolling deployment. A rolling deployment means that it does not take down all the containers of your application at once, but instead replaces one or a few containers at a time. How many depends on the deployment's min_health attribute, with which you define what percentage of containers needs to be running at all times during the deployment.

In the 1.4 release we added stop_grace_period, which lets you define how long the containers are allowed to take to shut down gracefully. With that, plus proper handling of termination signals, you get zero-downtime deploys for your applications. More on that in later posts.

The stack for the application takes the inputs as env variables:

variables:  
  release:
    type: string
    from:
      env: TAG
  vhost:
    type: string
    from:
      env: VHOST
  lb:
    type: string
    from:
      env: LOADBALANCER
      service_link:
        prompt: Choose a loadbalancer
        image: kontena/lb

Using variables in the stacks makes them easily deployable across many environments and configurations. I usually use environment variables as the source for the values so that it's easy to drive deployments from tools like CircleCI.
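
To tie these together, here's a rough, hypothetical fragment of what the service side of such a stack file could look like. The service name, instance count, grace period and health threshold are my own illustrative picks (and the ${...} interpolation syntax should be checked against the Kontena stack file reference), not the actual stack from the sample repo:

stack: jnummelin/todo
version: 0.0.1
# variables: section as shown above
services:
  app:
    image: jnummelin/todo-example:${release}  # tag injected from the TAG env variable
    instances: 3
    stop_grace_period: 30s       # 1.4+: time allowed for graceful shutdown
    deploy:
      strategy: ha
      min_health: 0.8            # keep at least 80% of the instances up while deploying
    links:
      - ${lb}                    # attach to the chosen loadbalancer
    environment:
      - KONTENA_LB_VIRTUAL_HOSTS=${vhost}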

Summary

Having the ability to run the whole process of building and deploying your application as a set of containers just makes everyone's life so much easier. There's no need to fiddle around with plugins or to set up separate build environments for specific languages or runtimes.

With CircleCI workflows it's pretty easy to define the build process as a set of jobs, some of which can run in parallel. That makes your build and deploy process much faster and also allows easier retries.

And yeah, as you can see, deploying and running your application in containers on top of a sophisticated container management platform makes it a breeze to deploy things. :)

Want to try it out?

Sign in at Kontena Cloud to get some free credits to run our hosted Kontena Container Platform. You'll be up and running in minutes. As you can see, it's fairly easy to build a simple automated deployment pipeline with containers and Kontena.


Image Credits: Todd Quackenbush on Unsplash.
