Modern Web Development Setup – The Testing Environment Using Kontena (1/2)

Read the third article in a series of guest blogs by Kontena Community blogger Juha Kärnä. In this post, Juha talks about the testing environment setup.

Testing and Deployment Overview

Before setting up the needed environments, it’s good to define the high level targets.

[Image: Deployment and Testing Overview]

The general idea in this setup is to make the deployments and testing automated where feasible, yet keep the system simple enough to implement and maintain. The target is to set up the system in such a way that acceptance testing and production environment updates are triggered manually even if the deployments are automated.

Development Environment

The local development environment is where the development is done. Newly created pull requests, merges and other actions trigger jobs in the CI system.

Version Control System

The version control system keeps track of the source code and other content changes and triggers CI jobs based on configured rules.

Continuous Integration

Component-specific unit and API tests are run by the continuous integration service. If the tests pass, the CI also builds and pushes container images into a container registry and deploys the new components to the testing environment. The target is also to trigger the automated acceptance tests to see whether the changes affect the system as a whole.

Container Registry

The container registry stores and shares the container images built by the CI service.

Testing Environment

The testing environment is the first environment where the whole system runs in a similar manner to production. The environment is meant for various testing activities, and the system state isn’t automatically reset when the components are updated.

Acceptance Testing Continuous Integration

This environment is meant for running automated acceptance tests. The system starts from the initial state for every test round.

Acceptance Testing Environment

The environment for performing additional manual acceptance tests before the service is accepted for the production deployment.

Production Environment

The environment where the service is available for the end users.

The target for the development and deployment architecture is set for now. Later in the acceptance-test-related articles we’ll see how close to the target we actually get ;).

Testing Environment Setup

In the following chapters we’ll take a few new web services into use, so now is a good time to create accounts for Kontena, UpCloud, Docker Hub, GitLab and Shippable before continuing.

UpCloud and Kontena Setup

The setup will be based on the Kontena Container and Microservices Platform and hosted on the UpCloud IaaS cloud platform.

So let’s install the Kontena CLI application. For macOS the installation procedure is very straightforward: we’ll download the latest stable Kontena CLI release, open the downloaded .pkg file, and follow the installer guidelines to complete the installation.

After the installer program completes, we check the installed Kontena version and log in to Kontena Cloud:

$ kontena version  
$ kontena cloud login

It’s possible to use existing SSH keys, but here we create a new key pair to be used with the UpCloud servers and set the key file name to id_rsa_upcloud:

$ ssh-keygen -t rsa
Generating public/private rsa key pair.  
Enter file in which to save the key (/Users/<username>/.ssh/id_rsa): /Users/<username>/.ssh/id_rsa_upcloud  
Enter passphrase (empty for no passphrase):  
Enter same passphrase again:  

The next two commands do quite a lot for us: we’ll end up with two CoreOS-based servers on UpCloud, one acting as the master and the other as a node. The master is used for controlling the node(s), and all of our application containers will run on the node server.

With the parameters used below we’ll create the servers in Frankfurt with the default pre-configured plan (1 CPU, 1 GB RAM and 30 GB disk at the time of writing), but these parameters can be modified.

Let’s venture forth by creating the master server:

$ kontena upcloud master create --username <upcloud_username> --password <upcloud_password> --ssh-key ~/.ssh/id_rsa_upcloud.pub --zone de-fra1

And then the node server:

$ kontena upcloud node create --username <upcloud_username> --password <upcloud_password> --ssh-key ~/.ssh/id_rsa_upcloud.pub --zone de-fra1

Initially the Kontena CLI will keep reminding us that the server is using a certificate signed by an unknown authority. It’s possible to set the SSL_IGNORE_ERRORS=true environment variable, set a proper certificate for the Kontena master, or copy the default self-signed server certificate from the master server into a ~/.kontena/certs/<ip_address>.pem file. We’ll go with the last option:

$ scp -i ~/.ssh/id_rsa_upcloud root@<master_ip_address>:/etc/kontena-server.pem ~/.kontena/certs/<master_ip_address>.pem

Just to be sure Kontena CLI is working now properly, let’s give it a go:

$ kontena grid list
Name                           Nodes    Services     Users  
test *                         0        0            2  
$

After the servers have been created, we can access them via SSH as the root user if needed:

$ ssh root@<server_ip_address> -i ~/.ssh/id_rsa_upcloud

Docker Image Registry Config

Docker Hub is used for hosting the Docker images created in these articles. For this purpose we created an articleprojects organisation with front-service, app-service and auth-service repositories under it.

Once the setup is done, let’s log in to Docker:

$ docker login

Dockerfiles

The Dockerfiles we defined in the previous article won’t work outside the local development environment, so we need to define new files for the production environment.

Dockerfile.prod for the Front Service:

FROM node:6.10.2-alpine

# Create app directory
RUN mkdir /src

# Install app dependencies
ADD package.json npm-shrinkwrap.json /src/

# Update npm to a new version in a slightly complicated way to go around several issues
# More info: https://github.com/npm/npm/issues/15611#issuecomment-289133810
RUN cd ~ && npm install npm@4.5.0 && rm -rf /usr/local/lib/node_modules && mv node_modules /usr/local/lib/

# Change the ownership of the copied files and change to node user
RUN chown -R node:node /src  
USER node

# Install production dependencies
RUN cd /src && npm install --only=production --no-optional --loglevel warn && npm cache clean

# Copy all the files into the container to make the container independent from local files
USER root  
COPY . /src  
RUN cd /src && npm run build  
RUN chown -R node:node /src/.next/*  
USER node

WORKDIR /src

# Define an open port for the container
EXPOSE 3300

# Defined in package.json
CMD [ "npm", "start" ]

Compared to the development version, the major differences are:

  • Source files are copied into the container
  • Only production module dependencies are installed
  • Containers are run by the node user instead of the root
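
Before pushing anything, the production image can be sanity-checked locally. A minimal sketch (the smoke tag is illustrative, and without the backend services the page may not render fully, but the container should start and listen on its port):

front-service$ docker build -f Dockerfile.prod -t front-service:smoke .
front-service$ docker run --rm -p 3300:3300 front-service:smoke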

The Dockerfile.prod files for the App and Auth Services are almost identical and are available in GitLab.

Smoke Test Deployment

We’ll do a smoke test run to make sure the basic scripts are working and we’re able to run our containers in the testing environment.

Let’s begin by defining the smoke test deployment descriptor for Kontena:

stack: coodari/article-project  
version: 1.0.0  
description: Article project testing environment smoke stack

variables:  
  front_service_version:
    type: string
    from:
      vault: FRONT_SERVICE_VERSION
  app_service_version:
    type: string
    from:
      vault: APP_SERVICE_VERSION
  auth_service_version:
    type: string
    from:
      vault: AUTH_SERVICE_VERSION

services:  
  load-balancer-proxy:
    image: kontena/lb:latest
    ports:
      - 80:80

  front-service:
    image: "articleprojects/front-service:${front_service_version}"
    environment:
      - KONTENA_LB_MODE=http
      - KONTENA_LB_BALANCE=roundrobin
      - KONTENA_LB_INTERNAL_PORT=3300
    links:
      - load-balancer-proxy
    depends_on:
      - app-service
      - auth-service
    secrets:
      - secret: FRONT_SERVICE_DOMAIN
        name: KONTENA_LB_VIRTUAL_HOSTS
        type: env

  app-service:
    image: "articleprojects/app-service:${app_service_version}"
    environment:
      - NODE_ENV=production
      - KONTENA_LB_MODE=http
      - KONTENA_LB_BALANCE=roundrobin
      - KONTENA_LB_INTERNAL_PORT=3000
    links:
      - load-balancer-proxy
    depends_on:
      - db-app
    secrets:
      - secret: APP_SERVICE_DOMAIN
        name: KONTENA_LB_VIRTUAL_HOSTS
        type: env

  auth-service:
    image: "articleprojects/auth-service:${auth_service_version}"
    environment:
      - NODE_ENV=production
      - KONTENA_LB_MODE=http
      - KONTENA_LB_BALANCE=roundrobin
      - KONTENA_LB_INTERNAL_PORT=3100
    links:
      - load-balancer-proxy
    depends_on:
      - db-auth
    secrets:
      - secret: AUTH_SERVICE_DOMAIN
        name: KONTENA_LB_VIRTUAL_HOSTS
        type: env

  db-app:
    image: postgres:9.6.2
    stateful: true
    volumes:
      - db_volume_app:/var/lib/postgresql/data
    secrets:
      - secret: APP_POSTGRES_USER
        name: POSTGRES_USER
        type: env
      - secret: APP_POSTGRES_PASSWORD
        name: POSTGRES_PASSWORD
        type: env
      - secret: APP_POSTGRES_DB
        name: POSTGRES_DB
        type: env

  db-auth:
    image: postgres:9.6.2
    stateful: true
    volumes:
      - db_volume_auth:/var/lib/postgresql/data
    secrets:
      - secret: AUTH_POSTGRES_USER
        name: POSTGRES_USER
        type: env
      - secret: AUTH_POSTGRES_PASSWORD
        name: POSTGRES_PASSWORD
        type: env
      - secret: AUTH_POSTGRES_DB
        name: POSTGRES_DB
        type: env

volumes:  
  db_volume_app:
    external:
      name: db-volume-app
  db_volume_auth:
    external:
      name: db-volume-auth

With the file above we defined the following things:

  • The service stack, including the services used
  • Environment variables read from the Kontena Vault
  • Connections to the Kontena load balancer: the needed values are passed via environment variables and each service is linked with the load balancer service
  • Volumes for the database services: volumes aren’t strictly needed in the testing environment, but we use them here as well to keep the environments consistent

Speaking of the volumes, we need to define them before the stack can be installed:

$ kontena volume create --scope grid --driver local db-volume-app
$ kontena volume create --scope grid --driver local db-volume-auth

In the testing, acceptance testing and production environments we’re going to refer to the images using their tagged version numbers instead of using tags like latest. So let’s build the components and tag them with version 1:

front-service$ docker build -f Dockerfile.prod -t articleprojects/front-service:1 .  
front-service$ docker push articleprojects/front-service:1  
app-service$ docker build -f Dockerfile.prod -t articleprojects/app-service:1 .  
app-service$ docker push articleprojects/app-service:1  
auth-service$ docker build -f Dockerfile.prod -t articleprojects/auth-service:1 .  
auth-service$ docker push articleprojects/auth-service:1  
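
Typing these build-and-push commands by hand gets repetitive. A small helper sketch, assuming the three repositories sit side by side under the same parent directory (the layout and the TAG variable are illustrative):

#!/bin/sh
# Build and push all three services with the same version tag
TAG=1
for service in front-service app-service auth-service; do
  (
    cd "$service" || exit 1
    docker build -f Dockerfile.prod -t "articleprojects/$service:$TAG" .
    docker push "articleprojects/$service:$TAG"
  )
done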

Let’s also write the used image version numbers and domain names into Kontena Vault to make them available for the deployment descriptor:

$ kontena vault write FRONT_SERVICE_VERSION 1
$ kontena vault write APP_SERVICE_VERSION 1
$ kontena vault write AUTH_SERVICE_VERSION 1
$ kontena vault write FRONT_SERVICE_DOMAIN example.com,www.example.com
$ kontena vault write APP_SERVICE_DOMAIN app.example.com
$ kontena vault write AUTH_SERVICE_DOMAIN auth.example.com

We’ll use example.com here for presentation purposes, but any domain will do as long as you have access to its DNS records. We set A records for the app.example.com, auth.example.com and example.com names to point to our node server’s IP address, and a CNAME record to direct www.example.com to example.com. Depending on the set TTL and other factors, it usually takes from a few minutes to a few hours for the changes to take effect.
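
Propagation can be confirmed from the command line before moving on, assuming the dig tool is available (the domains are the placeholders used above; each A query should return the node server’s IP address):

$ dig +short A example.com
$ dig +short A app.example.com
$ dig +short A auth.example.com
$ dig +short CNAME www.example.com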

We also need to set some configuration for the PostgreSQL database services. Here we’ll use postgres as the username, database_prod as the database name and unique strong passwords for both services:

$ kontena vault write APP_POSTGRES_USER postgres
$ kontena vault write APP_POSTGRES_PASSWORD <postgres_password>
$ kontena vault write APP_POSTGRES_DB database_prod
$ kontena vault write AUTH_POSTGRES_USER postgres
$ kontena vault write AUTH_POSTGRES_PASSWORD <postgres_password>
$ kontena vault write AUTH_POSTGRES_DB database_prod
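
Instead of typing the password placeholders by hand, the passwords can be generated and written in one step. A sketch assuming openssl is available:

$ kontena vault write APP_POSTGRES_PASSWORD "$(openssl rand -base64 24)"
$ kontena vault write AUTH_POSTGRES_PASSWORD "$(openssl rand -base64 24)"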

Now, let’s give it a try:

deployment-scripts$ kontena stack install kontena-testing-smoke.yml

If everything went fine and the DNS changes are in place, all the services should be available via their respective domains. It’s good to make sure everything works at this point before continuing to the SSL/TLS certificate configuration.
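
A quick availability check with curl (the domains are the placeholders used above; each request should return an HTTP response rather than a connection error):

$ curl -I http://example.com
$ curl -I http://app.example.com
$ curl -I http://auth.example.com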

SSL/TLS Certificates via Let’s Encrypt

Kontena provides built-in integration with Let’s Encrypt for easy certificate management. Let’s start by registering an email address. Let’s Encrypt will connect the given email to the domain and send reminders about expiring domain certificates to the address:

$ kontena certificate register <example@example.com>

The domain authorisation is DNS-based, so we need to tweak the DNS records once again. Since Let’s Encrypt doesn’t support wildcard certificates, we need to do the authorisation for all four domains (example.com, www.example.com, app.example.com and auth.example.com).

We’ll repeat the following authorisation step for each domain, adding a TXT record using the name and content information the command returns.

$ kontena certificate authorize example.com
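
Since the same step is needed for all four domains, it can be wrapped in a loop. It’s also worth verifying that a TXT record has propagated before requesting the certificate; this sketch assumes the standard _acme-challenge record name used by DNS-based validation:

# Authorise each domain in turn, adding the TXT record each command prints
for domain in example.com www.example.com app.example.com auth.example.com; do
  kontena certificate authorize "$domain"
done

# Check that a TXT record is visible before requesting the certificate
dig +short TXT _acme-challenge.example.com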

After completing the above steps for all four domains we’ll request a single combined certificate:

$ kontena certificate get --secret-name SSL_LOAD_BALANCER_CERT example.com www.example.com app.example.com auth.example.com

Kontena automatically creates the following vault keys with the cert-related content so we don’t need to handle the cert using any files:

  • SSL_LOAD_BALANCER_CERT_PRIVATE_KEY
  • SSL_LOAD_BALANCER_CERT_CERTIFICATE
  • SSL_LOAD_BALANCER_CERT_BUNDLE

For our Kontena load balancer, SSL_LOAD_BALANCER_CERT_BUNDLE is the vault key we need. To enable the SSL certificate we need to pass the bundle as an SSL_CERTS secret. To force the HTTP -> HTTPS redirect we use the KONTENA_LB_CUSTOM_SETTINGS environment variable, and we also open port 443 on the load balancer. Here is the config at this point in GitLab:

https://gitlab.com/article-projects/deployment-scripts/blob/master/kontena-testing-smoke-cert.yml
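
For reference, the load balancer part of that configuration looks roughly like this (a sketch based on the description above; see the linked file for the exact contents):

  load-balancer-proxy:
    image: kontena/lb:latest
    ports:
      - 80:80
      - 443:443
    environment:
      # HAProxy-level setting that redirects plain HTTP to HTTPS
      - KONTENA_LB_CUSTOM_SETTINGS=redirect scheme https code 301 if !{ ssl_fc }
    secrets:
      # The combined certificate bundle from the vault, exposed as SSL_CERTS
      - secret: SSL_LOAD_BALANCER_CERT_BUNDLE
        name: SSL_CERTS
        type: env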

So let’s give it a try:

deployment-scripts$ kontena stack upgrade article-project kontena-testing-smoke-cert.yml  

If things went as they should, our services are now served over HTTPS. We should check that all three services are available via their HTTPS addresses, and also make sure the HTTP -> HTTPS redirection works by trying the unsecured HTTP addresses. Good!

Let’s Encrypt certificates expire after three months, and at the moment Kontena won’t handle the renewal automatically. However, I’ve understood that the people at Kontena are considering support for the HTTP-based domain validation model, which would allow automatic certificate renewals.

Anyway, the certificates can be renewed by redoing the authorisation via DNS and then running kontena certificate get --secret-name… again to update the vault variables with the new certificates. Once the vault is updated, the Kontena load balancer should take the new certificates into use automatically.
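
Sketched out as commands, a renewal round would look roughly like this (same domains as before; the TXT records need to be updated with the values the authorize calls return):

$ kontena certificate authorize example.com   # repeat for all four domains
$ kontena certificate get --secret-name SSL_LOAD_BALANCER_CERT example.com www.example.com app.example.com auth.example.com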

Basic Auth

Since the testing environment is basically available to any internet user, we’re going to set up basic auth to give it some protection against unwanted visitors. Depending on the situation, better protection may be needed, but this is a good start.

In this example we set user1:pass1234 credentials for Front, App and Auth Services:

$ kontena vault write BASIC_AUTH_FRONT_SERVICE << EOF
→ user user1 insecure-password pass1234
→ EOF
$ kontena vault write BASIC_AUTH_APP_SERVICE << EOF
→ user user1 insecure-password pass1234
→ EOF
$ kontena vault write BASIC_AUTH_AUTH_SERVICE << EOF
→ user user1 insecure-password pass1234
→ EOF

We enable the basic auth feature by passing the credentials to the services as a KONTENA_LB_BASIC_AUTH_SECRETS secret. Encrypted passwords are also supported; more information about them is available in the Kontena load balancer documentation.

Our Kontena configuration for the testing environment is now ready – and available in GitLab – so let’s give it a shot and make sure the basic auth works as expected:

$ kontena stack upgrade article-project kontena-testing.yml
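
A quick check with curl: a request without credentials should be rejected, while the configured credentials should let us through (domains and credentials as set above):

$ curl -s -o /dev/null -w "%{http_code}\n" https://example.com                    # expect 401
$ curl -s -o /dev/null -w "%{http_code}\n" -u user1:pass1234 https://example.com  # expect 200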

Continuous Integration Service Setup

Continuous Integration (CI) Overview

The idea is to run component-specific tests for every master branch change in the front-end and back-end service code repositories. If the code passes the tests and other requirements, the system builds new images, pushes them to Docker Hub, and commands Kontena to update the testing environment service stack with the latest containers.

Depending on the component, CI tests will contain unit tests, API tests, test coverage checks and ESLint coding convention checks.

GitLab Integration

A GitLab access token is needed for granting Shippable access to the GitLab repositories and for automatically triggering CI builds when merges, commits and pull requests happen. The access token can be created in the User Settings -> Access Tokens menu in GitLab. With the access token it’s possible to enable the integration in Shippable via Account Settings -> Integrations.

After creating the integration, the synced projects and repositories have to be selected in Shippable. Clicking the Subscriptions hamburger menu in the top left corner shows the list of available Git repo groups/projects/repos. After selecting a group from the list, the related repositories are shown on the screen. Clicking Enable for a repository enables the integration between Shippable and GitLab for it. If the expected Git projects aren’t visible in the Subscriptions list, the Sync button in Account Settings may solve the issue.


Docker Hub Integration

Docker Hub credentials are needed for enabling the Shippable integration. The integration can be added via Account Settings -> Integrations by creating a new Docker integration. The integration must also be enabled for each repository group or for specific repositories.


Shippable Configurations

Let’s start by creating a never-expiring token for accessing the Kontena Master via the CLI, since we’ll need it in the following shippable.yml scripts:

$ kontena master token create --expires-in 0 --token

Next we define the shippable.yml configuration for the Front Service:

# Language setting
language: node_js

# Version number
node_js:  
  - 6.10.2

branches:  
  only:
    - master

env:  
  global:
    - NODE_ENV=continuous_integration
    - SSL_IGNORE_ERRORS=true
    - KONTENA_VERSION_VARIABLE=FRONT_SERVICE_VERSION
    - KONTENA_DEPLOYMENT_DESCRIPTOR=kontena-testing.yml
    # DOCKER_HUB_REPO
    - secure: ag26m132q80wCTT0D86lXd71nXKDOz+JOomq3fjXag6VVP6xlV4+BQ/KLCLo7xaLnVwwoItBERP7hjLmcmEH6ZsMXYLsbFByMY9NKneSKIwHKak/aMGj5baM+i88lOiuWWZqYjXUwWSqoDD+npJsjgD9qrep6lG+/XS5dHPT1Spi8O881fGTfH12rKP4YyBl46KkcnVgcr6obADp1gcH7ZM2Llz5IXUD+AdQZ5QVHFhk0HZtIEqIz4N/mpI7xGDqM+WLy6Iofo6pPUmFZwqy+yYi+/DtecWXWbScYC4gPkPAFltVd9cCltuRJFyKNCPJ3DT/IkgR/J7G77kVK8hNrg==
    # KONTENA_MASTER_URL
    - secure: BvV0TNY/UT9w6RmE4LcgSnC+vhSCY4RwuXzc90un6bDPJdkztvnDOupZm5QpgfJkEUJhktolEk1A1Ap1HKgZj9k4AQLxIDjDXHcMA6BLO2Npi8GBsM0TRBJ8pzwhTN4yt1Lzfn2Imy1ao9Tklfx3CJGpohdWtjlC7sDZSOJd17gcG4YN/sBeOlKgg8HSX6P81gXWju99dW5mTKJk1rvqQyK1MXawPW5GjSOqTrgdX+IuqH1pG4epDbM3kL1kXW8twwC1CD+r8tU785NH2Et74t0TMxkEviStUHUjW8VZMa9sZs+LroEcxezFZz+xcmNx6Y16z872SmFJuaQp/107kw==
    # KONTENA_LOGIN_TOKEN
    - secure: FnIYGmfZVB5z58vVT6y97O3Z0mABHXdE1WIEUOQM0BR5ChaJhazgBqStQKUtEQRZcEP+0NrueMsJ9z0DL/104HrOeBufZAZWw/19FLnav+d9lYnASVDPhSNE1ol97HOGflhEp7KcLMTyn5kXgSP8ofCGfDyajpt+OKboV8jl/1gJdqIL6rRQXT/jDnaYHXSPoDQggiaHfHdNnGHNDbGoON9EZfEXQ2mOH/PRfiT4fE8mfTUmTKCqCQ7DASC7C1yYgT73B4ACVmqMLtuEQgQriV3FdY1tTpCHKZtBLwv9qxi2/8VuAmwM4nToQHei3BjWwlfHNUkCsSg+kPYnXuURJA==
    # KONTENA_GRID
    - secure: QV6iRP1Eq3wHOvfDlEvoAc5P5R4l51UtEXlhe8A3tR4pV/5MCVHjCDEeEvLnUcEG8L7e3SES7IO+F6bP//igcoEWGO5Q7u98oFsCxyFbrFftDFBlRFeP6OTGWWc6zOzNnoUN1sdg8VqGu3eCyUlWVIQvfL9lQhVLY5PGxmXy+JcpX4Y10bel5A7zyca9uzX5paf4O32/4i+vV7sEhHrrYGNyV6HTWlZegKrgeHZHoVKIQhdrqU88dDNnG1ICRuDPt6BYFex+3JrxDjpC5wp4UAYQ1H6mPlSA7VYfaGlhMOBVs2XkZhr4zwIDHdWyj/xdobO32Njb5vJRJeRiGnfQCw==

build:  
  ci:
    # A specific version of npm is used to avoid issues due to version differences
    - npm install npm@4.5.0 -g
    - npm -v
    - npm install
    - npm test

  on_success:
    # Ruby is needed for kontena-cli
    - if [ "$PULL_REQUEST" == "false" ] ; then add-apt-repository ppa:brightbox/ruby-ng ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then apt-get update ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then apt-get install ruby2.3 ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then ruby -v ; fi

    - if [ "$PULL_REQUEST" == "false" ] ; then gem install kontena-cli ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then kontena version ; fi

    - if [ "$PULL_REQUEST" == "false" ] ; then docker build -f Dockerfile.prod -t $DOCKER_HUB_REPO:$BUILD_NUMBER . ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then docker push $DOCKER_HUB_REPO:$BUILD_NUMBER ; fi

    - if [ "$PULL_REQUEST" == "false" ] ; then git clone https://juhakarna@gitlab.com/article-projects/deployment-scripts.git ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then kontena master login --token $KONTENA_LOGIN_TOKEN --grid $KONTENA_GRID $KONTENA_MASTER_URL ; fi
    - if [ "$PULL_REQUEST" == "false" ] ; then kontena vault update -u $KONTENA_VERSION_VARIABLE $BUILD_NUMBER ; fi
    - if [ "$PULL_REQUEST" == "false" ] && kontena stack list | grep -q article-project ; then kontena stack upgrade article-project ./deployment-scripts/$KONTENA_DEPLOYMENT_DESCRIPTOR ; fi
    - if [ "$PULL_REQUEST" == "false" ] && ! kontena stack list | grep -q article-project ; then kontena stack install ./deployment-scripts/$KONTENA_DEPLOYMENT_DESCRIPTOR ; fi

integrations:  
  hub:
    - integrationName: DockerHubIntegration
      type: docker
      branches:
        only:
          - master

  notifications:
    - integrationName: email
      type: email
      on_success: never
      on_failure: never
      on_pull_request: never

The configuration file defines the following things:

  • The CI environment is a Node.js environment with node version 6.10.2
  • The script is run only for master branch related activities
  • A few visible environment variables as well as secret environment variables are set
  • npm test runs the tests for the Front Service
  • Ruby (needed by the Kontena CLI) and the Kontena CLI are installed
  • If the tests pass and the CI run isn’t pull-request related, the image is built and pushed to Docker Hub, and the new container is deployed to the testing environment
  • The integrations section defines the Docker Hub integration so that Shippable knows to use the Docker Hub credentials automatically
  • Notification e-mails are disabled

Thanks to the secret environment variables, it’s quite safe to store Shippable secrets in Git, since the encryption key is specific to the Shippable user or project group. It seems to be possible to use the same secrets across projects belonging to the same repository group, but not with projects belonging to another group. The secrets can be set via the repository group’s Settings -> Encrypt menu.

For the App and Auth Services we’ll create very similar shippable.yml configurations. The only major difference compared to the Front Service is the added PostgreSQL service, which is needed for the API tests.

Kontena also provides an HTTP API, but based on discussion on the Kontena Community Slack channel, the CLI is the recommended way to access the master from CI environments; in the end, the API calls would only replicate the existing CLI functionality in a more complicated way.

Everything should now be in place for trying out the Shippable builds, so let’s commit and push those shippable.yml files into the Git repositories, and Shippable should start running CI for them. We don’t have much to test just yet, but we’ll add more code and test tooling in the coming articles.

Git Strategy and Setup

There are various strategies and processes for managing branching and merging in Git repositories. I won’t go very deep into the subject here, but in my experience the most common approaches are something along the lines of Git Flow, GitHub Flow or GitLab Flow.


In the article setup we’re going to keep things simple and configure a lightweight workflow resembling GitHub Flow, with a few key principles in mind:

  • One constant branch: master
  • Content that is merged to master is meant to be released immediately or very soon
  • Direct pushes into master are forbidden, only merges via pull-requests are allowed
  • Merges to master are possible only after successful CI builds
  • Master is automatically tested when a merge operation is triggered

In coming articles we may complement the workflow with acceptance testing and production environment related details.

Enforcing Workflow Rules

It’s very straightforward to configure simple workflow rules in GitLab. The following settings are set for Front, App and Auth Service repositories via Settings:

General tab:
  • Activate merge request approvals is checked; the details like approvers and required approval count can be set to meet different needs
  • Only allow merge requests to be merged if the pipeline succeeds is checked

Repository tab:
  • The master branch is set to be protected, merge permissions are given to Developers + Masters and push permissions to No one

Trial Run

The CI should now be in place, so it’s time to see whether the system works as expected. Let’s create a new feature branch for the Front Service repository and change the Howdy there! text a bit:

export default () => (  
  <h1>Hi there!</h1>
);

Then we’ll commit the change and make a pull request to master in GitLab. At this point we see that a Shippable CI run is triggered to make sure the proposed PR won’t break the tests. We don’t have any tests in place yet, so the CI job should run through successfully.
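
On the command line the same flow looks roughly like this (the branch name and file path are illustrative):

$ git checkout -b change-greeting
$ git add components/hello.js   # whichever file contains the greeting component
$ git commit -m "Change greeting text"
$ git push -u origin change-greeting

The merge request itself is then created against master in the GitLab UI.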

Once the pull-request-related CI job has finished, we’ll merge the PR into master. This triggers another CI run, which should also pass, push the Docker image to Docker Hub and update the testing environment.

Once the CI job is finished, we’ll check that the Hi there! text is also visible in the browser.


Summary

We’ve now set up a testing environment together with the CI integration. We’ve also set some Git rules to guide us in the upcoming development endeavours. There are still quite a few things to be done before we’re ready to deploy the services into the production environment, but step by step we’re getting there.

In the next Testing Environment (2/2) article we’ll take a closer look into component and system level testing and do some test planning.

References

Git
Kontena

About Kontena

Want to learn about real life use cases of Kontena, case studies, best practices, tips & tricks? Need some help with your project? Want to contribute to a project or help other people? Join Kontena Forum to discuss more about Kontena Platform, chat with other happy developers on our Slack discussion channel or meet people in person at one of our Meetup groups located all around the world. Check Kontena Community for more details.

Image Credits: Juha Kärnä.
