Using Wasabi S3 Object Storage with containers

There are cases where object storage like S3 is ideal for your data, but using AWS services is not an option. So is the user out of luck, or are there other services or solutions that could be used? Luckily, yes: there are ways to use S3-like storage without AWS. In this blog I'll introduce a service called Wasabi, which offers S3-compatible storage and makes some quite bold claims:

Wasabi is 1/5th the price and 6x faster than Amazon S3

Sounds pretty good to me. :)

There's also the possibility of running a self-hosted S3-compatible solution using Minio, which even has some community-developed stacks to install it.

$ kontena stack reg search minio
NAME                                     VERSION    DESCRIPTION  
jakolehm/minio                           0.2.0      Distributed Minio (S3 compatible object storage server)  
matti/minio                              0.1.0      jakolehms minio with health_check, wait_for_port, https force  

Wasabi with containers

Of course, the first question is: how do I integrate Wasabi, or any other S3-compatible service, with my containers? The simplest way is to utilize Docker's volume plugin mechanism, so you can mount the data buckets as volumes for your containers. To do this, we'll need to install a volume plugin to manage the integration with Wasabi. For that I'll use a driver called RexRay, and naturally the s3fs flavor of it.

Of course, this can also be done directly, without volumes, using an S3-capable library in your app. However, the benefit of using volumes is that they separate the application from the storage: the storage is abstracted away. Your app stores data into some specific directory (in the container), and how that directory is mounted and mapped to external storage is not the app's problem. Volumes also allow you to easily change the storage backend based on different requirements.
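As a sketch of what that separation looks like in practice (the image name and the container path here are placeholders, not from this demo):

```shell
# Hypothetical example: 'myapp' and '/data' are placeholders.
# The app only knows it writes to /data; the fact that /data is
# backed by an S3 bucket comes entirely from the volume driver,
# selected at run time with --volume-driver.
docker run -d \
  --volume-driver rexray/s3fs \
  -v uploads:/data \
  myapp:latest
```

Swapping the backend later means changing only the `--volume-driver` (and the volume), never the application image.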

Setting up Wasabi

As with AWS S3, or any other AWS service, it's not really a good idea to use your root account for service integrations. So we need to create an IAM user which we'll use to access Wasabi buckets from our nodes using the RexRay driver.

Once the IAM user is created, grab the keys, as you'll need them in the next steps.
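Wasabi's IAM is modeled after AWS IAM, so the user's permissions are expressed as a familiar policy document. A permissive starting-point sketch is below; note this is an assumption on my part, not from the Wasabi docs, and in production you'd want to tighten `Resource` down to the specific buckets the driver should manage (rexray/s3fs lists buckets and can create them, which is why a narrow policy takes some care):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```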

Setting up RexRay for Docker

I'm going to use the rexray/s3fs driver to integrate my Docker engines with Wasabi. I can use the same driver I'd use for AWS S3, as Wasabi's API is fully compatible with S3.

As we're about to set up the driver using the new Docker plugin mechanism, make sure you're running Docker version 1.13+. If you're running CoreOS, this might mean updating the OS, as a Docker version supporting plugins ships only from version 1576.4.0 (released on December 6, 2017) onwards.

To install rexray/s3fs for Docker, use the following command:

docker plugin install rexray/s3fs \  
S3FS_ACCESSKEY=YOUR_ACCESS_KEY \  
S3FS_SECRETKEY=YOUR_SECRET_KEY \  
S3FS_ENDPOINT=https://s3.wasabisys.com \  
S3FS_OPTIONS=url=https://s3.wasabisys.com  

Naturally, replace the keys with the ones you grabbed from the Wasabi web console when you created the IAM user.

Also, the last option, S3FS_OPTIONS, is really needed to configure rexray/s3fs properly; without it, rexray/s3fs behaves quite oddly.

Now your Docker engine should see all the buckets on Wasabi as usable volumes.
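You can verify this from the Docker host (the exact output depends on what buckets exist in your Wasabi account):

```shell
# Check that the plugin is installed and enabled
docker plugin ls

# Each Wasabi bucket should now show up as a volume
# with driver rexray/s3fs
docker volume ls
```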

Kontena volumes

Starting from the 1.2 release, Kontena supports management of volumes as well. This means that you can now use rexray/s3fs volumes, stored as Wasabi buckets, in your Kontena stacks. Remember how the volume scoping affects volume naming, and in this case bucket creation and data sharing:

  • scope: instance; each service instance gets its own bucket to store data
  • scope: stack; each stack gets its own bucket; services within the same stack share the bucket, and thus also the data
  • scope: grid; each grid gets only one bucket, so all services using the same volume share the same data

To deploy a service that stores data on Wasabi buckets, I'll use a demo NodeJS app that handles file uploads. The uploads are stored on the volume and thus automatically end up in the Wasabi bucket.

Creation of volumes

Now that (some of) the nodes are running the rexray plugin to integrate Docker volumes with the Wasabi service, we can create the volume definition on Kontena:

kontena volume create --driver rexray/s3fs --scope stack uploads  

I'm using the scope stack so that all services in the same stack will mount the same bucket. In this case there's only one service in the stack, but it also means that each of its instances will mount the same bucket.
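In the stack file, the service then refers to this volume as an external one. A sketch of what that looks like (the stack name, image, and mount path here are illustrative, not the actual demo stack's definition):

```yaml
stack: user/nodejs-file-upload
version: 0.1.0
services:
  upload:
    image: example/nodejs-file-upload:latest
    instances: 3
    volumes:
      # Mount the Kontena-managed volume into the app's upload directory
      - uploads:/app/uploads
volumes:
  uploads:
    # Refers to the volume created with 'kontena volume create'
    external: true
```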

Deploy the services

As always, we'll deploy the services as a Kontena stack. First, make sure you have at least one public-facing loadbalancer service set up on your platform. If not, I'd highly suggest setting one up using:

kontena stack install kontena/ingress-lb  

That'll deploy the Kontena loadbalancer in daemon mode (one instance per node) on your platform.

To install the NodeJS file upload sample, use:

$ kontena stack install jussi/nodejs-file-upload
> How many upload instances : 3
> Domain for the service : files.kontena.works
> Choose a loadbalancer ingress-lb/lb
 [done] Creating stack nodejs-file-upload      
 [done] Triggering deployment of stack nodejs-file-upload     
 [done] Waiting for deployment to start     
 [done] Deploying service upload     

I'm using the handy stack variables to make the stack highly re-usable and not tied to any specific environment.

Does it work?

To test the service, point your browser at the domain you gave during the stack installation, assuming that DNS name points to the loadbalancer you selected during installation.

Click the button to select a file to upload.

Once the file has finished uploading, you can check that it actually got stored in the bucket by navigating to the bucket on the Wasabi side.
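Since Wasabi speaks the S3 API, you can also inspect the bucket from the command line with any S3 client, for example the AWS CLI pointed at Wasabi's endpoint (the `wasabi` profile name is my assumption; configure one with your Wasabi keys first):

```shell
# --endpoint-url redirects the AWS CLI from AWS to Wasabi;
# the 'wasabi' profile holds the IAM user's access keys
aws s3 ls s3://nodejs-file-upload.uploads \
  --endpoint-url https://s3.wasabisys.com \
  --profile wasabi
```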

You'll also notice that when Kontena created the volume during deployment, using the rexray driver, it named the bucket nodejs-file-upload.uploads. This is because we defined the scope as stack, so the same bucket is re-used for all services and instances in the stack. This way, I could install the same stack under a different name, say demo, on the same platform. In that case, a new bucket called demo.uploads is created, so my stacks don't interfere with each other.

Header image credit: Courtney Boydston.

Want to try it out?

Sign up to Kontena Cloud now to get $25 of free credits to run a fully hosted Kontena Platform to try it out. Kontena provides the most easy-to-use, fully integrated solution for DevOps and software development teams to deploy, run, monitor and operate containers on the cloud. It is used by hundreds of startups and software development teams working for some of the biggest enterprises in the world. www.kontena.io

Jussi Nummelin
