Author: Christopher Gervais



Aegir5 and Kubernetes

Lately we’ve been working with clients ranging from large Canadian government departments to small commercial SaaS companies, who have asked us to deploy CMS apps to Kubernetes (K8S) clusters running on OpenStack. In spite of our continued feeling that, most of the time, Kubernetes Won’t Save You, we’ve found it to be surprisingly useful in certain contexts. In fact, we’ve started to think that K8S will prove an extremely valuable backend to plug into our existing Aegir5 front-end and queue system.

We started developing this system for a project that required a Django app paired with a React frontend. This presented an opportunity to prototype a system capable of hosting a fairly complex application. Since then, we have deployed multiple Drupal sites in a similar fashion. This transition was surprisingly straightforward, partly because we did not need to support headless scenarios, which allowed us to simplify things quite a bit.

Drupal on Kubernetes

As shown in the diagram below, traffic enters the system via a router that directs it to the cluster. The router exposes a public IP address, to which we point our DNS entries. From there, the K8S ingress service directs traffic within the cluster.

Drupal on Kubernetes Application Hosting Architecture

In addition, the ingress controller automatically generates Let’s Encrypt HTTPS certificates, and acts as the HTTPS endpoint, handling the TLS handshake, etc.
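For illustration, an Ingress along these lines could produce that behaviour. This is a minimal sketch assuming cert-manager and the nginx ingress controller (we don’t name a specific controller above), and the hostnames and resource names are illustrative:

```yaml
# Hypothetical Ingress sketch: assumes cert-manager and the nginx ingress
# controller. Hostnames, issuer and resource names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: drupal
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed issuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: drupal-tls  # cert-manager stores the issued certificate here
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: drupal          # the Drupal deployment's Service
                port:
                  number: 80
```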

Traffic gets directed to the Drupal deployment, which, in turn, connects to the database deployment. Both run as Docker containers. In addition, temporary job containers can be launched to perform specific tasks, such as installing the site or running cron, as sketched below.
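For example, a one-off cron run might be expressed as a Kubernetes Job along these lines. This is a minimal sketch; the image name and drush command are illustrative rather than our actual configuration:

```yaml
# Hypothetical one-off Job sketch for a task such as running cron.
# The image name and command are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: drupal-cron
  namespace: production
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: drush
          image: registry.example.com/project/drupal:1.2.3  # same custom image as the site
          command: ["drush", "cron"]
```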

In the case of the Drupal containers, we’re running a custom Drupal image that bakes our code base in. The database deployment, for its part, uses a stock mariadb image.

Our custom Drupal image, by including the project code base, provides us with reproducible builds. We can deploy this same image to multiple environments with confidence that it will be consistent across all of them.
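To give a rough idea, a Deployment for the Drupal pods could look something like the sketch below; the image, Service and PVC names are illustrative, not our actual configuration:

```yaml
# Hypothetical Deployment sketch for the Drupal pods.
# Image, Service and PVC names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: registry.example.com/project/drupal:1.2.3  # code base baked in
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: mariadb          # the database deployment's Service name
          volumeMounts:
            - name: files
              mountPath: /var/www/html/sites/default/files
      volumes:
        - name: files
          persistentVolumeClaim:
            claimName: drupal-files   # uploaded files survive pod rebuilds
```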

Both the database, and the Drupal site are connected to storage via Persistent Volume Claims (PVCs). More on this later.

Terraform & Kubernetes Clusters

We use Terraform to deploy the clusters themselves to OpenStack service providers. With Terraform, we can define the size of the Kubernetes clusters that we want to run and launch them programmatically.

Because the initial application was intended to be a SaaS product, we ended up engineering a multi-cluster multi-environment architecture. For this project, we had a development cluster and a production cluster. We have since renamed them “unstable” and “stable” respectively, to indicate which types of environments are appropriate to host on each.

Drupal on Kubernetes Cloud Architecture

Our client was targeting their products at large financial institutions, so we anticipated fairly stringent security requirements. Each financial institution wants its client data isolated from all other users of the system. By supporting multiple clusters and multiple environments per cluster, we provided a multi-tenant system that could isolate data at a number of different levels.

While that capability is certainly interesting, we have mostly been implementing and using simpler development workflows. In our Drupal projects, we typically use a single cluster with testing, staging and production environments. These all run within a single Kubernetes cluster, but isolate their resources effectively using K8S namespaces.

Namespaces and storage (Environments)

Once we have a cluster running, most of the workflows happen within K8S. We keep these logically segmented using namespaces. Together with dedicated storage, namespaces define environments. Within these environments, we deploy the configuration for the Drupal app itself.
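For instance, environments might be segmented with namespaces along these lines; this is a minimal sketch, and the names and labels are illustrative:

```yaml
# Hypothetical sketch: one namespace per environment, with labels we might
# use to tell them apart. Names are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    project: example-site
    environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    project: example-site
    environment: production
```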

In designing this architecture, we also wanted to take persistent data into account. In both the Django and the Drupal scenarios, we had data stored in a database, as well as files uploaded into the application and stored on the filesystem.

In order for this data to persist between rebuilds of the application itself, we extended our concept of an environment to include not only the namespace, but also the Persistent Volume Claims (PVCs) where we store this data. These are all defined as Kubernetes resources, but are handled separately, so as never to mistakenly delete files or data.
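As a rough sketch, the storage side of an environment might look like the manifests below, kept separate so they can be applied (and torn down) independently of the application resources; the names and sizes are illustrative:

```yaml
# Hypothetical PVC sketch: defined alongside the other manifests, but applied
# and deleted separately from the application resources. Names and sizes are
# illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-files
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```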

Next Steps

This architecture basically comes down to generating various configuration files and instantiating the resources that they describe. We have wrapped these steps in Drumkit (GNU Make) targets that template the configuration files (using Mustache) and run all the necessary commands in the correct order, with the appropriate options and arguments.
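To illustrate the templating step, a Mustache-templated manifest might look something like this; the variable names are invented for the example, not Drumkit’s actual ones:

```yaml
# Hypothetical Mustache-templated manifest, illustrating the templating step.
# The variable names are invented for this example.
apiVersion: v1
kind: Namespace
metadata:
  name: {{ project }}-{{ environment }}
  labels:
    project: {{ project }}
    environment: {{ environment }}
```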

This mirrors the functionality of Provision in Aegir. Our intention is to use this system as a backend for Aegir5. Watch for an upcoming blog post which tackles our planning and vision for next steps!


