<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  
  <channel>
    
    
    
    
    <title>Terraform on Consensus Enterprises Blog</title>
    <link>https://consensus.enterprises/tags/terraform/</link>
    <description>Recent content tagged 'Terraform' on the Consensus Enterprises Blog.</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-CA</language>
    <managingEditor>info@consensus.enterprises (The Consensus Team)</managingEditor>
    <webMaster>tech@consensus.enterprises (Consensus Infrastructure)</webMaster>
    <copyright>Copyright 2025 Consensus Enterprises International Inc.</copyright>
    <lastBuildDate>Tue, 16 May 2023 09:00:01 -0500</lastBuildDate>
    <image>
      <url>https://consensus.enterprises/images/consensus-blog-banner.png</url>
      <title>Terraform on Consensus Enterprises Blog</title>
      <link>https://consensus.enterprises/tags/terraform/</link>
    </image>
    
        <atom:link href="https://consensus.enterprises/tags/terraform/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Aegir5: Kubernetes Backend integration</title>
      <link>https://consensus.enterprises/blog/aegir5-kubernetes-backend/</link>
      <pubDate>Tue, 16 May 2023 09:00:01 -0500</pubDate>
      
      
      <guid isPermaLink="false">Aegir5: Kubernetes Backend integration on Consensus Enterprises Blog published Tue, 16 May 2023 09:00:01 -0500</guid>
      
      <description>In previous posts we covered how the Frontend and queue mechanisms can talk with the Backend. We also covered the stand-alone work we’ve been doing within Drumkit to support Drupal on Kubernetes. In this post, we’ll discuss how we plan to integrate this new Backend into the existing Aegir 5 architecture.
To integrate the Kubernetes Backend into Aegir 5, we will need to build new top-level entities (see this earlier post about Clusters, Projects, Releases, and Environments) for the Frontend. …</description>
      <content:encoded>In previous posts we covered how the Frontend and queue mechanisms can talk with the Backend. We also covered the stand-alone work we’ve been doing within Drumkit to support Drupal on Kubernetes. In this post, we’ll discuss how we plan to integrate this new Backend into the existing Aegir 5 architecture.
To integrate the Kubernetes Backend into Aegir 5, we will need to build new top-level entities (see this earlier post about Clusters, Projects, Releases, and Environments) for the Frontend. These entities are composed almost entirely of Operations.
The Tasks on the Frontend will primarily gather the variables needed to generate the templates in Drumkit on the Backend. These are passed through the Queue system to the Backend. The Backend in turn instantiates the configuration files, and runs the appropriate kubectl commands to deploy the various components into the Kubernetes cluster.
Drumkit currently has make targets that initialize an Environment and deploy a Project Release to it, among other things. We will need to add a Drumkit Backend to extend dispatcherd to issue these make targets, passing in the Frontend variables. Having dispatched the job, the Backend will then gather output and send it back to the Frontend.
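For illustration, a dispatched job might boil down to invocations along these lines (the target and variable names here are hypothetical; the actual Drumkit targets may differ):

# Initialize a new Environment, then deploy a Project Release into it.
make init-environment PROJECT=myproject ENVIRONMENT=staging
make deploy-release PROJECT=myproject ENVIRONMENT=staging RELEASE=1.2.3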
In previous editions of Aegir, we sometimes saw the split-brain problem arise as a result of the Frontend and Backend sharing responsibility for the state of the system.
In Aegir5, the Frontend is intended to be canonical with regard to the data about the state of a running application. Aegir manages the Environment, consisting of the application state (files + database).
The Project incorporates the Kubernetes infrastructure resources required to host the application. Drumkit provides commands to initialize a project with such resource definitions.
We are working on getting more specific about the tasks required to implement this in a new roadmap ticket. Our next post in the series will compare Aegir3 and Aegir5 from a features standpoint, after which we’ll discuss the plan to get from here to there, once we’ve established a clear roadmap.
</content:encoded>
    </item>
    
    <item>
      <title>Kubernetes backend for Aegir5</title>
      <link>https://consensus.enterprises/blog/aegir5-kubernetes/</link>
      <pubDate>Mon, 20 Mar 2023 09:00:01 -0500</pubDate>
      
      
      <guid isPermaLink="false">Kubernetes backend for Aegir5 on Consensus Enterprises Blog published Mon, 20 Mar 2023 09:00:01 -0500</guid>
      
      <description> Lately we’ve been working with clients ranging from large Canadian government departments to small commercial SaaS companies, who have asked us to deploy CMS apps to Kubernetes (K8S) clusters running on Openstack. In spite of our continued feeling that most of the time Kubernetes Won’t Save You, we’ve found it to be surprisingly useful in certain contexts. In fact, we’ve started to think that K8S will prove an extremely valuable backend to plug in to our existing Aegir5 front-end and queue …</description>
      <content:encoded> Lately we’ve been working with clients ranging from large Canadian government departments to small commercial SaaS companies, who have asked us to deploy CMS apps to Kubernetes (K8S) clusters running on Openstack. In spite of our continued feeling that most of the time Kubernetes Won’t Save You, we’ve found it to be surprisingly useful in certain contexts. In fact, we’ve started to think that K8S will prove an extremely valuable backend to plug in to our existing Aegir5 front-end and queue system.
We started developing this system for a project that required a Django app paired with a React frontend. This presented an opportunity to prototype a system capable of hosting a fairly complex application. Since then, we have deployed multiple Drupal sites in a similar fashion. This transition was surprisingly straightforward, partly because we didn’t need to support headless scenarios and could therefore simplify quite a bit.
As shown in the diagram below, traffic enters the system via a router that directs it to the cluster. The router exposes a public IP address, to which we point our DNS entries. From there the K8S ingress service directs traffic within the cluster.
In addition, the ingress controller automatically generates Let’s Encrypt HTTPS certificates, and acts as the HTTPS endpoint, handling the TLS handshake, etc.
Traffic gets directed to the Drupal deployment, which in turn connects to the database deployment. Both run as Docker containers. In addition, temporary job containers can be launched to perform specific tasks, such as installing the site or running cron.
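For instance, a one-off task container might be launched along these lines (the namespace, image and job names are hypothetical):

# Run cron in a temporary job container, reusing the site's image.
kubectl -n myproject-staging create job drupal-cron \
  --image=registry.example.com/myproject/drupal:1.2.3 \
  -- drush core:cron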
In the case of the Drupal containers, we’re running a custom Drupal image that bakes our code base in. The database deployment, for its part, uses a stock mariadb image.
Our custom Drupal image, by including the project code base, provides us with reproducible builds. We can deploy this same image to multiple environments with confidence that it will be consistent across all of them.
Both the database, and the Drupal site are connected to storage via Persistent Volume Claims (PVCs). More on this later.
We use Terraform to deploy the clusters themselves to OpenStack service providers. With Terraform, we can define the size of the Kubernetes clusters that we want to run and launch them programmatically.
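As a rough sketch (the variable names are hypothetical; the real definitions live in our Terraform code), launching or resizing a cluster then looks something like:

# Declare the desired cluster, preview the changes, then apply them.
terraform init
terraform plan -var 'cluster_name=stable' -var 'node_count=3'
terraform apply -var 'cluster_name=stable' -var 'node_count=3'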
Because the initial application was intended to be a SaaS product, we ended up engineering a multi-cluster multi-environment architecture. For this project, we had a development cluster and a production cluster. We have since renamed them “unstable” and “stable” respectively, to indicate which types of environments are appropriate to host on each.
Our client was targeting their products at large financial institutions. As such, we were anticipating fairly stringent security requirements. Each financial institution wants their client data isolated from any other users of the system. By supporting multiple clusters and multiple environments per cluster, we provided a multi-tenant system that could isolate data at a number of different levels.
While that capability is certainly interesting, we have primarily been implementing and using simpler development workflows. In our Drupal projects, we have primarily used a single cluster with testing, staging and production environments. These environments all run within a single Kubernetes cluster. However, they isolate resources effectively by using K8S namespaces.
Once we have a cluster running, most of the workflows happen within K8S. We keep these logically segmented using namespaces. Together with dedicated storage, namespaces define environments. Within these environments, we deploy the configuration for the Drupal app itself.
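A minimal sketch of that segmentation, using plain kubectl (the project and environment names are hypothetical):

# Each environment gets its own namespace on the shared cluster.
kubectl create namespace myproject-testing
kubectl create namespace myproject-staging
kubectl create namespace myproject-prod
# Resources are then inspected (and deployed) per namespace.
kubectl -n myproject-staging get all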
In designing this architecture, we also wanted to take into account persistent data. In both the Django and the Drupal scenarios, we had data being stored in a database and files that were uploaded into the application and stored on the filesystem.
In order for this data to persist between rebuilds of the application itself, we extended our concept of an environment to include not only the namespace, but also the Persistent Volume Claims (PVCs) where we store this data. These are all defined as Kubernetes resources, but are handled separately, so as never to mistakenly delete files or data.
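In practice, that separation means a rebuild can tear down and recreate everything in an environment except its data. A hypothetical illustration (resource names invented):

# Rebuilding the app: deployments are deleted and re-created...
kubectl -n myproject-staging delete deployment drupal database
# ...but the PVCs are left alone, and only ever deleted explicitly.
kubectl -n myproject-staging get pvc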
This architecture basically comes down to generating various configuration files and instantiating the resources that they describe. We have wrapped these in Drumkit (GNU Make) targets that template the configuration files (using Mustache), and run all necessary commands in the correct order, with the appropriate options and arguments.
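Conceptually, each such target boils down to rendering a template and applying the result, roughly like this (file, target and data-file names are hypothetical; the mustache CLI stands in for Drumkit's actual templating step):

# Render the environment's variables into a manifest, then apply it.
mustache staging-vars.yml drupal-deployment.yaml.mustache > drupal-deployment.yaml
kubectl -n myproject-staging apply -f drupal-deployment.yaml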
This mirrors the functionality of Provision in Aegir. Our intention is to use this system as a backend for Aegir5. Watch for an upcoming blog post which tackles our planning and vision for next steps!
</content:encoded>
    </item>
    
    <item>
      <title>Moving Terraform State from OpenStack Swift to GitLab</title>
      <link>https://consensus.enterprises/blog/moving-terraform-state-from-openstack-swift-to-gitlab/</link>
      <pubDate>Mon, 09 Jan 2023 09:00:00 -0400</pubDate>
      
      
      <guid isPermaLink="false">Moving Terraform State from OpenStack Swift to GitLab on Consensus Enterprises Blog published Mon, 09 Jan 2023 09:00:00 -0400</guid>
      
<description>For our cloud computing, we typically use an OpenStack provider because of its open-source nature: There’s no vendor lock-in, and the IaaS code is peer-reviewed, unlike that of providers such as AWS, Azure, GCP, etc. (Shout out to Vexxhost for having great support!) As such, we’ve been using OpenStack’s Swift object storage service for storing Terraform’s state, which allows Terraform to track all of the resources it manages for automating infrastructure.
Recently, however, support for the Swift backend …</description>
<content:encoded>For our cloud computing, we typically use an OpenStack provider because of its open-source nature: There’s no vendor lock-in, and the IaaS code is peer-reviewed, unlike that of providers such as AWS, Azure, GCP, etc. (Shout out to Vexxhost for having great support!) As such, we’ve been using OpenStack’s Swift object storage service for storing Terraform’s state, which allows Terraform to track all of the resources it manages for automating infrastructure.
Recently, however, support for the Swift backend has been removed. If you’re still using Swift for this purpose, you’ll need to migrate your Terraform state files to another backend. Because the official migration documentation is sparse, I’ll describe how to migrate from Swift to GitLab-managed Terraform state. GitLab is a fantastic option because it can be used to manage so many other aspects of your project that you need anyway: Git repository hosting, issue tracking, CI/CD, etc. We use GitLab for all of our projects so it’s a great fit for us.
The actual step of migrating the data is well supported, but there’s some required set-up before and after.
 When you change backends, Terraform gives you the option to migrate your state to the new backend. This lets you adopt backends without losing any existing state.
1. Downgrade to the latest pre-1.3 Terraform version, e.g.:

   sudo apt install terraform=1.2.9

2. Navigate to your Terraform directory:

   cd /path/to/git/repository/terraform

3. Move your local state files out of the way, as they could be set up for a different environment:

   mv .terraform /tmp

4. Set up your environment variables to connect to your existing state backend, e.g.:

   source ../openstackrc/vexxhost-...-staging-ca-ymq-1.openrc.sh

5. Initialize Terraform from the remote state:

   terraform init

6. Back up your current state:

   cp .terraform/terraform.tfstate terraform.tfstate.backup-staging

7. In your Terraform code, in your backend stanza, replace swift with http.

8. Unset the old state environment variables:

   export TF_CLI_ARGS_init=

9. Fetch one of your GitLab personal access tokens with the api permission. If you don’t have any that aren’t expired, create a new one in your settings.

To actually migrate the data, the GitLab documentation says to set a single environment variable, and then manually run terraform init with many options. Given that this is error-prone and not easily repeatable, I’d recommend using a shell script (or similar) instead.
Create a file named setup-terraform-variables, and populate it like so:
#!/usr/bin/env bash
#############################################################################
## Set up Terraform variables.
#############################################################################

# Set up the Gitlab.com state backend.
export OS_PROJECT_CLOUD="$OS_PROJECT_DESCRIPTION-$OS_PROJECT_ENVIRONMENT-$OS_REGION_NAME"
export TF_GITLAB_PROJECT_ID=""

echo "Please enter your gitlab.com username: "
read -r TF_GITLAB_USERNAME
export TF_GITLAB_USERNAME

echo "Please enter your gitlab.com personal access token: "
read -sr TF_GITLAB_PASSWORD
export TF_GITLAB_PASSWORD

export TF_STATE_ADDRESS="https://gitlab.com/api/v4/projects/${TF_GITLAB_PROJECT_ID}/terraform/state/${OS_PROJECT_CLOUD}"

export TF_CLI_ARGS_init="\
-backend-config='address=$TF_STATE_ADDRESS' \
-backend-config='lock_address=$TF_STATE_ADDRESS/lock' \
-backend-config='unlock_address=$TF_STATE_ADDRESS/lock' \
-backend-config='username=$TF_GITLAB_USERNAME' \
-backend-config='password=$TF_GITLAB_PASSWORD' \
-backend-config='lock_method=POST' \
-backend-config='unlock_method=DELETE' \
-backend-config='retry_wait_min=5'"

You can set other Terraform variables in here as well, and include it in other deployment-environment-specific shell scripts that you run to set up each one. For example, if you’re using OpenStack generally, these would be your openstackrc files, which contain your credentials for accessing the API.
For a further optimization, you can write the GitLab credentials to a local file so as not to have to enter them every time, but I’ll leave this as an exercise to the reader. (If I get a chance, I’ll come back here and update it.)
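In the meantime, here’s a minimal sketch of what that could look like (the file path and behaviour are assumptions, not part of the set-up above):

# Source cached GitLab credentials if present; fall back to prompting.
# The file is expected to export TF_GITLAB_USERNAME and TF_GITLAB_PASSWORD.
GITLAB_CREDENTIALS="$HOME/.config/terraform-gitlab-credentials.sh"
if [ -f "$GITLAB_CREDENTIALS" ]; then
  source "$GITLAB_CREDENTIALS"
else
  echo "Please enter your gitlab.com username: "
  read -r TF_GITLAB_USERNAME
  export TF_GITLAB_USERNAME
  echo "Please enter your gitlab.com personal access token: "
  read -sr TF_GITLAB_PASSWORD
  export TF_GITLAB_PASSWORD
fi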
In your Terraform configuration files, it’s necessary to change the backend type from Swift to HTTP.
 terraform {
-  backend "swift" {
+  backend "http" {
     # Must be read from environment variable `TF_CLI_ARGS_init` because normal
     # variables cannot be used here.
   }
 }

If you’re wondering why we need to use TF_CLI_ARGS_init, and can’t use Terraform variables in the stanza, see my earlier article Setting Deployment Environments’ Terraform State Backends with Environment Variables.
You can now run:
terraform init

You should now see something like this, which requires your confirmation part-way through.
Initializing the backend...
Terraform detected that the backend type changed from "swift" to "http".

Acquiring state lock. This may take a few moments...
Acquiring state lock. This may take a few moments...

Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "swift" backend
  to the newly configured "http" backend. No existing state was found in the
  newly configured "http" backend. Do you want to copy this state to the new
  "http" backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Releasing state lock. This may take a few moments...

Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...

[...]

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

This will purge your old state, and set it up to match the new remote state.
1. Move your local state files out of the way, as they reflect the old state backend:

   mv .terraform /tmp

2. Upgrade Terraform to the latest version, e.g.:

   sudo apt install terraform

3. Set up your environment variables again, using the updated code, to connect to your desired cloud environment. This script must include setup-terraform-variables as discussed above:

   source ../openstackrc/vexxhost-...-staging-ca-ymq-1.openrc.sh

4. Initialize Terraform from the remote state:

   terraform init

You can now see any of your state files in the GitLab Web UI on your GitLab project’s page. Simply navigate to Infrastructure - Terraform, and they’ll be listed.
</content:encoded>
    </item>
    
    <item>
      <title>Setting Deployment Environments&#39; Terraform State Backends with Environment Variables</title>
      <link>https://consensus.enterprises/blog/setting-environments-terraform-state-backends-with-environment-variables/</link>
      <pubDate>Mon, 12 Dec 2022 14:00:00 -0400</pubDate>
      
      
      <guid isPermaLink="false">Setting Deployment Environments&#39; Terraform State Backends with Environment Variables on Consensus Enterprises Blog published Mon, 12 Dec 2022 14:00:00 -0400</guid>
      
      <description>Terraform is an essential tool for automating cloud-computing infrastructure and storing it in code (IaC). While there are several ways to navigate between deployment environments (e.g. Dev, Staging &amp; Prod), I’d like to talk about how this can be done with environment variables, and explain why it can’t be done more naturally with Terraform variables.
I originally wrote this as a comment on the feature request Using variables in terraform backend config block, which explains Terraform’s design …</description>
      <content:encoded>Terraform is an essential tool for automating cloud-computing infrastructure and storing it in code (IaC). While there are several ways to navigate between deployment environments (e.g. Dev, Staging &amp; Prod), I’d like to talk about how this can be done with environment variables, and explain why it can’t be done more naturally with Terraform variables.
I originally wrote this as a comment on the feature request Using variables in terraform backend config block. That request explains the design limitations that prevent Terraform from allowing any variables in backend configurations, which it uses to determine where to store its state data files (the files that track the cloud resources it manages).
To illustrate, this is where variables can’t be used, even though the configuration must change when the environment changes:
terraform {
  backend "whatever" {
    [...]
  }
}

Alternatively, it’s possible to have a single backend state configuration that stores data for multiple environments using Workspaces. You can then use Terraform CLI commands to switch between them. However, this solution is not appropriate for all use cases, as per the documentation:
 Workspaces are not appropriate for system decomposition or deployments requiring separate credentials and access controls.
 If this isn’t a problem for you, Workspaces is a good option. Otherwise, please keep reading.
In demonstrating a solution, I’m going to be working with OpenStack, which is the IaaS run by many cloud providers, but the same concept can be applied to other IaaS, such as AWS, Azure, GCP, etc. Here, the backend state data will be stored in OpenStack’s Swift object storage service.
Update (2023-04-11): While the example below is still valid conceptually, Swift can no longer be used as a Terraform backend because support for it has been removed. If you’re still using Swift for this purpose, the official migration documentation is sparse. However, I provide guidance in a more recent article: Moving Terraform State from OpenStack Swift to GitLab.
Typically, you’d download an OpenStack RC file, which contains the credentials needed to access the API. In each of these (e.g. vexxhost-abc-prod-ca.openrc.sh, where the format is provider-project-environment-region), I append the following lines to the end:
[...]
#############################################################################
## Additional set-up not included with IaaS provider's OpenStack RC files ###
#############################################################################
export OS_PROJECT_ENVIRONMENT="abc-prod-ca"
source $(dirname "$0")/setup-terraform-variables.sh

The sourced (included) file then looks like:
#!/usr/bin/env bash
#############################################################################
## Set up Terraform variables.
#############################################################################

# Set up backend state container names.
export OS_PROJECT_STATE="$OS_PROJECT_ENVIRONMENT-terraform-state"
export OS_PROJECT_STATE_ARCHIVE="$OS_PROJECT_STATE-archive"

export TF_CLI_ARGS_init="\
-backend-config='container=$OS_PROJECT_STATE' \
-backend-config='archive_container=$OS_PROJECT_STATE_ARCHIVE'"

# You can set other TF variables in here as well.
echo "Please enter the outgoing e-mail account's password: "
read -sr TF_VAR_smtp_password_unquoted
export TF_VAR_smtp_password="\"$TF_VAR_smtp_password_unquoted\""

The special environment variable here is TF_CLI_ARGS_init. This is what Terraform uses to configure the backend, if it’s set. So by changing that environment variable, you can change the backend configuration. All that’s needed in the code is the following, basically just the backend type, with details being pulled in from the environment.
In main.tf (or wherever), the backend definition:
terraform {
  backend "swift" {
    # Must be read from environment variable `TF_CLI_ARGS_init` because normal
    # variables cannot be used here.
  }
}

To switch between environments, I simply source another OpenStack RC file; I have one for every environment that I care about. With the above set-up, it also switches the Swift object storage containers used for backend state. You can then run terraform init.
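For example, switching to the production environment and reinitializing might look like this (following the RC file naming convention above):

source ../openstackrc/vexxhost-abc-prod-ca.openrc.sh
terraform init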
I’ve found that sometimes it’s necessary to delete your .terraform directory before rerunning terraform init though, or it’ll get confused and you’ll get strange error messages. So as standard practice, I run the following command to remove it first:

mv .terraform /tmp

You cannot use Terraform variables to vary the backend state configuration. You can, however, use the environment variable TF_CLI_ARGS_init instead. Run a script to set this environment variable with the configuration matching the deployment environment you’d like to work with. Simply specify the backend type in the Terraform code, and TF_CLI_ARGS_init along with your script(s) will take care of the rest.
</content:encoded>
    </item>
    
    <item>
      <title>Protecting your cloud networks with WireGuard VPN and Ansible</title>
      <link>https://consensus.enterprises/blog/protecting-cloud-networks-wireguard-ansible/</link>
      <pubDate>Fri, 24 Jul 2020 11:02:26 -0400</pubDate>
      
      
      <guid isPermaLink="false">Protecting your cloud networks with WireGuard VPN and Ansible on Consensus Enterprises Blog published Fri, 24 Jul 2020 11:02:26 -0400</guid>
      
<description> Within cloud computing, there are various types of sites and services not meant for public consumption (e.g. analytics software, databases, log servers, etc.). For security reasons, it’s best to keep these accessible only via the private network, which is behind the firewall.
To provide access to these resources, a virtual private network (VPN) should be used, with network access granted only to trusted individuals within the organization.
Traditionally, OpenVPN, IPsec and other solutions were …</description>
<content:encoded> Within cloud computing, there are various types of sites and services not meant for public consumption (e.g. analytics software, databases, log servers, etc.). For security reasons, it’s best to keep these accessible only via the private network, which is behind the firewall.
To provide access to these resources, a virtual private network (VPN) should be used, with network access granted only to trusted individuals within the organization.
Traditionally, OpenVPN, IPsec and other solutions were the go-to options within the open-source software space. However, these are often complex to set up, and their relatively massive code bases make them difficult to maintain.
For example, OpenVPN requires that a certificate authority (CA) be set up. This is a complex piece of software, which shouldn’t be necessary for running a VPN. WireGuard simply requires the exchange of public keys in order to set up a secure connection, much like SSH and PGP.
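For example, generating a WireGuard keypair is a single pipeline with the standard wg tool:

# Create a private key and derive the corresponding public key.
wg genkey | tee privatekey | wg pubkey > publickey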
There’s been a lot of interest in WireGuard lately, notably because:
- It was recently added to the Linux kernel.
- NordVPN, one of the major VPN service providers, has started using it.
- Mozilla, the company behind the Firefox Web browser, just started offering it as a service.
- It’s recommended in The Definitive 2019 Guide to Cryptographic Key Sizes and Algorithm Recommendations.

However, in the Why Not WireGuard article, some opposition was raised. Let’s tackle some of the points raised there.
On dynamic IP addresses: fine, but we’re doing the opposite. The clients can have dynamic IP addresses, but the server never will. So it’s irrelevant for this use case.
On the complexity of setting it up and distributing configuration: that’s precisely the purpose of this article: To introduce an easy way to set it up and maintain it with Ansible.
Lacking cipher agility is actually a good thing. A better approach is to use versioned protocols. And it’s actually no more difficult to upgrade WireGuard clients than anything else. Both of these non-issues are discussed very nicely in the article Against Cipher Agility in Cryptography Protocols.
As for WireGuard being an out-of-tree kernel module: since it was recently added to the Linux kernel, this is no longer an issue.
While performance can always be improved, this doesn’t appear to be a critical issue for the application. For most use cases, it’s perfectly usable.
None of the above “issues” are actually a problem here.
Ansible allows for automated deployment of configuration, which removes the need for manually installing, configuring and maintaining applications. It provides tonnes of modules, including those for files, storage, system, networking and even cloud provisioning (although I would generally recommend Terraform for this purpose).
Its units of work, called “tasks”, are run sequentially (procedurally) in “roles” and “playbooks” to perform operations such as installing server software, configuring it, and handling various other types of system administration. Ansible strives for simplicity, resulting in playbooks that are essentially self-documenting. It can safely be run multiple times (as it strives to be idempotent), running tasks only when necessary and leaving already-configured items as-is.
While there were several WireGuard roles available for installing and maintaining the application, they either:
- didn’t cater to the cloud gateway VPN use case,
- lacked documentation, and/or
- intentionally omitted critical elements (e.g. packet forwarding to internal hosts) for implementation by the user.

As such, I’ve written a comprehensive one.
It’s packaged as a collection, as this is the newer distribution format that doesn’t concern itself with the location of source control repositories. Traditionally, it was necessary for roles to be hosted on GitHub for them to be published on Galaxy, the site for sharing Ansible contributions. As I prefer GitLab for hosting code repositories, the collection format seemed more natural. The project is therefore hosted on GitLab.com.
The collection contains the single WireGuard role, and can be installed with ansible-galaxy (Ansible 2.9+). For older versions of Ansible, simply clone the Git repository and create a symbolic link to roles/wireguard_cloud_gateway within it.
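Installation then looks like the following (NAMESPACE.COLLECTION is a placeholder; the collection’s canonical name is listed on the project’s GitLab page):

# Requires Ansible 2.9 or later.
ansible-galaxy collection install NAMESPACE.COLLECTION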
Documentation can be found in the role’s README.
Some of the other roles I researched didn’t provide much support for configuring the client side of the VPN, meaning the devices which connect to the cloud gateway server to access private network resources.
My role, on the other hand, can be run in either client or server mode: the same role can be used for configuring both. Running it in server mode configures the server (on the gateway VM), and running it in client mode configures the client devices (which connect to the server to gain access to the private network).
For cloud security akin to traditional firewalls, security groups are essential for protecting virtual-machine (VM) compute instances. While these can be configured manually, ideally such configuration would be infrastructure as code (IaC) implemented via a tool such as Terraform, stored in a version control system (VCS) such as Git.
In a typical VPN-server set-up, the incoming (“ingress”) rules for such a “VPN” security group would block access to all ports except the one upon which the VPN communicates. This configuration should be applied to the VM that will be running WireGuard. However, if this is the case, using Ansible to install it won’t work because Ansible uses SSH to connect to the VM, and the SSH port is blocked.
In order to allow for such a secure set-up, a security group ID (e.g. “public_ssh”) can be provided to the role as a variable, which will be used to temporarily allow SSH access. Once the installation is complete, this temporary access will be revoked.
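An invocation might look like this (the playbook and variable names are hypothetical; the role’s README documents the actual variable):

# Temporarily open SSH via the "public_ssh" security group for the run.
ansible-playbook wireguard-server.yml -e 'ssh_security_group=public_ssh'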
For those of us who rely on VPN technology, it’s often necessary to connect to multiple VPNs at the same time, or at least to prevent network resource IDs from overlapping. For example, you want to prevent two VMs on different networks from having the same IP address.
WireGuard supports this by allowing multiple interfaces. By default, wg0 is used as the first one, but wg1, wg2, etc. can all coexist. If each remote network can exist on a different subnet, there’s no conflict from the client perspective. For example:
- wg0 can be used to access network A, with subnet 10.1.0.0/16
- wg1 can be used to access network B, with subnet 10.2.0.0/16

To set this up in your Ansible playbook when calling the role, set the service_interface variable. By default, it’s wg0. Also, be sure to set the client_accessible_ips properly; this defines the subnet. For details, see the default variables file.
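A hypothetical invocation for the second network might then look like this (the playbook name is invented, and the exact variable formats are documented in the role’s defaults file):

# Configure an additional WireGuard interface for network B.
ansible-playbook wireguard.yml \
  -e 'service_interface=wg1' \
  -e 'client_accessible_ips=10.2.0.0/16'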
The role was originally written for specific systems, notably OpenStack networks and Ubuntu VMs. However, we’d like to see the role support as many systems as possible (e.g. the IaaS platforms Amazon Web Services (AWS), Microsoft’s Azure, Google Cloud Platform (GCP), Digital Ocean, etc. and non-Debian-based operating systems (OSes) such as CentOS, Red Hat, etc.).
If you work with these systems, we’re more than happy to accept your supporting code via merge requests. Otherwise, if you’re able to provide funding, we can add support for these systems on your behalf.
To get in touch with us for that, or for any other reason, please use our contact form. We provide consulting in several areas, such as:
- Enterprise cloud architecture
- Cloud computing infrastructures as a service (IaaS)
- Infrastructure automation with Terraform
- Automating full-stack configuration with Ansible
- OpenStack, Amazon Web Services (AWS), Google Cloud Platform (GCP) &amp; Microsoft Azure consulting
- Virtual private networking (VPNs) with OpenVPN and WireGuard
</content:encoded>
    </item>
    
  </channel>
</rss>
