Configure EC2 as a Consul Client for HCP Consul
HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service (SMaaS) version of Consul. After you deploy an HCP Consul server cluster, you must deploy Consul clients into your network so you can leverage Consul's full feature set, including service mesh and service discovery. HCP Consul supports Consul clients running on EKS, EC2, and ECS workloads.
In this tutorial, you will deploy and provision a Consul client running on an EC2 instance that connects to your HCP Consul cluster. In the process, you will review the provisioning script to better understand the steps required to properly configure an EC2 instance to connect and interact with an HCP Consul cluster.
Prerequisites
For this tutorial, you will need:
- Terraform 0.14+ CLI installed locally.
- An HCP account configured for use with Terraform.
- An AWS account with AWS credentials configured for use with Terraform.
Clone example repository
In your terminal, clone the example repository. This repository contains Terraform configuration to deploy different types of Consul clusters, including the one you will need in this tutorial.
Navigate to the project directory in the cloned repository.
Review configuration
The project directory contains two sub-directories:
The `1-vpc-hcp` subdirectory contains Terraform configuration to deploy an AWS VPC and underlying networking resources, an HCP HashiCorp Virtual Network (HVN), and an HCP Consul cluster. In addition, it uses the `hashicorp/hcp-consul/aws` Terraform module to set up all networking rules that allow a Consul client to communicate with the HCP Consul servers. This includes setting up the peering connection between the HVN and your VPC, setting up the HCP routes, and creating AWS ingress rules (see `datacenter-deploy-ec2-hcp/1-vpc-hcp/main.tf`).

Note: The `hashicorp/hcp-consul/aws` Terraform module creates a security group that allows TCP/UDP ingress traffic on port `8301` and allows all egress. The egress security rule lets the EC2 instance download dependencies required for the Consul client, including the Consul binary and Docker.

The `2-ec2-consul-client` subdirectory contains Terraform configuration that creates an AWS key pair and deploys an EC2 instance. The EC2 instance uses a `cloud-init` script to automate the Consul client configuration. In the Review Consul client configuration for EC2 section, you will review the automation scripts in more detail.
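To make the module's role concrete, the following is an illustrative sketch of how `1-vpc-hcp` might wire the module to the HVN and VPC. The input names and references here are assumptions for illustration, not the module's documented interface; consult the `hashicorp/hcp-consul/aws` page on the Terraform Registry for the exact variables.

```hcl
# Hypothetical sketch only -- input names are assumptions.
module "aws_hcp_consul" {
  source = "hashicorp/hcp-consul/aws"

  hvn             = hcp_hvn.main              # the HVN peered with the VPC
  vpc_id          = module.vpc.vpc_id         # your AWS VPC
  subnet_ids      = module.vpc.public_subnets
  route_table_ids = module.vpc.public_route_table_ids
}
```

The module encapsulates the peering connection, HVN routes, and ingress rules so you do not have to define each resource by hand.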
This tutorial intentionally separates the Terraform configuration into two discrete steps. This process reflects Terraform best practices. By dividing the HCP Consul cluster management from the Consul client management, you can separate the duties and reduce the blast radius.
Deploy HCP Consul
Navigate to the `1-vpc-hcp` directory.
Initialize the Terraform configuration.
Next, apply the configuration. Respond `yes` to the prompt to confirm.
Notice that Terraform displays the outputs created from the apply.
Create terraform.tfvars file for Consul client directory
Since you created the underlying infrastructure with Terraform, you can use the outputs to help you deploy the Consul clients on an EC2 instance.
Create a `terraform.tfvars` file in the `2-ec2-consul-client` directory with the Terraform outputs from this project.
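A `terraform.tfvars` file along these lines is what the next step expects. The variable names and values below are placeholders for illustration; match them to the variables declared in `2-ec2-consul-client` and the actual outputs from your `1-vpc-hcp` apply.

```hcl
# terraform.tfvars -- placeholder values; use your own outputs.
# Variable names are illustrative, not the repository's exact schema.
vpc_id     = "vpc-0123456789abcdef0"
subnet_id  = "subnet-0123456789abcdef0"
cluster_id = "my-hcp-consul-cluster"
```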
Review Consul client configuration for EC2
Navigate to the `2-ec2-consul-client` directory.
Review Terraform configuration
Open `main.tf`. This Terraform configuration creates an AWS key pair, a security group, and an EC2 instance. The EC2 instance uses a `cloud-init` script to automate the Consul client configuration so it can connect to your HCP Consul cluster. The AWS key pair and security group let you SSH into your EC2 instance.
Notice that the Terraform configuration uses data sources to retrieve information about your AWS and HCP resources.
The `aws_instance.consul_client` resource defines your EC2 instance that will serve as a Consul client. Notice the following attributes:
- The `count` attribute lets you easily scale the number of Consul clients running on EC2 instances. The `cloud-init` script lets you automatically configure each EC2 instance to connect to your HCP Consul cluster.
- The `vpc_security_group_ids` attribute references a security group that allows TCP/UDP ingress traffic on port `8301` and allows all egress. The ingress traffic lets the HCP Consul server cluster communicate with your Consul clients. The egress traffic lets you download the dependencies required for a Consul client, including the Consul binary.
- The `key_name` attribute references a key pair that will let you SSH into the EC2 instance.
- The `user_data` attribute references the `scripts/user_data.sh` and `scripts/setup.sh` automation scripts that configure and set up a Consul client on your EC2 instance. Notice that the automation scripts reference the HCP Consul cluster's CA certificate, client configuration, and ACL tokens. These values are crucial for the Consul client to securely connect to your HCP Consul cluster.
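The attributes above can be sketched as a condensed resource block. The values and helper references here are illustrative assumptions, not the repository's exact code; `templatefile` and `base64gzip` are standard Terraform functions.

```hcl
# Hypothetical sketch of the shape described above.
resource "aws_instance" "consul_client" {
  count                  = 1                  # raise to scale out clients
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.consul_client.id]
  key_name               = aws_key_pair.consul_client.key_name

  # cloud-init entrypoint that installs and configures the Consul client
  user_data = templatefile("${path.module}/scripts/user_data.sh", {
    setup = base64gzip(file("${path.module}/scripts/setup.sh"))
  })
}
```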
Review client configuration files
The client configuration file contains information that lets your Consul client connect to your specific HCP Consul cluster.
The Terraform configuration file retrieves the values directly from the HCP Consul data source.
The following is a sample client configuration file.
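The sketch below is representative only; every value is a placeholder, and your generated file will contain real values pulled from the HCP Consul data source.

```json
{
  "acl": {
    "enabled": true,
    "tokens": {
      "agent": ""
    }
  },
  "ca_file": "./ca.pem",
  "encrypt": "<gossip-encryption-key>",
  "retry_join": ["<hcp-consul-private-endpoint>"],
  "auto_encrypt": {
    "tls": true
  }
}
```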
Notice these attributes in the client configuration file:
- The `acl.enabled` setting is set to `true`, which ensures that only requests with a valid token can access resources in the datacenter. To add your client, you will need to configure an agent token. The automation script configures this automatically.
- The `ca_file` setting references the `ca.pem` file. The automation script will update this to point to `/etc/consul.d`.
- The `encrypt` setting is set to your Consul cluster's gossip encryption key. Do not modify the encryption key that is provided for you in this file.
- The `retry_join` setting is configured with the private endpoint address of your HCP Consul cluster's API. This is the address your client will use to interact with the servers running in the HCP Consul cluster. Do not modify the value that is provided for you in this file.
- The `auto_encrypt.tls` setting is set to `true` to ensure transport layer security is enforced on all traffic with and between Consul agents.
Review provisioning scripts
The `2-ec2-consul-client/scripts` directory contains all the automation scripts.
- The `user_data.sh` file serves as an entrypoint. It loads, configures, and runs `setup.sh`.
- The `service` file is a template for a systemd service. This lets the Consul client run as a daemon (background) service on the EC2 instance. It will also automatically restart the Consul client if it fails.
- The `setup.sh` script contains the core logic to configure the Consul client. First, the script sets up container networking (`setup_networking`), then downloads the Consul binary and Docker (`setup_deps`).
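A typical systemd unit for running a Consul client as a restartable background service looks like the following. This is a representative sketch; the paths and options are common defaults, not the repository's exact `service` template.

```ini
# Representative sketch -- not the repository's exact template.
[Unit]
Description=Consul client agent
Requires=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

The `Restart=on-failure` directive is what allows systemd to automatically restart the Consul client if it fails.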
The `setup_consul` function creates the `/etc/consul.d` and `/var/consul` directories, which serve as the Consul configuration and data directories respectively. The EC2 instance's Terraform configuration defines these directory paths.
Next, the `setup_consul` function configures and moves the CA file and client configuration file to their respective destinations in `/etc/consul.d`. Notice that the script updates the client configuration's `ca_file` path, ACL token, ports, and bind address.
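These two steps can be sketched in shell as follows. This is a simplified, hypothetical rendering: it writes under scratch directories instead of `/etc/consul.d` and `/var/consul` so it can run unprivileged, and it fabricates a stand-in `client.json`, whereas the real script operates on the files delivered via `cloud-init`.

```shell
set -euo pipefail

# Scratch roots standing in for /etc/consul.d and /var/consul so this
# sketch can run without root; the real script uses the absolute paths.
CONFIG_DIR="$(pwd)/consul-config"
DATA_DIR="$(pwd)/consul-data"
mkdir -p "$CONFIG_DIR" "$DATA_DIR"

# Stand-in for the client configuration delivered by cloud-init.
cat > "$CONFIG_DIR/client.json" <<'EOF'
{
  "ca_file": "./ca.pem",
  "acl": { "tokens": { "agent": "" } }
}
EOF

# Point ca_file at its final location, as setup_consul does after
# moving ca.pem into the configuration directory.
sed -i 's|"ca_file": "./ca.pem"|"ca_file": "/etc/consul.d/ca.pem"|' \
  "$CONFIG_DIR/client.json"
```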
Finally, the `setup.sh` file enables and starts the Consul systemd service.
Create SSH key
The configuration scripts included in the AMIs rely on a user named `consul-client`. Create an SSH key to pair with the user so that you can securely connect to your instances.
Generate a new SSH key named `consul-client`. The argument provided with the `-f` flag creates the key in the current directory and creates two files called `consul-client` and `consul-client.pub`. Change the placeholder email address to your email address.
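The step above looks like the following (`-N ""` supplies the blank passphrase non-interactively, matching the "press enter" step below; drop it if you prefer to be prompted). The email address is a placeholder.

```shell
# Generate an RSA key pair named consul-client in the current directory.
ssh-keygen -q -t rsa -b 4096 -C "your_email@example.com" -f consul-client -N ""
```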
When prompted, press enter to leave the passphrase blank on this key.
Deploy Consul client on EC2
Find the `terraform.tfvars` file. This file contains information about your VPC and HCP deployment and should look like the following.
If you do not have this file, go back to the Create terraform.tfvars file step to create it.
Initialize the Terraform configuration.
Next, apply the configuration. Respond `yes` to the prompt to confirm.
Verify Consul client
Now that you have deployed the Consul clients on an EC2 instance, you will verify that you have a Consul deployment with at least 1 server and 1 client.
Retrieve your HCP Consul dashboard URL and open it in your browser.
Next, retrieve your Consul root token. You will use this token to authenticate your Consul dashboard.
In your HCP Consul dashboard, sign in with the root token you just retrieved. After you sign in, click on Nodes to find the Consul client.
Note: If your Consul client is unable to connect to your HCP Consul server cluster, verify that your VPC, HVN, peering connection, and routes are configured correctly. Refer to the example repository for each resource's configuration.
You can also SSH into your EC2 instance to verify that it is running the Consul client and connected to your HCP Consul cluster.
First, SSH into your EC2 instance.
Then, view the members in your Consul datacenter. Replace `ACL_TOKEN` with the Consul root token (the `consul_root_token` output). Notice that the command returns both the HCP Consul server nodes and client nodes.
Next steps
In this tutorial, you deployed a Consul client and connected it to your HCP Consul cluster. To learn more about Consul's features, and for step-by-step examples of how to perform common Consul tasks, complete one of the Get Started with Consul tutorials.
- Register a Service with Consul Service Discovery
- Secure Applications with Service Sidecar Proxies
- Explore the Consul UI
- Create a Consul service mesh on HCP using Envoy as a sidecar proxy
If you encounter any issues, please contact the HCP team at support.hashicorp.com.