How to automate HashiCorp Vault backup and restoration in AWS EKS with Terraform
So you've moved your organization's secret management to HashiCorp Vault on Kubernetes. Everything is working well, but you are about to promote to production, and that raises questions about stability, recovery, and keeping a fully operational Vault serving your deployments.
So how do you achieve that? Since you already have an HA (High Availability) Vault running in your cluster, the answer is Vault snapshots: periodically taking snapshots and storing them in object storage such as AWS S3.
Prerequisites
- A working Vault deployment in your cluster, provisioned with Terraform
- A working S3 storage bucket to store your snapshots
So let's jump into it.
Setting Up Authentication with Vault and the S3 Bucket
If I had to guess, you probably think you are going to grab your AWS access keys and use them to authenticate Vault against AWS S3.
Well, no, you won't be doing that. Since your goal is to eliminate secrets in config files or plain text in the first place, it's time to set up the auth process properly.
Create an S3 Policy
You need to create a new file named vault-backup.tf and add the following code.
resource "AWS_iam_policy" "vault_backup_access_policy" {
name = "VaultBackupPolicyS3"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
]
Effect = "Allow"
Resource = [
"arn:AWS:s3:::YOUR BUCKET NAME",
"arn:AWS:s3:::YOUR BUCKET NAME/*",
]
},
]
})
}
Provision an IAM Role for Service Accounts (IRSA) for Vault S3 Access in EKS
Before you proceed to creating the IRSA, you need to create a ServiceAccount that will be trusted by the IAM role you create next.
resource "kubernetes_service_account_v1" "this" {
metadata {
name = "vault-snapshotter"
namespace = "vault"
annotations = {
"eks.amazonAWS.com/role-arn" = module.vault_irsa_role.iam_role_arn
}
}
# automount_service_account_token = "true"
}
Now it's time to create the IRSA. As you can see, I am using the IRSA-for-EKS Terraform module instead of creating roles and attaching policy documents by hand; the module makes creating an IRSA for EKS clean and fast.
module "vault_irsa_role" {
source = "terraform-AWS-modules/iam/AWS//modules/iam-role-for-service-accounts-eks"
version = "5.20.0"
role_name = "hashicorp-vault-snapshot"
role_policy_arns = {
policy = AWS_iam_policy.vault_backup_access_policy.arn
}
oidc_providers = {
ex = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["vault:vault-snapshotter"]
}
}
}
That's all; we are done configuring the auth process for AWS S3 bucket access from the Kubernetes cluster.
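If you want a quick way to confirm which IAM role the ServiceAccount annotation resolves to, you can optionally expose it as a Terraform output. This is just a small optional sketch; the output name below is arbitrary.
# Optional: surface the IRSA role ARN so you can check that the
# eks.amazonaws.com/role-arn annotation on the vault-snapshotter
# ServiceAccount points at the expected role after `terraform apply`.
output "vault_snapshot_role_arn" {
  value = module.vault_irsa_role.iam_role_arn
}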
Setting Up the Vault Kubernetes Auth Engine
As you may know, there are different ways to authenticate with Vault: the root token, AppRole, GitHub, and more. We won't be using any of those; we will be proceeding with the Vault Kubernetes auth engine.
First, you need to create a new namespace called vault-client.
resource "kubernetes_namespace" "vault-client" {
metadata {
name = "vault-client"
}
}
Then you will create a ServiceAccount in the vault-client namespace that Vault can use to authenticate within the cluster; its token (a JWT) is what you will hand to Vault so it can carry out the actions it needs.
resource "kubernetes_service_account_v1" "vault_auth" {
metadata {
name = "vault-auth"
namespace = kubernetes_namespace.vault-client
}
automount_service_account_token = "true"
}
You will also need to create a ClusterRoleBinding that grants this ServiceAccount the system:auth-delegator role, so Vault can use its token to validate the tokens of other ServiceAccounts in the cluster.
resource "kubernetes_cluster_role_binding" "vault_auth_role_binding" {
metadata {
name = "role-tokenreview-binding"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "system:auth-delegator"
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account_v1.vault_auth.metadata[0].name
namespace = kubernetes_namespace.vault-client.id
}
}
You will also have to create a Kubernetes Secret annotated with the ServiceAccount name (a service-account-token Secret), which you will later read through a data block and use to configure Kubernetes authentication in Vault.
resource "kubernetes_secret_v1" "vault_auth_sa" {
metadata {
name = kubernetes_service_account_v1.vault_auth.metadata[0].name
namespace = kubernetes_namespace.vault-client.id
annotations = {
"kubernetes.io/service-account.name" = kubernetes_service_account_v1.vault_auth.metadata[0].name
}
}
type = "kubernetes.io/service-account-token"
wait_for_service_account_token = true
}
The data block below makes the token and CA certificate from that Secret available for the next step, configuring the Vault Kubernetes auth engine.
data "kubernetes_secret_v1" "vault_auth_sa" {
metadata {
name = kubernetes_service_account_v1.vault_auth.metadata[0].name
namespace = kubernetes_namespace.vault-client.id
}
}
Configure the Vault Kubernetes Auth Engine
Now that you have created the ServiceAccount and, through the ClusterRoleBinding, allowed it to review other ServiceAccounts' tokens, it's time to configure authentication with Vault via the Kubernetes auth engine.
But before you proceed, you need to update your provider.tf file to include the Vault provider and point it at your existing Vault server URL.
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.15.2"
    }
  }
}

provider "vault" {
  skip_tls_verify = true
  address         = "https://vault.YOURDOMAIN.com" # or a port-forwarded local address such as https://localhost:8200
  # The provider reads its token from the VAULT_TOKEN environment variable
  # if you don't set one here.
}
Once you have added the Vault provider config, you can proceed to the next step: enabling the Kubernetes auth engine and granting the cluster access to Vault.
Since the Kubernetes auth engine may already be enabled in your Vault, terraform plan may show a modification rather than a brand-new Kubernetes auth engine.
resource "vault_auth_backend" "kubernetes" {
type = "kubernetes"
path = "kubernetes"
}
resource "vault_kubernetes_auth_backend_config" "config" {
backend = vault_auth_backend.kubernetes.path
kubernetes_host = module.eks.cluster_endpoint
kubernetes_ca_cert = data.kubernetes_secret_v1.vault_auth_sa.data["ca.crt"]
token_reviewer_jwt = data.kubernetes_secret_v1.vault_auth_sa.data["token"]
issuer = "api"
disable_iss_validation = "true"
}
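With the auth engine configured, the snapshot workflow will also need a Vault policy and a Kubernetes auth role bound to the vault-snapshotter ServiceAccount created earlier. The sketch below is one way to wire that up; the policy and role names are just examples, and the sys/storage/raft/snapshot path assumes Vault is running with integrated (Raft) storage.
# A minimal sketch (names are examples). The policy allows reading the Raft
# snapshot endpoint, and the role binds it to the vault-snapshotter
# ServiceAccount in the vault namespace created earlier.
resource "vault_policy" "snapshot" {
  name = "snapshot"

  policy = <<-EOT
    path "sys/storage/raft/snapshot" {
      capabilities = ["read"]
    }
  EOT
}

resource "vault_kubernetes_auth_backend_role" "snapshot" {
  backend                          = vault_auth_backend.kubernetes.path
  role_name                        = "vault-snapshotter"
  bound_service_account_names      = ["vault-snapshotter"]
  bound_service_account_namespaces = ["vault"]
  token_policies                   = [vault_policy.snapshot.name]
  token_ttl                        = 3600
}
If your Vault uses a different storage backend, or your snapshot job runs under a different ServiceAccount or namespace, adjust the path and bindings accordingly.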