Secure Secrets Management - Using HashiCorp Vault with GitLab CI/CD
· ☕ 12 min read
· 🐧 sysadmin
Introduction
Below is an integrated tutorial covering the installation of HashiCorp Vault on a separate server and its integration with GitLab Runners. This will ensure secure storage of secrets and their use in GitLab CI/CD pipelines.
The error “tls: failed to verify certificate: x509: cannot validate certificate for 10.10.0.150 because it doesn’t contain any IP SANs” means that the SSL certificate used by Vault does not list the server’s IP address in its Subject Alternative Name (SAN) field. The fix is to generate a new certificate whose SAN field includes that IP address. After performing the steps below, Vault will correctly accept HTTPS connections and allow further configuration and initialization.
2a. Create an OpenSSL Configuration File for Certificate with IP SAN
You need to generate a new certificate that includes the IP address as a SAN.
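A minimal sketch of such a configuration file and the commands to generate a self-signed certificate from it. The file names, the CN, and the 10.10.0.150 address follow this tutorial's example environment; adjust them to yours:

```shell
# Write an OpenSSL config whose v3 extensions include the IP SAN
cat > vault-openssl.cnf <<'EOF'
[req]
distinguished_name = req_distinguished_name
x509_extensions    = v3_req
prompt             = no

[req_distinguished_name]
CN = 10.10.0.150

[v3_req]
keyUsage         = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName   = @alt_names

[alt_names]
IP.1 = 10.10.0.150
EOF

# Generate a self-signed certificate and key using that config
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout vault.key -out vault.crt -config vault-openssl.cnf

# Confirm the SAN field now contains the IP address
openssl x509 -in vault.crt -noout -text | grep 'IP Address:10.10.0.150'
```

Point Vault's `tls_cert_file` and `tls_key_file` listener settings at the generated files, then restart the service.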
When you run `sudo journalctl -u vault.service`, you may see a warning like this:

```
WARN[0000]log.go:244 gosnowflake.(*defaultLogger).Warn DBUS_SESSION_BUS_ADDRESS envvar looks to be not set, this can lead to runaway dbus-daemon processes. To avoid this, set envvar DBUS_SESSION_BUS_ADDRESS=$XDG_RUNTIME_DIR/bus (if it exists) or DBUS_SESSION_BUS_ADDRESS=/dev/null.
```

This means the DBUS_SESSION_BUS_ADDRESS environment variable is not set, which can lead to runaway dbus-daemon processes and unnecessary resource consumption.
To solve this problem, set the DBUS_SESSION_BUS_ADDRESS environment variable in the ~/.bashrc file of the user that runs Vault.
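A sketch of the lines to append to that user's ~/.bashrc, following the two options suggested by the warning message itself:

```shell
# Use the user session bus if it exists; otherwise silence dbus with /dev/null
# (both values come straight from the warning message)
if [ -n "$XDG_RUNTIME_DIR" ] && [ -S "$XDG_RUNTIME_DIR/bus" ]; then
    export DBUS_SESSION_BUS_ADDRESS="$XDG_RUNTIME_DIR/bus"
else
    export DBUS_SESSION_BUS_ADDRESS=/dev/null
fi
```

After editing, reload the file with `source ~/.bashrc` (or log out and back in).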
Define Environment Variables in CI/CD Project Settings:
VAULT_ADDR = https://<vault_server_ip>:8200
VAULT_TOKEN = <vault_token>
If you add the VAULT_TOKEN and VAULT_ADDR environment variables in the CI/CD project settings in GitLab, you do not need to declare them again in the .gitlab-ci.yml file. GitLab will automatically pass these variables to all jobs in the pipeline.
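The pipeline below extracts values with the doubled path `.data.data.login` because Vault's KV version 2 engine nests the secret payload one level deep in its response. A sketch with a hard-coded sample response (in the real pipeline the JSON comes from `curl` against `$VAULT_ADDR/v1/secret/data/gitlab/awx`; the credential values here are fabricated):

```shell
# Sample of what Vault's KV v2 read endpoint returns (values are made up)
AWX_SECRET='{"data":{"data":{"login":"admin","password":"s3cret"},"metadata":{"version":1}}}'

# KV v2 wraps the secret in data.data, hence the doubled path in jq
AWX_USERNAME=$(echo "$AWX_SECRET" | jq -r '.data.data.login')
AWX_PASSWORD=$(echo "$AWX_SECRET" | jq -r '.data.data.password')
echo "$AWX_USERNAME"   # admin
```

If you ever move the secret to a KV v1 mount, the wrapper disappears and the path becomes a single `.data.login`.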
Add the Vault Certificate on GitLab Runners
Download SSL Certificate from Vault:
First, download the SSL certificate used by Vault:
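The pipeline below performs this step in its before_script; as standalone commands, assuming the Vault server from this tutorial at 10.10.0.150:8200 and the Alpine trust-store path used by the pipeline image, it looks like:

```shell
# Fetch the certificate presented by Vault and save it as PEM
echo -n | openssl s_client -connect 10.10.0.150:8200 -servername 10.10.0.150 \
  | openssl x509 > vault.crt

# Install it into the system trust store and refresh the CA bundle
cp vault.crt /usr/local/share/ca-certificates/vault.crt
update-ca-certificates
```

On Debian/Ubuntu runners the directory is the same, but the `ca-certificates` package must be installed first.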
The complete `.gitlab-ci.yml`:

```yaml
variables:
  # Defines repository URL
  REPO_URL: 'git@gitlab.sysadmin.homes:developers/taiko.git'
  # Defines branch to use
  BRANCH: 'main'
  # Path to store reports
  REPORT_PATH: '/workspace'
  # Report name
  REPORT_NAME: 'TAIKO_AUTOMATED_TESTS'
  # Docker image to use
  DOCKER_IMAGE: "taiko"
  # Git strategy to use
  GIT_STRATEGY: clone
  # Skips Chromium download for Taiko
  TAIKO_SKIP_CHROMIUM_DOWNLOAD: "true"

stages:
  # Defines stages for CI/CD pipeline
  - clean
  - build_and_test
  - cleanup

before_script:
  # Checks if ssh-agent is installed, if not, installs openssh-client
  - 'which ssh-agent || ( apk update && apk add openssh-client )'
  # Starts ssh-agent in the background
  - eval $(ssh-agent -s)
  # Creates .ssh directory if it doesn't exist
  - mkdir -p ~/.ssh
  # Sets permissions of .ssh directory to 700
  - chmod 700 ~/.ssh
  # Creates an empty known_hosts file if it doesn't exist
  - touch ~/.ssh/known_hosts
  # Sets permissions of known_hosts file to 644
  - chmod 644 ~/.ssh/known_hosts
  # Adds private key from environment variable to file and removes carriage return characters
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_ed25519
  # Sets permissions of private key file to 400
  - chmod 400 ~/.ssh/id_ed25519
  # Adds private key to ssh-agent
  - ssh-add ~/.ssh/id_ed25519
  # Creates SSH configuration file with settings for GitLab host
  - echo -e "Host gitlab.sysadmin.homes\n\tUser git\n\tHostname gitlab.sysadmin.homes\n\tIdentityFile ~/.ssh/id_ed25519\n\tIdentitiesOnly yes\n\tStrictHostKeyChecking no" > ~/.ssh/config
  # Adds GitLab server IP address to /etc/hosts
  - echo "10.10.0.119 gitlab.sysadmin.homes" >> /etc/hosts
  # Installs OpenSSL, jq, and curl if not already installed
  - apk add --no-cache openssl jq curl
  # Downloads SSL certificate from GitLab server and saves it to a file
  - echo -n | openssl s_client -connect gitlab.sysadmin.homes:443 -servername gitlab.sysadmin.homes | openssl x509 > gitlab.crt
  # Copies the downloaded certificate to the trusted certificates directory
  - cp gitlab.crt /usr/local/share/ca-certificates/gitlab.crt
  # Downloads SSL certificate from HashiCorp Vault server and saves it to a file
  - echo -n | openssl s_client -connect 10.10.0.150:8200 -servername 10.10.0.150 | openssl x509 > vault.crt
  # Copies the downloaded certificate to the trusted certificates directory
  - cp vault.crt /usr/local/share/ca-certificates/vault.crt
  # Updates the trusted certificates list
  - update-ca-certificates
  # Exports AWX credentials from HashiCorp Vault
  - |
    export AWX_SECRET=$(curl --silent --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/data/gitlab/awx)
    export AWX_USERNAME=$(echo $AWX_SECRET | jq -r '.data.data.login')
    export AWX_PASSWORD=$(echo $AWX_SECRET | jq -r '.data.data.password')
  # Exports ArgoCD credentials from HashiCorp Vault
  - |
    export ARGOCD_SECRET=$(curl --silent --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/data/gitlab/argocd)
    export ARGOCD_USERNAME=$(echo $ARGOCD_SECRET | jq -r '.data.data.login')
    export ARGOCD_PASSWORD=$(echo $ARGOCD_SECRET | jq -r '.data.data.password')

# The NPM_USER and NPM_PASS variables are not properly passed to the Dockerfile when exported in the before_script section.
# Docker builds the image before running the before_script, so these variables are not available when building the Docker image.
# To solve this problem, the NPM_USER and NPM_PASS variables should be defined as CI/CD variables at the project level in GitLab,
# and then passed as arguments when building the Docker image.

build_and_test_awx:
  stage: build_and_test
  tags:
    # Use runner with tag 'docker1'
    - docker1
  image: docker:latest
  services:
    # Use Docker-in-Docker service
    - name: docker:dind
  variables:
    # Use overlay2 driver for Docker
    DOCKER_DRIVER: overlay2
    # Set Docker host
    DOCKER_HOST: "tcp://docker:2375"
    # Disable TLS certificates directory for Docker
    DOCKER_TLS_CERTDIR: ""
  script:
    # Clone repository
    - git clone --single-branch --branch $BRANCH $REPO_URL
    # Build Docker image with NPM credentials
    - docker build --build-arg NPM_USER="${NPM_USER}" --build-arg NPM_PASS="${NPM_PASS}" -t $DOCKER_IMAGE -f Dockerfile .
    # Run tests inside Docker container
    - |
      docker run --rm -v ${CI_PROJECT_DIR}:/workspace $DOCKER_IMAGE bash -c '
      server_address="awx.sysadmin.homes"
      username="${AWX_USERNAME}"
      password="${AWX_PASSWORD}"
      rm -rf /workspace/node_modules
      rm -rf /lib/node_modules
      ln -s /usr/local/lib/node_modules/ /workspace/node_modules
      ln -s /usr/local/lib/node_modules/ /lib/node_modules
      rm -f *.tar downloaded/*
      rm -rf reports .gauge logs
      gauge run /workspace/specs/test-awx.spec
      '
    # Archive reports if they exist
    - |
      if [ -d "${CI_PROJECT_DIR}/reports/" ]; then
        formattedDate=$(date +"%d_%m_%Y_%H_%M")
        filename="PASS_${REPORT_NAME}_${formattedDate}_AWX.tar"
        tar -cf ${filename} ${CI_PROJECT_DIR}/reports/ ${CI_PROJECT_DIR}/logs/
        mv ${filename} ${CI_PROJECT_DIR}/
      fi
    # Clean up Docker system
    - docker system prune -af
    # Clean up Docker volumes
    - docker volume prune -f
  artifacts:
    # Define artifact paths
    paths:
      - "${CI_PROJECT_DIR}/*.tar"

build_and_test_argocd:
  stage: build_and_test
  tags:
    # Use runner with tag 'docker2'
    - docker2
  image: docker:latest
  services:
    # Use Docker-in-Docker service
    - name: docker:dind
  variables:
    # Use overlay2 driver for Docker
    DOCKER_DRIVER: overlay2
    # Set Docker host
    DOCKER_HOST: "tcp://docker:2375"
    # Disable TLS certificates directory for Docker
    DOCKER_TLS_CERTDIR: ""
  script:
    # Clone repository
    - git clone --single-branch --branch $BRANCH $REPO_URL
    # Build Docker image with NPM credentials
    - docker build --build-arg NPM_USER="${NPM_USER}" --build-arg NPM_PASS="${NPM_PASS}" -t $DOCKER_IMAGE -f Dockerfile .
    # Run tests inside Docker container
    - |
      docker run --rm -v ${CI_PROJECT_DIR}:/workspace $DOCKER_IMAGE bash -c '
      server_address="argocd.sysadmin.homes"
      username="${ARGOCD_USERNAME}"
      password="${ARGOCD_PASSWORD}"
      rm -rf /workspace/node_modules
      rm -rf /lib/node_modules
      ln -s /usr/local/lib/node_modules/ /workspace/node_modules
      ln -s /usr/local/lib/node_modules/ /lib/node_modules
      rm -f *.tar downloaded/*
      rm -rf reports .gauge logs
      gauge run /workspace/specs/test-argocd.spec
      '
    # Archive reports if they exist
    - |
      if [ -d "${CI_PROJECT_DIR}/reports/" ]; then
        formattedDate=$(date +"%d_%m_%Y_%H_%M")
        filename="PASS_${REPORT_NAME}_${formattedDate}_ArgoCD.tar"
        tar -cf ${filename} ${CI_PROJECT_DIR}/reports/ ${CI_PROJECT_DIR}/logs/
        mv ${filename} ${CI_PROJECT_DIR}/
      fi
    # Clean up Docker system
    - docker system prune -af
    # Clean up Docker volumes
    - docker volume prune -f
  artifacts:
    # Define artifact paths
    paths:
      - "${CI_PROJECT_DIR}/*.tar"

clean_workspace:
  stage: cleanup
  parallel:
    matrix:
      # Use runners with tag 'docker1' and 'docker2'
      - RUNNER: docker1
      - RUNNER: docker2
  tags:
    - ${RUNNER}
  script:
    # Clean up workspace directory
    - rm -rf $CI_PROJECT_DIR/*
```
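The pipeline's comment about NPM_USER and NPM_PASS refers to Docker build arguments; on the Dockerfile side this requires declaring them with ARG after FROM. A minimal hypothetical sketch (the base image and registry host are assumptions, not taken from this tutorial):

```dockerfile
FROM node:lts

# Supplied via: docker build --build-arg NPM_USER=... --build-arg NPM_PASS=...
ARG NPM_USER
ARG NPM_PASS

# Example use: authenticate npm against a private registry during the build
# (registry host is a placeholder)
RUN npm config set //registry.example.com/:username "${NPM_USER}" && \
    npm config set //registry.example.com/:_password "$(echo -n "${NPM_PASS}" | base64)"
```

Note that build args passed this way end up in the image history; for production setups consider Docker build secrets instead.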
Summary
Installing Vault on a separate server provides greater flexibility and scalability in managing secrets. With the above tutorial, you have configured Vault to securely store and manage secrets and integrated it with GitLab, enabling the safe use of those secrets in CI/CD pipelines. Make sure the network and firewall configuration allows secure connections between the GitLab runners and the Vault server.