How to deploy n8n in Kubernetes - k3s
Quick implementation:
- Install git
- Clone the repository
- Enter the directory
- Deploy n8n in k3s (the commands are sketched below)
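A minimal sketch of those steps, assuming a Debian/Ubuntu host; the repository URL and directory name are placeholders, since the article does not spell them out here:

```sh
# Install git
sudo apt-get update && sudo apt-get install -y git

# Clone the repository containing the manifests (placeholder URL)
git clone https://example.com/<your-n8n-k8s-repo>.git

# Enter the directory (placeholder name)
cd <your-n8n-k8s-repo>

# Deploy n8n in k3s: create the namespace, then apply every manifest
kubectl create namespace n8n
kubectl apply -f .
```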
The article details setting up n8n, a workflow automation solution, on Kubernetes. It focuses on the use of YAML files for Kubernetes deployment.
Key Points:
- n8n Overview: n8n is a fair-code workflow automation tool, similar to Zapier or IFTTT, suitable for self-hosting or using the paid n8n.cloud service.
- Kubernetes Setup for n8n:
  - Namespace Creation: a Kubernetes namespace "n8n" is created using kubectl create namespace n8n.
  - Deployment and Service Configuration: n8n-deployment.yaml defines a deployment with one replica of the n8n container, exposing port 5678. It includes liveness and readiness probes at /healthz, environment variables from a ConfigMap (n8n-configmap) and a Secret (n8n-secrets), and resource limits. n8n-service.yaml sets up a NodePort service for n8n, mapping port 80 to the container's port 5678.
- PostgreSQL StatefulSet: postgres-statefulset.yaml and postgres-service.yaml define a StatefulSet and a Service for PostgreSQL, exposing port 5432 and linked to a Secret (postgres-secrets).
- ConfigMaps and Secrets: n8n-configmap.yaml includes environment variables like NODE_ENV, database configuration, and webhook settings; n8n-secrets.yaml contains secrets for n8n, including the database password and encryption key; postgres-secrets.yaml holds PostgreSQL configuration data.
- Deployment Process: applying the configuration files (kubectl apply -f) for n8n and PostgreSQL, using kubectl rollout restart to restart the StatefulSet and Deployment after applying the configurations, and finally checking the services (kubectl get svc -n n8n) and accessing n8n via a browser using the NodePort or a custom domain through NGINX Proxy Manager.
The article emphasizes the importance of label alignment in Kubernetes configurations and provides a comprehensive guide for setting up n8n with PostgreSQL on Kubernetes, leveraging ConfigMaps and Secrets for configuration management.
TLDR
For those who would like to read and know more, see the full article below:
What is n8n
In a nutshell, n8n is a free and extensible workflow automation solution based on a fair-code distribution model. If you are familiar with low-code/no-code solutions such as Zapier or IFTTT, n8n is quite similar, but can be operated independently.
Being fair-code implies that you are always free to use and distribute the source code. The only drawback is that you might have to pay for a license if you make money with n8n. There are some excellent illustrations of how this methodology differs from conventional “open-source” initiatives in this community discussion topic.
Under the hood, workflows are executed by a Node.js web server. The project was founded in June 2019 and has already amassed over 280 Nodes and over 500 Workflows. The community is also quite vibrant; it often takes only a few days for a patch to be merged and released, which is awesome!
This means that self-hosting n8n is quite simple. Additionally, there is n8n.cloud, a paid, hosted version that eliminates the need for you to worry about scaling, security, or upkeep.
Why use n8n and not Zapier?
For me, it’s the expensive price. 💸
Although Zapier is fantastic and most likely capable of handling any use-case thrown at it, the free tier eventually becomes unusable: you can only design extremely basic “two-step” processes, and you quickly reach your “zap” limits. Only paying customers can access more intricate flows.
Furthermore, it’s not hosted in “your environment” (self-hosted or on-premises), which could be problematic if you have strict policies about sharing data with outside providers, for example. On the other hand, Zapier is enterprise-ready, so you can be sure you’ll get all the functionality you need and exceptional customer support 💰.
If you don’t mind managing your own instance (scalability, durability, availability, etc.), then n8n will serve you just as well as any of its paid competitors.
However, the purpose of this piece is to demonstrate how we’ve set up n8n on our Kubernetes cluster, not to discuss the benefits of using Zapier or n8n.
How to set up n8n in Kubernetes
Rather than starting from scratch with a completely new Kubernetes configuration, we will be utilizing the examples given by @bacarini, a user of the n8n community forum who shared his configuration.
I am using K3s to set up my cluster locally. If this is your first time using it, you can simply follow the Kubernetes series.
After installing K3s, follow the steps below to deploy n8n in the cluster.
You will begin with the n8n Deployment configuration and expose it with its Service configuration.
But first you have to create a namespace n8n:
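```sh
kubectl create namespace n8n
```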
Then create the following files:
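The original files follow @bacarini's example; the sketches below are reconstructed from the description in this article, so treat the image tag, label values, and resource names as assumptions:

```yaml
# n8n-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: n8n
  labels: &labels    # YAML anchor, reused below instead of copy-pasting labels
    app: n8n
spec:
  replicas: 1
  selector:
    matchLabels: *labels
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest   # assumed image tag
          ports:
            - containerPort: 5678   # n8n default port
          envFrom:
            - configMapRef:
                name: n8n-configmap
            - secretRef:
                name: n8n-secrets
          livenessProbe:
            httpGet:
              path: /healthz
              port: 5678
          readinessProbe:
            httpGet:
              path: /healthz
              port: 5678
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
```

```yaml
# n8n-service.yaml (sketch) -- the selector must match the deployment labels
apiVersion: v1
kind: Service
metadata:
  name: n8n
  namespace: n8n
  labels: &labels
    app: n8n
spec:
  type: NodePort
  selector: *labels
  ports:
    - protocol: TCP
      port: 80          # exposed port
      targetPort: 5678  # n8n container port
```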
Important things to remember here:
- To avoid copying and pasting the same labels over and over, I assigned a &labels anchor variable in the .yaml configuration (I do the same in the majority of my configurations).
- In my n8n-deployment.yaml file, the n8n container port is set to 5678, which is the n8n default port. My n8n-service.yaml exposes the container by rerouting traffic from port 80 (http) to the container's targetPort.
- My n8n container is linked to both the n8n-configmap.yaml and n8n-secrets.yaml files, although I still need to create them; I'll take care of that shortly.
- n8n offers a /healthz endpoint to verify whether the service is operational. I use that endpoint to set up the liveness and readiness probes for my deployment.
- Finally, I limit my container's resources to a maximum of 1 CPU and 1 GB of RAM on my cluster.
Pay close attention to the n8n-service.yaml selector. We won't be able to reach our n8n server if it doesn't match the labels on the n8n-deployment container.
You can learn more about selectors and how deployments work with them here: creating a Kubernetes deployment
Applying those two configurations, I will therefore have the following:
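A sketch, assuming the file names used above:

```sh
kubectl apply -f n8n-deployment.yaml -f n8n-service.yaml
kubectl get pods -n n8n
```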
If you check the pod with kubectl get pods -n n8n, you can observe that it is currently in the “CreateContainerConfigError” state. This is because the ConfigMap and Secrets configurations are still missing. I will quickly address that.
PostgreSQL StatefulSet
The Postgres Statefulset configuration is very similar to our previous Deployment configurations, and looks like this:
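Again a sketch reconstructed from the article's description; the image tag, label values, and service name are assumptions (the service name must match DB_POSTGRESDB_HOST later):

```yaml
# postgres-statefulset.yaml (sketch) -- volumeClaimTemplates omitted here
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: n8n
  labels: &labels
    app: postgres
spec:
  serviceName: postgres-service
  replicas: 1
  selector:
    matchLabels: *labels
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: postgres
          image: postgres:13       # assumed image tag
          ports:
            - containerPort: 5432  # PostgreSQL default port
          envFrom:
            - secretRef:
                name: postgres-secrets
```

```yaml
# postgres-service.yaml (sketch) -- selector must match the StatefulSet labels
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: n8n
  labels: &labels
    app: postgres
spec:
  selector: *labels
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```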
As you can see, the primary differences are:
- The default port for a PostgreSQL server is 5432, which is the port I am exposing in both my postgres-statefulset.yaml and postgres-service.yaml files.
- As in the previous setup, pay close attention to the service selector, since it needs to line up with the StatefulSet container's labels.
To apply both K8s configurations, the following commands need to be executed:
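```sh
# file names as in the sketches above
kubectl apply -f postgres-statefulset.yaml
kubectl apply -f postgres-service.yaml
```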
ConfigMaps & Secrets
To bind things together, I just need to bootstrap all of the basic PostgreSQL and n8n configurations:
- My n8n deployment is connected to a Secrets configuration called “n8n-secrets” and a ConfigMap called “n8n-configmap”;
- The Postgres Statefulset is simply connected to a Secrets configuration called “postgres-secrets.”
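Sketches of all three files, consistent with the names used above; every credential value is a placeholder you should replace:

```yaml
# n8n-configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: n8n-configmap
  namespace: n8n
data:
  NODE_ENV: production
  DB_TYPE: postgresdb
  DB_POSTGRESDB_HOST: postgres-service  # must match the PostgreSQL service name
  DB_POSTGRESDB_PORT: "5432"
  DB_POSTGRESDB_DATABASE: n8n
  WEBHOOK_TUNNEL_URL: http://n8n.local/ # must be a reachable host URL
```

```yaml
# n8n-secrets.yaml (sketch) -- placeholder values
apiVersion: v1
kind: Secret
metadata:
  name: n8n-secrets
  namespace: n8n
type: Opaque
stringData:
  DB_POSTGRESDB_USER: n8n
  DB_POSTGRESDB_PASSWORD: change-me
  N8N_ENCRYPTION_KEY: change-me-too  # n8n encrypts stored credentials with this key
```

```yaml
# postgres-secrets.yaml (sketch) -- placeholder values
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
  namespace: n8n
type: Opaque
stringData:
  POSTGRES_USER: n8n
  POSTGRES_PASSWORD: change-me  # must match DB_POSTGRESDB_PASSWORD above
  POSTGRES_DB: n8n
```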
The majority of these setups are copied from the n8n manual or from the example provided by @bacarini, but all of them are very common. The key points are, as before:
- The DB_POSTGRESDB_HOST in the n8n-configmap.yaml configuration needs to match the name of my PostgreSQL service.
- Additionally, the WEBHOOK_TUNNEL_URL environment variable needs to be updated. It will primarily be used to call webhooks, and it won't work unless the host URL is valid.
The n8n FAQ advises using ngrok to set up that URL, but I discovered that in k3s it will work without installing any further services on your system. I will just use the service port displayed by the command below:
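```sh
kubectl get svc -n n8n
```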
In my environment I use NGINX Proxy Manager, where I defined the URL the way I presented in the video. So instead of http://10.10.0.112:31600 I am using http://n8n.local.
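Assuming that setup, the webhook URL in n8n-configmap.yaml simply points at the proxied domain instead of the NodePort address:

```yaml
WEBHOOK_TUNNEL_URL: http://n8n.local/
```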
I can now simply apply all three configuration files and restart the StatefulSet and Deployment so that these configurations are reloaded:
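Assuming the resource names from the sketches above (StatefulSet "postgres", Deployment "n8n"):

```sh
kubectl apply -f n8n-configmap.yaml -f n8n-secrets.yaml -f postgres-secrets.yaml
kubectl rollout restart statefulset postgres -n n8n
kubectl rollout restart deployment n8n -n n8n
```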
All together now 🚀
If you have been saving the configurations mentioned above, you should have something resembling this in your working directory:
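```
.
├── n8n-configmap.yaml
├── n8n-deployment.yaml
├── n8n-secrets.yaml
├── n8n-service.yaml
├── postgres-secrets.yaml
├── postgres-service.yaml
└── postgres-statefulset.yaml
```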
All configurations can be deployed at once by executing:
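```sh
# run from the working directory shown above
kubectl apply -f .
```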
The time has finally come to open our n8n server in the browser!
So, check the service:
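```sh
kubectl get svc -n n8n
```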
and the IP address of the machine where K3s is running, for example:
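```sh
kubectl get nodes -o wide   # the INTERNAL-IP column shows the node's address
```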
and use the NodePort that exposes the service.
So finally it should be e.g. http://10.10.0.112:31600, or just http://n8n.local because I am using my own DNS server (AdGuard Home).