An Introduction to Different Kong Deployment Methods on Kubernetes

In my previous article, I demonstrated how to deploy Kong with different deployment methods using Docker. In today’s article, I will explore all of these deployment methods on Kubernetes via Helm.

Let’s get started.

Prerequisites:

You must have a running Kubernetes cluster and have helm and kubectl installed.

I will be using kind in my demo.

Add Kong and Bitnami repo

Before we start, we need to add the Bitnami and Kong repos to Helm.

helm repo add kong https://charts.konghq.com
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

Classic Deployment with Postgres

Kong’s Helm chart uses bitnami/postgresql as a sub-chart to deploy the Postgres database. I prefer some separation between Kong and the database, so I will use bitnami/postgresql directly to deploy the database.

Create Namespace

I will be installing everything inside the kong namespace.

kubectl create namespace kong

Create database secret

Let’s create our secret as follows.

kubectl create secret generic kong-db-password \
-n kong \
--from-literal=postgresql-password=kong \
--from-literal=postgresql-postgres-password=kong
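
The kubectl command above is equivalent to applying a Secret manifest. As a sketch (the key names simply mirror the --from-literal flags above, which are the keys the bitnami/postgresql chart reads from an existing secret):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kong-db-password
  namespace: kong
type: Opaque
stringData:
  postgresql-password: kong            # password for the kong user
  postgresql-postgres-password: kong   # password for the postgres admin user
```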

Install Postgres release

The structure of existingSecret can be found at this GitHub issue. This is pretty straightforward: we create a Postgres release called kong-db, which creates the database kong and the user kong when it starts.

helm install kong-db bitnami/postgresql -n kong \
--set postgresqlUsername=kong \
--set existingSecret=kong-db-password \
--set postgresqlDatabase=kong

Prepare values.yaml for Kong

As Postgres is a separate release, we need to tell Kong how to connect to it.

Let me explain what I use in the values.yaml file.

  • env: This is where we tell Kong what database we are using and where to find it.
  • admin: I am enabling the Admin API for the demo. Normally you don’t need the Admin API, because you will use Kubernetes objects with the Kong ingress controller to create your Kong objects.
  • image: I always use the latest image available at the time of writing. Currently 2.5 is the latest version.
  • ingressController: For Helm 3, we don’t need to install CRDs.

Please save the below to a values.yaml file.

env:
  database: "postgres"
  pg_host: "kong-db-postgresql.kong.svc.cluster.local"
  pg_port: 5432
  pg_user: kong
  pg_password:
    valueFrom:
      secretKeyRef:
        name: kong-db-password
        key: postgresql-password

admin:
  enabled: true
  http:
    enabled: true

image:
  repository: kong
  tag: "2.5"

ingressController:
  installCRDs: false

Install Kong release

Now that we have the values.yaml file in the same folder, we can use the command below to install. If the release my-kong does not exist, it will be created; otherwise the existing release is upgraded.

helm upgrade -i my-kong kong/kong -n kong --values values.yaml

Port-forwarding to test

If you are installing Kong on a cloud cluster like EKS, AKS, or GKE, or if you are using MetalLB, you should get an external IP for the proxy service (by default the proxy service type is LoadBalancer). If you are testing locally like I do, you can port-forward your services as below.

  • Port forward Proxy

    kubectl port-forward -n kong service/my-kong-kong-proxy 8000:80
  • Port forward Admin API

    kubectl port-forward -n kong service/my-kong-kong-admin 8001:8001

DB-less Deployment

This is the easiest and the default deployment method for Kong ingress controller. All configuration will be done with Kubernetes resources and stored in etcd.

As we don’t need to connect to a database in this mode, we can use --set flags to pass our configs. Same as above, I am changing the image tag to the latest one, enabling the Admin API (just for the demo), and not installing CRDs because I am using Helm 3.

helm upgrade -i my-kong kong/kong -n kong \
--set image.tag=2.5 \
--set admin.enabled=true \
--set admin.http.enabled=true \
--set ingressController.installCRDs=false \
--create-namespace
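
Once the release is running, Kong objects are defined as ordinary Kubernetes resources. As a sketch, a hypothetical Ingress that routes /echo to a Service named echo (both names are made up for illustration) could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong   # hand this Ingress to the Kong controller
spec:
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
```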

Hybrid Deployment

If you’ve read my previous article, you know that a hybrid deployment has the benefit of separating the control plane and data plane, which makes it easy to start and scale Kong. You can find more information about hybrid deployment in Kong’s official GitHub repo.

Create Certificate and Key

Connections between the control plane and data planes are always protected with mutual TLS (mTLS) encryption. We need to generate a certificate pair for this purpose.

First, we create a folder in the current directory to store our cluster certificates.

mkdir cert

Then we generate certificate and key.

Note that I am using shared mode here, which means both the control plane and the data planes use the same certificate. In this mode, you must use kong_clustering as the common name of the certificate.

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout cert/cluster.key -out cert/cluster.crt \
-days 1095 -subj "/CN=kong_clustering"
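
Before creating the Kubernetes secret, it is worth confirming the certificate actually carries the required CN. The sketch below repeats the generation in two steps (writing the EC parameters to a file instead of using bash process substitution, so it also runs under plain sh) and then prints the subject and expiry:

```shell
# Generate the EC parameters to a file, then create the key/cert pair;
# shared mode requires CN=kong_clustering
mkdir -p cert
openssl ecparam -name secp384r1 -out cert/ecparams.pem
openssl req -new -x509 -nodes -newkey ec:cert/ecparams.pem \
  -keyout cert/cluster.key -out cert/cluster.crt \
  -days 1095 -subj "/CN=kong_clustering"

# Verify the subject and expiry of the generated certificate
openssl x509 -in cert/cluster.crt -noout -subject -enddate
```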

Create Namespace

I will install all my resources in the kong namespace.

kubectl create namespace kong

Put cluster certificate in secret

kubectl create secret tls kong-cluster-cert -n kong --cert=cert/cluster.crt --key=cert/cluster.key

Use cert-manager to generate clustering certificate

Alternatively, if you have cert-manager in your cluster, you can use the YAML below to create a self-signed certificate.

apiVersion: v1
kind: Namespace
metadata:
  name: kong
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kong-cluster-cert
  namespace: kong
spec:
  secretName: kong-cluster-cert
  duration: 21600h # 900d
  renewBefore: 360h # 15d
  commonName: kong_clustering
  privateKey:
    rotationPolicy: Always
    algorithm: ECDSA
    encoding: PKCS8
    size: 384
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer

Create database secret

Same as above, we need to create a secret to store the database password and then deploy a Postgres release in the kong namespace.

kubectl create secret generic kong-db-password \
-n kong \
--from-literal=postgresql-password=kong \
--from-literal=postgresql-postgres-password=kong

Install Postgres release

helm install kong-db bitnami/postgresql -n kong \
--set postgresqlUsername=kong \
--set existingSecret=kong-db-password \
--set postgresqlDatabase=kong

Install control plane release

For our control plane, we need a few more configs. On top of what I have covered above, we need to mount our cluster cert, enable the cluster port, and tell Kong where to publish configuration (the last is only needed if you are using the Kong ingress controller). Please save the below to kong-cp-values.yaml.

image:
  repository: kong
  tag: "2.5"

env:
  role: control_plane
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  database: "postgres"
  pg_host: "kong-db-postgresql.kong.svc.cluster.local"
  pg_port: 5432
  pg_user: kong
  pg_password:
    valueFrom:
      secretKeyRef:
        name: kong-db-password
        key: postgresql-password

secretVolumes:
- kong-cluster-cert

cluster:
  enabled: true
  tls:
    enabled: true

admin:
  enabled: true
  http:
    enabled: true

proxy:
  enabled: false

ingressController:
  installCRDs: false
  env:
    publish_service: kong/my-kong-dp-kong-proxy

We install our control plane release my-kong-cp with the command below.

helm upgrade -i my-kong-cp kong/kong -n kong --values kong-cp-values.yaml

Install data plane release

If you are using the Kong ingress controller, the control plane pod will not become ready until the data plane is up, because the ingress controller cannot find the data plane’s proxy service to publish to. You will see the error message below from the ingress controller.

ingress-controller time="2021-07-16T14:34:03Z" level=fatal msg="failed to fetch publish-service: services \"my-kong-dp-kong-proxy\" not found" service_name=my-kong-dp-kong-proxy service_namespace=kong

Please save the below to kong-dp-values.yaml.

env:
  database: "off"
  role: data_plane
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  cluster_control_plane: my-kong-cp-kong-cluster.kong.svc.cluster.local:8005

secretVolumes:
- kong-cluster-cert

image:
  repository: kong
  tag: "2.5"

ingressController:
  enabled: false

Now we can install our data plane release.

helm upgrade -i my-kong-dp kong/kong -n kong --values kong-dp-values.yaml

That’s all I wanted to show you in this post. See you next time!