How to Use GitHub Actions to Manage Kong Configs in CI/CD Pipelines

I think there is no need to spend too much time explaining why DevOps or GitOps is important; there is a good article from spectralops that explains it very well. In today’s post, I would like to walk you through the structure of my Kong configs and what my GitHub Actions workflows do.

This post is by no means the best practice or the only solution. I am hoping it will inspire you to start your DevOps journey with Kong and, if possible, to improve the workflows and give your solution back to the community.

Let’s get started.

You can find all the code in this repo.

Why Kong?

Besides the fact that I am very familiar with Kong products, Kong provides all the tools needed to integrate nicely into any CI/CD pipeline. Kong is also one of the best-known and most performant API gateways on the market.

Why GitHub Actions?

There are three main reasons.

  1. I host all my code on GitHub.
  2. There are A LOT of tutorials out there about using GitHub Actions.
  3. nektos/act is such a great tool that it allows me to run GitHub Actions locally, so I don’t have to push my code to the repo just to test the workflows (see the example below).
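
For instance, I can dry-run the PR checks locally with an invocation roughly like this (hypothetical; the workflow path matches this repo, and .secrets is just a local, git-ignored file holding the same secrets the workflow expects):

# Run the pull_request event for the onboarding checks locally with act.
act pull_request \
  -W .github/workflows/onboarding_pr.yaml \
  --secret-file .secrets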

Design

A lot of thought went into designing the current folder structure, and I will try my best to capture the reasons behind the design.

The diagram below gives a high-level overview:

(Diagram: API consumers send requests to an L7 load balancer, which holds a certificate issued by a CA and forwards the traffic to Kong. Kong exposes a proxy listener and the Admin API, and routes requests to the upstream services SVC1 to SVC5. On the CI side, GitHub Actions drives decK through Ping, validate, diff and Sync steps against the Admin API to apply the Kong configs stored in the repo.)

Certificate management

As someone particularly interested in PKI, I am disappointed that Kong does not provide a good solution for handling the TLS certificate life cycle.

What features are missing?

  • The ACME plugin only supports HTTP-01 validation, and the certificates it creates can only be used for the proxy.
  • Certificates for the other endpoints (Admin API, status listener, etc.) are static; you can only deploy them when Kong starts.
  • You must manage the certificate life cycle manually, and a reload is required after uploading new certificate files.

Because I don’t want to manage certificates myself and I only need to proxy HTTP requests, I put a reverse proxy in front of Kong (in this case, Traefik). You can put Kong behind any reverse proxy or load balancer, for example an AWS Application Load Balancer. These L7 load balancers handle the certificates for you.
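
For illustration, a Traefik dynamic configuration (file provider) that terminates TLS in front of Kong’s proxy could look roughly like this; the host name, entry point and certificate resolver names are placeholders, not taken from the demo:

http:
  routers:
    kong-proxy:
      rule: "Host(`api.example.com`)"   # placeholder host
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt       # Traefik handles the ACME life cycle
      service: kong-proxy
  services:
    kong-proxy:
      loadBalancer:
        servers:
          - url: "http://kong:8000"     # Kong proxy listener, plain HTTP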

Admin API Security

Admin API endpoint MUST be protected.

This demo uses Kong OSS, which does not support RBAC, so I am using the Traefik BasicAuth middleware to protect the Admin API endpoint. If you are running Kong Enterprise or Konnect, you can create an RBAC token to access the Admin API. You can also lock down the Admin API with network policies.
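
A minimal sketch of such a middleware, in the same hypothetical file-provider style as above (user name, password hash and host are placeholders):

http:
  middlewares:
    kong-admin-auth:
      basicAuth:
        users:
          # htpasswd-style entry, e.g. generated with: htpasswd -nb admin <password>
          - "admin:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/"
  routers:
    kong-admin:
      rule: "Host(`admin.example.com`)" # placeholder host
      middlewares:
        - kong-admin-auth
      service: kong-admin-api
  services:
    kong-admin-api:
      loadBalancer:
        servers:
          - url: "http://kong:8001"     # Kong Admin API listener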

Create config flow

The core of this workflow is decK. We use GitHub Actions to run different decK sub-commands to ping, validate, diff and sync Kong configs via the Kong Admin API.

Here are the steps.

  1. API developers write Kong declarative configs locally. The configs can also be generated with deck dump from local instances.
  2. API developers push the configs to GitHub and create a pull request.
  3. GitHub Actions validates these configs and displays the changes the new configs introduce.
  4. The platform owner reviews these config changes, makes sure the tests pass and then merges the PR.
  5. GitHub Actions syncs the configs to Kong via the Admin API.

There are some principles we need to follow:

  • Pipeline design considerations

    • No sensitive information should be stored in plain text in any workflow or config file.
    • There should be a separate pipeline to run tests. The platform owner is not allowed to merge if any test fails.
    • The folder structure should not be changed.
  • Privilege separation

    • No ONE should be allowed to push configs to Kong manually. GitHub Actions is the only tool that can sync configs to Kong.
    • API developers must NOT have merge privileges.
    • At least one review is required to approve a PR (one way to enforce the last two points is sketched below).
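
On GitHub, these rules can be enforced with branch protection on main plus a CODEOWNERS file, for example (the team name is hypothetical):

# .github/CODEOWNERS
# Every path requires a review from the platform owners. Combined with a
# branch protection rule that requires at least one Code Owner approval,
# API developers can open PRs but cannot merge them on their own.
*   @my-org/platform-owners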

Folder structure

Here is the folder structure.

.
├── .github
│   └── workflows
│       ├── offboarding_pr.yaml
│       ├── offboarding_push.yaml
│       ├── onboarding_pr.yaml
│       └── onboarding_push.yaml
├── .gitignore
├── LICENSE
├── README.md
├── consumers
│   ├── consumers.yaml
│   └── meta.yaml
├── global-plugins
│   ├── correlation-id.yaml
│   ├── meta.yaml
│   ├── proxy-cache.yaml
│   └── rate-limiting.yaml
├── meta.yaml
├── plugin-conf
│   └── rate-limit-redis.yaml
└── services
    ├── acme
    │   ├── configs.yaml
    │   └── plugins
    │       └── acme.yaml
    ├── catch-all
    │   ├── configs.yaml
    │   └── plugins
    │       └── request-termination.yaml
    ├── echo
    │   ├── configs.yaml
    │   └── plugins
    │       ├── basic-auth.yaml
    │       ├── jwt.yaml
    │       └── key-auth.yaml
    └── httpbin
        ├── configs.yaml
        └── plugins
            └── key-auth.yaml

Let me break it down:

  • Workflows
    • There are two types of workflow here: onboard and offboard.
    • Our main focus will be the onboard workflow. It can be extended easily to support multiple environments.
    • The offboard workflow is a demo showing how to remove a service and its related configs. You should make sure it is ONLY triggered when you are 100% sure the service should be removed and its config has been excluded from the onboard flow.
  • Shared plugin config
    The plugin-conf folder stores plugin configs that can be reused. In the demo I put the Redis-related config in this folder and reference it from my other rate-limiting plugin instances (global and per consumer).
  • Global entities
    • consumers and global-plugins are considered global entities, so I put them in their own folders and handle them in the same global GHA job.
    • A separate meta.yaml is used in each of these folders, and I run separate deck commands in different steps to keep the configs isolated (see the sketch after this list).
    • If you need to manage other global entities like ca_certificate, you can create a new folder in the same fashion.
  • Services
    • The meta.yaml in the root folder sets up some defaults for ALL services and routes.
    • The service tag is generated dynamically from the folder name, in the format ${foldername}-svc.
    • Each folder contains all configuration (route, upstream, plugins) related to that service.
    • Plugins can be configured at either the service or route level by referencing the service/route name on the plugin instance.
    • You can break configs.yaml down further into services.yaml, upstreams.yaml and routes.yaml if you need to.
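
For example, the meta.yaml in the consumers folder could be as small as the sketch below, an assumption based on decK’s select_tags mechanism rather than a copy from the repo (the tag name and format version are illustrative):

_format_version: "3.0"
_info:
  select_tags:
  # decK only manages entities carrying this tag, which keeps the
  # consumer sync isolated from global plugins and per-service configs.
  - consumers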

Workflow Examples

Let me use onboarding_pr.yaml as an example to show you how the workflow works.

Action Trigger

This workflow is triggered by any pull request to the main branch, except for changes to README files. I also allow the workflow to be triggered manually.

name: Onboard APIs checks

on:
  pull_request:
    branches:
      - main
    paths-ignore:
      - '**/README.md'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

Global entities checks

As the global entities only need to be processed once per environment, I use a separate job to handle the global configs.

  1. The matrix defines how many environments this job runs against and provides a different Admin API address and token per env.
  2. Check out code.
  3. Install decK.
  4. deck ping to make sure the control plane is reachable.
  5. deck validate the config files for consumers and global-plugins.
  6. deck diff to output the differences for global plugins and consumers between the config files and what is in the database.

Some environment variables that I am using:

  • ADMIN_API_URL is pretty self-explanatory.
  • ADMIN_API_AUTH_HEADER can be the authorization: basic header I am using with Traefik, or an RBAC token if you are running this flow against Konnect or Kong Enterprise. The design allows you to use a different token per environment as well.
  • DECK_VERSION pins the version of decK I am running, which makes it easier to upgrade or downgrade decK everywhere at once.

check-global-configs:
  name: Check global kong configs
  runs-on: ubuntu-latest
  strategy:
    matrix:
      env: [dev]
      include:
        - env: dev
          ADMIN_API_URL: ${{ secrets.DEV_ADMIN_API_URL }}
          ADMIN_API_AUTH_HEADER: ${{ secrets.DEV_ADMIN_API_AUTH_HEADER }}
  env:
    DECK_VERSION: ${{ env.DECK_VERSION }}
  steps:
    - name: Checkout branch
      uses: actions/checkout@v3
    - name: Install Deck
      run: |
        curl -sL https://github.com/Kong/deck/releases/download/v${DECK_VERSION}/deck_${DECK_VERSION}_linux_amd64.tar.gz -o deck.tar.gz
        tar -xf deck.tar.gz -C /tmp
        sudo cp /tmp/deck /usr/local/bin/
    - name: Check Control plane is reachable
      run: |
        deck ping \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}"
    - name: Validate Global Plugins
      run: |
        deck validate \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          -s plugin-conf/ \
          -s global-plugins/
    - name: Validate Consumers
      run: |
        deck validate \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          -s plugin-conf/ \
          -s consumers/
    - name: Check differences for Global Plugins
      run: |
        deck diff \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          -s plugin-conf/ \
          -s global-plugins/
    - name: Check differences for Consumers
      run: |
        deck diff \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          -s plugin-conf/ \
          -s consumers/

Services entities checks

There are two matrix dimensions being used here.

  • env: the environment you want to sync your configs to.
    • include is used here to set a different Admin API address for each env.
    • exclude is used to control which services should NOT be synced to that environment.
  • folders: the build-matrix job outputs all the folder names to sync here.

The flow is pretty similar to the global config one, except that it runs once per service folder, and there is one extra environment variable I am using here.

DECK_SERVICE_TAG is used to create Kong tags for each service. decK manages Kong entities with different tags separately, which means that every time you run deck sync, it only touches the entities with the same tag. Currently I set this Kong tag to ${{ env "DECK_SERVICE_TAG" }}-svc in the meta.yaml file in the root folder, which essentially resolves to ${foldername}-svc.
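
To show how the tag and the defaults tie together, the root meta.yaml might look roughly like this (a sketch based on decK’s select_tags, defaults and environment variable substitution; the exact defaults and format version here are assumptions, not copied from the repo):

_format_version: "3.0"
_info:
  select_tags:
  # Resolves to <foldername>-svc, so each service folder is managed
  # independently of the others.
  - ${{ env "DECK_SERVICE_TAG" }}-svc
  defaults:
    route:
      # Example default: only expose routes over HTTPS unless a route
      # overrides it.
      protocols:
      - https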

check-services-configs:
  needs: build-matrix
  name: Check Kong configs
  runs-on: ubuntu-latest
  strategy:
    matrix:
      env: [dev]
      include:
        - env: dev
          ADMIN_API_URL: ${{ secrets.DEV_ADMIN_API_URL }}
          ADMIN_API_AUTH_HEADER: ${{ secrets.DEV_ADMIN_API_AUTH_HEADER }}
      exclude:
        - env: dev
          folders: acme
      folders: ${{ fromJson(needs.build-matrix.outputs.folder_matrix) }}
  env:
    DECK_VERSION: ${{ env.DECK_VERSION }}
    DECK_SERVICE_TAG: ${{ matrix.folders }}
  steps:
    - name: Checkout branch
      uses: actions/checkout@v3
    - name: Install Deck
      run: |
        curl -sL https://github.com/Kong/deck/releases/download/v${DECK_VERSION}/deck_${DECK_VERSION}_linux_amd64.tar.gz -o deck.tar.gz
        tar -xf deck.tar.gz -C /tmp
        sudo cp /tmp/deck /usr/local/bin/
    - name: Check Control plane is reachable
      run: |
        deck ping \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}"
    - name: Validate configs
      run: |
        deck validate \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          --parallelism 1 \
          -s meta.yaml \
          -s plugin-conf/ \
          -s services/${{ matrix.folders }}/configs.yaml \
          -s services/${{ matrix.folders }}/plugins/
    - name: Check difference between current and new configs
      run: |
        deck diff \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          --parallelism 1 \
          -s meta.yaml \
          -s plugin-conf/ \
          -s services/${{ matrix.folders }}/configs.yaml \
          -s services/${{ matrix.folders }}/plugins/
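
The onboarding_push.yaml workflow has the same shape; after the PR is merged, its final step runs deck sync instead of deck diff, roughly like the sketch below (the actual workflow in the repo may differ slightly):

    - name: Sync configs
      run: |
        deck sync \
          --headers "${{ matrix.ADMIN_API_AUTH_HEADER }}" \
          --kong-addr "${{ matrix.ADMIN_API_URL }}" \
          --parallelism 1 \
          -s meta.yaml \
          -s plugin-conf/ \
          -s services/${{ matrix.folders }}/configs.yaml \
          -s services/${{ matrix.folders }}/plugins/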

Get folder names

As you can see, the service job relies heavily on the folder names that we get from the folders matrix. If you want to control exactly what gets synced, you can hard-code the folder names in an array instead.

Here I am using a dynamic matrix build to get all the folder names.

jobs:
  build-matrix:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout branch
        uses: actions/checkout@v3
      - id: get-folders
        run: echo "folder_matrix=$(find services/* -maxdepth 0 -type d | cut -d"/" -f2 | jq -R -s -c 'split("\n")[:-1]')" >> $GITHUB_OUTPUT
    outputs:
      folder_matrix: ${{ steps.get-folders.outputs.folder_matrix }}
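
With the folder structure shown earlier, the get-folders step should write something along these lines to GITHUB_OUTPUT:

folder_matrix=["acme","catch-all","echo","httpbin"]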

Write declarative config

Another important aspect of this workflow is writing declarative Kong configs. You can check the official doc for more information; here I will show you how I write them.

Consumers and certs

Consumer, certificate, SNI and ca_certificate objects are considered global entities, and we need to put these entities in their own folders. As for the format, you can check the respective official doc. Let me use the consumer object as an example.

In the doc we see:

{
  "id": "ec1a1f6f-2aa4-4e58-93ff-b56368f19b27",
  "created_at": 1422386534,
  "username": "my-username",
  "custom_id": "my-custom-id",
  "tags": ["user-level", "low-priority"]
}

We know that id and created_at can be auto-generated, and tags are managed by the meta.yaml file, so we only need to care about username and custom_id. To create a consumer object, I can write it as simply as below when I don’t need custom_id.

consumers:
- username: test-user

The same rule applies to the ca_certificate object. In the doc, it is listed as below:

{
  "id": "04fbeacf-a9f1-4a5d-ae4a-b0407445db3f",
  "created_at": 1422386534,
  "cert": "-----BEGIN CERTIFICATE-----...",
  "cert_digest": "c641e28d77e93544f2fa87b2cf3f3d51...",
  "tags": ["user-level", "low-priority"]
}

We can write it as:

ca_certificates:
- id: 04fbeacf-a9f1-4a5d-ae4a-b0407445db3f
  cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

Please note that I am generating an id manually for the ca_certificate here because I need to reference this id on my service object. The same rule also applies to certificate objects. The reason is that these objects do not have a name we can reference from other objects.
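
For illustration, a hypothetical service (not in the demo repo) that verifies its upstream’s TLS certificate against this CA would reference the id like this:

services:
- name: secure-upstream-service
  url: https://internal.example.com
  tls_verify: true
  ca_certificates:
  # The manually generated id from the ca_certificates entry above.
  - 04fbeacf-a9f1-4a5d-ae4a-b0407445db3f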

Services and Routes

Service, route and upstream objects are the foundation of Kong. You can use the same method as above to write these objects. Let me use the configs.yaml of the echo service as an example.

services:
- name: ${{ env "DECK_SERVICE_TAG" }}-service
  enabled: true
  url: http://echo
  routes:
  - name: ${{ env "DECK_SERVICE_TAG" }}-test-route
    paths:
    - /echo
  - name: ${{ env "DECK_SERVICE_TAG" }}-basic-auth-route
    paths:
    - /basic
  - name: ${{ env "DECK_SERVICE_TAG" }}-key-auth-route
    paths:
    - /key
  - name: ${{ env "DECK_SERVICE_TAG" }}-jwt-auth-route
    paths:
    - /jwt

Here I only define the service with the name ${{ env "DECK_SERVICE_TAG" }}-service, its URL, and a few routes under the service.

Combined with the defaults I defined in the meta.yaml file in the root folder, this service will be created as:

services:
- connect_timeout: 60000
  enabled: true
  host: echo
  name: echo-service
  port: 80
  protocol: http
  read_timeout: 60000
  retries: 5
  write_timeout: 60000

And one of the routes looks like this:

services:
- name: echo-service
  ...
  routes:
  - https_redirect_status_code: 426
    name: echo-basic-auth-route
    path_handling: v0
    paths:
    - /basic
    preserve_host: false
    protocols:
    - https
    regex_priority: 0
    request_buffering: true
    response_buffering: true
    strip_path: true

As you can see, you only need to write the parts that you want to set and leave everything else at its default, or set a common rule for all related objects. If you need to override a default value, you just write it under the object directly. This greatly reduces repetitive config and keeps our Kong objects clean and easy to read.
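
For example, to give just the echo service a longer read timeout than the default, you would set it directly on that service (a hypothetical tweak, not part of the demo repo):

services:
- name: ${{ env "DECK_SERVICE_TAG" }}-service
  url: http://echo
  # Overrides the read_timeout default for this service only.
  read_timeout: 120000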

Plugins

Plugin configs are also very easy to write. You can go to the Kong official documentation, choose the plugin you want to use, and you should find an example there. Let me use basic-auth as an example.

When we click Declarative (YAML) on the page, we should see:

plugins:
- name: basic-auth
  route: ROUTE_NAME
  config:
    hide_credentials: true

Let’s compare it with the one I applied to my echo-basic-auth-route.

plugins:
- name: basic-auth
  route: ${{ env "DECK_SERVICE_TAG" }}-basic-auth-route

The final plugin config looks like this.

services:
- name: echo-service
  ...
  routes:
  - name: echo-basic-auth-route
    ...
    plugins:
    - config:
        anonymous: null
        hide_credentials: false
      enabled: true
      name: basic-auth
      protocols:
      - grpc
      - grpcs
      - http
      - https
      tags:
      - echo-svc
    tags:
    - echo-svc

As you can see, the plugin was applied under the route correctly. You can also put reusable plugin configs in the plugin-conf folder and reference a different plugin config per environment when you need to.
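
As a sketch of how such a shared config can be declared, recent decK versions support a _plugin_configs field for de-duplicating plugin configuration; the values below are placeholders rather than the actual rate-limit-redis.yaml from the repo:

_plugin_configs:
  rate-limit-redis:
    minute: 10
    policy: redis
    redis_host: redis.example.com   # placeholder address
    redis_port: 6379

plugins:
- name: rate-limiting
  # Reuses the shared Redis-backed configuration defined above.
  _config: rate-limit-redis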

That’s all I want to show you today. See you in the next one.