How to Use GitHub Action to Manage Kong Configs in CI/CD Pipelines
I think there is no need for me to spend too much time explaining why DevOps or GitOps is important. Here is a good article from spectralops explaining this very well. In today’s post, I would like to walk you through how I structure my folders, what my GitHub Actions workflow does, and WHY I designed it this way.
This post is by no means the best practice or the only solution. I hope it inspires you to start your DevOps journey with Kong, improve the workflows, and, when possible, give your solutions back to the community.
Let’s get started.
You can find all the code in this repo.
Why Kong?
Besides the fact that I am very familiar with Kong products, Kong provides all the tooling users need to incorporate its products into their workflows nicely. Kong is also one of the best-known and most performant API gateways on the market.
Why GitHub Action?
There are three main reasons.
- I host all my code on GitHub, and GitHub Actions is integrated very nicely.
- There are A LOT of resources/tutorials online about GitHub Actions.
- nektos/act is such a great tool that it allows me to run GitHub Actions locally in Docker. I don’t have to push my code to the repo to test the workflows.
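For example, the pull request workflow can be exercised locally before pushing (a sketch, assuming act and Docker are installed; secrets are passed with `-s`):

```shell
# Run the jobs triggered by a pull_request event locally in Docker
act pull_request -W .github/workflows/onboarding_pr.yaml \
  -s KONG_AUTH_HEADER="$KONG_AUTH_HEADER"
```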
Design
A lot of thought went into designing the folder structure, and I will try to capture the reasons for these decisions.
Here is a high-level overview: API developers push declarative Kong configs to GitHub, GitHub Actions validates and diffs them on every pull request, and once the PR is reviewed and merged, GitHub Actions syncs the configs to Kong via the Admin API.
Certificate management
As someone particularly interested in PKI, I am disappointed that Kong does not provide a good solution for handling the TLS certificate life cycle.
What features are missing?
- The ACME plugin only supports HTTP-01 validation, and the certificates this plugin creates can only be used for the proxy.
- Certificates for the other endpoints (Admin API, status listen, etc.) are static. You can only deploy them when you start Kong.
- You must manage the certificate life cycle manually. A reload is required after uploading new certificate files.
Because I don’t want to manage certificates myself and I only need to proxy HTTP requests, I put Kong behind a reverse proxy (Traefik, in this case). You can put Kong behind any reverse proxy or load balancer, such as an AWS Application Load Balancer. These L7 load balancers can handle certificates for me.
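As a minimal sketch of this layout (hypothetical hostnames, image versions, and resolver settings; not necessarily the exact setup used here), Traefik can terminate TLS and forward plain HTTP to Kong with Docker Compose:

```yaml
# docker-compose sketch: Traefik terminates TLS and forwards plain HTTP to Kong
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Let Traefik own the certificate life cycle via Let's Encrypt
      - --certificatesresolvers.le.acme.email=admin@li.lan
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
      - --certificatesresolvers.le.acme.storage=/acme.json
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  kong:
    image: kong:3.4
    environment:
      KONG_DATABASE: "off"
    labels:
      # Public proxy traffic goes to Kong's proxy port (8000 by default)
      - traefik.http.routers.kong-proxy.rule=Host(`uat.li.lan`)
      - traefik.http.routers.kong-proxy.entrypoints=websecure
      - traefik.http.routers.kong-proxy.tls.certresolver=le
      - traefik.http.services.kong-proxy.loadbalancer.server.port=8000
```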
Admin API Security
The Admin API endpoint MUST be protected.
The demo in this post uses Kong OSS, which does not have RBAC out of the box. I am using the Traefik BasicAuth middleware to protect the Admin API endpoint.
If you are running Kong Enterprise or Konnect, you can create an RBAC token to access the Admin API. You can also lock down the Admin API with network policies.
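With Traefik, the BasicAuth middleware can be attached to the Admin API router via Docker labels, sketched below (the hostname and htpasswd hash are placeholders):

```yaml
labels:
  # BasicAuth middleware; generate the hash with: htpasswd -nb admin <password>
  - traefik.http.middlewares.admin-auth.basicauth.users=admin:$$apr1$$xyz...$$...
  # Route the (placeholder) admin hostname through the middleware to Kong's Admin API port
  - traefik.http.routers.kong-admin.rule=Host(`admin.li.lan`)
  - traefik.http.routers.kong-admin.middlewares=admin-auth
  - traefik.http.services.kong-admin.loadbalancer.server.port=8001
```

Note the doubled `$$` in the hash: Docker Compose uses `$` for interpolation, so literal dollar signs must be escaped.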
Create config flow
The core of this workflow is built on top of decK. We utilize GitHub Actions to run different deck subcommands (`ping`, `validate`, `diff`, and `sync`) on Kong configs via the Kong Admin API.
Here are the steps.
- API developers write Kong declarative configs locally. For this particular usage, the config can also come from a local `deck dump`.
- API developers push the configs to GitHub and create a pull request to main.
- GitHub Actions validates these configs and displays what changes the new configs will make.
- The platform owner reviews these config changes, makes sure the tests pass, and then merges the PR.
- GitHub Actions syncs the configs to Kong via the Admin API.
There are some principles we need to follow:
- Pipeline design considerations
- No sensitive information should be stored in plain text in any workflow or config files.
- There should be a separate pipeline to run tests. The platform owner is not allowed to merge if any test fails. (Since tests are unique from API to API, they are not covered in this post.)
- Folder structure should not be changed.
- Privilege separation
- No ONE should be allowed to push configs to Kong manually. GitHub Actions is the only tool that can sync configs to Kong.
- API developers must NOT have merge privileges.
- At least one review is required to approve a PR.
Folder structure
Here is our folder structure.
```
├── .github
│   └── workflows
│       ├── onboarding_pr.yaml
│       └── ...
├── consumers
│   ├── meta.yaml
│   └── ...
├── global-plugins
│   ├── meta.yaml
│   └── ...
├── echo
│   └── configs.yaml
└── meta.yaml
```
Let me break it down:
- Workflows
- There are two types of workflows here: onboard and offboard.
- Our main focus will be on onboard, which can easily be extended to support multiple environments.
- The offboard workflow is just a demo of how to remove a service and its related configs. You should make sure it is ONLY triggered when you are certain the onboarding process excludes the service.
- Global entities
- consumers and global-plugins are considered global entities, so I put them in their own folders under the same global GHA job.
- Different `meta.yaml` files are used in these folders, and I run separate deck commands in different steps to ensure config isolation.
- If you need to manage other global entities, such as ca_certificate, you can create a new folder in the same fashion.
- Services
- Use `meta.yaml` in the root folder to set up defaults for ALL services and routes.
- The service tag is generated dynamically from the folder name, in the format `<folder-name>-svc`.
- Each folder contains all configuration (routes, upstreams, plugins) related to that service.
- Plugins can be configured at either the service or route level by referencing the service/route name on the plugin instance.
- You can break `config.yaml` down further into `services.yaml`, `upstreams.yaml`, and `routes.yaml` if you need to.
Workflow Examples
Let me use onboarding_pr.yaml as an example to show you how the workflow works.
Action Trigger
This workflow is triggered by any pull request to the main branch, except for changes to the README file. I also allow this workflow to be triggered manually.
```yaml
name: Onboard APIs checks
on:
  pull_request:
    branches:
      - main
    paths-ignore:
      - 'README.md'
  workflow_dispatch:
```
Global entities checks
As the global entities only need to be applied once per environment, I use a separate job to sync global configs.
- Matrix defines how many environments this job needs to run in and provides a different Admin API address per env.
- The auth header, which can be the basic auth header I am using, or an RBAC token if you are running this flow with Konnect or Kong Enterprise, is fetched from GitHub secrets. You can also specify different tokens per environment.
- Check out code.
- Install decK.
- `deck ping` to make sure the control plane is reachable.
- `deck validate` the config files for consumers and global-plugins.
- `deck diff` to output the global plugin and consumer differences between the config files and what’s in the database.
```yaml
check-global-configs:
  # ...
```
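The steps above can be sketched as a job like the following. This is a sketch under assumptions: the Admin API URLs, the decK release version, and the env-variable wiring (`DECK_KONG_ADDR` and `DECK_HEADERS` map to decK’s `--kong-addr` and `--headers` flags) are my placeholders, not necessarily the exact setup:

```yaml
check-global-configs:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      env: [uat, prod]
      include:
        - env: uat
          kong_admin_api: https://uat.li.lan/admin   # placeholder
        - env: prod
          kong_admin_api: https://prod.li.lan/admin  # placeholder
  env:
    DECK_KONG_ADDR: ${{ matrix.kong_admin_api }}
    DECK_HEADERS: ${{ secrets.KONG_AUTH_HEADER }}
  steps:
    - uses: actions/checkout@v4
    - name: Install decK
      run: |
        curl -sL https://github.com/Kong/deck/releases/download/v1.27.1/deck_1.27.1_linux_amd64.tar.gz \
          | tar -xz -C /tmp && sudo mv /tmp/deck /usr/local/bin/
    - name: Ping the control plane
      run: deck ping
    - name: Validate global configs
      run: |
        deck validate -s consumers
        deck validate -s global-plugins
    - name: Diff global configs
      run: |
        deck diff -s consumers
        deck diff -s global-plugins
```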
Services entities checks
There are two matrices being used here.
- env: the environment you want to sync your configs to.
- include is used to set a different host name and Admin API address for each env.
- exclude is used to control which services should NOT be synced to an environment.
- folders: as mentioned above, we store all configs for a service in its own folder. Here we sync all folders.
The flow is pretty similar to the global config one; it runs once for every single folder.
Some environment variables that I am using:
- KONG_AUTH_HEADER is used to authenticate.
- DECK_SERVICE_TAG is used to create Kong tags for each service. decK manages Kong entities with different tags separately. This means every time you run `deck sync`, it only checks the entities with the same tag. Currently I set this Kong tag to `${{ env "DECK_SERVICE_TAG" }}-svc`, which is essentially `${foldername}-svc`, in the `meta.yaml` file in the root folder.
- DECK_PROXY_HOSTNAME is a different host name for each environment, for example `uat.li.lan`, `prod.li.lan`, etc. This host name is used on all route objects.
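For reference, a root `meta.yaml` along these lines would pin the tag and let decK substitute the environment variable when it reads the file (a sketch; the route defaults shown are an assumption of mine):

```yaml
_format_version: "3.0"
_info:
  select_tags:
    # decK substitutes DECK_SERVICE_TAG at run time
    - ${{ env "DECK_SERVICE_TAG" }}-svc
  defaults:
    route:
      protocols:
        - http
        - https
```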
```yaml
check-services-configs:
  # ...
```
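Under those assumptions (placeholder Admin API addresses, and a hypothetical `get-folders` job that emits the folder list, like the dynamic matrix build described in the next section), the matrix section might be sketched as:

```yaml
check-services-configs:
  needs: get-folders
  runs-on: ubuntu-latest
  strategy:
    matrix:
      env: [uat, prod]
      folders: ${{ fromJson(needs.get-folders.outputs.folders) }}
      include:
        - env: uat
          kong_admin_api: https://uat.li.lan/admin   # placeholder
          proxy_hostname: uat.li.lan
        - env: prod
          kong_admin_api: https://prod.li.lan/admin  # placeholder
          proxy_hostname: prod.li.lan
      exclude:
        # keep this service out of prod until it is ready
        - env: prod
          folders: echo
  env:
    DECK_KONG_ADDR: ${{ matrix.kong_admin_api }}
    DECK_HEADERS: ${{ secrets.KONG_AUTH_HEADER }}
    DECK_SERVICE_TAG: ${{ matrix.folders }}
    DECK_PROXY_HOSTNAME: ${{ matrix.proxy_hostname }}
  steps:
    - uses: actions/checkout@v4
    # ... install decK, then ping/validate/diff against the service folder
    - name: Diff service configs
      run: deck diff -s ${{ matrix.folders }}
```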
Get folder names
As you can see, our service sync relies heavily on the folder names that we put in the `folders` matrix. If you want to control exactly what gets synced, you can put the folder names in an array. Here I am using a dynamic matrix build to get all folder names.
```yaml
jobs:
  # ...
```
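One way to build that matrix, assuming every top-level folder other than `.github`, `consumers`, and `global-plugins` holds a service (the job and step names are mine):

```yaml
jobs:
  get-folders:
    runs-on: ubuntu-latest
    outputs:
      folders: ${{ steps.list.outputs.folders }}
    steps:
      - uses: actions/checkout@v4
      - id: list
        name: List service folders as a JSON array
        run: |
          folders=$(find . -maxdepth 1 -mindepth 1 -type d \
            -not -name '.github' -not -name 'consumers' -not -name 'global-plugins' \
            -printf '%f\n' | jq -R . | jq -sc .)
          echo "folders=$folders" >> "$GITHUB_OUTPUT"
```

The `jq -R . | jq -sc .` pipeline turns the newline-separated folder names into a compact JSON array that `fromJson` can consume in a downstream job.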
Write declarative config
Another important aspect of this workflow is writing declarative Kong configs. You can check the official doc for more information, and I will describe how I write mine below.
Consumers and certs
Consumer, certificate, SNI, and ca_certificate objects are considered global entities. We need to put these entities in their own folders. As for the format, you can check the respective official docs. Let me use the consumer object as an example.
In the doc we see:
```json
{
  "id": "127dfc88-ed57-45bf-b77a-a9d3a152ad31",
  "created_at": 1422386534,
  "username": "my-username",
  "custom_id": "my-custom-id",
  "tags": ["user-level", "low-budget"]
}
```
We know that id and created_at can be auto-generated, and tags are managed by the `meta.yaml` file. We only need to care about username and custom_id. To create a consumer object, I can write it as simply as below when I don’t need to use custom_id.
```yaml
consumers:
  - username: my-username
```
The same rule applies to the ca_certificate object. In the doc, it is listed as below:
```json
{
  "id": "04fbeacf-a9f1-4a5d-ae4a-b0407445db3f",
  "created_at": 1422386534,
  "cert": "-----BEGIN CERTIFICATE-----...",
  "cert_digest": "c641e28d77e93544f2fa87b2cf...",
  "tags": ["user-level", "low-budget"]
}
```
We can write it as below.
```yaml
ca_certificates:
  - id: 04fbeacf-a9f1-4a5d-ae4a-b0407445db3f  # generated manually
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```
Please note that I am generating an id manually for the ca_certificate here, because I need to reference this id on my service object. The same rule also applies to certificate objects. The reason is that these objects do not have a name that other objects can reference.
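To illustrate why the id matters, a service that needs this CA bundle for upstream TLS verification could reference it like this (a sketch; the service name, URL, and uuid are placeholders):

```yaml
services:
  - name: upstream-mtls-service             # placeholder
    url: https://secure-upstream.internal   # placeholder
    # Kong verifies the upstream certificate against these CA certificates
    ca_certificates:
      - 04fbeacf-a9f1-4a5d-ae4a-b0407445db3f  # the id set in the ca_certificates config
```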
Services and Routes
Service, route, and upstream objects are the foundation of Kong. You can use the same method as above to write these objects. Let me use the `configs.yaml` of the echo service as an example.
```yaml
services:
  # ...
```
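A minimal `configs.yaml` for the echo service along these lines might be (the upstream URL and paths are placeholders of mine):

```yaml
services:
  - name: ${{ env "DECK_SERVICE_TAG" }}-service
    url: http://echo.internal:8080   # placeholder upstream
    routes:
      - name: echo-route
        paths:
          - /echo
      - name: echo-basic-auth-route
        paths:
          - /echo/secure              # placeholder path
        hosts:
          - ${{ env "DECK_PROXY_HOSTNAME" }}
```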
Here I only write this service with the name `${{ env "DECK_SERVICE_TAG" }}-service`, a URL, and a few routes under this service. Combined with the defaults I defined in the `meta.yaml` file in the root folder, this service will be stored as below.
```yaml
services:
  - name: echo-service
    protocol: http
    port: 80
    connect_timeout: 60000
    read_timeout: 60000
    write_timeout: 60000
    retries: 5
    tags:
      - echo-svc
    # ...
```
And one of the routes looks like below.
```yaml
services:
  - name: echo-service
    routes:
      - name: echo-basic-auth-route
        hosts:
          - uat.li.lan
        protocols:
          - http
          - https
        preserve_host: false
        strip_path: true
        tags:
          - echo-svc
        # ...
```
As you can see, you only need to write the parts you want to set, and either leave everything else as default or set a common rule for all related objects. If you need to overwrite a default value, you just write it under the object directly. This greatly reduces repetitive configs and makes our Kong objects clean and easy to read.
Plugins
Plugin configs are also very easy to write. You can go to the official Kong documentation, choose the plugin you want to use, and there should be an example there. Let me use basic-auth as an example.
When we click Declarative (YAML) on the page, we should see:
```yaml
plugins:
  - name: basic-auth
    service: service-name
    config:
      hide_credentials: true
```
Let’s compare it with the one I applied on my echo-basic-auth-route.
```yaml
plugins:
  - name: basic-auth
    route: echo-basic-auth-route
    config:
      hide_credentials: true
```
The final plugin config looks like this.
```yaml
services:
  - name: echo-service
    routes:
      - name: echo-basic-auth-route
        plugins:
          - name: basic-auth
            config:
              hide_credentials: true
        # ...
```
As you can see, the plugin is applied under the route correctly. You can also apply plugins at the service level; just make sure to match the service name.
That’s all I want to show you today. See you on the next one.