
Deploying Kong Gateway in DB-less Mode on Kubernetes

Mahesh Manickam
November 1, 2024
5 min read

Kong Gateway is renowned for its lightweight, high-performing API gateway capabilities. It can be deployed on self-managed systems or consumed as a Platform as a Service (PaaS) through the Kong Konnect enterprise offering. Among the various deployment methods, the DB-less and declarative deployment stands out for its simplicity and efficiency, especially in Kubernetes environments. This blog will guide you through setting up Kong Gateway in DB-less mode on Kubernetes.

Deployment Methods for Kong Gateway

Before diving into the setup, let's briefly explore the three main deployment methods for Kong Gateway on self-managed systems:

  1. Traditional (Database-Backed Setup): This method requires a database to store all configured entities, such as services, routes, consumers, and plugins. Postgres is the recommended database for this deployment model. Configurations can be managed through the Kong Admin API or decK.
  2. DB-less and Declarative: This method simplifies the deployment by storing configurations in memory on each node where Kong Gateway is provisioned. The configurations are managed externally in a declarative form, making the Admin API read-only and limiting the use of plugins that require a database.
  3. Hybrid: A combination of the above two methods, where a control plane (CP) manages configurations via a database and interacts with the Admin API. Data planes (DP) are connected to the CP to receive real-time configurations.

The focus of this blog is on the DB-less and declarative deployment method using Kubernetes.

Setting Up Kong Gateway on Kubernetes with Helm

To deploy Kong Gateway in DB-less mode, we'll use Helm, a package manager for Kubernetes.

Step 1: Add the Kong Helm Chart Repository

First, add the Kong Helm chart repository and update it to receive the latest chart version:

helm repo add kong https://charts.konghq.com
helm repo update

Step 2: Create a Kubernetes Namespace

Create a new namespace named kong:

kubectl create ns kong

Step 3: Create a ConfigMap with Kong Configuration

Create a ConfigMap object with the declarative Kong configuration. In this ConfigMap, we define a simple service and route configuration that Kong will load at startup.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-dbless-config
  namespace: kong
data:
  kong.yml: |
    _format_version: "1.1"
    services:
    - name: demo-service
      url: http://demo-service.demo-ns.svc.cluster.local
      routes:
      - name: demo-service-route
        paths:
        - /demo
      plugins:
      - name: key-auth
        config:
          hide_credentials: true
          key_names:
          - apiKey
    consumers:
    - username: demo
    keyauth_credentials:
    - consumer: demo
      key: BA46-9E88C74170E5
    plugins:
    - name: prometheus
      config:
        per_consumer: false

Note: It's important to set an explicit id for each configuration entity when multiple replicas are configured. If no id is defined, each Kong node generates its own value when it loads the configuration, so the same entity can end up with different ids across replicas, which leads to inconsistencies when requests from Kong Manager or the Admin API are load-balanced across nodes.
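
For example, a service entry with an explicit, pre-generated UUID would look like the following (the UUID shown here is just an illustrative value):

services:
- name: demo-service
  id: 0b04b3e9-1a2b-4c5d-8e6f-7a8b9c0d1e2f
  url: http://demo-service.demo-ns.svc.cluster.local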

When deploying Kong Gateway in DB-less mode, one of the critical considerations is managing sensitive information like API keys securely. While Kong's declarative configuration approach simplifies deployment, embedding sensitive data directly in configuration files is not advisable.

Why You Shouldn’t Hardcode API Keys

In a typical Kong Gateway setup, API keys or other credentials might be included in configurations as part of key-auth credentials or other plugins. However, hardcoding these sensitive values poses significant security risks:

  • Exposure in Version Control: Hardcoded keys may inadvertently be committed to version control systems, making them accessible to unauthorized users.
  • Security Breaches: If an attacker gains access to the configuration file, they could extract these keys and potentially misuse them.
  • Compliance Issues: Many regulatory standards require the secure handling of sensitive information, and hardcoding credentials would likely violate these requirements.

Using Kong Init Containers for Secure Key Management

To address these concerns, one effective solution is to use Kong init containers. Init containers are specialized containers that run before the main application containers in a Pod. They can perform setup tasks, such as injecting sensitive data into the environment securely.

Here’s how you can use init containers with Kong Gateway (a minimal sketch follows the list below):

  1. Define the Init Container: Configure an init container in your Kubernetes Pod specification to fetch the sensitive data from a secure store like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault.
  2. Replace Placeholder Strings: During the initialization process, the init container can replace placeholder strings in your Kong configuration with actual API keys or credentials fetched from the secure store.
  3. Environment Variables and Mounts: The secrets can be stored as environment variables or mounted as files, which Kong can then reference in its configuration.
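
As a rough illustration of this pattern, here is a trimmed Pod spec fragment. It assumes the real key has already been synced into a Kubernetes Secret named kong-api-keys and that the ConfigMap stores a placeholder token __DEMO_API_KEY__ instead of the real value; the names, image, and paths are illustrative, and with the Helm chart the equivalent would be wired in through its init container and volume settings rather than a raw Pod spec.

spec:
  volumes:
  - name: kong-config-template      # ConfigMap holding kong.yml with the __DEMO_API_KEY__ placeholder
    configMap:
      name: kong-dbless-config
  - name: kong-config-rendered      # shared scratch space for the rendered configuration
    emptyDir: {}
  initContainers:
  - name: render-kong-config
    image: busybox:1.36
    env:
    - name: DEMO_API_KEY
      valueFrom:
        secretKeyRef:
          name: kong-api-keys       # hypothetical Secret holding the real API key
          key: demo-api-key
    command:
    - sh
    - -c
    # Replace the placeholder with the real key and write the rendered file
    - sed "s/__DEMO_API_KEY__/${DEMO_API_KEY}/" /template/kong.yml > /rendered/kong.yml
    volumeMounts:
    - name: kong-config-template
      mountPath: /template
    - name: kong-config-rendered
      mountPath: /rendered
  containers:
  - name: proxy
    # ... the Kong container, with declarative_config pointed at /rendered/kong.yml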

Supported Secrets Providers

Kong Gateway can integrate with various secrets management solutions, allowing for flexible and secure management of sensitive data:

  • AWS Secrets Manager: A fully managed service that helps you protect access to your applications, services, and IT resources without the upfront cost and maintenance of your own hardware security module (HSM).
  • Azure Key Vault: A cloud service for securely storing and accessing secrets. It's designed to safeguard cryptographic keys and secrets used by cloud applications and services.
  • HashiCorp Vault: Provides a unified interface to any secret while providing tight access control and recording a detailed audit log.
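
For example, with AWS Secrets Manager, the init container (or a small sync script) could pull the key with the AWS CLI before rendering the configuration; the secret id kong/demo-api-key is illustrative and assumes the pod has IAM permission to read it:

DEMO_API_KEY=$(aws secretsmanager get-secret-value \
    --secret-id kong/demo-api-key \
    --query SecretString \
    --output text)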

Using init containers to handle sensitive data securely in a DB-less Kong Gateway setup is an effective strategy to prevent the exposure of API keys and other sensitive information. By integrating with secure secret management services like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault, you can ensure that your deployments are both secure and compliant with best practices and regulatory requirements.

Step 4: Create a Custom Values File

Create a values-custom.yaml file to customize the Helm deployment for DB-less mode:

replicaCount: 3

podDisruptionBudget:
    enabled: true
    minAvailable: "50%"

resources:
    requests:
        cpu: 500m
        memory: 512Mi

proxy:
    enabled: true
    type: ClusterIP
    http:
        enabled: true
    tls:
        enabled: false

admin:
    enabled: true
    type: ClusterIP
    http:
        enabled: true
    tls:
        enabled: false

manager:
    enabled: true
    type: ClusterIP
    http:
        enabled: true
    tls:
        enabled: false

status:
    enabled: true
    type: ClusterIP
    http:
        enabled: true
    tls:
        enabled: false

dblessConfig:
    configMap: kong-dbless-config

env:
    database: "off"
    admin_api_uri: "0.0.0.0:8001"
    admin_gui_uri: "0.0.0.0:8002"

In the above configuration, setting the database environment variable to off tells Kong not to expect a database and to operate in DB-less mode, loading its configuration directly from the specified file instead. You also need to point Kong to the ConfigMap created earlier by setting the configMap property under the dblessConfig section, so Kong uses the configuration stored in the kong-dbless-config ConfigMap.

Key Point: To achieve a DB-less setup, disable the database under the env section and point the configMap property under the dblessConfig section to the ConfigMap object created earlier. If the ConfigMap is mounted to a different custom location inside the container, set the declarative_config environment variable to that location, as shown below.
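
For example, if the configuration file is mounted at a custom path rather than the chart's default location, the env section would look something like this (the path is illustrative):

env:
    database: "off"
    declarative_config: /opt/kong/declarative/kong.yml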

Step 5: Install Kong Gateway Using Helm

Run the following command to install Kong Gateway:

helm install kong-gw kong/kong --namespace kong --values ./values-custom.yaml --create-namespace
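
Later changes to values-custom.yaml or to the declarative ConfigMap are rolled out with a Helm upgrade; if only the ConfigMap content changed, restarting the pods makes them reload it (the deployment name below assumes the chart's default <release>-kong naming):

helm upgrade kong-gw kong/kong --namespace kong --values ./values-custom.yaml
kubectl rollout restart deployment kong-gw-kong -n kong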

Step 6: Verify the Deployment

Check the status of the pods to ensure they are running:

kubectl get pods -n kong
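
Optionally, smoke-test the proxy and the key-auth protected route; the service name below assumes the chart's default <release>-kong-proxy naming:

kubectl port-forward -n kong svc/kong-gw-kong-proxy 8000:80 &
curl -i http://localhost:8000/demo                                  # should be rejected without a key
curl -i http://localhost:8000/demo -H 'apiKey: BA46-9E88C74170E5'   # should be proxied to demo-service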

Enabling Monitoring Through Prometheus

To enable monitoring of Kong metrics with Prometheus, add the following configuration to the values-custom.yaml file:

serviceMonitor:
    enabled: true
    interval: 30s
    namespace: monitoring
    labels:
        release: prometheus
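
To confirm that metrics are actually being exposed, you can query the status endpoint directly; the deployment name and port below assume the chart defaults:

kubectl port-forward -n kong deployment/kong-gw-kong 8100:8100 &
curl -s http://localhost:8100/metrics | head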

Enabling TLS

To enable TLS for Kong, create a secret and mount it as a container volume:

kubectl create secret generic kong-certs --namespace kong --from-file=tls.crt=path/to/cert/file --from-file=tls.key=path/to/key/file --from-file=ca.crt=path/to/cacert/file

Update the values-custom.yaml file to include the TLS configuration:

secretVolumes:
- kong-certs

env:
    ssl_cert: /etc/secrets/kong-certs/tls.crt
    ssl_cert_key: /etc/secrets/kong-certs/tls.key
    admin_ssl_cert: /etc/secrets/kong-certs/tls.crt
    admin_ssl_cert_key: /etc/secrets/kong-certs/tls.key
    admin_gui_ssl_cert: /etc/secrets/kong-certs/tls.crt
    admin_gui_ssl_cert_key: /etc/secrets/kong-certs/tls.key
    status_ssl_cert: /etc/secrets/kong-certs/tls.crt
    status_ssl_cert_key: /etc/secrets/kong-certs/tls.key
    lua_ssl_trusted_certificate: /etc/secrets/kong-certs/ca.crt
    nginx_proxy_proxy_ssl_trusted_certificate: /etc/secrets/kong-certs/ca.crt

Also, add admin_listen, and update the admin_api_uri and admin_gui_uri values to point to the respective TLS-secured ingress URLs.

env:
    database: "off"
    admin_listen: "0.0.0.0:8444 ssl"
    admin_api_uri: "https://kongadmin.mydomain.com"
    admin_gui_uri: "https://kongmanager.mydomain.com"

The Ingress object needs to be created with SSL passthrough enabled. If the NGINX ingress controller is used, here is a sample configuration.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
  name: kong-ingress
  namespace: kong
spec:
  ingressClassName: nginx
  rules:
  - host: proxy.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kong-gw-proxy
            port:
              number: 80
  - host: kongadmin.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kong-gw-admin
            port:
              number: 8444
  - host: kongmanager.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kong-gw-manager
            port:
              number: 8445

Benefits of Running Kong in DB-less Mode

  • Reduced Dependencies: No need to manage a database if the setup fits entirely in memory.
  • Consistent Configuration: The configuration is always in a known state, with no intermediate states between creating a service and a route using the Admin API.
  • CI/CD Integration: This mode is ideal for automation in CI/CD pipelines, as configurations can be managed in a Git repository and validated before rollout (see the example after this list).
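
For example, a pipeline step can validate the declarative file before it is committed or rolled out, using Kong's built-in parser (run in any environment or container image that has the kong binary available):

kong config parse kong.yml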

However, remember that in DB-less mode, the Admin API is read-only, and plugins requiring a database cannot be used.
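
You can see this read-only behavior directly; the commands below assume the chart's default <release>-kong-admin service name and the plain-HTTP admin port configured earlier:

kubectl port-forward -n kong svc/kong-gw-kong-admin 8001:8001 &
curl -s http://localhost:8001/services                                              # reads work
curl -i -X POST http://localhost:8001/services -d name=test -d url=http://example.com   # writes are rejected in DB-less mode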

By following these steps, you can efficiently deploy Kong Gateway in DB-less mode on Kubernetes, providing a lightweight and straightforward API management solution.
