CI/CD using Tekton, Argo CD and Knative Serverless Applications

This page is generated using information from: https://github.com/digitalocean/container-blueprints/blob/master/DOKS-CI-CD/README.md.

This blueprint will show you how to implement a CI/CD solution using free and popular open source implementations that run on Kubernetes clusters natively.

DigitalOcean Marketplace provides pre-configured 1-Click apps that you can quickly deploy to your DigitalOcean Kubernetes (DOKS) cluster. You will be using 1-Click apps to provision each software component for your cluster.

You will learn how to use Tekton to build a CI/CD pipeline that continuously fetches code changes from a Git repository, and builds a Docker image for your custom application. Then, Tekton will push the Docker image to a remote registry and notify Argo CD to deploy it to your Kubernetes cluster. This guide will also teach you how to use Knative Eventing to automatically trigger the CI/CD pipeline each time code is pushed to your application GitHub repository. All these steps run automatically.

This tutorial uses the following tools:

  1. Kaniko for building container images directly in a Kubernetes cluster.
  2. Tekton Pipelines and Argo CD for implementing the CI/CD process.
  3. Knative for running and exposing application functionality on Kubernetes.
  4. Cert-Manager for managing TLS termination of Knative Services.

After completing this blueprint, you will have a fully functional CI/CD pipeline that continuously builds and deploys code changes for your custom applications using Kubernetes.

The following diagram shows the complete setup:

DOKS CI/CD Overview

In the tutorial, you will learn:

  1. About components such as Kaniko, Tekton, Argo CD, and Knative.
  2. How to install each component using the DigitalOcean 1-Click apps.
  3. How to configure the required components, such as Knative Serving and Eventing, to react to GitHub events and trigger the CI/CD pipeline.
  4. How to implement and test the CI/CD flow, and deploy a sample Knative application (2048 game).

Prerequisites

To complete this tutorial, you will need:

  1. A working domain that you own. This is required for exposing public services, such as GitHub webhooks, used in this guide. Make sure to also read the DigitalOcean DNS Quickstart Guide.
  2. A working DOKS cluster running Kubernetes version greater than 1.21. The DOKS cluster must have at least 2 nodes, each with 2 CPUs, 4 GB of memory, and 20 GB of disk storage. For additional instructions on configuring a DOKS cluster, see How to Set Up a DigitalOcean Managed Kubernetes Cluster (DOKS).
  3. A git client to interact with GitHub repositories.
  4. doctl for interacting with DigitalOcean API.
  5. kubectl for interacting with Kubernetes. Follow these instructions to connect to your cluster with kubectl and doctl.
  6. Helm for interacting with Helm releases created by the DigitalOcean 1-Click apps used in this tutorial.
  7. Argo CLI to interact with Argo CD using the command line interface.
  8. Tekton CLI to interact with Tekton Pipelines using the command line interface.
  9. Knative CLI for interacting with Knative using the command line interface.
  10. Kustomize is extensively used in this guide, and some basic knowledge is required. You can follow our community tutorial as a starting point.
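
As a quick refresher, a minimal kustomization.yaml lists base resources and patches to overlay on top of them. The file names below are purely illustrative:

```yaml
# kustomization.yaml - minimal illustrative example
resources:
  - deployment.yaml              # base manifest
patches:
  - path: patches/replicas.yaml  # strategic merge patch applied on top of the base
```

Kustomize renders the base with all patches applied when you run kubectl apply -k, so the base manifests never need to be modified in place.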

STEP 1: Prepare the Sample Application Requirements

Before continuing with the tutorial, perform the following steps:

  1. Fork the sample application repository used in this guide.
  2. Provision a DigitalOcean Container Registry to store the sample application images.
  3. Create a dedicated Kubernetes namespace to store all custom resources used in this tutorial.

Fork the Sample Application Repo

To test the Tekton CI/CD flow presented in this blueprint, you need to fork the tekton-sample-app repository first. Also, create a GitHub Personal Access Token (PAT) with the appropriate permissions, as explained here. The PAT is needed to allow the GitHubSource CRD to manage webhooks for you automatically. Make sure to store the PAT credentials somewhere safe because you will need them later.
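
The GitHubSource resource used later in this guide reads the PAT from a Kubernetes secret. A hedged sketch of storing it (the secret name github-pat and the accessToken/secretToken keys match the GitHubSource manifest shown later in this guide; run this after creating the doks-ci-cd namespace, and adjust names if yours differ):

```shell
# Store the GitHub PAT in a Kubernetes secret for the GitHubSource to consume.
# The secretToken is a random value used by GitHub to sign webhook payloads.
kubectl create secret generic github-pat \
  --namespace doks-ci-cd \
  --from-literal=accessToken="<YOUR_GITHUB_PAT_HERE>" \
  --from-literal=secretToken="$(head -c 24 /dev/urandom | base64)"
```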

Create a DigitalOcean Container Registry

You need a DigitalOcean Container Registry to store the sample application images. For detailed steps on how to create one, follow the quickstart guide. A free plan is adequate to complete this guide.

Alternatively, you can run the following command to provision a new registry:

doctl registry create <YOUR_DOCKER_REGISTRY_NAME_HERE> \
  --region <YOUR_DOCKER_REGISTRY_REGION_HERE> \
  --subscription-tier starter

The command includes the following flags:

  • --region: Specifies the region name to provision the registry in. You can list all the available regions via the doctl registry options available-regions command.
  • --subscription-tier: Specifies the subscription tier to use. You can list all the available tiers via the doctl registry options subscription-tiers command. The example above uses the starter tier, which is free to use.

Then, run the following command to verify that the registry was created successfully. Make sure to replace the <> placeholders accordingly in the following command:

doctl registry get <YOUR_DOCKER_REGISTRY_NAME_HERE>

The output looks similar to the following:

Name         Endpoint                               Region slug
tekton-ci    registry.digitalocean.com/tekton-ci    nyc3

In this guide, the registry is named tekton-ci and provisioned in the nyc3 region.

Finally, you need to configure your DOKS cluster to be able to pull images from your private registry. DigitalOcean provides an easy way of accomplishing this task using the control panel. First, navigate to the Settings tab of your container registry. Then, click on the Edit button from the DigitalOcean Kubernetes Integration section. Finally, select the appropriate checkbox, and press the Save button.

DigitalOcean Kubernetes Integration
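
If you prefer the command line, the same integration can be configured with doctl. This is a sketch, assuming a doctl version recent enough to support the registry integration subcommand:

```shell
# Grant your DOKS cluster pull access to your container registry.
# Replace <YOUR_CLUSTER_NAME_HERE> with your DOKS cluster name or ID.
doctl kubernetes cluster registry add <YOUR_CLUSTER_NAME_HERE>
```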

Create a Dedicated Namespace for Kubernetes Resources

It’s generally a best practice to have a dedicated namespace when provisioning new resources in your Kubernetes cluster to keep everything organized. A dedicated namespace also lets you easily clean up everything later on.

This tutorial uses the doks-ci-cd namespace. Run the following command to create it:

kubectl create ns doks-ci-cd

Then, check if the namespace is successfully created:

kubectl get ns doks-ci-cd

The output looks similar to:

NAME         STATUS   AGE
doks-ci-cd   Active   13m

Clone the Sample Repository

Clone the container-blueprints repo using the following command:

git clone https://github.com/digitalocean/container-blueprints.git

Then, change the directory to your local copy using the following command:

cd container-blueprints

Next, you will install each software component required by this guide using the DigitalOcean Marketplace collection of 1-click apps for Kubernetes.

Step 2: Install Cert-Manager

Cert-Manager is available as a 1-Click Kubernetes application from the DigitalOcean Marketplace. To install Cert-Manager, navigate to the Marketplace tab of your cluster and search for the app. Then, click on the Install App button and follow the instructions.

Cert-Manager 1-click App Install

After the installation finishes, you should see the new application listed in the Marketplace tab of your Kubernetes cluster. The output looks similar to:

Cert-Manager 1-click App Listing

Finally, check if the installation was successful by following the Getting started after deploying Cert-Manager section from the Cert-Manager 1-Click app documentation page.

Next, you will provision Tekton Pipelines on your Kubernetes cluster from DigitalOcean Marketplace.

Step 3: Install Tekton

Tekton installation is divided into two parts:

  1. Tekton Pipelines is the main component of Tekton, providing support for Pipelines as well as other core primitives, such as Tasks.
  2. Tekton Triggers provides an additional component to support triggering pipelines whenever events emitted by various sources (such as GitHub) are detected.

Tekton Pipelines is available as a 1-Click Kubernetes application from the DigitalOcean Marketplace. On the other hand, you will install Tekton Triggers using kubectl.

Provision Tekton Pipelines

To install Tekton Pipelines, navigate to the Marketplace tab of your cluster and search for the app. Then, click on the Install App button from the right side, and follow the instructions:

Tekton Pipelines 1-click App Install

After the installation completes, you should see the new application listed in the Marketplace tab of your Kubernetes cluster. The output looks similar to:

Tekton Pipelines 1-click App Listing

Finally, check if the installation was successful by following the Getting started after deploying Tekton Pipelines section from the Tekton Pipelines 1-Click app documentation page.

Next, you will provision Tekton Triggers on your Kubernetes cluster.

Provision Tekton Triggers

Tekton Triggers is not available as a 1-Click application, so you will install it using kubectl as recommended in the official installation page. Run the following commands to install Tekton Triggers and its dependencies. The latest stable version available at the time of writing is v0.20.1:

kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.20.1/release.yaml
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.20.1/interceptors.yaml

Note: Tekton Triggers requires Tekton Pipelines to be installed first as a dependency, as described in the Provision Tekton Pipelines section. By default, it uses the tekton-pipelines namespace to create the required resources.

Next, check if Tekton Triggers was installed successfully:

kubectl get pods --namespace tekton-pipelines -l app.kubernetes.io/part-of=tekton-triggers

The output looks similar to:

NAME                                                 READY   STATUS    RESTARTS   AGE
tekton-triggers-controller-75b9b7b77d-5nk76          1/1     Running   0          2m
tekton-triggers-core-interceptors-7769dc7cbc-8hjkn   1/1     Running   0          2m
tekton-triggers-webhook-79c866dc85-xz64m             1/1     Running   0          2m

All tekton-triggers pods should be running and healthy. You can also list the installed Tekton components and corresponding version using the Tekton CLI:

tkn version

The output looks similar to:

Client version: 0.24.0
Pipeline version: v0.29.0
Triggers version: v0.20.1

Next, you will provision the Tekton Dashboard on your Kubernetes cluster using kubectl.

Provision Tekton Dashboard

Tekton Dashboard is not yet available as a 1-Click application, so you will install it using kubectl as recommended in the official installation page. Run the following command to install the Tekton Dashboard and its dependencies. The latest stable version available at the time of writing is v0.28.0:

kubectl apply -f https://storage.googleapis.com/tekton-releases/dashboard/previous/v0.28.0/tekton-dashboard-release.yaml

Note: Tekton Dashboard requires Tekton Pipelines to be installed first as a dependency, as described in the Provision Tekton Pipelines section. By default, it uses the tekton-pipelines namespace to create required resources.

Next, check if the Tekton Dashboard installation was successful:

kubectl get pods --namespace tekton-pipelines -l app.kubernetes.io/part-of=tekton-dashboard

The output looks similar to:

NAME                                READY   STATUS    RESTARTS   AGE
tekton-dashboard-56fcdc6756-p848r   1/1     Running   0          5m

All tekton-dashboard pods should be running and healthy. You can also list the installed Tekton components and their versions using the Tekton CLI:

tkn version

The output looks similar to:

Client version: 0.24.0
Pipeline version: v0.29.0
Triggers version: v0.20.1
Dashboard version: v0.28.0

The Tekton Dashboard can be accessed by port-forwarding the associated Kubernetes service. First, check the associated service:

kubectl get svc --namespace tekton-pipelines -l app.kubernetes.io/part-of=tekton-dashboard

The output looks similar to the following:

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
tekton-dashboard   ClusterIP   10.245.127.170   <none>        9097/TCP   23s

Notice that the Kubernetes service is named tekton-dashboard and listening on port 9097.

Now, port-forward the tekton-dashboard service:

kubectl port-forward svc/tekton-dashboard -n tekton-pipelines 9097:9097

Finally, open a web browser and navigate to localhost:9097. You should see the welcome page:

Tekton Dashboard Welcome Page

You can explore each section from the left menu, and see what options are available. Next, you will install the Argo CD 1-Click app from DigitalOcean marketplace.

Step 4: Install Argo CD

Argo CD is available as a 1-Click Kubernetes application from the DigitalOcean Marketplace. To install Argo CD, navigate to the Marketplace tab of your cluster and search for the app. Then, click on the Install App button, and follow the instructions:

Argo CD 1-click App Install

After the installation finishes, you should see the new application listed in the Marketplace tab of your Kubernetes cluster. The output looks similar to:

Argo CD 1-click App Listing

Finally, check if the installation was successful by following the Getting started after deploying Argo CD section from the Argo CD 1-click app documentation page.

Next, you will install the Knative 1-Click app from the DigitalOcean Marketplace.

Step 5: Install Knative

Knative is available to install as a 1-Click Kubernetes application from the DigitalOcean Marketplace. To install Knative, navigate to the Marketplace tab of your cluster and search for the app. Then, click on the Install App button, and follow the instructions:

Knative 1-click App Install

After the installation finishes, you should see the new application listed in the Marketplace tab of your Kubernetes cluster. The output looks similar to:

Knative 1-click App Listing

Finally, check if the installation was successful by following the Getting started after deploying Knative section from the Knative 1-click app documentation page.

Note: The Knative 1-Click app installs both the Knative Serving and Eventing components in your DOKS cluster, via the Knative Operator.

Next, you will configure each component of Knative to work in conjunction with Tekton Pipelines and trigger the CI automation on GitHub events such as push. You will also publicly expose and enable TLS termination for your Knative services.

Step 6: Configure Knative Serving

In this step, you will learn how to prepare a domain you own to work with Knative Services. Then, you will learn how to configure Knative Serving to use your custom domain, and enable automatic TLS termination for all Knative services. It’s a best practice in general to enable TLS termination for all application endpoints exposed publicly.

Configure DigitalOcean Domain Records for Knative Services

In this section, you will configure DNS within your DigitalOcean account, using a domain that you own. Then, you will create a wildcard record to match a specific subdomain under your root domain and map it to your Knative load balancer. Note that DigitalOcean is not a domain name registrar, so you need to buy a domain name first from a registrar such as Google Domains or GoDaddy.

First, run the following command to register your domain with DigitalOcean, replacing the <> placeholders:

doctl compute domain create "<YOUR_DOMAIN_NAME_HERE>"

The output looks similar to the following:

Domain                TTL
starter-kit.online    0

In this example, we use the domain starter-kit.online.

Note: Ensure that your domain registrar is configured to point to DigitalOcean name servers. For more information, see here.

Next, you will add a wildcard record of type A for the Kubernetes namespace doks-ci-cd. First, you need to identify the load balancer external IP created by Knative:

kubectl get svc -n knative-serving

The output looks similar to the following:

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                           AGE
activator-service            ClusterIP      10.245.219.95    <none>          9090/TCP,8008/TCP,80/TCP,81/TCP   4h30m
autoscaler                   ClusterIP      10.245.42.109    <none>          9090/TCP,8008/TCP,8080/TCP        4h30m
autoscaler-bucket-00-of-01   ClusterIP      10.245.236.8     <none>          8080/TCP                          4h30m
autoscaler-hpa               ClusterIP      10.245.230.149   <none>          9090/TCP,8008/TCP                 4h30m
controller                   ClusterIP      10.245.13.134    <none>          9090/TCP,8008/TCP                 4h30m
domainmapping-webhook        ClusterIP      10.245.113.122   <none>          9090/TCP,8008/TCP,443/TCP         4h30m
kourier                      LoadBalancer   10.245.23.78     159.65.208.64   80:31060/TCP,443:31014/TCP        4h30m
...

Then, add the wildcard record which maps your subdomain to the Knative load balancer. Knative Services are namespace scoped, and use the following pattern: *.<k8s_namespace>.<your_root_domain>. The doks-ci-cd Kubernetes namespace and the EXTERNAL-IP column value for the kourier service are used for this blueprint. You can also change the TTL value as per your requirement, making sure to replace the <> placeholders accordingly:

doctl compute domain records create "<YOUR_DOMAIN_NAME_HERE>" \
  --record-name "*.doks-ci-cd" \
  --record-data "<YOUR_KOURIER_LOAD_BALANCER_EXTERNAL_IP_ADDRESS_HERE>" \
  --record-type "A" \
  --record-ttl "30"

Note: The DNS record must not contain the root domain value - it will be appended automatically by DigitalOcean. For example, if the root domain name is starter-kit.online, and the Kourier LoadBalancer has an external IP value of 143.198.242.190, then the above command becomes:

doctl compute domain records create "starter-kit.online" \
  --record-name "*.doks-ci-cd" \
  --record-data "143.198.242.190" \
  --record-type "A" \
  --record-ttl "30"

Finally, you can check the records created for the starter-kit.online domain:

doctl compute domain records list starter-kit.online

The output looks similar to:

ID           Type    Name           Data                    Priority    Port    TTL     Weight
274640149    SOA     @              1800                    0           0       1800    0
274640150    NS      @              ns1.digitalocean.com    0           0       1800    0
274640151    NS      @              ns2.digitalocean.com    0           0       1800    0
274640152    NS      @              ns3.digitalocean.com    0           0       1800    0
309782452    A       *.doks-ci-cd   143.198.242.190         0           0       3600    0
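
Before moving on, you can verify that the wildcard record resolves to the Kourier load balancer. This is a hedged example that assumes the dig utility is installed and DNS has already propagated:

```shell
# Any hostname under *.doks-ci-cd.<your root domain> should resolve to the Kourier LB IP.
dig +short A hello.doks-ci-cd.starter-kit.online
# Expected: the Kourier load balancer external IP (143.198.242.190 in this example)
```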

Configure a Custom Domain and Auto TLS Feature for Knative Services

Knative enables TLS termination automatically for your existing or new services, and automatically fetches or renews TLS certificates from Let’s Encrypt. This feature is provided via Cert-Manager and a special component (or adapter) named net-certmanager. You can also configure Knative to use a custom domain you own, and let users access your services via your domain.

The following steps configure Knative Serving with your domain and enable the auto-TLS feature:

  1. Create a ClusterIssuer resource to issue certificates from Let’s Encrypt.
  2. Install the net-certmanager controller to act as a bridge (or adapter) between Cert-Manager and Knative Serving, for automatic issuing of certificates.
  3. Configure Knative Serving component via the Knative Operator to:
    • Use a dedicated domain that is registered with DigitalOcean, as explained in Configure DigitalOcean Domain Records for Knative Services section.
    • Use the ClusterIssuer resource to automatically issue/renew certificates for each service.
    • Enable the auto TLS feature, via a special flag called auto-tls.

First, you need to create a ClusterIssuer CRD for Cert-Manager. This blueprint provides a ready-to-use manifest which you can install using kubectl:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: kn-letsencrypt-http01-issuer
spec:
  acme:
    privateKeySecretRef:
      name: kn-letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: kourier

The manifest has the following keys:

  • spec.acme.privateKeySecretRef.name - Specifies the unique name given to the private key of the TLS certificate.
  • spec.acme.server - Specifies the Let’s Encrypt server endpoint used to issue certificates.
  • spec.acme.solvers - Specifies the ACME client challenge type and what ingress class to use. The above configuration uses the HTTP-01 challenge, and Knative Kourier ingress controller.

Apply the Knative ClusterIssuer manifest using kubectl:

kubectl apply -f DOKS-CI-CD/assets/manifests/knative-serving/resources/kn-cluster-issuer.yaml

Check the ClusterIssuer state:

kubectl get clusterissuer kn-letsencrypt-http01-issuer

The output looks similar to:

NAME                           READY   AGE
kn-letsencrypt-http01-issuer   True    22h

The READY column should show True. Now that the ClusterIssuer resource is functional, you need to tell Knative Serving how to use it to issue certificates automatically for your Knative Services. This feature is called auto TLS. To use it, you need to install an additional component called net-certmanager.

You can install net-certmanager using kubectl:

kubectl apply -f https://github.com/knative/net-certmanager/releases/download/knative-v1.4.0/release.yaml

Alternatively, you can use the Knative Operator. The Knative Operator is already installed in your DOKS cluster via the DigitalOcean Knative 1-Click app, which you deployed previously. The following YAML manifest shows you how to instruct Knative Operator to install the additional net-certmanager component, as part of the KnativeServing component configuration:

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  additionalManifests:
    - URL: https://github.com/knative/net-certmanager/releases/download/knative-v1.4.0/release.yaml

You don’t have to apply the above manifest by hand, because everything is handled via Kustomize in this repository, including enabling the auto TLS feature for Knative Services.

Kustomize uses the kustomization manifest to take the original KnativeServing configuration from the DigitalOcean Marketplace GitHub repository, and apply a set of patches for each needed feature. This way, you don’t have to modify the original file or keep a modified copy of it somewhere. Kustomize lets you use the original file as a base, and apply a set of patches on the fly.

The patch files are listed below:

  • net-certmanager-install: Installs the net-certmanager component via the Knative Operator.
  • certmanager-config: Configures Knative Serving to use the ClusterIssuer resource created earlier to automatically issue TLS certificates from Let’s Encrypt.
  • domain-config: Configures Knative Serving to use your custom domain when exposing services.
  • network-config: Configures Knative Serving to enable the auto TLS feature for Knative Services, via the auto-tls flag.
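
To make the effect of these patches concrete, here is a hedged sketch of roughly what the patched KnativeServing resource ends up looking like. Field names follow the Knative Operator's spec.config conventions; the authoritative patch contents live in the repository's patches folder:

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  additionalManifests:
    - URL: https://github.com/knative/net-certmanager/releases/download/knative-v1.4.0/release.yaml
  config:
    domain:
      # Your root domain, used as the suffix for Knative Service URLs
      starter-kit.online: ""
    certmanager:
      issuerRef: |
        kind: ClusterIssuer
        name: kn-letsencrypt-http01-issuer
    network:
      # Enables automatic TLS certificate provisioning for Knative Services
      auto-tls: Enabled
```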

Follow the steps to apply all required changes for the Knative Serving component, via Kustomize:

  1. Edit domain-config to point to your own domain name (replace the starter-kit.online key with your domain), using a text editor of your choice (preferably with YAML linting support). For example, you can use Visual Studio Code:

    code DOKS-CI-CD/assets/manifests/knative-serving/patches/domain-config.yaml
    
  2. Apply kustomizations using kubectl:

    kubectl apply -k DOKS-CI-CD/assets/manifests/knative-serving
    

Finally, test your whole setup by deploying the sample hello-world Knative Service provided in the DigitalOcean Marketplace repo:

kubectl apply \
  -n doks-ci-cd \
  -f https://raw.githubusercontent.com/digitalocean/marketplace-kubernetes/master/stacks/knative/assets/manifests/serving-example.yaml

After a few moments, a new service should show up. List all Knative Services using the Knative CLI (kn):

kn service list -n doks-ci-cd

The output looks similar to:

NAME     URL                                          LATEST          AGE   CONDITIONS   READY
hello    https://hello.doks-ci-cd.starter-kit.online  hello-world     31m   3 OK / 3     True    

The hello service should be in a healthy state, with the READY column set to True. The service endpoint should also be HTTPS-enabled, as shown in the URL column, and use your custom domain name. You can also open a web browser and navigate to the service endpoint - a Hello World! message should be displayed.

Note:

  • It may take up to 1 minute or so until HTTPS is enabled for your service(s) endpoint. This is because it takes a while until the ACME client used by the ClusterIssuer finishes the HTTP-01 challenge, and obtains the final TLS certificate(s).
  • For testing purposes, it’s recommended to point the spec.acme.server field to the Let’s Encrypt staging server by editing the kn-cluster-issuer file and re-running kubectl apply -f. This is because the Let’s Encrypt production server enforces rate limits that are easily reached, after which you cannot issue new certificates for the rest of the day.
  • GitHub webhooks require production-ready TLS certificates, so switch to the Let’s Encrypt production server afterwards.
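
Switching the issuer to the Let’s Encrypt staging environment is a one-field change in the ClusterIssuer manifest. The endpoint shown below is Let’s Encrypt's public staging URL:

```yaml
spec:
  acme:
    # Staging endpoint - issues untrusted test certificates, but with much higher rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
```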

Knative Private Services

By default, Knative configures all services with a public endpoint if a valid domain is configured. If you do not want to expose a Knative service publicly, either for security reasons or because it is not yet ready to be consumed by users, you can use the Knative Private Services feature.

To make a Knative service private, you need to add a special label named networking.knative.dev/visibility: cluster-local to any Knative service manifest. For example:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  labels:
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    metadata:
      # Revision name
      # Must follow the convention {service-name}-{revision-name}
      name: hello-world
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"

After applying the above manifest in the doks-ci-cd namespace, Knative will create a private service:

kn service list -n doks-ci-cd

The output looks similar to:

NAME                    URL                                           LATEST           AGE   CONDITIONS   READY
hello                   http://hello.doks-ci-cd.svc.cluster.local     hello-world      17m   3 OK / 3     True     

In the output above, notice that the hello service endpoint now uses the cluster’s internal domain, svc.cluster.local. Also, only HTTP is enabled for the service in question.
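
Because the endpoint is cluster-local, you can only reach it from inside the cluster. A hedged sketch using a throwaway pod (the curlimages/curl image is an assumption; any image that ships curl works):

```shell
# Spin up a temporary pod and call the private Knative Service endpoint from inside the cluster
kubectl run curl-test \
  --namespace doks-ci-cd \
  --image=curlimages/curl \
  --rm -it --restart=Never \
  -- curl -s http://hello.doks-ci-cd.svc.cluster.local
```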

Next, you will configure Knative Eventing to listen for GitHub events, and trigger the sample Tekton CI/CD pipeline used in this tutorial.

Step 7: Configure Knative Eventing

Knative has a very powerful eventing system built in. Tekton also has some primitives built in to react to external events, such as the EventListener. Tekton EventListeners can also filter events, via Interceptors. This tutorial shows you another way of processing events and triggering Tekton pipelines, using Knative Eventing.

Before continuing, it’s important to understand some Knative Eventing concepts used in this guide:

  • Sources: The main purpose is to define or create event producers. In this tutorial, the GitHubSource type is used as a producer of GitHub events. When working with GitHubSources, you can also filter and react on specific events such as push events. It can also manage webhooks for you automatically via the GitHub API using the PAT you created previously.
  • Event delivery mechanisms, such as Simple Delivery and Complex Delivery.

This guide uses a simple delivery mechanism, where a GitHubSource filters and fires the Tekton CI/CD pipeline EventListener on push events. The GitHubSource component is available via the eventing-github Knative project. The official version doesn’t support Kubernetes v1.22 and up at the time of writing (there is an open PR to address this issue). This repository provides a functional github-eventing-v1.4.0 manifest that you can use.

Install github-eventing in your cluster by applying the github-eventing-v1.4.0 manifest using kubectl:

kubectl apply -f DOKS-CI-CD/assets/manifests/knative-eventing/github-eventing-v1.4.0.yaml

The command also creates a dedicated knative-sources namespace for github-eventing.

Now, check if all github-eventing pods are up and running:

kubectl get pods -n knative-sources

The output looks similar to:

NAME                              READY   STATUS    RESTARTS   AGE
github-controller-manager-0       1/1     Running   0          21h
github-webhook-6cdcfc69ff-2q4sn   1/1     Running   0          21h

The github-controller and github-webhook pods should be in a Running state. The github-controller pod is responsible for reconciling your GitHubSource CRDs, while the github-webhook pod manages webhooks for your GitHub repository, as defined by the GitHubSource CRD.

Each time you create a GitHubSource resource, a corresponding Knative Service is created as well. The Knative Service exposes a public endpoint and gets registered as a webhook in your GitHub repository by the github-eventing component. Then, each time a GitHub event fires, the Knative Service created by your GitHubSource resource is triggered, and notifies the Tekton EventListener. This approach has other benefits, such as letting Knative Serving take care of creating endpoints for your webhooks and automatically providing TLS support.

The following diagram illustrates the setup used in this guide and all involved components.

Tekton and Knative GitHubSource Integration

You can also replace the simple delivery mechanism with a complex delivery that uses channels or brokers. Then, you can have multiple subscribers responding to all kinds of events. For example, you can have a dedicated setup where a different Tekton pipeline is triggered depending on the type of event that fires:

  • When a PR is opened, a specific subscriber gets notified, and triggers a dedicated pipeline which runs only application tests and thus validates PRs.
  • When code is merged in the main (or development) branch, another subscriber gets notified, and triggers a dedicated CI/CD pipeline.
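
A hedged sketch of what such a subscriber could look like, using a Knative Trigger that filters pull request events from a Broker. The broker name, subscriber service name, and namespace below are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: pr-validation-trigger
  namespace: doks-ci-cd
spec:
  broker: default
  filter:
    attributes:
      # CloudEvent type emitted by the GitHubSource for pull request events
      type: dev.knative.source.github.pull_request
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      # Hypothetical EventListener service backing a PR validation pipeline
      name: el-pr-pipeline-listener
```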

A typical GitHubSource CRD definition looks like the following:

apiVersion: sources.knative.dev/v1alpha1
kind: GitHubSource
metadata:
  name: sample-app-github-source
spec:
  eventTypes:
    - push
  ownerAndRepository: github-repo-owner/sample-app-repo
  accessToken:
    secretKeyRef:
      name: github-pat
      key: accessToken
  secretToken:
    secretKeyRef:
      name: github-pat
      key: secretToken
  sink:
    uri: http://el-tekton-event-listener.doks-ci-cd.svc.cluster.local:8080

The manifest has the following keys:

  • spec.eventTypes: Specifies what type of events you’re interested in, for example push.
  • spec.ownerAndRepository: Specifies the GitHub repository (and owner) for the application source code.
  • spec.accessToken (spec.secretToken): Specifies the GitHub personal access token name and value.
  • spec.sink: Specifies a destination for events, such as a Kubernetes service URI, or a Knative Service.

Next, you will configure and test the Tekton CI/CD pipeline for the sample application (2048 game). You will also learn how to automatically trigger the pipeline on GitHub events when pushing commits, using Knative GitHubSource and Tekton EventListeners.

Step 8: Set Up Your First CI/CD Pipeline Using Tekton and Argo CD

In this part of the tutorial, you will set up a Tekton CI/CD Pipeline that builds a Docker image for your custom application using Kaniko and publishes it to a remote Docker registry. Then, the Tekton pipeline will trigger Argo CD to create and deploy the application to your Kubernetes cluster.

At a high level, the setup consists of the following steps:

  1. Implement the CI/CD Pipeline workflow using Tekton and Argo CD.
  2. Configure Tekton EventListeners and Triggers for automatic triggering of the CI/CD Pipeline using Git events such as pushing commits.
  3. Configure the Knative Eventing GitHubSource to trigger your Tekton CI/CD pipeline.

Next, to set up the CI/CD Pipeline workflow:

  1. Fetch sample application source code from Git.
  2. Build and push the application image to the DigitalOcean Container Registry.
  3. Trigger Argo CD to deploy (sync) the sample application to your Kubernetes cluster.

Finally, to configure the CI/CD pipeline to trigger on Git events:

  1. Set up the GitHubSource resource that triggers your Tekton pipeline, by registering the required webhook with your GitHub application repository.
  2. Set up an EventListener that triggers and processes incoming events from the GitHubSource.
  3. Set up a TriggerTemplate that instantiates a Pipeline resource (and associated Tasks) each time the EventListener is triggered.
  4. Set up a TriggerBinding resource to populate the TriggerTemplate input parameters with data extracted from the GitHub event.
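As a hedged sketch of step 4 above (the parameter names are illustrative; the blueprint's actual definitions live in the triggers/ folder), a TriggerBinding extracts fields from the GitHub push payload and hands them to the TriggerTemplate:

```shell
# Illustrative TriggerBinding sketch -- parameter names are hypothetical.
# The $(body.*) expressions reference fields of the GitHub push event payload
# delivered by the GitHubSource to the EventListener.
kubectl apply -f - <<'EOF'
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: example-trigger-binding
  namespace: doks-ci-cd
spec:
  params:
    - name: gitrepositoryurl
      value: $(body.repository.clone_url)
    - name: gitrevision
      value: $(body.head_commit.id)
EOF
```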

The following diagram illustrates the CI/CD process implemented using Tekton and Argo CD:

Tekton CI/CD Pipeline Overview

This blueprint provides all the necessary manifests to create the required resources, such as Tekton CRDs and Knative CRDs, in your Kubernetes cluster via Kustomize. You will find everything inside the tekton folder, including the kustomization manifest. You can take a look at each file to see how it is used.

The following shows how the tekton kustomization folder is structured:

DOKS-CI-CD/assets/manifests/tekton/
├── configs
│   ├── argocd
│   │   ├── auth.env
│   │   └── server.env
│   ├── docker
│   │   ├── config.json
│   │   └── registry.yaml
│   └── github
│       ├── githubsource.yaml
│       └── pat.env
├── eventing
│   ├── tekton-ci-cd-channel-subscribers.yaml
│   ├── tekton-ci-cd-channel.yaml
│   └── tekton-ci-cd-github-source.yaml
├── pipelines
│   └── tekton-argocd-build-deploy.yaml
├── tasks
│   └── argocd-task-create-app.yaml
├── triggers
│   ├── rbac.yaml
│   ├── tekton-argocd-build-deploy-event-listener.yaml
│   ├── tekton-argocd-build-deploy-trigger-binding.yaml
│   └── tekton-argocd-build-deploy-trigger-template.yaml
└── kustomization.yaml

The DOKS-CI-CD/assets/manifests/tekton/ folder structure is explained below:

  • configs: Contains configuration files for the secret and configmap generators used in the kustomization file. This folder is further broken into:
    • argocd: Contains Argo CD configurations and secrets.
    • docker: Contains the registry configuration file used to push images to the DigitalOcean Container Registry.
    • github: Contains your GitHub PAT (Personal Access Token) credentials.
  • eventing: Contains all manifest files required to configure Knative Eventing to trigger the Tekton CI/CD pipeline. The following manifests are present here:
    • tekton-ci-cd-github-source.yaml: Configures the GitHubSource CRD used in this tutorial (in-depth explanations can be found inside).
    • tekton-ci-cd-channel-subscribers.yaml: Optional and not used by the kustomization from this tutorial. Provided as an example of how to use the Knative Eventing subscriptions feature.
    • tekton-ci-cd-channel.yaml: Optional and not used by the kustomization from this tutorial. Provided as an example of how to use the Knative Eventing channels feature.
  • pipelines: Contains configuration files for the Tekton CI/CD pipeline used in this tutorial. The following manifests are present here:
    • tekton-argocd-build-deploy.yaml: Contains the definition for the CI/CD pipeline (in-depth explanations can be found inside).
  • tasks: Contains configuration files for custom Tekton Tasks used in this tutorial. The following manifests are present here:
    • argocd-task-create-app.yaml: Defines the Argo CD task for creating new applications.
  • triggers: Contains configuration files for the Tekton Triggers used in this tutorial. The following manifests are present here:
    • rbac.yaml: Defines the service account and required role bindings used by the Tekton EventListener from this tutorial. The EventListener needs these to instantiate resources, such as the Tekton CI/CD pipeline.
    • tekton-argocd-build-deploy-event-listener.yaml: Contains the definition for the Tekton EventListener used in this tutorial (in-depth explanations can be found inside).
    • tekton-argocd-build-deploy-trigger-binding.yaml: Contains the definition for the Tekton TriggerBinding used in this tutorial (in-depth explanations can be found inside).
    • tekton-argocd-build-deploy-trigger-template.yaml: Contains the definition for the Tekton TriggerTemplate used in this tutorial (in-depth explanations can be found inside).
  • kustomization.yaml: The main kustomization file (in-depth explanations can be found inside).
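To give a feel for what the pipeline definition contains, the following is an illustrative sketch only; the blueprint's real definition lives in pipelines/tekton-argocd-build-deploy.yaml, and the task names, parameter names, and workspace names below are assumptions based on the Tekton catalog:

```shell
# Illustrative Pipeline sketch: clone source, then build and push with Kaniko.
# Not the blueprint's actual manifest -- names and params are assumed.
kubectl apply -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-build-deploy-pipeline
  namespace: doks-ci-cd
spec:
  params:
    - name: git-url
    - name: git-revision
    - name: image
  workspaces:
    - name: shared-data
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone        # assumed to come from the Tekton catalog
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
    - name: build-and-push
      runAfter: ["fetch-source"]
      taskRef:
        name: kaniko           # builds and pushes the image in-cluster
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: IMAGE
          value: $(params.image)
EOF
```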

Note: The configs folder used by Kustomize contains sensitive data. Use a .gitignore file to avoid committing those files to your Git repository.
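A minimal sketch of such .gitignore entries, assuming the blueprint's folder layout (adjust the paths if your repository is structured differently):

```shell
# Keep sensitive Kustomize inputs out of Git (paths assume the blueprint layout)
cat >> .gitignore <<'EOF'
DOKS-CI-CD/assets/manifests/tekton/configs/argocd/auth.env
DOKS-CI-CD/assets/manifests/tekton/configs/github/pat.env
DOKS-CI-CD/assets/manifests/tekton/configs/docker/config.json
EOF
```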

Next, create all the required resources in your Kubernetes cluster via Kustomize. First, edit and save each property file from the DOKS-CI-CD/assets/manifests/tekton/configs subfolder, making sure to replace the <> placeholders accordingly. For example, you can use VS Code:

code DOKS-CI-CD/assets/manifests/tekton/configs/argocd/auth.env

code DOKS-CI-CD/assets/manifests/tekton/configs/docker/config.json

code DOKS-CI-CD/assets/manifests/tekton/configs/github/pat.env

Tips:

  • To obtain the Argo CD admin password, use the following command:

    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
    
  • To obtain your DigitalOcean Container Registry read/write credentials, you can use the following command and write the result directly to the config.json file:

        doctl registry docker-config --read-write <YOUR_DOCKER_REGISTRY_NAME_HERE> > DOKS-CI-CD/assets/manifests/tekton/configs/docker/config.json
    
  1. Edit the configs/github/githubsource.yaml file and replace the <> placeholders accordingly, then save changes. For example, you can use VS Code:

    code DOKS-CI-CD/assets/manifests/tekton/configs/github/githubsource.yaml
    
  2. Edit the configs/docker/registry.yaml file and replace the <> placeholders accordingly, then save changes. For example, you can use VS Code:

    code DOKS-CI-CD/assets/manifests/tekton/configs/docker/registry.yaml
    
  3. Finally, apply all changes via Kustomize (-k flag):

    kubectl apply -k DOKS-CI-CD/assets/manifests/tekton
    

After Kustomize finishes, you should have all required resources created in your cluster, in the doks-ci-cd namespace.

First, check the state of the eventing resources (the GitHubSource):

kubectl get githubsources -n doks-ci-cd

The output looks similar to (READY column should display True, and SINK should point to the EventListener service URI):

NAME                         READY   REASON   SINK                                                                                    AGE
tekton-ci-cd-github-source   True             http://el-tekton-argocd-build-deploy-event-listener.doks-ci-cd.svc.cluster.local:8080   36s

The above output shows you that the EventListener of your Tekton CI/CD pipeline is connected (via the SINK column value) to the GitHubSource CRD to receive GitHub events.

Next, check associated Knative Services:

kn services list -n doks-ci-cd

The output looks similar to:

NAME                               URL                                                                      LATEST                                  READY
hello                              https://hello.doks-ci-cd.starter-kit.online                              hello-world                             True
tekton-ci-cd-github-source-7cgcw   https://tekton-ci-cd-github-source-7cgcw.doks-ci-cd.starter-kit.online   tekton-ci-cd-github-source-7cgcw-00001  True

You should see a tekton-ci-cd-github-source-xxxx service running and in the READY state. This service is responsible for receiving events from GitHub. The tekton-ci-cd-github-source-xxxx service URL should also be registered as a webhook in your forked repository. Navigate to the Settings of your forked GitHub repository and check the Webhooks section. A new webhook should be listed with a green check mark:

Tekton GitHub Webhook

It points to the same URL displayed in the Knative services listing. From the same page, you can also inspect the events sent by GitHub (including the response status).

Finally, check the status of the main Tekton resources, such as pipelines, triggers, and eventlisteners:

tkn pipeline list -n doks-ci-cd

tkn pipelinerun list -n doks-ci-cd

tkn triggertemplate list -n doks-ci-cd

tkn eventlistener list -n doks-ci-cd

Tip: For troubleshooting, you can inspect each resource's events and logs via the corresponding subcommand, as shown below:

  • Describing Tekton pipeline resources using the describe subcommand:

    tkn pipeline describe tekton-argocd-build-deploy-pipeline -n doks-ci-cd
    
    tkn pipelinerun describe <tekton-argocd-build-deploy-pipeline-run-zt6pt-r-r7wgw> -n doks-ci-cd
    
  • Inspecting Tekton pipeline resources logs using the logs subcommand:

    tkn pipeline logs tekton-argocd-build-deploy-pipeline -n doks-ci-cd
    
    tkn pipelinerun logs tekton-argocd-build-deploy-pipeline-run-zt6pt-r-r7wgw -n doks-ci-cd
    

Next, you will test the CI/CD pipeline flow by pushing some changes to the tekton-sample-app repository prepared in the Forking the Sample Application Repo step. Then, you will access the Tekton dashboard and watch the pipeline trigger automatically and execute each step.

Step 9: Test the CI/CD Setup

You will test the CI/CD flow by changing the knative-service resource from the application repo to point to your container registry.

First, clone the git repository prepared in the Forking the Sample Application Repo step (make sure to replace the <> placeholders accordingly):

git clone [email protected]:<YOUR_GITHUB_USER_NAME_HERE>/tekton-sample-app.git

Then, change the directory to your local clone:

cd tekton-sample-app

Next, edit the knative-service.yaml manifest file to point to your Docker registry. For example you can use VS Code (make sure to replace the <> placeholders accordingly):

code resources/knative-service.yaml
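Committing and pushing the edited manifest is what ultimately fires the webhook. The following self-contained sketch replays the git commands in a throwaway repository; in practice you run the add/commit/push lines inside your tekton-sample-app clone, and the image line below is a hypothetical stand-in for your actual edit:

```shell
set -e
# Replay the commit flow in a throwaway repo for illustration.
tmp="$(mktemp -d)" && cd "$tmp"
git init -q .
git config user.email "you@example.com" && git config user.name "you"
mkdir -p resources
# Hypothetical edit: point the Knative Service image at your registry
echo 'image: registry.digitalocean.com/<YOUR_REGISTRY_NAME_HERE>/game-2048' > resources/knative-service.yaml
git add resources/knative-service.yaml
git commit -q -m "Point Knative Service image to my registry"
git log --oneline -1
# git push origin main   # run this in your real clone to trigger the CI/CD pipeline
```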

Save the knative-service.yaml file and commit the changes to your GitHub repository. Next, port-forward the Tekton dashboard service:

kubectl port-forward svc/tekton-dashboard -n tekton-pipelines 9097:9097

Now, launch your Web browser and access localhost:9097. Then, navigate to PipelineRuns - you should see your Tekton CI/CD pipeline running:

Tekton PipelineRun Listing

If you click on it, you should see each Task execution status, and logs:

Tekton PipelineRun Details

Note:

Initially, you may also see a failed pipeline run. When the webhook is created for the first time, GitHub sends a specific test payload to check whether your GitHubSource endpoint is alive. That payload is not valid input for the Tekton pipeline, which causes the run to fail.

You can access your application endpoint and play the game. First, list all Knative routes from the doks-ci-cd namespace:

kn route list -n doks-ci-cd

The output looks similar to:

NAME                               URL                                                                      READY
game-2048                          https://game-2048.doks-ci-cd.starter-kit.online                          True
hello                              https://hello.doks-ci-cd.starter-kit.online                              True
tekton-ci-cd-github-source-7cgcw   https://tekton-ci-cd-github-source-7cgcw.doks-ci-cd.starter-kit.online   True  

A new entry named game-2048 should be present, in a READY state, with an HTTPS-enabled link in the URL column. Open a web browser and paste the link shown in the URL column. The 2048 game should load successfully:

2048 game

If everything looks as shown above, then you have successfully created and tested your first Tekton CI/CD pipeline.

Summary

In this guide, you learned how to combine different software components (such as Knative, Tekton and Argo) to create a simple CI/CD pipeline running entirely on Kubernetes. You also learned how to configure and use both Knative Serving and Eventing to do useful work for you, as well as ease application development on Kubernetes. Then, by using Knative Eventing and Tekton EventListeners, you enabled automatic triggering of the CI/CD flow each time a change is pushed to the application GitHub repository. Finally, Argo CD closes the loop and deploys your custom application on Kubernetes as a Knative service.

Next Steps

Argo CD is a GitOps tool that can be used to keep both application and Kubernetes-specific configuration in sync with dedicated GitHub repositories. You can use a dedicated repository for automatically configuring your DOKS cluster with all the steps used in this guide. From a practical point of view, you can fork the container-blueprints repository and tell Argo CD to sync all Kubernetes configuration manifests with your cluster. This way, you don't need to redo all the steps by hand; Argo CD handles it automatically, including future upgrades of all the software components used in this guide.

You can also let Argo CD manage Helm releases, as the majority of DigitalOcean Marketplace 1-Click apps use Helm to deploy their software components.