DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains, integrate natively with DigitalOcean Load Balancers and volumes, and can be managed programmatically using the API and command line. For critical workloads, add the high-availability control plane to increase uptime with 99.95% SLA.
In this tutorial, you will build a Docker image for a sample application and securely run it on a DigitalOcean Kubernetes cluster.
To follow this tutorial, you need to install:

- doctl, the DigitalOcean command-line tool.
- kubectl, the Kubernetes command-line tool.

First, generate a DigitalOcean API token with read and write access, using any name of your choice. The token is only displayed once, so save it somewhere safe.
Use the following command to authenticate doctl, then paste in your token when prompted:
doctl auth init
Create a sample app that outputs a “Hello World!” message and its hostname to the screen.
In a new directory, create an app.py file with the following content:
from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    html = """Hello {name}!
Hostname: {hostname}"""
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Save and exit the file.
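Before containerizing the app, you can check its response logic in isolation. This sketch (illustrative only, not one of the tutorial's files) reproduces the template formatting and the NAME environment-variable fallback that app.py uses:

```python
import os
import socket

# Same template as in app.py (reproduced here for illustration).
html = """Hello {name}!
Hostname: {hostname}"""

# os.getenv returns the second argument when NAME is unset,
# so the app greets "world" unless the container sets NAME.
body = html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())
print(body)
```

Run outside a container, the hostname is your machine's name; inside a container, it becomes the container ID, which matters later in the tutorial.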
The first line of app.py declares the app’s dependency on the Flask library. To make that dependency installable with pip, you must also create a text file called requirements.txt.
In the same directory, create the requirements.txt file and write the following content:
Flask
To build a Docker image, you first need to create a Dockerfile. A Dockerfile is a text document that defines your code, its runtime, and its dependencies, recreating the same environment every time it runs. This reproducibility lets the code run correctly on other machines.
In the same directory, create a file called Dockerfile and write the following content:
# Use an official Python runtime as a parent image
FROM python:slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME=World
# Run app.py when the container launches
CMD ["python", "app.py"]
With these commands, you have set up an environment with Python and Flask pre-installed, defined an environment variable the app reads, and defined the command that runs your app. Because a container bundles its entire environment, your code runs the same way anywhere the image runs.
Run the following command to create an image based on the Dockerfile you defined, tagging it with -t so it has a friendly name:
docker build -t my-python-app .
This builds the image in your machine’s local image registry. You can access the image with the following command:
docker images | grep my-python-app
Use the following command to run the image:
docker run -p 80:80 my-python-app
The output displays that Python is serving your app at port 80.
Stop the container with CTRL+C, then run the app again:
docker run -p 80:80 my-python-app
The hostname changes in the second output. This is because every instance of your app is unique: the hostname is simply the ID of the container, the runtime instance of the image.
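By default, Docker sets a container's hostname to the short container ID: 12 lowercase hexadecimal characters. A small helper (hypothetical, written just for illustration) can tell such a hostname apart from an ordinary machine name:

```python
import re

def looks_like_container_id(hostname: str) -> bool:
    """Return True if hostname matches Docker's short container ID format:
    exactly 12 lowercase hex characters (unless overridden with --hostname)."""
    return re.fullmatch(r"[0-9a-f]{12}", hostname) is not None

print(looks_like_container_id("84b997f5b4a1"))  # a short container ID -> True
print(looks_like_container_id("my-laptop"))     # a normal hostname -> False
```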
The image is still only in your local registry. To run it in production, you need to upload the image to a remote registry. In this tutorial, you use DigitalOcean Container Registry.
Since you have already authenticated your environment with your DigitalOcean account, you can create a registry and log in to it. Registry names must be globally unique. Run the following command, substituting your registry name for <your-registry-name>:
doctl registry create <your-registry-name>
Then, log into your registry:
doctl registry login
If the login fails, re-authenticate by running doctl auth init.

Now that you have a registry and Docker is authorized to use it, run the following command to tag your local image with its fully qualified destination path, using your registry name:
docker tag my-python-app registry.digitalocean.com/<your-registry-name>/my-python-app
Next, upload your local image to your registry:
docker push registry.digitalocean.com/<your-registry-name>/my-python-app
Once uploaded, you can log into your DigitalOcean registry from any machine, pull your image from the registry, and create a running container with it. The command to do so is similar to creating a container locally, except using the DigitalOcean registry’s version of the image:
docker run -p 80:80 registry.digitalocean.com/<your-registry-name>/my-python-app
Now that you have a consistently performing container, you can create a DigitalOcean Kubernetes cluster. Clusters are sets of nodes that let your app scale, connect with other containers, and manage advanced configuration such as state and secrets.
Kubernetes is a popular and flexible orchestrator, an application that coordinates the scheduling of containers and manages their workloads, state, and secrets for you. With Kubernetes, you can create a cluster of machines in the cloud and run commands on them as though they were one machine. Kubernetes then packs them efficiently with your container workloads.
Each virtual machine you add to a Kubernetes cluster is called a node, and each provides empty capacity for you to use with your containers.
DigitalOcean provides a managed Kubernetes product that facilitates the creation and management of clusters. You can create a new DigitalOcean Kubernetes cluster with the following command, substituting your cluster name for <your-cluster-name>:
doctl kubernetes cluster create <your-cluster-name> --tag do-tutorial --auto-upgrade=true --node-pool "name=mypool;count=2;auto-scale=true;min-nodes=1;max-nodes=3;tag=do-tutorial"
This operation will take several minutes.
doctl also automatically configures the Kubernetes command-line interface, kubectl, so that all kubectl commands are pointed at your new cluster. You use kubectl commands in this tutorial to manage your specific cluster. If you need to, you can have doctl set kubectl’s context this way again later by running doctl kubernetes cluster kubeconfig save.

DigitalOcean Kubernetes uses node pools, which are groups of virtual machines that you configure to be of a certain size and number. If configured, a cluster can autoscale by automatically adding nodes to or removing nodes from its node pools.
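Whatever the autoscaler decides, it keeps the pool's node count inside the bounds you configured. This conceptual sketch (ours, not DigitalOcean's actual algorithm) shows that clamping behavior with the tutorial's min-nodes=1, max-nodes=3 settings:

```python
def clamp_node_count(requested: int, min_nodes: int, max_nodes: int) -> int:
    """Keep a node pool's size within its configured autoscaling bounds.
    Illustrative only; the real autoscaler also weighs pending Pods,
    resource requests, and node utilization."""
    return max(min_nodes, min(requested, max_nodes))

print(clamp_node_count(5, 1, 3))  # demand spike: capped at max-nodes -> 3
print(clamp_node_count(0, 1, 3))  # idle cluster: never below min-nodes -> 1
```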
There are a number of machine sizes you can use for a node, each offering a different combination of memory and CPU cores. When you create a node pool, you can configure the machine size using any slug listed by doctl kubernetes options sizes. By default, the nodes have 2 CPUs and 4GB of memory, but only 2.5GB of that memory is available for your container workloads because of the overhead of the operating system and the software that runs DigitalOcean Kubernetes. For the purposes of this tutorial, 1GB of memory is sufficient.
If you change the desired machine size after creating the node pool, DigitalOcean will recycle the nodes, which destroys the old nodes at the same rate as they are being replaced with the new ones.
Your cluster is ready when you get output that looks like this:
Notice: Cluster is provisioning, waiting for cluster to be running
......................................................
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/root/.kube/config"
Notice: Setting current-context to do-nyc1-*********
You can now run your app and use kubectl to manage your new cluster.
The first thing to do is authorize your cluster’s access to your private registry. Use the following command to output the registry’s Kubernetes secret manifest and pipe it directly to kubectl:
doctl registry kubernetes-manifest | kubectl apply -f -
Previously, you authenticated Docker to push to and pull from your registry. Now, you are authenticating your Kubernetes cluster to use the registry, which stores your registry credentials as a secret, the built-in mechanism Kubernetes offers for securely storing sensitive data.
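The manifest that doctl emits is a Secret of type kubernetes.io/dockerconfigjson, whose data holds a base64-encoded Docker config. This sketch (with made-up credentials; doctl generates real ones) shows how that encoding round-trips:

```python
import base64
import json

# Hypothetical Docker config for illustration only.
docker_config = {
    "auths": {
        "registry.digitalocean.com": {
            "auth": base64.b64encode(b"token:token").decode()
        }
    }
}

# Kubernetes stores Secret data values base64-encoded, like this:
encoded = base64.b64encode(json.dumps(docker_config).encode()).decode()

# Decoding the stored value recovers the original Docker config JSON.
decoded = json.loads(base64.b64decode(encoded))
print(list(decoded["auths"].keys()))
```

Because the encoding is reversible, anyone with read access to the Secret can recover the credentials, which is why access to Secrets should be restricted.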
After running the previous command, you’ll see that the secret is uploaded under a name similar to your registry’s name. Next, attach this secret to the cluster’s default service account so Kubernetes presents it as credentials when pulling images from your private registry:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-<your-registry-name>"}]}'
Now use the two nodes to run multiple instances of your application at once. To do this, create a Deployment of your app, which is the object Kubernetes uses to maintain the desired state of your running containers. Run the following command to launch the app live in the cluster:
kubectl create deployment my-python-app --image=registry.digitalocean.com/<your-registry-name>/my-python-app
One aspect of the Deployment you created is its default Replica Set, which is the object Kubernetes uses to maintain a stable number of replicas of your container. Each replica is a separate running instance of your container called a Pod. You can confirm that by running the following command, which lists all Replica Sets:
kubectl get rs
This returns an output similar to the following:
NAME                       DESIRED   CURRENT   READY   AGE
my-python-app-84b997f5b4   1         1         1       5s
You can also see that your Replica Set runs just one Pod, that is, one replica or one instance of your application:
kubectl get pods
This returns an output similar to the following:
NAME                             READY   STATUS    RESTARTS   AGE
my-python-app-84b997f5b4-6j5pn   1/1     Running   0          27s
Try scaling up to run 20 replicas:
kubectl scale deployment/my-python-app --replicas=20
Now, when you call kubectl get rs and kubectl get pods, you see a lot more activity. If you call kubectl get pods repeatedly, you can watch the STATUS column change as Kubernetes gets the 19 new Pods up and running.
Next, run the following command to see how these Pods get divided over your nodes:
kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name --all-namespaces | grep my-python-app
This returns an output similar to the following:
mypool-cokph my-python-app-84b997f5b4-25shx
mypool-cokph my-python-app-84b997f5b4-2tkgz
mypool-cokpk my-python-app-84b997f5b4-5dtbz
mypool-cokpk my-python-app-84b997f5b4-5gl7h
mypool-cokpk my-python-app-84b997f5b4-6j5pn
...
Since you named your node pool mypool, the two individual nodes have names that begin with mypool plus some random characters. Furthermore, the Pods are scheduled so that they are spread comfortably across your available capacity.
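You can tally that output to see the spread per node. This sketch runs Python's collections.Counter over the sample lines above (the node names come from the example output and will differ in your cluster):

```python
from collections import Counter

# Sample lines from the kubectl custom-columns output above.
lines = """mypool-cokph my-python-app-84b997f5b4-25shx
mypool-cokph my-python-app-84b997f5b4-2tkgz
mypool-cokpk my-python-app-84b997f5b4-5dtbz
mypool-cokpk my-python-app-84b997f5b4-5gl7h
mypool-cokpk my-python-app-84b997f5b4-6j5pn""".splitlines()

# The first whitespace-separated field is the node; count Pods per node.
pods_per_node = Counter(line.split()[0] for line in lines)
print(pods_per_node)
```

In a live cluster you would pipe the kubectl output into a similar tally instead of hard-coding the lines.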
Next, create a load balancer to expose your Deployment to the world. The load balancer runs in the cloud and routes the incoming traffic:
kubectl expose deployment my-python-app --type=LoadBalancer --port=80 --target-port=80
This command exposes the replicas of the sample Python app to the world behind a load balancer, which receives traffic at port 80 and routes that traffic to port 80 on the Pods.
Keep running this command until you see active under the Status column for the new load balancer:
doctl compute load-balancer list --format Name,Created,IP,Status
This returns an output similar to the following:
Name                               Created At              IP                Status
a55a6520a74d5437fa389891f2f8708f   2022-04-27T14:46:34Z    143.244.215.170   new
While the Status is new, the IP may still be blank; re-run the command until the load balancer has been assigned an IP address.

Navigate to the IP address of the load balancer and refresh your browser. You’ll see that the hostname you saw earlier changes with every refresh, cycling between the container IDs. This confirms that multiple healthy Pods are running and serving traffic in a load-balanced way.
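Instead of refreshing by hand, you could collect a batch of responses and count the distinct hostnames; seeing more than one confirms the traffic is being spread. A sketch using sample response bodies in app.py's format (the real HTTP fetch is commented out because it needs your load balancer's IP):

```python
# import urllib.request
# bodies = [urllib.request.urlopen("http://<load-balancer-ip>/").read().decode()
#           for _ in range(10)]

# Sample bodies in the format app.py produces, for illustration.
bodies = [
    "Hello world!\nHostname: my-python-app-84b997f5b4-6j5pn",
    "Hello world!\nHostname: my-python-app-84b997f5b4-25shx",
    "Hello world!\nHostname: my-python-app-84b997f5b4-6j5pn",
]

# The last line of each body names the Pod that served the request.
hostnames = {body.splitlines()[-1].split(": ")[1] for body in bodies}
print(f"{len(hostnames)} distinct Pods served these requests")
```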
You created a sample app, built a Docker image of it, created a private registry, uploaded your image, created a cluster, deployed your application to it, scaled your app up 20x, and exposed it to the world behind a load balancer.
Now you can use these steps to deploy your own app to Kubernetes or enable push-to-deploy to facilitate future deployments. Once deployed, you can also configure your load balancer or add volumes to your cluster.