Forwarding cluster event logs from your DOKS cluster to your DigitalOcean Managed OpenSearch cluster is now in general availability. You can forward logs using the control panel or the API.
VPC-native networking is now available in early availability for all DigitalOcean Managed Kubernetes (DOKS) customers. VPC-native networking allows customers to route traffic directly between DOKS pods, services, and other resources on VPC networks. For more information, see the DOKS Features page.
You can now create internal-only regional load balancers. Internal load balancers have no public IP address and are only accessible by resources in the same VPC. This feature is currently in early availability and only available through the CLI and API.
GPU worker nodes for DigitalOcean Kubernetes are now in general availability. You can create a new cluster with GPU nodes or add a GPU node pool to an existing cluster on versions 1.30.4-do.0, 1.29.8-do.0, 1.28.13-do.0, and later.
The ability to connect DOKS clusters to global load balancers via regional load balancers is now in beta.
You can now forward cluster event logs from your DOKS cluster to your DigitalOcean Managed OpenSearch cluster. This feature is in beta. You can send us your feedback about the feature.
GPU worker nodes are now available in early availability for select DOKS customers. For more information, see GPU Worker Nodes.
We have increased the volume attach limit for DOKS nodes from 7 to 15.
DOKS now supports the loadBalancerSourceRanges field in the load balancer service configuration file. This field specifies a list of IP address ranges from which traffic can pass to the load balancer.
We have deprecated the service.beta.kubernetes.io/do-loadbalancer-allow-rules annotation in favor of the loadBalancerSourceRanges field.
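As a sketch, a LoadBalancer Service restricted to two allowed ranges might look like the following (the service name, ports, and CIDR ranges are illustrative placeholders):

```shell
# Apply a LoadBalancer Service that only accepts traffic from the listed
# CIDR ranges; all names, ports, and ranges here are illustrative examples.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-web
spec:
  type: LoadBalancer
  selector:
    app: example-web
  ports:
    - port: 80
      targetPort: 8080
  loadBalancerSourceRanges:
    - 203.0.113.0/24
    - 198.51.100.0/24
EOF
```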
Control plane firewalls are now available in early availability for select DOKS customers. For more information, see How to Add a Control Plane Firewall.
DigitalOcean Load Balancers added to DOKS clusters now default to Kubernetes’ recommended health check configuration, which facilitates worker node replacements with minimal request disruption. The new configuration is automatically applied to all existing managed load balancers on DOKS 1.26 and later.
We do not recommend configuring health checks manually. To keep the previous behavior, set the service.beta.kubernetes.io/do-loadbalancer-override-health-check annotation as described in our documentation.
We are moving the managed Cilium Operator component (cilium-operator) from the worker nodes to the control plane of DOKS clusters. This frees up resources on the worker nodes and improves autoscaling of the component.
Tokens returned by the /kubeconfig and /credentials endpoints now have custom scopes to provide read-only access to Kubernetes resources. Within DOKS clusters, operations to access Kubernetes objects are still available based on team role (owner, biller, or member) as before.
We have added synchronous validation of LoadBalancer service annotations. If you provide invalid values, DOKS returns an error, preventing misconfiguration of your load balancer.
We have removed the built-in Kubernetes Dashboard from the control panel.
As an alternative, you can use the Kubernetes Dashboard 1-Click App from the DigitalOcean Marketplace, Cilium Hubble, or other open-source options for monitoring and visualizing Kubernetes workloads.
All currently supported DigitalOcean Kubernetes versions now have Cilium Hubble, Hubble Relay and Hubble UI enabled. For more information, see Use Cilium Hubble.
The Kubernetes API endpoints /v2/kubernetes/clusters/<cluster ID>/kubeconfig and /v2/kubernetes/clusters/<cluster ID>/credentials now require API tokens to have write scope.
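For example, fetching a cluster's kubeconfig through the API now requires a token created with write scope (the cluster ID and token variables below are placeholders; the command needs live credentials to return anything):

```shell
# Requires an API token with write scope; CLUSTER_ID is a placeholder.
curl -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  "https://api.digitalocean.com/v2/kubernetes/clusters/$CLUSTER_ID/kubeconfig"
```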
Premium Intel CPUs are now available for CPU-Optimized Droplets in TOR1.
Premium Intel CPUs are now available for CPU-Optimized Droplets in BLR1.
We have extended the promotional period for CPU-Optimized Droplets with Premium Intel CPUs (no billing for outbound data transfer at speeds faster than 2 Gbps) from 30 April 2023 to 30 June 2023. Learn more about bandwidth billing.
Premium Intel CPUs are now available for CPU-Optimized Droplets in SFO2.
Premium Intel CPUs are now available for CPU-Optimized Droplets. You can create CPU-Optimized Droplets with Premium Intel CPUs in NYC1, NYC3, FRA1, AMS3, SFO3, and SYD1.
Compared to CPU-Optimized Droplets with Regular Intel CPUs, CPU-Optimized Droplets with Premium Intel CPUs have the latest hardware and five times more network throughput.
Additionally, for a promotional period from 1 February through 30 April 2023, we will not bill for outbound data transfer at speeds faster than 2 Gbps for CPU-Optimized Droplets with Premium Intel CPUs. Learn more about bandwidth billing.
You can use this plan for both standalone Droplets and Kubernetes nodes. You can also resize your existing Droplets to this node plan.
We have deprecated our legacy load balancer scaling system in all datacenter regions. This includes the deprecation of the do-loadbalancer-size-slug annotation for DigitalOcean Kubernetes load balancers.
Horizontal scaling is now available in all regions.
DigitalOcean Kubernetes clusters originally created with version 1.20 or older use an outdated version of our control plane architecture, which does not allow you to enable high availability. However, you can now upgrade your control plane to the new architecture. This upgrade option is currently available for clusters on Kubernetes versions 1.22 and later.
To check whether you can upgrade your cluster to the new control plane, see our guide.
You can now enable high availability on existing Kubernetes clusters. For detailed steps, see our guide.
When creating a new Kubernetes cluster, you can add a free database operator (now in beta), which allows you to automatically link new databases to your cluster. For more details, see our guide.
do-operator, our operator for managing and consuming DigitalOcean resources from a Kubernetes cluster, is now an open-source beta project.
A new CPU-Optimized Droplet plan with more computing power is now available. This plan features 48 vCPUs (up from the previous maximum of 32) and 96 GB of memory (up from the previous maximum of 64).
This large CPU-Optimized Droplet plan is available where CPU-Optimized Droplets are already available, except for BLR1 and SFO2.
You can use this plan for both standalone Droplets and Kubernetes nodes. You can also resize your existing Droplets to this node plan.
DOKS clusters now accrue free bandwidth based on the worker pool’s largest sizes within 28 days of usage. Learn more about DOKS billing.
Previously, you may have received slightly more free bandwidth in 30- and 31-day months. Individual worker nodes were billed per hour, up to a maximum of 744 hours per month (31 days * 24 hours). As a result, they could accrue extra bandwidth allowance beyond the advertised monthly allowance for the corresponding Droplet plan.
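The arithmetic behind the change can be sketched as follows: a 31-day month has 744 billable hours, while the new accrual window is capped at 28 days of usage, so the old scheme could credit up to 72 extra node-hours of bandwidth allowance in the longest months.

```shell
# Hours in a 31-day month versus the new 28-day accrual window.
monthly_hours=$((31 * 24))   # previous per-month billing cap: 744
accrual_hours=$((28 * 24))   # new bandwidth accrual window: 672
extra_hours=$((monthly_hours - accrual_hours))
echo "$monthly_hours $accrual_hours $extra_hours"   # 744 672 72
```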
To improve security, DigitalOcean no longer accepts TLS 1.0 and TLS 1.1 connections. This includes connections to www.digitalocean.com, cloud.digitalocean.com, and api.digitalocean.com.
High-availability control plane is now Generally Available in all regions where DigitalOcean Kubernetes is supported.
You can now search for and install Kubernetes 1-Click Apps from the new Marketplace tab of DOKS clusters.
High-availability control plane (early availability) is now available in all regions where DOKS is supported.
Released v1.65.0 of doctl, the official DigitalOcean CLI. This release includes a number of new features:
The --ha flag was added to the kubernetes cluster create sub-command to optionally create a cluster configured with a highly-available control plane. This feature is in early availability.
kubernetes cluster sub-commands now include a “Support Features” field when displaying version options.
The --disable-lets-encrypt-dns-records flag was added to the compute load-balancer create sub-command to optionally disable automatic DNS record creation for Let’s Encrypt certificates that are added to the load balancer.

High-availability control plane is now in early availability in the following regions: ams3, nyc1, sfo3, and sgp1.
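The two new doctl flags above can be sketched as follows (the cluster name, load balancer name, region, and forwarding rule are illustrative placeholders; both commands assume an authenticated doctl):

```shell
# Create a DOKS cluster with a highly available control plane (early availability).
doctl kubernetes cluster create example-cluster --ha

# Create a load balancer without automatic Let's Encrypt DNS record creation.
# The name, region, and forwarding rule below are illustrative placeholders.
doctl compute load-balancer create \
  --name example-lb \
  --region nyc1 \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80 \
  --disable-lets-encrypt-dns-records
```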
You can now add Kubernetes clusters as sources or destinations in Cloud Firewall rules.
You can now do the following on Kubernetes clusters:
Use surge upgrade when upgrading an existing cluster. Surge upgrade is enabled by default when you create a new cluster.
Move a Kubernetes cluster and its associated resources, such as Droplets, load balancers, and volumes, to a project using the DigitalOcean Control Panel or the doctl command-line tool. You can also assign a project when you create a new cluster. If you do not specify a project, the cluster is assigned to the default project.
Delete resources, such as load balancers and volumes, associated with a Kubernetes cluster using the DigitalOcean Control Panel, API, or the doctl command-line tool.
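As an illustrative sketch, moving an existing cluster into a project with doctl might look like this (both IDs are placeholders, and the do:kubernetes URN form for the resource argument is an assumption):

```shell
# Assign an existing DOKS cluster to a project; both IDs are placeholders.
doctl projects resources assign $PROJECT_ID \
  --resource=do:kubernetes:$CLUSTER_ID
```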
On Kubernetes 1.19 and later, we now provision two fully-managed firewalls for each new Kubernetes cluster. One firewall manages the connection between the worker nodes and the control plane, and the other manages connections between the worker nodes and the public internet.
You can now apply taints to Kubernetes node pools using the DigitalOcean API. When you configure taints for a node pool, the taint automatically applies to all current nodes and any subsequently created nodes in the pool. For more information, see Kubernetes’ documentation on taints and tolerations.
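As an illustrative sketch, creating a node pool with a taint through the API might look like the following (the cluster ID, pool name, node size, and taint values are placeholders, and the exact request body shape is an assumption based on the public node pool schema):

```shell
# Create a node pool whose taint is applied to every current and future node.
# CLUSTER_ID and all body values below are placeholders.
curl -X POST \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "tainted-pool",
        "size": "s-2vcpu-4gb",
        "count": 2,
        "taints": [
          {"key": "workload", "value": "dedicated", "effect": "NoSchedule"}
        ]
      }' \
  "https://api.digitalocean.com/v2/kubernetes/clusters/$CLUSTER_ID/node_pools"
```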
The DigitalOcean Virtual Private Cloud (VPC) service is now available for all customers. VPC replaces the private networking service. Existing private networks will continue to function as normal but with the enhanced security and features of the VPC service. See the description of VPC features for more information.
We began the incremental release of the DigitalOcean Virtual Private Cloud (VPC) service. It will be available for all customers soon. VPC replaces the private networking service.
v1.14.0 of the DigitalOcean Terraform Provider is now available. This release includes a bug fix for projects containing many resources and exposes the Droplet IDs for individual nodes in Kubernetes clusters.
Released v1.38.0 of doctl, the official DigitalOcean CLI. This release adds the ability to set Kubernetes node pool labels as well as support for deleting multiple Kubernetes clusters with a single command.
DigitalOcean Kubernetes users can run our cluster linter before upgrading their cluster to a new minor version. The linter automatically finds issues with your cluster and links to recommended fixes.
DigitalOcean Container Registry has been released in Beta. To request early access, visit the homepage for Container Registry.
DigitalOcean Kubernetes has added native support for the Kubernetes Dashboard for all DOKS clusters.
The DigitalOcean Kubernetes (DOKS) October release is now available, and contains the following new features:
6-hour and 1-day alert policies for Droplets and Kubernetes worker nodes have been deprecated. No new alert policies with these intervals can be created. Existing alert policies using these intervals will remain in place until 1 August 2019, at which point they will be modified to reflect a 1-hour interval.
DigitalOcean Kubernetes is now Generally Available. Highlights include:
Availability in SGP1 and TOR1.
Support for patch version upgrades.
Configurable maintenance window and automatic upgrade options.
Delete node feature, which removes a specific node from a worker pool.
Basic and advanced monitoring insights for resource utilization and deployment status metrics.
Kubernetes version 1.14.1 is now available for cluster creation in DOKS.
DOKS node pools can now be named at creation time.
DOKS master nodes now automatically rotate logs to avoid disk space issues.
DOKS customers can now see the cost of their Kubernetes nodes and load balancers aggregated by cluster name within a Kubernetes clusters group on their invoice. Volumes and volume snapshots used in a DOKS cluster are not yet included in the cluster aggregation.
The following updates were released for DigitalOcean Kubernetes:
The minimum size for a Kubernetes node was changed to the 2 GB Memory / 1 vCPU plan.
DigitalOcean Kubernetes is now in early availability. Learn more.