Linode Kubernetes Engine - Release Notes
A managed Kubernetes service that enables you to easily control and scale your application’s infrastructure.
Linode Kubernetes Engine v1.66.0
Changed
- Renamed the ConfigMap kube-system/coredns to kube-system/coredns-base.
Added
- CoreDNS configuration customization capabilities via the kube-system/coredns-custom ConfigMap.
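Customizations live in their own ConfigMap that LKE reads alongside the base configuration. The key naming below (a CoreDNS snippet under a log.override-style key) is an assumption modeled on common managed-Kubernetes conventions, not something these notes confirm; check Linode's LKE documentation for the exact format:

```yaml
# Hypothetical sketch: key names and merge behavior are assumptions,
# not confirmed by these release notes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom      # read alongside kube-system/coredns-base
  namespace: kube-system
data:
  log.override: |
    log                     # example: enable CoreDNS query logging
```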
Linode Kubernetes Engine v1.65.0
Changed
- Upgraded clusters using Kubernetes:
- Adjusted terminated-pod-gc-threshold:
  - Change: The --terminated-pod-gc-threshold setting in the kube-controller-manager has been reduced from its default value to 500 pods.
  - Context: Previously, Kubernetes kept a large number of evicted and terminated pods. This could consume unnecessary resources and limit space for new pods.
  - Impact: When the count of evicted and terminated pods exceeds 500, the oldest pods (first by eviction timestamp, then by creation timestamp) are deleted to maintain the threshold. This helps reclaim resources and improve cluster performance.
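The setting above maps to a kube-controller-manager flag. On LKE the control plane is managed, so this is shown for reference only; the upstream default of 12500 is a fact about Kubernetes itself, not stated in these notes:

```shell
# Reference only: LKE manages the control plane, so you do not set this yourself.
# Upstream default is 12500; LKE lowers it to 500.
kube-controller-manager --terminated-pod-gc-threshold=500
```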
Linode Kubernetes Engine v1.63.0
Changed
- Upgraded clusters using Kubernetes:
- Upgraded CSI driver to v0.6.3.
- Upgraded LKE kernel version from v5.15 to v6.1 for new LKE nodes.
Fixed
- CVE-2024-21626 has been mitigated for newly created LKE nodes. If you have an existing LKE node, you need to recycle it to apply the mitigation.
Linode Kubernetes Engine v1.60.0
Changed
- Upgraded CCM to v0.3.22
- Upgraded CSI driver to v0.6.2
- Upgraded Kubernetes dashboard to v3.0.0-alpha0
Linode Kubernetes Engine v1.57.0
Changed
- Upgraded clusters using Kubernetes 1.27 to patch version 1.27.8
Added
- Kubernetes 1.28 is now available on LKE. Review the Kubernetes changelog.
- The node CIDR mask size changed from /24 to /25. This has no impact on the maximum pods per node (110).
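A /25 still comfortably covers the 110-pod ceiling; a quick sanity check of the address math:

```shell
# A /25 node CIDR yields 2^(32-25) = 128 addresses per node,
# which exceeds the 110 max pods per node.
echo $(( 1 << (32 - 25) ))   # → 128
```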
Linode Kubernetes Engine v1.54.0
Changed
- Upgraded clusters using Kubernetes 1.26 to patch version 1.26.9
Added
- Kubernetes 1.27 is now available on LKE. Review the Kubernetes changelog and blog post.
  Kubernetes 1.27 locks the LegacyServiceAccountTokenNoAutoGeneration feature gate, which stops the token controller from automatically creating secret-based API server access tokens for Kubernetes service accounts. After upgrading to 1.27, customers may notice a warning message regarding these legacy tokens: Warning: Use tokens from the TokenRequest API or manually created secret-based tokens instead of autogenerated secret-based tokens.
  To fix this issue, remove any auto-generated secrets of type kubernetes.io/service-account-token in the kube-system namespace with kubectl delete secrets -n kube-system --field-selector="type==kubernetes.io/service-account-token" and regenerate the cluster’s Kubeconfig. See the Kubernetes Cluster Regenerate (POST /lke/clusters/{clusterId}/regenerate) endpoint. Customers with service accounts outside of kube-system need to delete the auto-generated service account tokens in their respective namespaces.
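For those customers, the same field selector works in any namespace. A sketch of a per-namespace cleanup loop (the loop is illustrative, not from these notes; it assumes a working kubeconfig and should be reviewed before running against a production cluster):

```shell
# Sketch: remove auto-generated legacy tokens in every namespace.
# Review what will be deleted first, e.g. with "kubectl get secrets" per namespace.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl delete secrets -n "$ns" \
    --field-selector='type==kubernetes.io/service-account-token'
done
```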
Linode Kubernetes Engine v1.52.0
Added
- Ability to force-rotate Service Account tokens without permanently breaking the control-plane.
  - In order to manually rotate Service Account tokens used by the control-plane, delete secrets with the type kubernetes.io/service-account-token in the kube-system namespace.
  - Deleting ccm-user-token-* secrets can still result in a momentary disruption of the control-plane.
  - Deleting lke-admin-token-* secrets invalidates the current kubeconfig. Allow some time for the new token to propagate to the control-plane before downloading a new kubeconfig via the API or Cloud Manager.
Changed
- Upgraded clusters using Kubernetes 1.25 to patch version 1.25.12
- Upgraded clusters using Kubernetes 1.26 to patch version 1.26.7
Fixed
- Improvements to etcd stability.