FEATURE STATE: Kubernetes v1.30 [beta]

This page explains how user namespaces are used in Kubernetes pods. A user namespace isolates the user running inside the container from the one on the host. A process running as root in a container can run as a different (non-root) user on the host; in other words, the process has full privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace.
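As a minimal sketch of how a Pod opts into this behavior, the pod-level `hostUsers` field requests a new user namespace; the pod name, image, and command below are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo            # illustrative name
spec:
  hostUsers: false             # run this Pod in a new user namespace instead of the host's
  containers:
  - name: shell
    image: debian              # placeholder image; root inside maps to an unprivileged host UID
    command: ["sleep", "infinity"]
```

Inside the container the process still sees itself as UID 0, but on the host it is mapped to an otherwise unused, unprivileged UID range.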
Details of Kubernetes authorization mechanisms and supported authorization modes.
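As one hedged illustration, authorization modes can be selected and ordered through the API server's structured authorization configuration file (the exact apiVersion depends on the cluster version; the chain below is just an example):

```yaml
# Passed to kube-apiserver via --authorization-config; authorizers are evaluated in order.
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
- type: Node        # built-in authorizer for kubelet API requests
  name: node
- type: RBAC        # role-based access control
  name: rbac
```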
Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods. Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node. The components on a node include the kubelet, a container runtime, and the kube-proxy.
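For illustration, a Node is itself an API object; a minimal manifest roughly like the sketch below (name and label are placeholders) is what an administrator might submit when registering a node manually instead of relying on kubelet self-registration:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-example                       # placeholder; must match the name the kubelet reports
  labels:
    topology.kubernetes.io/zone: zone-a    # example label for scheduling and topology
```

In the common case the kubelet self-registers and the control plane creates this object automatically, then runs health checks against it.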
FEATURE STATE: Kubernetes v1.32 [beta] (enabled by default: false)

Dynamic resource allocation is an API for requesting and sharing resources between pods and containers inside a pod. It is a generalization of the persistent volumes API for generic resources. Typically those resources are devices like GPUs. Third-party resource drivers are responsible for tracking and preparing resources, with allocation of resources handled by Kubernetes via structured parameters (introduced in Kubernetes 1.30).
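As a hedged sketch (the device class name `example.com-gpu` is assumed to be published by a third-party driver, and field names follow the resource.k8s.io/v1beta1 API that ships with v1.32), a workload describes the device it needs in a ResourceClaimTemplate and references it from the Pod:

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: gpu-template                       # illustrative name
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: example.com-gpu   # assumed DeviceClass provided by a vendor driver
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-demo
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: gpu-template   # a claim is generated from the template per Pod
  containers:
  - name: app
    image: registry.example/app:latest        # placeholder image
    resources:
      claims:
      - name: gpu                             # container consumes the claim declared above
```

The scheduler and the driver then cooperate to allocate a matching device before the Pod is placed on a node.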
FEATURE STATE: Kubernetes v1.30 [stable]

This page provides an overview of Validating Admission Policy. What is Validating Admission Policy? Validating admission policies offer a declarative, in-process alternative to validating admission webhooks. Validating admission policies use the Common Expression Language (CEL) to declare the validation rules of a policy. Validating admission policies are highly configurable, enabling policy authors to define policies that can be parameterized and scoped to resources as needed by cluster administrators.
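For example, a policy plus binding along the lines of the following sketch (names, the replica limit, and the namespace label are illustrative) rejects Deployments with more than 5 replicas using a CEL expression:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit.example.com        # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"   # CEL rule evaluated in-process by the API server
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-limit-binding.example.com
spec:
  policyName: replica-limit.example.com
  validationActions: [Deny]              # reject requests that fail the expression
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test                # assumed label; scopes the policy to selected namespaces
```

No webhook server is involved: the expression runs inside the API server, which is what makes the mechanism declarative and in-process.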
In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.
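A minimal sketch, assuming an existing Deployment named `web` and average CPU utilization as the scaling metric (both placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # illustrative name
spec:
  scaleTargetRef:                  # the workload resource being scaled horizontally
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50     # add or remove Pods to keep average CPU near 50%
```

The controller periodically compares the observed metric against the target and adjusts the replica count within the min/max bounds.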