GETTING STARTED| kubernetes.io
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams. However, the native functionality provided by a container engine or ...| Kubernetes
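As a minimal sketch of that stdout/stderr pattern (not taken from the page above), the Pod below runs one container that writes a counter to standard output; the name, image, and shell loop are illustrative.

```yaml
# Illustrative only: a Pod whose container writes to stdout so the kubelet and
# container runtime capture its logs.
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.36
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```

The captured output would then be readable with kubectl logs counter.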
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.| Kubernetes
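To illustrate the "shared storage and network resources" point, here is a hedged sketch of a two-container Pod in which both containers mount the same emptyDir volume and share the Pod's network namespace; the names and images are assumptions, not taken from the page.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper        # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # storage shared by every container in the Pod
  containers:
  - name: web
    image: nginx:1.27
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello from the same Pod > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```

Because the containers share the Pod's network namespace, they can also reach each other over localhost.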
Synopsis: The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are ru...| Kubernetes
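One of those "various mechanisms" besides the apiserver is a static Pod directory named in the kubelet's configuration file. A sketch, assuming the conventional kubeadm path (adjust to your setup):

```yaml
# Fragment of a kubelet configuration file (passed to the kubelet via --config).
# PodSpec manifests dropped into staticPodPath are picked up by the kubelet
# directly; the path shown is a common default, not a requirement.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
```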
Kubernetes (version 1.3 through to the latest 1.31, and likely onwards) lets you use Container Network Interface (CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your cluster and that suits your needs. Different plugins are available (both open- and closed-source) in the wider Kubernetes ecosystem. A CNI plugin is required to implement the Kubernetes network model. You must use a CNI plugin that is compatible with the v0.4.0 or later releases of the CNI specification.| Kubernetes
Distributed systems often have a need for leases, which provide a mechanism to lock shared resources and coordinate activity between members of a set. In Kubernetes, the lease concept is represented by Lease objects in the coordination.k8s.io API Group, which are used for system-critical capabilities such as node heartbeats and component-level leader election. Node heartbeats: Kubernetes uses the Lease API to communicate kubelet node heartbeats to the Kubernetes API server.| Kubernetes
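Roughly what such a heartbeat Lease looks like; the node name, holder identity, and timestamp below are placeholders, and in practice the kubelet creates and renews this object itself.

```yaml
# Sketch of a node heartbeat Lease in the kube-node-lease namespace.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1
  namespace: kube-node-lease
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40
  renewTime: "2024-01-01T00:00:00.000000Z"
```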
Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods. Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have only one node. The components on a node include the kubelet, a container runtime, and the kube-proxy.| Kubernetes
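As an abbreviated, illustrative view of those node components, a Node object's status reports the versions it runs; the node name and version strings below are placeholders.

```yaml
# Trimmed sketch of what kubectl get node node-1 -o yaml might report; only a
# few status.nodeInfo fields are shown.
apiVersion: v1
kind: Node
metadata:
  name: node-1
status:
  nodeInfo:
    kubeletVersion: v1.33.0
    kubeProxyVersion: v1.33.0
    containerRuntimeVersion: containerd://2.0.0
    osImage: Ubuntu 24.04 LTS
```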
In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. Scheduling overview: A scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, the scheduler becomes responsible for finding the best Node for that Pod to run on. The scheduler reaches this placement decision by taking into account the scheduling principles described below.| Kubernetes
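A hedged sketch of the kinds of constraints that feed into that placement decision; the node label and resource numbers are made up for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-example     # illustrative name
spec:
  nodeSelector:
    disktype: ssd              # only Nodes carrying this label are candidates
  containers:
  - name: app
    image: nginx:1.27
    resources:
      requests:
        cpu: 250m              # the scheduler filters out Nodes without enough
        memory: 256Mi          # allocatable CPU and memory for these requests
```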
Welcome to the containerd documentation! This document contains some basic project-level information about containerd.| containerd
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up cluster resources. This allows the cleanup of resources like the following: terminated pods; completed Jobs; objects without owner references; unused containers and container images; dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete; stale or expired CertificateSigningRequests (CSRs); Nodes deleted in the following scenarios: on a cloud when the cluster uses a cloud c...| Kubernetes
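The "owner references" mentioned above live in an object's metadata. A sketch of a Pod owned by a ReplicaSet, with the names and uid as placeholders:

```yaml
# Illustrative only: deleting the owning ReplicaSet lets the garbage collector
# remove this dependent Pod (subject to the chosen deletion propagation policy).
apiVersion: v1
kind: Pod
metadata:
  name: web-5d4c8f7b9d-abcde
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: web-5d4c8f7b9d
    uid: 00000000-0000-0000-0000-000000000000
    controller: true
    blockOwnerDeletion: true
```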
Synopsis: The Kubernetes network proxy runs on each node. It reflects Services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs.| Kubernetes
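kube-proxy can also be driven from a configuration file. A minimal sketch, assuming the iptables proxier and a placeholder cluster CIDR; adapt both to your cluster:

```yaml
# Passed to kube-proxy via --config; mode and clusterCIDR are assumptions
# (other proxy modes include ipvs and nftables).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: iptables
clusterCIDR: 10.244.0.0/16
```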
Note: Dockershim has been removed from the Kubernetes project as of release 1.24. Read the Dockershim Removal FAQ for further details. You need to install a container runtime into each node in the cluster so that Pods can run there. This page outlines what is involved and describes related tasks for setting up nodes. Kubernetes 1.33 requires that you use a runtime that conforms with the Container Runtime Interface (CRI).| Kubernetes
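Once a CRI-conformant runtime is installed, the kubelet is pointed at its socket. A sketch assuming containerd listening on its usual default endpoint (the socket path varies by runtime and distribution):

```yaml
# Fragment of a kubelet configuration file; containerRuntimeEndpoint replaces
# the older --container-runtime-endpoint flag. The socket path is an assumption.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```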
Deploy the web UI (Kubernetes Dashboard) and access it.| Kubernetes
Your workload can discover Services within your cluster using DNS; this page explains how that works.| Kubernetes
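For example (names, namespace, and ports below are placeholders): a Service like this is typically resolvable inside the cluster as my-service.default.svc.cluster.local, assuming the default cluster.local domain.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  selector:
    app: my-app                # Pods carrying this label back the Service
  ports:
  - port: 80                   # port the DNS name answers on (via the cluster IP)
    targetPort: 8080           # port the backing Pods listen on
```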
Expose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.| Kubernetes
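A minimal Ingress sketch for that pattern, routing one hostname to two backend Services by path; the hostname, Service names, and ingress class are assumptions, and an Ingress controller must also be running in the cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # illustrative name
spec:
  ingressClassName: nginx      # depends on which Ingress controller you deploy
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-backend  # placeholder backend Service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend # placeholder backend Service
            port:
              number: 80
```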
A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state.| Kubernetes
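A small stateless Deployment sketch; the name, labels, image, and replica count are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of identical Pods
  selector:
    matchLabels:
      app: web                 # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
```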