Static Pods, Manual Scheduling, Labels, and Selectors in Kubernetes - CKA
In Kubernetes architecture, the Kube-Scheduler plays a crucial role in assigning Pods to nodes. When the API Server receives a request to create a Pod, the Scheduler notices the new, unscheduled Pod and decides the most suitable node for it. But here's an interesting question: since the Kube-Scheduler is itself a Pod, who schedules the Scheduler Pod? The answer lies in the concept of Static Pods.
In this blog, we will explore Static Pods in depth. Let's get started!
Static Pods
Static Pods are a special type of Pod in Kubernetes that are managed directly by the kubelet on a specific node, rather than by the Kubernetes API Server. The kubelet creates a mirror Pod on the API server for each static Pod, so you can see them with kubectl but cannot control them through it. Static Pods are primarily used for deploying critical control plane components, such as the Kube-Scheduler and Kube-Controller-Manager, especially in environments where Kubernetes is bootstrapping itself.
Create a Static Pod Definition: Write a YAML file to define the static pod:
# /etc/kubernetes/manifests/static-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19
    ports:
    - containerPort: 80
Place the YAML in the Manifest Directory: Place the pod definition file in the kubelet's manifest directory:
sudo cp static-nginx.yaml /etc/kubernetes/manifests/
Verify the Pod Creation: After placing the manifest file, the Kubelet will automatically create the pod. You can verify the pod is running on the node using:
kubectl get pods -A | grep static-nginx
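If the manifest was placed on the control plane node of a kind cluster named cka-cluster (a node name used here purely for illustration), the output would look roughly like this; note that the kubelet appends the node name to the mirror Pod's name:
default   static-nginx-cka-cluster-control-plane   1/1   Running   0   30s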
Key Characteristics of Static Pods:
Not Managed by the Scheduler: Unlike Pods created through Deployments or ReplicaSets, static Pods are not assigned to nodes by the Kube-Scheduler; the kubelet runs them directly, as the quick check after this list shows.
Defined on the Node: Configuration files for static Pods are placed directly on the node's file system, and the kubelet watches these files. Common examples of static Pods are the kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
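A quick way to see this behavior: if you delete a static Pod's mirror Pod through the API server, the kubelet immediately recreates it from the manifest file. The Pod name below assumes a kind control plane node called cka-cluster-control-plane and is only an illustration:
kubectl delete pod kube-scheduler-cka-cluster-control-plane -n kube-system
kubectl get pods -n kube-system | grep kube-scheduler   # back within seconds, recreated by the kubelet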
Managing Static Pods:
SSH into the Node: Gain access to the node where the static Pod is defined (usually the control plane node).
Modify the YAML File: Edit or create the YAML configuration file for the static pod.
Remove the Manifest YAML: To stop a static Pod, remove or modify the corresponding file directly on the node; the kubelet deletes the Pod as soon as the manifest disappears.
Default Location: The manifest directory is usually /etc/kubernetes/manifests/ (set by the staticPodPath field in the kubelet configuration); place the Pod YAML there and the kubelet will pick it up automatically.
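You can confirm which directory your kubelet watches by checking its configuration. On kubeadm-based clusters the kubelet config usually lives at /var/lib/kubelet/config.yaml, though the exact path can vary by distribution:
grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests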
Managing Static Pods in Practice
To manage static Pods, you modify the manifest files located in the /etc/kubernetes/manifests directory on the node. Here's how you can do it:
Access the Node: If you are running a kind cluster, the nodes are Docker containers, so you can exec into them (on a regular cluster, SSH into the node instead):
docker exec -it <node-name> bash
Navigate to the Manifests Directory:
cd /etc/kubernetes/manifests
ls -lrt
You'll see the YAML files for all static Pods. If you remove the kube-scheduler.yaml file and then create a new Pod, it will not be assigned to any node, because the Kube-Scheduler Pod is no longer running. Let's see an example.
Remove and Restore the Scheduler Manifest:
mv kube-scheduler.yaml /tmp/
Create a new Pod and observe that it remains in the Pending state: the scheduler, whose job is to assign Pods to worker nodes, has been removed, so the Pod never transitions to Running.
Describing the Pod will show that no node has been assigned to it.
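For example, a quick check might look like this (the Pod name test-pod is just an illustration):
kubectl run test-pod --image=nginx
kubectl get pod test-pod        # STATUS stays Pending
kubectl describe pod test-pod   # shows Node: <none> and no Scheduled event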
When you move the kube-scheduler.yaml file back to its original location, the Pod will be scheduled and start running.
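Assuming the manifest was moved to /tmp/ as above, restoring it and watching the pending Pod recover looks like this:
mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
kubectl get pods -n kube-system | grep kube-scheduler   # scheduler Pod is back
kubectl get pod test-pod                                # now Running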
Manual Scheduling
Manual scheduling in Kubernetes involves explicitly assigning a Pod to a specific node without relying on the automated scheduling logic of the Kube-Scheduler. This approach can be useful in scenarios where you need to control exactly where a Pod runs, such as for performance reasons, licensing constraints, or specific hardware requirements.
Key Points:
nodeName Field: Use this field in the Pod specification to name the node the Pod should run on.
No Scheduler Involvement: When nodeName is set, the scheduler ignores the Pod entirely; the kubelet on the named node picks it up and runs it directly.
Example
First, remove the kube-scheduler.yaml file from /etc/kubernetes/manifests. Then, create a pod1.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  nodeName: cka-cluster-worker
In this YAML file, we specify the node name on which we want to schedule our Pod. Apply this YAML file using kubectl:
kubectl apply -f pod1.yaml
Even though the scheduler is not running, the Pod is created and runs on the node we specified in the YAML file.
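A quick way to confirm the placement:
kubectl get pod nginx -o wide   # NODE column shows cka-cluster-worker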
Labels and Selectors
Labels and selectors are fundamental concepts in Kubernetes used to organize and manage resources. Labels are key-value pairs attached to objects, such as Pods, that can be used to identify and group them. Selectors are used to filter and select objects based on their labels.
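Labels can also be added, changed, or removed imperatively with kubectl; the Pod name nginx below is just an example:
kubectl label pod nginx env=test              # add a label to an existing Pod
kubectl get pods -l env=test                  # select Pods by that label
kubectl label pod nginx env=dev --overwrite   # change an existing label
kubectl label pod nginx env-                  # remove the label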
Example
Here’s an example of a Deployment that uses labels and selectors:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
  template:
    metadata:
      name: nginx
      labels:
        app: v1
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.0
Key Sections
Deployment Metadata
Deployment Spec
Pod Template
Deployment Metadata
metadata:
  name: nginx-deploy
  labels:
    tier: backend
name: nginx-deploy: This sets the name of the Deployment to nginx-deploy.
labels: tier: backend: This label is assigned to the Deployment object itself, indicating that this Deployment is part of the backend tier. It is useful for identifying and grouping the Deployment within the cluster, but it plays no part in how the Deployment selects or schedules its Pods.
Deployment Spec
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
replicas: 3: Specifies that three replicas of the Pod should be running.
selector: matchLabels: app: v1: This selector tells the Deployment which Pods it is responsible for managing; it selects all Pods that carry the label app: v1.
Pod Template
template:
  metadata:
    name: nginx
    labels:
      app: v1
  spec:
    containers:
    - name: nginx
      image: nginx:1.23.0
template: metadata: name: nginx: Names the Pod template. Note that the Pods themselves are named after the Deployment and its ReplicaSet (for example nginx-deploy-<hash>-<suffix>), so this field is largely informational.
labels: app: v1: This label is applied to every Pod created by the Deployment. It is crucial because it matches the selector defined in the Deployment spec (selector: matchLabels: app: v1).
spec: containers: Defines the container specification for the Pods, in this case a single nginx container running the image nginx:1.23.0.
Conclusion
In this YAML file, labels and selectors are used to:
Assign meaningful metadata to the Deployment (tier: backend).
Define the specific labels for the Pods (app: v1).
Use selectors to manage a group of Pods based on their labels (app: v1).
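To see the selector at work, save the manifest (here as nginx-deploy.yaml, a filename chosen only for this example), apply it, and list the Pods the Deployment manages:
kubectl apply -f nginx-deploy.yaml
kubectl get pods -l app=v1        # the three replicas created by the Deployment
kubectl get deploy nginx-deploy   # READY shows 3/3 once the Pods are up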
Example of Labels and Selectors -
Create three Pods named pod1, pod2, and pod3 based on the nginx image, and label them env: test, env: dev, and env: prod respectively.
# pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    env: test
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.19
# pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    env: dev
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.19
# pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  labels:
    env: prod
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.19
If you want to make these Pods accessible from outside, you can also create a Service.
Create a Service Using a Label Selector: After labeling your pods, you can use those labels in a service to expose the pods.
# frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    env: test
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Apply the YAML Files: To create the labeled Pods and the Service, apply the YAML files:
kubectl apply -f pod1.yaml -f pod2.yaml -f pod3.yaml
kubectl apply -f frontend-service.yaml
Verify the Service and Pods: List the Pods that carry a specific label:
kubectl get pods -l env=test
Check if the service is correctly targeting the pods:
kubectl get endpoints frontend-service
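Because the Service selects on both env: test and tier: frontend, only pod1 matches, so the endpoints list contains a single Pod IP (the address and age below are illustrative):
NAME               ENDPOINTS       AGE
frontend-service   10.244.1.5:80   15s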
You can also view labels with the --show-labels flag:
kubectl get pods --show-labels
Use a selector to filter Pods by their labels:
kubectl get pods --selector tier=frontend
kubectl get pods --selector env=test
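Selectors can also combine several labels or use set-based expressions:
kubectl get pods -l env=test,tier=frontend   # both labels must match
kubectl get pods -l 'env in (dev,prod)'      # set-based selection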
Labels vs. Namespaces 🌍
Labels: Organize resources within the same or across namespaces.
Namespaces: Provide a way to isolate resources from each other within a cluster.
Annotations 📝
Annotations are similar to labels, but they attach non-identifying metadata to objects, for example the release version of an application or the details of the last applied configuration.
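As a quick illustration (the annotation keys below are arbitrary examples), annotations sit alongside labels in an object's metadata:
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod
  labels:
    env: test
  annotations:
    release-version: "1.4.2"                  # informational only; cannot be used in selectors
    maintainer: "platform-team@example.com"
spec:
  containers:
  - name: nginx
    image: nginx:1.19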