Namespaces & Resource Quotas
These notes summarize the video's coverage of Kubernetes Namespaces and Resource Quotas, with examples and explanations to aid your study for exams and interviews.
Kubernetes Namespaces
The video introduces Namespaces as a fundamental concept in Kubernetes for organizing and managing resources within a cluster. It addresses the challenge of managing multiple projects or applications from different companies within a single Kubernetes cluster, which can become complicated if all resources exist in one flat space. The speaker draws an analogy to Git, where initially all work might be done on a master branch, but then branches are created to separate development efforts; Namespaces serve a similar purpose in Kubernetes by providing boundaries or logical separation within a cluster. You can think of a Namespace as a dedicated area or a "folder" for a specific project or team, where all related Kubernetes objects like pods, services, and deployments reside.
The primary purpose of Namespaces is to make managing multiple projects easier by creating these isolated environments. For example, if a cluster hosts applications for companies A, B, C, and D, putting all their pods and resources in one place would be difficult to manage. By creating a separate Namespace for each project or company, such as a "Project A" Namespace or a "Project B" Namespace, you establish a clear boundary for their resources. This helps avoid confusion, especially if similar applications are running for different entities. A project manager, for instance, can quickly understand what is running by just looking at the Namespace name, which can be named after the project, team, area, or location.
Namespaces also allow for the attachment of authorization and policy rules to specific sections of the cluster. This means you can define which users or teams have access to which Namespace, and what actions they can perform within it. Furthermore, Namespaces are critical for allocating resources effectively, as they allow you to define Resource Quotas (discussed next) for each isolated section. This enables you to give different amounts of resources to different projects based on their needs or the budget allocated to them.
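The video does not show these policy objects themselves; as an illustration, a Namespace-scoped Role and RoleBinding (the names and user below are hypothetical) could grant one team read-only access to pods in its own Namespace:

```yaml
# read-pods-rbac.yaml (illustrative sketch, not from the video)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: project-a              # the rule only applies inside this Namespace
rules:
- apiGroups: [""]                   # "" is the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-reader
  namespace: project-a
subjects:
- kind: User
  name: alice                       # hypothetical user from team A
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```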
By default, when you start a Kubernetes cluster and haven't created any custom Namespaces, all your work and resources are placed in the `default` Namespace. When you run a command like `kubectl get pods` without specifying a Namespace, Kubernetes will automatically search for pods within this `default` Namespace. The video demonstrates this by showing "No resources found in default namespace" when no pods are present and no custom Namespace is specified.
To view all existing Namespaces in your cluster, you can use the command `kubectl get namespaces`. Typically, you'll see `default`, `kube-system` (used by Kubernetes itself), `kube-public`, and `kube-node-lease`.
Creating and Managing Namespaces:
The video shows how to create your own Namespace using a simple YAML file. For instance, to create a Namespace named `dev`, you define it in `namespace.yaml` and apply it with `kubectl apply -f namespace.yaml`. After application, running `kubectl get namespaces` will show your newly created `dev` Namespace as Active.
When creating resources like pods within a specific Namespace, you must explicitly mention the Namespace. For example, to create a pod from `pod.yaml` in the `dev` Namespace: `kubectl apply -f pod.yaml -n dev`.
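As an alternative not shown in the video, the target Namespace can also be baked into the manifest itself, so the `-n` flag becomes unnecessary (the pod name and image below are placeholders):

```yaml
# pod.yaml (sketch with an explicit namespace field)
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: dev        # the pod is created in dev even without -n dev
spec:
  containers:
  - name: app
    image: nginx
```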
If you then try to list pods using `kubectl get pods` (which defaults to the `default` Namespace), you will see "No resources found in default namespace" because your pod was created in `dev`. To see pods in your custom Namespace, you must specify it: `kubectl get pods -n dev`.
Similarly, when deleting resources from a specific Namespace, you must include the `-n` flag: `kubectl delete -f pod.yaml -n dev`. If you attempt to delete without specifying the Namespace, Kubernetes will look in `default` and report that the resource is not found there.
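Putting these commands together, a typical session (with the output message quoted from the video) looks roughly like this:

```sh
kubectl get pods                   # checks the default Namespace
# "No resources found in default namespace."
kubectl get pods -n dev            # lists the pod created in dev
kubectl delete -f pod.yaml -n dev  # deletes the pod from dev
```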
For convenience, you can change your current Kubernetes context to default to a specific Namespace, so you don't have to pass `-n` every time. This is done with the command `kubectl config set-context --current --namespace=dev`. After running this, subsequent `kubectl get pods` commands will automatically check the `dev` Namespace. To verify which Namespace your current context is set to, you can use `kubectl config view --minify | grep namespace`.
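If you later want to return to the `default` Namespace, the same command works in reverse; a small sketch of the full round trip:

```sh
kubectl config set-context --current --namespace=dev       # make dev the default for this context
kubectl config view --minify | grep namespace              # should show: namespace: dev
kubectl get pods                                            # now lists pods in dev
kubectl config set-context --current --namespace=default   # switch back when finished
```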
It's important to remember that most Kubernetes resources like pods, services, and deployments are Namespace-scoped. However, some lower-level or cluster-wide resources, such as Nodes and Persistent Volumes, are not Namespace-scoped. This means Persistent Volumes and Nodes operate at the cluster level and are not tied to any specific Namespace, which is a crucial point for understanding Kubernetes architecture and potentially for interview questions.
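A quick way to check whether a given resource kind is Namespace-scoped (not covered in the video) is `kubectl api-resources`:

```sh
kubectl api-resources --namespaced=true    # pods, services, deployments, ...
kubectl api-resources --namespaced=false   # nodes, persistentvolumes, namespaces, ...
```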
Namespaces are particularly beneficial for environments with many users or multiple teams sharing a large Kubernetes cluster. For smaller setups with only a few applications and users, Namespaces might not be strictly necessary.
Resource Quota
Following the discussion of Namespaces, the video delves into Resource Quota, a mechanism for managing resource consumption within a Namespace. Without Resource Quotas, pods can potentially consume all available CPU and memory on a cluster, leading to resource starvation for other applications.
The Need for Resource Quota:
By default, containers running on Kubernetes operate without explicit CPU or memory limits. This means an application can scale its resource usage indefinitely if not restrained. While this might seem beneficial for a single application, in a shared cluster, it can lead to one application monopolizing resources, thereby impacting the performance or even availability of other applications. The analogy used here is that of a buffet versus a fixed-menu restaurant: without limits, a container might consume as much as it wants from an available pool, but with limits, it can only take what's allocated.
Purpose of Resource Quota:
Resource Quotas allow you to define and enforce limits on the total amount of compute resources (like CPU and memory) or object count (like number of pods or services) that a Namespace can consume. This ensures fair resource allocation among different teams or projects sharing a cluster, preventing any single Namespace from overwhelming the system.
Key Concepts: Request and Limit
Within the context of Resource Quotas, two critical parameters for defining resource allocation for individual containers are:
- Request: This is the minimum guaranteed amount of resources (CPU and memory) that a container needs to function. Kubernetes guarantees that this amount will always be available to the container. The Kubernetes scheduler uses the `request` value to decide which worker node to place a pod on; it will only place a pod on a node that has enough available resources to satisfy its `request`.
  - CPU `request`: Specified in units of cores or millicores (m). One CPU core is equivalent to 1000 millicores, so 0.5 CPU is 500m.
  - Memory `request`: Specified in units of bytes, typically mebibytes (Mi) or gibibytes (Gi).
- Limit: This is the maximum amount of resources (CPU and memory) that a container is allowed to consume. If a container attempts to exceed its CPU limit, it will be throttled; if it exceeds its memory limit, it will be terminated by Kubernetes.
  - The `limit` for CPU and memory must always be greater than or equal to its corresponding `request`; you cannot request more than your limit.
Example YAML for defining Requests and Limits within a Pod:
Here's an example from the video of a pod definition that includes `requests` and `limits` for its container:
```yaml
# pod-resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-resource-pod
spec:
  containers:
  - name: my-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello'; sleep 3600"]
    resources:
      requests:
        memory: "64Mi"   # Request 64 MiB of memory
        cpu: "100m"      # Request 100 millicores of CPU
      limits:
        memory: "128Mi"  # Limit to 128 MiB of memory
        cpu: "200m"      # Limit to 200 millicores of CPU
```
Here, `my-container` needs at least `64Mi` of memory and `100m` of CPU, but will not be allowed to use more than `128Mi` of memory or `200m` of CPU. After creating this pod using `kubectl apply -f pod-resources.yaml -n dev` (assuming you are working in the `dev` Namespace), you can inspect its resource allocation with `kubectl describe pod my-resource-pod` to see the Requests and Limits section.
Relationship Between ResourceQuota and Container Resources:
A ResourceQuota object is typically applied to a Namespace to set aggregate limits on CPU, memory, and storage that all pods within that Namespace can consume collectively.
Example YAML for a Resource Quota:
```yaml
# my-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
spec:
  hard:
    cpu: "400m"      # Total CPU limit for the Namespace
    memory: "40Mi"   # Total memory limit for the Namespace
```
This quota restricts the `dev` Namespace to a total of 400 millicores of CPU and 40 mebibytes of memory across all of its pods.
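Assuming the quota is intended for the `dev` Namespace, you would apply it there and can then track usage against the hard limits:

```sh
kubectl apply -f my-quota.yaml -n dev             # attach the quota to the dev Namespace
kubectl get resourcequota -n dev                  # summary of used vs. hard limits
kubectl describe resourcequota my-quota -n dev    # detailed breakdown per resource
```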
A crucial point demonstrated in the video is that the total `requests` and `limits` of all containers within a Namespace must not exceed the corresponding `hard` limits defined in that Namespace's Resource Quota. The video illustrates a scenario where a Resource Quota was set at `400m` CPU, but a deployment attempted to create three replicas, each requesting `200m` CPU. This would total `600m` CPU, which exceeds the Namespace's `400m` quota. In such a case, the replicas that would push the total over the quota fail to be created, and inspecting the deployment with `kubectl describe deployment <deployment-name>` (or its ReplicaSet) will show failed-creation events indicating that the requested resources exceed the available quota.
Defaulting Behavior of Request and Limit:
The video highlights important defaulting behaviors when `request` and `limit` are not explicitly defined:
- If `request` is NOT defined, but `limit` IS defined: the `request` for the container is automatically set equal to its `limit`. This means if you only specify `limits: {cpu: "100m"}`, the container implicitly gets `requests: {cpu: "100m"}` as well. The video demonstrates this by creating a LimitRange with a 1 CPU default limit and no request (see the LimitRange sketch after this list); the pod then ends up with 1 CPU for both request and limit.
- If `request` IS defined, but `limit` is NOT defined: the `limit` for the container is NOT set equal to its `request`. Instead, it is set to the default limit defined for the cluster or Namespace (for example, in a LimitRange object), if one exists. If no such default exists, the limit is effectively unbounded, meaning the container can use as much as is available. The video demonstrates this by defining a pod with a `750m` CPU request but no limit; when described, the pod's limit appears as 1 CPU (`1000m`), indicating it picked up the default limit rather than the request value.
- If NEITHER `request` nor `limit` is defined: by default, the container has no CPU or memory limits and can consume resources as needed. The video also notes that in this case `request` effectively equals `limit`, both being unlimited (or constrained only by the cluster's overall capacity).
These interactions are crucial for understanding how resource allocation behaves in Kubernetes, especially for interview scenarios.
The video concludes by reiterating the importance of Namespaces and Resource Quotas for managing complex Kubernetes environments, ensuring proper resource allocation, and preventing resource contention among different projects or teams. It encourages hands-on practice through the provided lab steps to solidify understanding.