03. APIs

Workloads

ReplicaSet

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

How a ReplicaSet works

A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.
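
A minimal manifest sketch showing these three fields (the name, labels, and image are illustrative placeholders):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:             # how the ReplicaSet identifies Pods it can acquire
      app: frontend
  template:                  # Pod template used when new Pods must be created
    metadata:
      labels:
        app: frontend        # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.25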

Scaling a ReplicaSet

A ReplicaSet can be scaled up or down simply by updating the .spec.replicas field. The ReplicaSet controller ensures that the desired number of Pods with a matching label selector are available and operational.
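
For example, assuming a ReplicaSet named frontend, either of the following achieves the same result:

# Imperative: scale directly
kubectl scale replicaset frontend --replicas=5

# Declarative: edit .spec.replicas in the manifest, then re-apply
kubectl apply -f frontend-replicaset.yaml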

ReplicaSet APIs

Note: The most important field to set in a workload is the label selector (see Label Selectors [Label-Selectors]).

Deployment

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
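
A minimal Deployment manifest sketch (names and image are placeholders); it looks much like a ReplicaSet apart from the kind, because the Deployment creates and manages ReplicaSets for you:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25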

Main Differences from a ReplicaSet

  • Rolling updates
    • Smooth version updates
    • Stepped updates (see the strategy sketch below)
  • Pause/resume changes
  • Rollback changes
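
The pace of a rolling update can be tuned through the Deployment's strategy fields; a sketch with illustrative values:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 0      # never drop below the desired count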

Use Cases

  • Create a Deployment to rollout a ReplicaSet.
  • Declare the new state of the Pods.
  • Rollback to an earlier Deployment.
  • Scale up the Deployment to facilitate more load.
  • Pause the rollout of a Deployment.
  • Use the status of the Deployment.
  • Clean up older ReplicaSets.
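
Several of these use cases map directly onto kubectl subcommands; a sketch, assuming a Deployment named nginx-deployment:

kubectl set image deployment/nginx-deployment nginx=nginx:1.26   # declare a new state
kubectl rollout status deployment/nginx-deployment               # watch the rollout
kubectl rollout pause deployment/nginx-deployment                # pause the rollout
kubectl rollout resume deployment/nginx-deployment               # resume it
kubectl rollout history deployment/nginx-deployment              # inspect past revisions
kubectl rollout undo deployment/nginx-deployment                 # roll back to the previous revision
kubectl scale deployment/nginx-deployment --replicas=10          # scale up for more load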

Deployment APIs

Labels and Selectors

Labels are key/value pairs that are attached to objects such as Pods.

Label selectors

The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (&&) operator.
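
For example, with kubectl (label keys and values are illustrative):

# Equality-based: both requirements must hold (the comma acts as AND)
kubectl get pods -l environment=production,tier=frontend

# Set-based: match against a set of allowed values
kubectl get pods -l 'environment in (production, qa)'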

Annotations

You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata.
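
A sketch of annotations on a Pod (the key and value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"   # arbitrary, non-identifying metadata
spec:
  containers:
  - name: nginx
    image: nginx:1.25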

Namespace

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc).
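
Basic namespace operations, for example:

kubectl create namespace dev          # create a namespace
kubectl get namespaces                # list namespaces
kubectl get pods -n dev               # list Pods in a specific namespace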

Change default namespace

kubectl config set-context --current --namespace=<ns-name>

Namespaces and DNS

When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).
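
For example, from a Pod in the dev namespace (service and namespace names are placeholders):

curl http://db-service                            # resolves to db-service in the local (dev) namespace
curl http://db-service.prod.svc.cluster.local     # reaches db-service in the prod namespace via FQDN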

Namespace APIs

Resource Quotas

When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources.

Resource quotas are a tool for administrators to address this concern.
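
A sketch of a ResourceQuota manifest (the namespace and limits are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"               # at most 10 Pods in the namespace
    requests.cpu: "4"        # total CPU requests capped at 4 cores
    requests.memory: 8Gi     # total memory requests capped at 8 GiB
    limits.cpu: "8"
    limits.memory: 16Gi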

Resource quotas API

Resource quotas API version 1

Service

In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster.

  • Enables communication between the various components within and outside of an application.

  • Helps connect applications to other applications or to users.

Service type

Kubernetes Service types allow you to specify what kind of Service you want.

The available type values and their behaviors are:

  • ClusterIP

    Exposes the Service on a cluster internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.

    Calling a ClusterIP Service via DNS from a different namespace:

    kubectl -n <namespace-name> exec <pod-name> -- curl <service-name>.<namespace>.svc.cluster.local:<service-port>

    Calling a ClusterIP Service via DNS from the same namespace:

    kubectl exec <pod-name> -- curl <service-name>:<service-port>
    
  • NodePort

    Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.

  • LoadBalancer

    Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

  • ExternalName

    Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures your cluster's DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.
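
Tying the types together, a minimal Service manifest sketch (names and ports are placeholders); changing the type field selects the behavior described above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort             # omit (or use ClusterIP) for the default; LoadBalancer and ExternalName work similarly
  selector:
    app: nginx               # traffic is forwarded to Pods with this label
  ports:
  - port: 80                 # the Service's own port (on the cluster IP)
    targetPort: 80           # the container port traffic is sent to
    nodePort: 30080          # static port on every node (NodePort only; 30000-32767 by default)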

Port Forward

Port forwarding examples; the port pair is <local-port>:<remote-port>:

kubectl port-forward <resource-type>/<resource-name> 8080:80   # local port 8080 -> port 80 on the resource
kubectl port-forward service/nginx-internal 80:8000            # local port 80 -> service port 8000

Scheduler

Important parts of a Scheduler:

Taints & Tolerations

Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods.

Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also evaluates other parameters as part of its function.

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.

Concepts

You add a taint to a node using kubectl taint. For example,

kubectl taint nodes <node-name> key1=value1:<taint-effect>
kubectl taint nodes node1 key1=value1:NoSchedule

places a taint on node node1. The taint has key key1, value value1, and taint effect NoSchedule. This means that no pod will be able to schedule onto node1 unless it has a matching toleration.

To remove the taint added by the command above, you can run:

kubectl taint nodes node1 key1=value1:NoSchedule-
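
The matching toleration is declared in the Pod spec; a sketch of a Pod tolerating the taint above:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25
  tolerations:
  - key: "key1"
    operator: "Equal"        # the key, value, and effect must all match the taint
    value: "value1"
    effect: "NoSchedule"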