
11. Ingress

Ingress Controller

Required Resources for an Ingress Controller

  • Deployment
  • ConfigMap
  • NodePort / LoadBalancer
  • Auth
  • ServiceAccount
  • Role
  • ClusterRole
  • RoleBinding

See the schematic of these resource requirements below.
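
Below is a minimal sketch of how a few of these pieces fit together for an nginx-based controller. It is illustrative only: the names, the image tag, and the nginx-configuration ConfigMap are hypothetical placeholders, and the NodePort Service and RBAC objects are omitted for brevity.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      # the ServiceAccount is bound to the Role/ClusterRole via the (Role)Bindings listed above
      serviceAccountName: ingress-controller
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:<version> # placeholder tag
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration # reads the ConfigMap listed above
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace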

Deploy an Ingress Controller
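
Besides the Helm chart covered below, a controller such as ingress-nginx can be deployed by applying its published manifest directly. A minimal sketch; <version> is a placeholder, check the project's releases for the current one:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v<version>/deploy/static/provider/cloud/deploy.yaml
kubectl -n ingress-nginx get pods # verify the controller Pod reaches 'Running'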

Ingress Resources

  • Routing
  • SSL Configuration
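
A minimal sketch of an Ingress resource covering both items; the host example.com, the Service web-service, and the TLS secret web-tls are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: web-tls # Secret of type kubernetes.io/tls holding tls.crt and tls.key
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80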

Configuration Management Tools

There are dozens of configuration management tools for Kubernetes.

Some of them are more popular, such as HashiCorp Terraform, Red Hat Ansible, and the Kubernetes-native tool Kustomize.

Helm Chart

Installation

Add the ingress-nginx repository:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Update the repositories:

helm repo update

Check that it was added successfully:

helm repo list

Install the chart:

helm install my-ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

Check the Helm releases:

helm list -A
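
Chart defaults can be overridden at install or upgrade time. A minimal sketch, assuming the chart exposes a controller.service.type value (inspect the chart's values to confirm):

helm show values ingress-nginx/ingress-nginx # list the chart's configurable values
helm upgrade my-ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.service.type=NodePort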

Troubleshooting

Layers of problem locations:

  1. Application Failure
  2. Control Plane Failure
  3. Worker Node Failure

Application Failures

  1. Check the service status:
curl http://web-service-ip:node-port
  2. Describe the service and check its parameters, especially the Endpoints that should point to the backing Pods (see the sketch after this list):
kubectl describe svc <service-name>
  3. Get the resource list:
kubectl get pods

Pay attention to the 'Ready' and 'Status' columns.

3.1. Describe the resources:

kubectl describe pods <pod-name>

Check the pod events

3.2. Get pod logs

kubectl logs <pod-name>
kubectl logs <pod-name> -f # follow the pod logs
kubectl logs <pod-name> --previous # logs of the previous container instance
  4. Complete your debugging via the official debug-application guide.
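
A frequent root cause at step 2 is a Service whose selector does not match the Pod labels, which leaves the Endpoints empty. A minimal sketch of that check, with web-service and app=web as hypothetical names:

kubectl get endpoints web-service # an empty ENDPOINTS column suggests a selector mismatch
kubectl get svc web-service -o jsonpath='{.spec.selector}' # the Service's selector
kubectl get pods --selector=app=web --show-labels # compare against the Pod labels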

Control Plane Failure

  1. Check the cluster nodes:
kubectl get nodes
  2. Get the list of system pods:
kubectl -n kube-system get pods

NOTE: This works if the cluster was set up via kubeadm, since the control plane components then run as static Pods in kube-system.

If not, check the control plane services via systemd:

systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler

Then get the logs via journalctl or kubectl logs, as sketched below.
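
A minimal sketch of both log paths; the Pod name suffix depends on the control plane node's name:

journalctl -u kube-apiserver -f # systemd-managed control plane
kubectl -n kube-system logs kube-apiserver-<node-name> # kubeadm static Pod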

  3. Complete your debugging via the official debug-cluster guide.

Worker Node Failure

  1. Check the cluster nodes again:
kubectl get nodes
  2. Describe the worker nodes:
kubectl describe nodes <worker-node-name>

2.1. If your worker node is in 'NotReady' status, check the kubelet first.

2.2. Focus on the 'Conditions' section:

  • MemoryPressure
  • DiskPressure
  • PIDPressure

2.3. If the node status is 'Unknown', it means the kubelet could not connect to the API server.

SSH into the node and check the kubelet.

NOTE: the best practice for this purpose is to use monitoring tools, like Prometheus with Node Exporter.

  3. Check the kubelet service status and follow its logs:
systemctl status kubelet
journalctl -u kubelet -f
  4. Check the kubelet certificate:
openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -text -noout
  • Issuer: the value must be 'KUBERNETES-CA'
  • Organization: the value must be 'system:nodes'
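
On a kubeadm-provisioned node, the kubelet's configuration and its client kubeconfig live at well-known paths; a minimal sketch for inspecting them (paths may differ on other setups):

ps aux | grep kubelet # see the flags the kubelet was started with
cat /var/lib/kubelet/config.yaml # kubelet configuration (kubeadm default)
cat /etc/kubernetes/kubelet.conf # kubeconfig the kubelet uses to reach the API server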