You could install all Kubernetes components onto your compute nodes by hand, but doing so is complex, error-prone, and time-consuming. Simpler, more scalable, and more robust solutions are available!
Automation tools such as Kubespray make deploying a Kubernetes cluster almost effortless. Kubespray is an open-source tool that automates the deployment of Kubernetes clusters across computing resources such as virtual machines. Built to be configurable, fast, and lightweight, Kubespray meets most needs.
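The commands that follow assume you are working from a checkout of the Kubespray repository with its Python requirements installed. A minimal sketch of that setup (the steps mirror the upstream quick start; pin whatever release suits you):

```bash
# Clone Kubespray and install Ansible and the other Python dependencies
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt
```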
```bash
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
```
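For reference, the inventory builder writes a hosts.yaml roughly like the following for the three IPs above (node names and group membership here are illustrative; review them before deploying):

```yaml
all:
  hosts:
    node1:
      ansible_host: 10.10.1.3
      ip: 10.10.1.3
      access_ip: 10.10.1.3
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
      access_ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
      access_ip: 10.10.1.5
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```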
Note: In small clusters there is no need to set up NodeLocal DNS, so we disable it in the file inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml:
```yaml
enable_nodelocaldns: false
```
Note: To connect to the cluster from the host that runs Ansible, you can edit the above configuration file:
```yaml
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
kubeconfig_localhost: true
# Use ansible_host as external api ip when copying over kubeconfig.
# kubeconfig_localhost_ansible_host: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
kubectl_localhost: true
```
After Kubespray finishes the deployment, the required files are generated under inventory/mycluster/artifacts. A couple of tasks remain to connect the Ansible host to the cluster:
- Make kubectl executable and copy it to /usr/local/bin.
- Copy admin.conf to ~/.kube/config and check that its permissions are appropriate for your user (both steps are sketched below).
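A minimal sketch of these two tasks, assuming the default artifacts path and a sudo-capable user:

```bash
# Install the generated kubectl binary system-wide
chmod +x inventory/mycluster/artifacts/kubectl
sudo cp inventory/mycluster/artifacts/kubectl /usr/local/bin/kubectl

# Use the generated admin kubeconfig for the current user
mkdir -p ~/.kube
cp inventory/mycluster/artifacts/admin.conf ~/.kube/config
chmod 600 ~/.kube/config   # readable by your user only
```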
```bash
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
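With the kubeconfig and kubectl steps above in place, a quick sanity check from the Ansible host confirms the nodes joined the cluster:

```bash
# Every node should eventually report the Ready status
kubectl get nodes -o wide
```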
```bash
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
# uninstalling old packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
# And be aware that it will remove the current Kubernetes cluster (if it's running)!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
```
A public mirror is useful for downloading public resources quickly in some areas of the world (such as China). To speed up downloads in China, the configuration can look like this:
```bash
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Use the download mirror
cp inventory/mycluster/group_vars/all/offline.yml inventory/mycluster/group_vars/all/mirror.yml
sed -i -E '/# .*\{\{ files_repo/s/^# //g' inventory/mycluster/group_vars/all/mirror.yml
tee -a inventory/mycluster/group_vars/all/mirror.yml <<EOF
gcr_image_repo: "gcr.m.daocloud.io"
kube_image_repo: "k8s.m.daocloud.io"
docker_image_repo: "docker.m.daocloud.io"
quay_image_repo: "quay.m.daocloud.io"
github_image_repo: "ghcr.m.daocloud.io"
files_repo: "https://files.m.daocloud.io"
EOF

# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
```
```bash
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
If your servers don't have direct access to the internet (for example, when deploying on premises with security constraints), you need to fetch the required artifacts from another environment that does have internet access. In that case, the offline settings in inventory/mycluster/group_vars/all/offline.yml can look like this:
```yaml
---
## Global Offline settings
### Private Container Image Registry
# registry_host: "myprivateregisry.com"
# files_repo: "http://myprivatehttpd"
### If using CentOS, RedHat, AlmaLinux or Fedora
# yum_repo: "http://myinternalyumrepo"
### If using Debian
# debian_repo: "http://myinternaldebianrepo"
### If using Ubuntu
# ubuntu_repo: "http://myinternalubunturepo"

## Container Registry overrides
kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
github_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"

## Kubernetes components
kubeadm_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
kubelet_download_url: "{{ files_repo }}/dl.k8s.io/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"

## Two options - Override entire repository or override only a single binary.
## [Optional] 1 - Override entire binary repository
# github_url: "https://my_github_proxy"
# dl_k8s_io_url: "https://my_dl_k8s_io_proxy"
# storage_googleapis_url: "https://my_storage_googleapi_proxy"
# get_helm_url: "https://my_helm_sh_proxy"

## [Optional] 2 - Override a specific binary
## CNI Plugins
cni_download_url: "{{ files_repo }}/github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
## cri-tools
crictl_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
## [Optional] etcd: only if you use etcd_deployment=host
etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] Calico: If using Calico network plugin
calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
# [Optional] Cilium: If using Cilium network plugin
ciliumcli_download_url: "{{ files_repo }}/github.com/cilium/cilium-cli/releases/download/{{ cilium_cli_version }}/cilium-linux-{{ image_arch }}.tar.gz"
# [Optional] helm: only if you set helm_enabled: true
helm_download_url: "{{ files_repo }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] crun: only if you set crun_enabled: true
crun_download_url: "{{ files_repo }}/github.com/containers/crun/releases/download/{{ crun_version }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
# [Optional] kata: only if you set kata_containers_enabled: true
kata_containers_download_url: "{{ files_repo }}/github.com/kata-containers/kata-containers/releases/download/{{ kata_containers_version }}/kata-static-{{ kata_containers_version }}-{{ ansible_architecture }}.tar.xz"
# [Optional] cri-dockerd: only if you set container_manager: docker
cri_dockerd_download_url: "{{ files_repo }}/github.com/Mirantis/cri-dockerd/releases/download/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
# [Optional] runc: if you set container_manager to containerd or crio
runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
# [Optional] cri-o: only if you set container_manager: crio
# crio_download_base: "download.opensuse.org/repositories/devel:kubic:libcontainers:stable"
# crio_download_crio: "http://{{ crio_download_base }}:/cri-o:/"
crio_download_url: "{{ files_repo }}/storage.googleapis.com/cri-o/artifacts/cri-o.{{ image_arch }}.{{ crio_version }}.tar.gz"
skopeo_download_url: "{{ files_repo }}/github.com/lework/skopeo-binary/releases/download/{{ skopeo_version }}/skopeo-linux-{{ image_arch }}"
# [Optional] containerd: only if you set container_runtime: containerd
containerd_download_url: "{{ files_repo }}/github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
nerdctl_download_url: "{{ files_repo }}/github.com/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
# [Optional] runsc,containerd-shim-runsc: only if you set gvisor_enabled: true
gvisor_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
# [Optional] Krew: only if you set krew_enabled: true
krew_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/krew/releases/download/{{ krew_version }}/krew-{{ host_os }}_{{ image_arch }}.tar.gz"

## CentOS/Redhat/AlmaLinux
### For EL7, base and extras repo must be available, for EL8, baseos and appstream
### By default we enable those repo automatically
# rhel_enable_repos: false
### Docker / Containerd
# docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/$releasever/$basearch"
# docker_rh_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"

## Fedora
### Docker
# docker_fedora_repo_base_url: "{{ yum_repo }}/docker-ce/{{ ansible_distribution_major_version }}/{{ ansible_architecture }}"
# docker_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
### Containerd
# containerd_fedora_repo_base_url: "{{ yum_repo }}/containerd"
# containerd_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"

## Debian
### Docker
# docker_debian_repo_base_url: "{{ debian_repo }}/docker-ce"
# docker_debian_repo_gpgkey: "{{ debian_repo }}/docker-ce/gpg"
### Containerd
# containerd_debian_repo_base_url: "{{ debian_repo }}/containerd"
# containerd_debian_repo_gpgkey: "{{ debian_repo }}/containerd/gpg"
# containerd_debian_repo_repokey: 'YOURREPOKEY'

## Ubuntu
### Docker
# docker_ubuntu_repo_base_url: "{{ ubuntu_repo }}/docker-ce"
# docker_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/docker-ce/gpg"
### Containerd
# containerd_ubuntu_repo_base_url: "{{ ubuntu_repo }}/containerd"
# containerd_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
# containerd_ubuntu_repo_repokey: 'YOURREPOKEY'

registry_host: "nexus.inside.nahanet.ir:8082"
files_repo: "https://nexus.inside.nahanet.ir/repository/file"
```
To set up the Nexus offline repository, first go to the contrib/offline path in the project, then use the following command to generate the list of required images:
```bash
./generate_list.sh
```
Executing this command generates a temp directory containing the lists of required downloads and images.
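The generated lists look roughly like this (exact file names can vary between Kubespray versions):

```bash
ls contrib/offline/temp
# files.list  images.list
```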
Then, using the following command, the required images are pulled and stored in a tar archive called container-images.tar.gz. Note that the script runs through one of podman, nerdctl, or docker, depending on which is installed.
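In recent Kubespray checkouts this pull-and-archive step is performed by the manage-offline-container-images.sh helper in the same contrib/offline directory (confirm the script name against your version):

```bash
# Pulls every image from the generated images list and archives them
# into container-images.tar.gz using podman, nerdctl, or docker
./manage-offline-container-images.sh create
```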
Also, a Docker repository should be created on Nexus, for example on port 8082. In addition, you must enable the Docker Bearer Token Realm under Security -> Realms in Nexus and also enable the Allow anonymous docker pull option.
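One hedged way to move the archived images into that Nexus Docker repository, assuming the docker CLI and the registry host configured above (the image name below is purely illustrative):

```bash
# Load the archive into the local docker daemon (docker load accepts gzip)
docker load -i container-images.tar.gz

# Authenticate against the Nexus Docker connector
docker login nexus.inside.nahanet.ir:8082

# Retag and push one image as an example; repeat (or loop over
# the images list) for every loaded image
docker tag registry.k8s.io/pause:3.9 nexus.inside.nahanet.ir:8082/registry.k8s.io/pause:3.9
docker push nexus.inside.nahanet.ir:8082/registry.k8s.io/pause:3.9
```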