Introduction
Installing Kubernetes is not a big deal, but certain requirements must be met. In this article we will show you how to install Kubernetes on CentOS, step by step. We will start with the requirements needed for a successful installation. Let's go.
Requirements
- OS: CentOS 7+, RHEL 7+, Fedora 25+, Debian 9+, Ubuntu 16.04+, HypriotOS v1.0.1+
- 2 GB RAM, 2 vCPU
- Network connectivity between nodes
- Unique MAC address, hostname and product_uuid on every node
- Open ports:
TCP (ingress): 6443*, 2379-2380, 10250, 10251, 10252 (master)
TCP (ingress): 10250, 30000-32767 (workers)
- Swap disabled
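A quick way to verify most of these prerequisites on each node before starting (a minimal sketch using standard Linux tools):

free -h                              # total memory should be at least 2 GB
nproc                                # should report 2 or more CPUs
swapon --show                        # prints nothing once swap is disabled
ip link                              # compare MAC addresses across nodes
cat /sys/class/dmi/id/product_uuid   # must be unique on every node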
Nodes
swmanager.local – 192.168.78.3 (master node)
swworker1.local – 192.168.78.5 (worker node 1)
swworker2.local – 192.168.78.6 (worker node 2)
Steps
- Prepare one master and two worker nodes
- Install container runtime (Docker) on all nodes
- Install Kubernetes components
- Initialize cluster
- Join worker nodes to Kubernetes cluster
Procedure
- Disable SELinux on all nodes so that containers can access the host file system (example for the master):
[root@swmanager ~]# setenforce 0
setenforce: SELinux is disabled
[root@swmanager ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
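You can verify both the runtime mode and the persistent setting afterwards:

getenforce                            # runtime mode
grep ^SELINUX= /etc/selinux/config    # setting applied on next boot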
- Populate the local hosts file with the FQDNs of the virtual machines on all nodes:
[root@swworker1 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.78.3 swmanager.local
192.168.78.5 swworker1.local
192.168.78.6 swworker2.local
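Before moving on, it is worth confirming that every node can resolve and reach the others, for example:

for h in swmanager.local swworker1.local swworker2.local; do ping -c1 $h; done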
- Install container runtime (Docker) on all nodes:
[root@swworker1 ~]# yum install docker-ce --nobest -y
Last metadata expiration check: 0:00:18 ago on Mon 09 Mar 2020 09:05:09 AM CET.
Dependencies resolved.
 Problem: package docker-ce-3:19.03.7-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
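Note that the docker-ce package comes from the Docker CE repository, which this guide assumes is already configured. If yum cannot find the package, the repo can be added first using the upstream repo URL:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo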
- Configure Docker to use the overlay2 storage driver and the systemd cgroup driver:
[root@swworker1 ~]# tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
- Restart Docker and enable it at boot:
[root@swmanager ~]# systemctl daemon-reload && systemctl restart docker
[root@swmanager ~]# systemctl enable docker
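After the restart you can confirm that Docker picked up the new settings:

docker info | grep -E 'Storage Driver|Cgroup Driver'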
- Check that the product_uuid of each node is unique (virtual machines sometimes share the same product id):
[root@swworker2 ~]# cat /sys/class/dmi/id/product_uuid
1793d4a8-2745-42db-8ec5-35eff09fd25f
[root@swworker1 ~]# cat /sys/class/dmi/id/product_uuid
88c217f9-60aa-4095-a499-dba6c2d7b26a
[root@swmanager ~]# cat /sys/class/dmi/id/product_uuid
837eb82e-ac0e-49d2-8d1b-d7ae1b7778d7
- Disable firewalld or allow the required ports on all nodes (we will disable firewalld here; an alternative that keeps firewalld running is sketched after this step). Example for the master node:
[root@swmanager ~]# systemctl stop firewalld && systemctl disable firewalld
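If you would rather keep firewalld running, open the required ports instead; a sketch for the master node, using the port list from the requirements section:

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload

On the workers, open 10250/tcp and 30000-32767/tcp the same way.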
- To avoid Pod performance issues (and because kubelet by default refuses to start while swap is enabled), disable swap on all nodes (example for the master node):
[root@swmanager ~]# sed -i '/swap/d' /etc/fstab
[root@swmanager ~]# swapoff -a
- To allow VXLAN communication between Pods, enable the br_netfilter module on all nodes (example for the master node):
[root@swmanager ~]# modprobe br_netfilter
[root@swmanager ~]# lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                192512  1 br_netfilter
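Note that modprobe only loads the module for the current boot; to make it load automatically after a reboot, a common approach is:

echo br_netfilter > /etc/modules-load.d/k8s.conf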
- For Linux iptables to see bridged traffic correctly, enable these filters:
[root@swmanager ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@swmanager ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
- Add the Kubernetes repository on all nodes in order to download the required packages:
[root@swmanager ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
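You can confirm that the repository is visible to yum before installing:

yum repolist | grep -i kubernetes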
- Install the Kubernetes packages on all nodes:
[root@swworker1 ~]# yum -y install kubelet kubeadm kubectl --disableexcludes=kubernetes
Kubernetes                                      366  B/s | 454  B     00:01
Kubernetes                                       13 kB/s | 1.8 kB     00:00
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xBA07F4FB:
 Userid     : "Google Cloud Packages Automat ..........
Installed:
  kubeadm-1.19.4-0.x86_64
  kubectl-1.19.4-0.x86_64
  kubelet-1.19.4-0.x86_64
  socat-1.7.3.3-2.el8.x86_64
  conntrack-tools-1.4.4-10.el8.x86_64
  libnetfilter_cthelper-1.0.0-15.el8.x86_64
  libnetfilter_cttimeout-1.0.0-11.el8.x86_64
  libnetfilter_queue-1.0.2-11.el8.x86_64
  cri-tools-1.13.0-0.x86_64
  kubernetes-cni-0.8.7-0.x86_64
Complete!
- Enable and start the kubelet service on the master node:
[root@swmanager ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
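At this point kubelet will keep restarting in a loop until kubeadm init generates its configuration; this is expected and can be observed with:

systemctl status kubelet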
- Check that the Kubernetes client is installed:
[root@swmanager ~]# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
- Ensure that kubelet is configured to use the systemd cgroup driver:
[root@swmanager ~]# cat <<EOF > /etc/sysconfig/kubelet
> KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
> EOF
- Initialize the Kubernetes cluster on the master node:
[root@swmanager ~]# kubeadm init --pod-network-cidr=10.96.0.0/16 --apiserver-advertise-address=192.168.78.3
W1119 09:08:38.536223    1905 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local swmanager.local] and IPs [10.96.0.1 192.168.78.3]
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.78.3:6443 --token sfzbf4.gqh1xwwjqhkznwv5 \
    --discovery-token-ca-cert-hash sha256:25432bd4b786d54887e3078bfe0e3db7cf88199bf75d4702b6dc618e20a1dfe2
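The join token printed above is only valid for 24 hours by default. If it expires or the output is lost, a fresh join command can be generated on the master at any time:

kubeadm token create --print-join-command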
- Set up the kubeconfig file with the correct permissions (master node):
[root@swmanager ~]# mkdir -p $HOME/.kube
[root@swmanager ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@swmanager ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
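The cluster also needs a Pod network add-on before the nodes become Ready; the kube-flannel pods in the later output indicate that Flannel was used here. A typical way to deploy it at the time, assuming the upstream manifest URL:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note that Flannel's default manifest expects the pod CIDR 10.244.0.0/16 (which matches the Pod IP seen in the final test), so pairing it with --pod-network-cidr=10.244.0.0/16 at init time is the usual combination.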
- Join all worker nodes to the Kubernetes cluster:
[root@swworker2 ~]# kubeadm join 192.168.78.3:6443 --token sfzbf4.gqh1xwwjqhkznwv5 --discovery-token-ca-cert-hash sha256:25432bd4b786d54887e3078bfe0e3db7cf88199bf75d4702b6dc618e20a1dfe2
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "swworker2" could not be reached
        [WARNING Hostname]: hostname "swworker2": lookup swworker2 on 83.139.103.3:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- From the master, run the following command to check the status of the cluster (all nodes should be in Ready status):
[root@swmanager ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
swmanager.local   Ready    master   11m     v1.19.4
swworker1.local   Ready    <none>   38s     v1.19.4
swworker2         Ready    <none>   5m50s   v1.19.4
- Check the status of the system pods on the master node (all should be in Running state):
[root@swmanager ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-k2xjv                   1/1     Running   0          3m18s
coredns-f9fd979d6-xjzzr                   1/1     Running   0          3m18s
etcd-swmanager.local                      1/1     Running   0          3m28s
kube-apiserver-swmanager.local            1/1     Running   0          3m28s
kube-controller-manager-swmanager.local   1/1     Running   0          3m28s
kube-flannel-ds-82s4h                     1/1     Running   0          106s
kube-flannel-ds-vkt7k                     1/1     Running   0          55s
kube-proxy-pz2gh                          1/1     Running   0          3m18s
kube-proxy-sfljr                          1/1     Running   0          55s
kube-scheduler-swmanager.local            1/1     Running   0          3m28s
- Run one image to test that Pods can be scheduled:
[root@swmanager ~]# kubectl run nginx --image=nginx
pod/nginx created
- The Pod should be in Running state:
[root@swmanager ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          80s   10.244.1.3   swworker1.local   <none>           <none>
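To verify networking end to end, you can optionally expose the test Pod and reach it from outside the cluster (a quick sketch; the NodePort value is assigned by the cluster, so substitute the one kubectl reports):

kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx                 # note the assigned NodePort
curl http://192.168.78.5:<nodeport>   # replace <nodeport> with the value above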
Congratulations, you have successfully installed a three-node Kubernetes cluster!