Kubernetes Deep Dive Series (10): Building a K8S Cluster on the Kunpeng Platform

Learning Kubernetes can easily leave you scratching your head; the principles behind it are hard to grasp.

This Kubernetes Deep Dive series consists of notes and summaries from studying the《深入剖析Kubernetes》course, recording the learning process and passing the knowledge on.

Master Node Setup

Huawei Cloud Kunpeng ECS: ecs-k8s-master

2vCPUs | 4GB | kc1.large.2

CentOS 7.6 64bit with ARM

Environment Configuration

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Temporarily disable swap
swapoff -a

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
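
The settings written above do not take effect until they are loaded. A minimal follow-up, assuming the br_netfilter module is not already loaded on the image:

# Load the bridge netfilter module and apply all sysctl settings
modprobe br_netfilter
sysctl --system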

Install the Time Synchronization Service

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service

# Check chronyd status
systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-03-22 14:32:33 CST; 11min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 611 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─611 /usr/sbin/chronyd

Mar 22 14:32:33 localhost systemd[1]: Starting NTP client/server...
Mar 22 14:32:33 localhost chronyd[611]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +... +DEBUG)
Mar 22 14:32:33 localhost chronyd[611]: Frequency -3.712 +/- 0.085 ppm read from /var/lib/chrony/drift
Mar 22 14:32:33 localhost systemd[1]: Started NTP client/server.
Mar 22 14:32:38 k8s-master chronyd[611]: Selected source 100.125.0.251
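
In addition to systemctl status, chrony's own client can be used to confirm that time sources have been selected, for example:

# List the NTP sources chrony is using and show the current sync state
chronyc sources -v
chronyc tracking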

Install Dependencies

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

Install Docker CE

# Install prerequisites
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
export releasever=7
export basearch=aarch64

# Install Docker CE
sudo yum makecache
sudo yum -y install docker-ce-18.09.9-3.el7.aarch64

# Configure Docker CE
systemctl start docker
systemctl enable docker.service
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["替换为华为云SWR加速器地址或第三方加速器"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
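
After the restart, it is worth confirming that Docker is up and that the cgroup driver really is systemd, matching the kubelet. A quick check, for example:

# Verify the Docker installation and the configured cgroup driver
docker version
docker info | grep -i "cgroup driver"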

Install kubeadm and Related Components

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://repo.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://repo.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://repo.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF

setenforce 0

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes 
systemctl enable kubelet
systemctl start kubelet
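
The command above installs the latest packages in the repo. If you prefer the packages to match the cluster version used later in this walkthrough (v1.19.0), they can be pinned instead; a sketch, noting that the exact version strings available depend on the mirror:

# Optionally pin kubelet/kubeadm/kubectl to the target version and verify
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0 --disableexcludes=kubernetes
kubeadm version -o short
kubectl version --client --short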

Deploy the Kubernetes Master

Initialize the control plane with Kubernetes v1.19.0:
# kubeadm init --apiserver-advertise-address=192.168.0.25 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16 
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.25:6443 --token hc2aba.rk21jnnyuyxb8mvj \
    --discovery-token-ca-cert-hash sha256:f9b9865f2d0607ac4ac0041f6f37905cd651d21adda9b78151cc039d9afad279

Here --apiserver-advertise-address=192.168.0.25 is the master node's private IP.
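
Before running kubeadm init (or when the image pull step is slow), the control-plane images can also be pre-pulled from the same repository; for example:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.0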

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
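
If you are working as root, kubeadm also allows pointing kubectl at the admin kubeconfig directly instead of copying it:

# Alternative for the root user
export KUBECONFIG=/etc/kubernetes/admin.conf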

Check the Nodes

# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   4m6s   v1.19.0
# kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
coredns-7c855c8477-4vdvq             0/1     Pending   0          5m58s   <none>         <none>       <none>           <none>
coredns-7c855c8477-xjbcq             0/1     Pending   0          5m58s   <none>         <none>       <none>           <none>
etcd-k8s-master                      1/1     Running   0          5m2s    192.168.0.25   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          4m58s   192.168.0.25   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          5m22s   192.168.0.25   k8s-master   <none>           <none>
kube-proxy-hx886                     1/1     Running   0          5m58s   192.168.0.25   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          5m12s   192.168.0.25   k8s-master   <none>           <none>

As you can see, the Pods that depend on the network, such as CoreDNS, are all in the Pending state, meaning they have failed to be scheduled. This is expected: the Master node's network is not ready yet.
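
To confirm that the missing network is indeed the blocker, the node's conditions spell it out; for example:

# The Ready condition should report that the CNI plugin is not yet initialized
kubectl describe node k8s-master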

Deploy the Container Network Plugin

# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
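
Weave runs as a DaemonSet, so you can wait for it to finish rolling out before re-checking the node; for example:

# Wait for the weave-net DaemonSet to become ready, then watch the system Pods
kubectl -n kube-system rollout status daemonset/weave-net
kubectl get pods -n kube-system -w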

Verify the Master Node

# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   13m   v1.19.0

# kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
coredns-7c855c8477-4vdvq             1/1     Running   0          13m    10.32.0.2      k8s-master   <none>           <none>
coredns-7c855c8477-xjbcq             1/1     Running   0          13m    10.32.0.3      k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          12m    192.168.0.25   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          12m    192.168.0.25   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          12m    192.168.0.25   k8s-master   <none>           <none>
kube-proxy-hx886                     1/1     Running   0          13m    192.168.0.25   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          12m    192.168.0.25   k8s-master   <none>           <none>
weave-net-rh766                      2/2     Running   1          118s   192.168.0.25   k8s-master   <none>           <none>

As you can see, all of the system Pods have started successfully, and the newly deployed Weave network plugin has created a Pod named weave-net-xxxxx in the kube-system namespace. Generally speaking, such Pods are the container network plugin's control component on each node.

Kubernetes supports container network plugins through a generic interface called CNI, which is also the de facto standard for container networking today. Every open-source container networking project on the market, such as Flannel, Calico, Canal, and Romana, can plug into Kubernetes via CNI, and they are all deployed in a similar one-command fashion.
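
To illustrate how interchangeable these plugins are: Flannel is deployed the same way with a single manifest. The URL below is the one commonly used at the time of writing; check the Flannel project for its current location. Its default Pod CIDR is 10.244.0.0/16, matching the --pod-network-cidr used above.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml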


Worker Node Setup

Huawei Cloud Kunpeng ECS: ecs-k8s-worker

2vCPUs | 4GB | kc1.large.2

CentOS 7.6 64bit with ARM


Follow the same configuration as in the earlier Master node section.

Environment Configuration

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Temporarily disable swap
swapoff -a

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Install the Time Synchronization Service

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service

# Check chronyd status
systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-03-22 14:32:33 CST; 11min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 611 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─611 /usr/sbin/chronyd

Mar 22 14:32:33 localhost systemd[1]: Starting NTP client/server...
Mar 22 14:32:33 localhost chronyd[611]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +... +DEBUG)
Mar 22 14:32:33 localhost chronyd[611]: Frequency -3.712 +/- 0.085 ppm read from /var/lib/chrony/drift
Mar 22 14:32:33 localhost systemd[1]: Started NTP client/server.
Mar 22 14:32:38 k8s-master chronyd[611]: Selected source 100.125.0.251

Install Dependencies

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

Install Docker CE

# Install prerequisites
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
export releasever=7
export basearch=aarch64

# Install Docker CE
sudo yum makecache
sudo yum -y install docker-ce-18.09.9-3.el7.aarch64

# Configure Docker CE
systemctl start docker
systemctl enable docker.service
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["替换为华为云SWR加速器地址或第三方加速器"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Install kubeadm and Related Components

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://repo.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://repo.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://repo.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes 
systemctl enable kubelet
systemctl start kubelet

Join the Master

Run the kubeadm join command printed by the master after it initialized successfully:

kubeadm join 192.168.0.25:6443 --token hc2aba.rk21jnnyuyxb8mvj --discovery-token-ca-cert-hash sha256:f9b9865f2d0607ac4ac0041f6f37905cd651d21adda9b78151cc039d9afad279
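
By default the join token is only valid for 24 hours. If it has expired or the original command has been lost, a fresh one can be generated on the master, and the join can then be verified from the master; for example:

# On the master: print a fresh join command
kubeadm token create --print-join-command

# On the master: confirm the worker has joined and eventually turns Ready
kubectl get nodes -o wide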
(End)