[k8s practice] Upgrading the Control Plane with kubeadm


Current State

The existing Kubernetes cluster is running version 1.18.8.

Goals

  1. Upgrade all Kubernetes control-plane and node components on the master node only, to version 1.19.0.
  2. Also upgrade kubelet and kubectl on the master node.
  3. Make sure to drain the master node before the upgrade and uncordon it afterwards.
  4. Do not upgrade the worker nodes, etcd, the container manager, CNI plugins, the DNS service, or any other add-ons.
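Given those goals, the whole procedure this post walks through can be sketched as one script. This is a sketch under the post's assumptions (node name vb-n1, target version 1.19.0, a Debian/Ubuntu host using apt), not a drop-in tool; on a real cluster, run each phase interactively and check the output:

```shell
#!/usr/bin/env bash
# Sketch of the control-plane upgrade flow described below.
# Assumes: node vb-n1, target v1.19.0, apt-based host. Invoke manually.
upgrade_control_plane() {
  set -euo pipefail
  # 1. Upgrade kubeadm itself
  apt-mark unhold kubeadm
  apt-get update && apt-get install -y kubeadm=1.19.0-00
  apt-mark hold kubeadm
  # 2. Drain the control-plane node
  kubectl drain vb-n1 --ignore-daemonsets
  # 3. Plan, then apply, leaving etcd alone
  kubeadm upgrade plan
  kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
  # 4. Upgrade kubelet and kubectl, then restart kubelet
  apt-mark unhold kubelet kubectl
  apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
  apt-mark hold kubelet kubectl
  systemctl daemon-reload
  systemctl restart kubelet
  # 5. Make the node schedulable again
  kubectl uncordon vb-n1
}
```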

Pre-upgrade Checks

Before upgrading, let's confirm the version of each component.

# Check kubeadm
baiyutang@vb-n1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

# Check kubectl
baiyutang@vb-n1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:47:43Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

# Check kubelet
baiyutang@vb-n1:~$ kubelet --version
Kubernetes v1.18.8

# Check kube-apiserver
baiyutang@vb-n1:~$ kubectl exec -it kube-apiserver-vb-n1 -n kube-system -- kube-apiserver --version
Kubernetes v1.18.9

# Check kube-controller-manager
baiyutang@vb-n1:~$ kubectl exec -it kube-controller-manager-vb-n1  -n kube-system -- kube-controller-manager  --version
Kubernetes v1.18.9

# Check kube-scheduler
baiyutang@vb-n1:~$ kubectl exec -it kube-scheduler-vb-n1  -n kube-system -- kube-scheduler  --version
I0925 13:32:59.717135      56 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0925 13:32:59.718611      56 registry.go:150] Registering EvenPodsSpread predicate and priority function
Kubernetes v1.18.9

# Check kube-proxy (checking the image version in its DaemonSet is more reliable)
baiyutang@vb-n1:~$ kubectl exec -it kube-proxy-b488d  -n kube-system -- kube-proxy  --version
Kubernetes v1.18.9

# Check the etcd service
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1  -n kube-system -- etcd  --version
etcd Version: 3.4.3
Git SHA: 3cf2f69b5
Go Version: go1.12.12
Go OS/Arch: linux/amd64

baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1  -n kube-system -- etcdctl  version
etcdctl version: 3.4.3
API version: 3.4

# Check the DNS service (note: `kubectl describe` capitalizes the field as "Image:", so the lowercase grep below matches nothing)
baiyutang@vb-n1:~$ kubectl describe deployments.apps  -l k8s-app=kube-dns -n kube-system  | grep image
baiyutang@vb-n1:~$ kubectl get deployments.apps  -l k8s-app=kube-dns -n kube-system -o yaml | grep image
                  f:image: {}
                  f:imagePullPolicy: {}
          image: registry.aliyuncs.com/google_containers/coredns:1.6.7
          imagePullPolicy: IfNotPresent
          
# Check the CNI plugin (flannel)
baiyutang@vb-n1:~$ kubectl get ds kube-flannel-ds-amd64 -n kube-system -o yaml | grep image
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"app":"flannel","tier":"node"},"name":"kube-flannel-ds-amd64","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"app":"flannel"}},"template":{"metadata":{"labels":{"app":"flannel","tier":"node"}},"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"beta.kubernetes.io/os","operator":"In","values":["linux"]},{"key":"beta.kubernetes.io/arch","operator":"In","values":["amd64"]}]}]}}},"containers":[{"args":["--ip-masq","--kube-subnet-mgr"],"command":["/opt/bin/flanneld"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/coreos/flannel:v0.12.0-amd64","name":"kube-flannel","resources":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"100m","memory":"50Mi"}},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":false},"volumeMounts":[{"mountPath":"/run/flannel","name":"run"},{"mountPath":"/etc/kube-flannel/","name":"flannel-cfg"}]}],"hostNetwork":true,"initContainers":[{"args":["-f","/etc/kube-flannel/cni-conf.json","/etc/cni/net.d/10-flannel.conflist"],"command":["cp"],"image":"quay.io/coreos/flannel:v0.12.0-amd64","name":"install-cni","volumeMounts":[{"mountPath":"/etc/cni/net.d","name":"cni"},{"mountPath":"/etc/kube-flannel/","name":"flannel-cfg"}]}],"serviceAccountName":"flannel","tolerations":[{"effect":"NoSchedule","operator":"Exists"}],"volumes":[{"hostPath":{"path":"/run/flannel"},"name":"run"},{"hostPath":{"path":"/etc/cni/net.d"},"name":"cni"},{"configMap":{"name":"kube-flannel-cfg"},"name":"flannel-cfg"}]}}}}
                f:image: {}
                f:imagePullPolicy: {}
                f:image: {}
                f:imagePullPolicy: {}
        image: quay.io/coreos/flannel:v0.12.0-amd64
        imagePullPolicy: IfNotPresent
        image: quay.io/coreos/flannel:v0.12.0-amd64
        imagePullPolicy: IfNotPresent
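Instead of exec-ing into each pod one by one, a single jsonpath query can list every container image in kube-system at once. The helper below is my own shorthand, not from the original post; it only assumes kubectl access to the cluster:

```shell
# Print "<pod-name><TAB><images>" for every pod in the kube-system namespace.
list_kube_system_images() {
  kubectl get pods -n kube-system -o \
    jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
}
```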



Steps

Check whether kubeadm can be upgraded

sudo -i

apt update

apt-cache policy kubeadm

This prints the list of kubeadm package versions available for upgrade.
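The official kubeadm upgrade docs use `apt-cache madison` for the same purpose; either command works, as long as you confirm a 1.19.0-00 package exists in the configured repos. A small helper, assuming a Debian/Ubuntu host:

```shell
# List kubeadm package versions available from the configured apt repositories.
find_kubeadm_versions() {
  apt-cache madison kubeadm
}
```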

Upgrade kubeadm

sudo -i

apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.19.0-00 && \
apt-mark hold kubeadm

# Afterwards, verify the upgrade succeeded
kubeadm version

Drain the control-plane node

kubectl drain vb-n1 --ignore-daemonsets
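It's worth confirming the drain took effect before proceeding: after a successful drain, the node's STATUS column gains SchedulingDisabled. A quick check (the helper name is mine):

```shell
# Succeeds and prints a message only if vb-n1 is cordoned.
verify_drained() {
  kubectl get node vb-n1 --no-headers | grep -q SchedulingDisabled \
    && echo "vb-n1 is cordoned"
}
```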


Check with kubeadm whether the cluster can be upgraded

kubeadm upgrade plan

Note the "You can now apply the upgrade by executing the following command:" message in the output.

Apply the upgrade

kubeadm upgrade apply v1.19.0  --etcd-upgrade=false # do not upgrade etcd
root@vb-n1:~# kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
[upgrade/versions] Cluster version: v1.18.9
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-controller-manager-vb-n1 hash: a7092f0e72ccf0dde097448255396198
Static pod: kube-scheduler-vb-n1 hash: 1d49f6bea141a03c33715369a619d2a9
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests007582360"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-25-21-47-44/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-apiserver-vb-n1 hash: 1c78c880abd4c7301a3e15e0b8ba0739
Static pod: kube-apiserver-vb-n1 hash: 79e1af63686084ebb219fefaaf989593
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-25-21-47-44/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-vb-n1 hash: a7092f0e72ccf0dde097448255396198
Static pod: kube-controller-manager-vb-n1 hash: e300bc107fc98f68d284e0aa8a71380b
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-25-21-47-44/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-vb-n1 hash: 1d49f6bea141a03c33715369a619d2a9
Static pod: kube-scheduler-vb-n1 hash: 340ea85a0f34a4df64d62b1a784833ae
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
W0925 21:48:01.019661   19731 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
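The output above also shows where the old manifests were backed up (under /etc/kubernetes/tmp/). To double-check that the static pod manifests now reference the new images, you can grep them directly on the control-plane node; this sketch assumes the default kubeadm manifest path:

```shell
# Show the image line from each static pod manifest written by kubeadm.
check_manifest_images() {
  grep -h 'image:' /etc/kubernetes/manifests/*.yaml
}
```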

Note the final messages:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade kubelet and kubectl

apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00 && \
apt-mark hold kubelet kubectl

# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
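A quick sanity check that kubelet restarted cleanly and now reports the new version (the helper name is mine):

```shell
# Fails if kubelet is not active; otherwise prints its version.
verify_kubelet() {
  systemctl is-active --quiet kubelet && kubelet --version
}
```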

Make the master node schedulable again

kubectl uncordon vb-n1
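Then confirm the node is schedulable again: SchedulingDisabled should be gone, and once kubelet has re-registered, the VERSION column should read v1.19.0. A sketch of that check:

```shell
# Fails if vb-n1 is still cordoned; otherwise shows its status and version.
verify_schedulable() {
  if kubectl get node vb-n1 --no-headers | grep -q SchedulingDisabled; then
    echo "vb-n1 is still cordoned" >&2
    return 1
  fi
  kubectl get node vb-n1   # VERSION should now read v1.19.0
}
```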

Verify component versions

# Check kubeadm: upgraded successfully
baiyutang@vb-n1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:28:32Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

# Check kubectl: upgraded successfully
baiyutang@vb-n1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

# Check kubelet: upgraded successfully
baiyutang@vb-n1:~$ kubelet --version
Kubernetes v1.19.0

# Check kube-controller-manager: upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-controller-manager-vb-n1  -n kube-system -- kube-controller-manager  --version
Kubernetes v1.19.0

# Check kube-apiserver: upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-apiserver-vb-n1 -n kube-system -- kube-apiserver --version
Kubernetes v1.19.0

# Check kube-scheduler: upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-scheduler-vb-n1  -n kube-system -- kube-scheduler  --version
I0925 14:24:10.422385      13 registry.go:173] Registering SelectorSpread plugin
I0925 14:24:10.422439      13 registry.go:173] Registering SelectorSpread plugin
Kubernetes v1.19.0

# Check kube-proxy: the old pod was replaced during the upgrade, so check the DaemonSet image instead; upgraded successfully
baiyutang@vb-n1:~$ kubectl exec -it kube-proxy-b488d  -n kube-system -- kube-proxy  --version
Error from server (NotFound): pods "kube-proxy-b488d" not found
baiyutang@vb-n1:~$ kubectl get ds kube-proxy -n kube-system -o yaml | grep image
                f:image: {}
                f:imagePullPolicy: {}
        image: registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
        imagePullPolicy: IfNotPresent

# Check etcd: not upgraded, as required
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1  -n kube-system -- etcd  --version
etcd Version: 3.4.3
Git SHA: 3cf2f69b5
Go Version: go1.12.12
Go OS/Arch: linux/amd64
baiyutang@vb-n1:~$
baiyutang@vb-n1:~$ kubectl exec -it etcd-vb-n1  -n kube-system -- etcdctl  version
etcdctl version: 3.4.3
API version: 3.4

# Check the DNS service: not upgraded, as required
baiyutang@vb-n1:~$ kubectl get deployments.apps  -l k8s-app=kube-dns -n kube-system -o yaml | grep image
                  f:image: {}
                  f:imagePullPolicy: {}
          image: registry.aliyuncs.com/google_containers/coredns:1.6.7
          imagePullPolicy: IfNotPresent
          
# Check the CNI plugin (flannel): not upgraded, as required
baiyutang@vb-n1:~$ kubectl get ds kube-flannel-ds-amd64 -n kube-system -o yaml | grep image
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"app":"flannel","tier":"node"},"name":"kube-flannel-ds-amd64","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"app":"flannel"}},"template":{"metadata":{"labels":{"app":"flannel","tier":"node"}},"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"beta.kubernetes.io/os","operator":"In","values":["linux"]},{"key":"beta.kubernetes.io/arch","operator":"In","values":["amd64"]}]}]}}},"containers":[{"args":["--ip-masq","--kube-subnet-mgr"],"command":["/opt/bin/flanneld"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/coreos/flannel:v0.12.0-amd64","name":"kube-flannel","resources":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"100m","memory":"50Mi"}},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":false},"volumeMounts":[{"mountPath":"/run/flannel","name":"run"},{"mountPath":"/etc/kube-flannel/","name":"flannel-cfg"}]}],"hostNetwork":true,"initContainers":[{"args":["-f","/etc/kube-flannel/cni-conf.json","/etc/cni/net.d/10-flannel.conflist"],"command":["cp"],"image":"quay.io/coreos/flannel:v0.12.0-amd64","name":"install-cni","volumeMounts":[{"mountPath":"/etc/cni/net.d","name":"cni"},{"mountPath":"/etc/kube-flannel/","name":"flannel-cfg"}]}],"serviceAccountName":"flannel","tolerations":[{"effect":"NoSchedule","operator":"Exists"}],"volumes":[{"hostPath":{"path":"/run/flannel"},"name":"run"},{"hostPath":{"path":"/etc/cni/net.d"},"name":"cni"},{"configMap":{"name":"kube-flannel-cfg"},"name":"flannel-cfg"}]}}}}
                f:image: {}
                f:imagePullPolicy: {}
                f:image: {}
                f:imagePullPolicy: {}
        image: quay.io/coreos/flannel:v0.12.0-amd64
        imagePullPolicy: IfNotPresent
        image: quay.io/coreos/flannel:v0.12.0-amd64
        imagePullPolicy: IfNotPresent



Summary

The easiest thing to miss is passing --etcd-upgrade=false when applying the upgrade, so that etcd is left alone: kubeadm upgrade apply v1.19.0 --etcd-upgrade=false.
It's best to walk through the procedure hands-on at least once; the commands look simple on paper, but it's still easy to slip up.
