This article gives an overview of how to upgrade a Kubernetes Cluster to the next version, making sure to pick up the important feature improvements we may want to take advantage of in the latest stable Kubernetes release.
1. Upgrading a Kubernetes Cluster with kubeadm
For Kubernetes Clusters that were bootstrapped with kubeadm, we can use kubeadm to upgrade to a new version simply and easily.
Note: In the following example we will upgrade a Kubernetes Cluster from version 1.16.8 to 1.17.4 on CentOS 7. The kubeadm tool only lets us upgrade one minor version at a time, or to a patch release within the same minor version; for example, from 1.16.x we can go up to 1.17.x. So if our Kubernetes Cluster is running 1.14.x and needs to reach 1.17.x, we must first upgrade to 1.15.x, then to 1.16.x, and finally to 1.17.x.
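The minor-by-minor rule can be sketched as a small shell helper. This function is our own illustration, not a kubeadm command; it simply prints the intermediate 1.x minor versions an upgrade has to pass through:

```shell
#!/usr/bin/env bash
# upgrade_path: print each intermediate minor version required by kubeadm's
# one-minor-at-a-time rule (illustrative helper only, not part of kubeadm).
upgrade_path() {
  local from_minor="${1#1.}" to_minor="${2#1.}"
  local path=""
  for ((m = from_minor + 1; m <= to_minor; m++)); do
    path+="1.${m} "
  done
  echo "${path% }"
}

upgrade_path 1.14 1.17   # prints: 1.15 1.16 1.17
```

For the 1.14.x cluster mentioned above, this prints the same three-step path described in the note.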
1.1. Checking the current version
Setup: VM k8s-master01 (10.124.11.17) acts as the Master Node; VMs k8s-worker01 (10.124.11.28) and k8s-worker02 (10.124.11.38) act as Worker Nodes.
First, we check the current version of the Kubernetes Cluster we plan to upgrade:
[root@k8s-master01 ~]# kubectl version --short
Client Version: v1.16.8
Server Version: v1.16.8
Check the version of each node in detail (including the master node):
[root@k8s-master01 ~]# kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
k8s-master01.novalocal   Ready    master   326d   v1.16.8
k8s-worker01.novalocal   Ready    <none>   326d   v1.16.8
k8s-worker02.novalocal   Ready    <none>   326d   v1.16.8
We can also check the version of individual components such as kubelet or kube-controller-manager:
[root@k8s-master01 ~]# kubectl get nodes -o jsonpath="{..kubeletVersion}" | tr " " "\n"
v1.16.8
v1.16.8
v1.16.8
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                             READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-cnwdv                         1/1     Running   1          4h10m
coredns-5644d7b6d9-mpxzn                         1/1     Running   1          3h59m
etcd-k8s-master01.novalocal                      1/1     Running   1          4h5m
kube-apiserver-k8s-master01.novalocal            1/1     Running   1          4h5m
kube-controller-manager-k8s-master01.novalocal   1/1     Running   1          4h5m
kube-flannel-ds-amd64-clwzk                      1/1     Running   2          3h58m
kube-flannel-ds-amd64-ctxpj                      1/1     Running   2          3h58m
kube-flannel-ds-amd64-jqtqb                      1/1     Running   0          3h58m
kube-proxy-28vlc                                 1/1     Running   1          4h9m
kube-proxy-9lg4k                                 1/1     Running   2          4h10m
kube-proxy-pw7zz                                 1/1     Running   1          4h10m
kube-scheduler-k8s-master01.novalocal            1/1     Running   1          4h5m
[root@k8s-master01 ~]# kubectl get pod kube-controller-manager-k8s-master01.novalocal -o jsonpath="{..image}" -n kube-system
k8s.gcr.io/kube-controller-manager:v1.16.8 k8s.gcr.io/kube-controller-manager:v1.16.8
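Checks like these can also be scripted. A minimal sketch (the function name is our own; it assumes the space-separated list that `kubectl get nodes -o jsonpath="{..kubeletVersion}"` prints):

```shell
# all_same_version: succeed (exit 0) only when every kubelet version in the
# space-separated list matches the first one.
all_same_version() {
  local first rest v
  read -r first rest <<< "$1"
  for v in $rest; do
    [ "$v" = "$first" ] || return 1
  done
  return 0
}

# Example with the versions reported above:
all_same_version "v1.16.8 v1.16.8 v1.16.8" && echo "cluster is uniform"
```

In a real script you would feed it `$(kubectl get nodes -o jsonpath="{..kubeletVersion}")` instead of a literal string.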
1.2. Upgrading the kubeadm tool
We need to upgrade kubeadm to the same version we want to bring the cluster to, in this example 1.17.4. We use yum to upgrade kubeadm from the Kubernetes repo as follows:
[root@k8s-master01 ~]# yum update kubeadm-1.17.4-0 --disableexcludes=kubernetes -y
Verify that kubeadm has been upgraded:
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
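If you want this check in a script, the Go struct printed above can be parsed with sed. This is our own illustration, using an abridged sample of the output; it assumes `kubeadm version` keeps the `GitVersion:"..."` format shown above:

```shell
# Sample (abridged) default `kubeadm version` output:
line='kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitTreeState:"clean"}'

# Pull out the value of the GitVersion field.
gitversion=$(printf '%s\n' "$line" | sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p')
echo "$gitversion"   # prints: v1.17.4
```

You could then compare `$gitversion` against the target version before proceeding with the upgrade.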
1.3. Upgrading the Master Node
In this part we will upgrade the components API Server, Controller Manager, Scheduler, Kube Proxy, CoreDNS, etcd, and kubelet.
We start by running `kubeadm upgrade plan` to check the cluster and plan the upgrade:

[root@k8s-master01 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.8
[upgrade/versions] kubeadm version: v1.17.4
[upgrade/versions] Latest stable version: v1.17.4
[upgrade/versions] Latest version in the v1.16 series: v1.16.8

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.16.8   v1.17.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.8   v1.17.4
Controller Manager   v1.16.8   v1.17.4
Scheduler            v1.16.8   v1.17.4
Kube Proxy           v1.16.8   v1.17.4
CoreDNS              1.6.2     1.6.5
Etcd                 3.3.15    3.4.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.17.4

_____________________________________________________________________
We then use `kubeadm upgrade apply` to confirm the upgrade to the desired version, 1.17.4:

[root@k8s-master01 ~]# kubeadm upgrade apply v1.17.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.4"
[upgrade/versions] Cluster version: v1.16.8
[upgrade/versions] kubeadm version: v1.17.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.4"...
Static pod: kube-apiserver-k8s-master01.novalocal hash: 599574cf0644753d787fcea14ce931f6
Static pod: kube-controller-manager-k8s-master01.novalocal hash: ebb4359c429efb20e6f203f7c468646e
Static pod: kube-scheduler-k8s-master01.novalocal hash: e9ed1342b049d4763a8d5b241a18b555
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master01.novalocal hash: 1ba3659cde8ea6e7187582e92bc799d6
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-21-19-45-23/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master01.novalocal hash: 1ba3659cde8ea6e7187582e92bc799d6
Static pod: etcd-k8s-master01.novalocal hash: 25d28c1e50e7ee59aaec5f06cbb16de4
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests283160290"
W0321 19:45:36.566642    3555 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-21-19-45-23/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master01.novalocal hash: 599574cf0644753d787fcea14ce931f6
Static pod: kube-apiserver-k8s-master01.novalocal hash: 335f7c7d725a1cd1573c94703e12dfd0
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-21-19-45-23/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master01.novalocal hash: ebb4359c429efb20e6f203f7c468646e
Static pod: kube-controller-manager-k8s-master01.novalocal hash: a9468b4e77d8dc21b1e255f9579e12bf
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-21-19-45-23/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master01.novalocal hash: e9ed1342b049d4763a8d5b241a18b555
Static pod: kube-scheduler-k8s-master01.novalocal hash: d4d13becd444ee4a984c5c68b2a69ffd
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Check that the Pods of the components we just upgraded are "Running":
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-5898z                         1/1     Running   0          58s
coredns-6955765f44-gdfwc                         1/1     Running   0          58s
etcd-k8s-master01.novalocal                      1/1     Running   0          110s
kube-apiserver-k8s-master01.novalocal            1/1     Running   0          104s
kube-controller-manager-k8s-master01.novalocal   1/1     Running   0          92s
kube-flannel-ds-amd64-clwzk                      1/1     Running   2          4h4m
kube-flannel-ds-amd64-ctxpj                      1/1     Running   2          4h4m
kube-flannel-ds-amd64-jqtqb                      1/1     Running   0          4h4m
kube-proxy-7xwbz                                 1/1     Running   0          24s
kube-proxy-hdnw6                                 1/1     Running   0          46s
kube-proxy-rrlkm                                 0/1     Pending   0          0s
kube-scheduler-k8s-master01.novalocal            1/1     Running   0          88s
At this point the Master Node components have all been upgraded to 1.17.4 except kubelet:
[root@k8s-master01 ~]# kubectl get nodes -o jsonpath="{..kubeletVersion}" | tr " " "\n"
v1.16.8
v1.16.8
v1.16.8
[root@k8s-master01 ~]# kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
k8s-master01.novalocal   Ready    master   326d   v1.16.8
k8s-worker01.novalocal   Ready    <none>   326d   v1.16.8
k8s-worker02.novalocal   Ready    <none>   326d   v1.16.8
[root@k8s-master01 ~]# kubectl version --short
Client Version: v1.16.8
Server Version: v1.17.4
Next, we upgrade kubelet on the Master Node:
[root@k8s-master01 ~]# yum update kubelet-1.17.4-0 --disableexcludes=kubernetes -y
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
Check the Master Node version again; this time it shows 1.17.4:
[root@k8s-master01 ~]# kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
k8s-master01.novalocal   Ready    master   326d   v1.17.4
k8s-worker01.novalocal   Ready    <none>   326d   v1.16.8
k8s-worker02.novalocal   Ready    <none>   326d   v1.16.8
1.4. Upgrading the Worker Nodes
We now upgrade kubelet on the Worker Nodes (k8s-worker01 and k8s-worker02).
On k8s-worker01:
[root@k8s-worker01 ~]# yum update kubelet-1.17.4-0 --disableexcludes=kubernetes -y
[root@k8s-worker01 ~]# systemctl daemon-reload
[root@k8s-worker01 ~]# systemctl restart kubelet
On k8s-worker02:
[root@k8s-worker02 ~]# yum update kubelet-1.17.4-0 --disableexcludes=kubernetes -y
[root@k8s-worker02 ~]# systemctl daemon-reload
[root@k8s-worker02 ~]# systemctl restart kubelet
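For production clusters, the upstream kubeadm documentation additionally recommends draining each worker before upgrading its kubelet and uncordoning it afterwards. The wrapper below is our own sketch of that flow, not a kubeadm feature: with DRY_RUN=1 (the default here) it only echoes the commands, and in practice the kubectl commands run on the Master Node while the yum/systemctl commands run on the worker itself.

```shell
# Sketch of a safer per-worker upgrade (illustrative helper, not kubeadm).
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # print the command instead of executing it
  else
    "$@"
  fi
}

upgrade_worker() {
  local node="$1"
  run kubectl drain "$node" --ignore-daemonsets   # evict workloads first
  run yum update kubelet-1.17.4-0 --disableexcludes=kubernetes -y
  run systemctl daemon-reload
  run systemctl restart kubelet
  run kubectl uncordon "$node"                    # allow scheduling again
}

upgrade_worker k8s-worker01.novalocal
```

Set DRY_RUN=0 (and split the steps between master and worker) to actually run the commands.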
Check the version of the two Worker Nodes again by running the following command on the Master Node:
[root@k8s-master01 ~]# kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
k8s-master01.novalocal   Ready    master   326d   v1.17.4
k8s-worker01.novalocal   Ready    <none>   326d   v1.17.4
k8s-worker02.novalocal   Ready    <none>   326d   v1.17.4
1.5. Upgrading kubectl
Finally, we upgrade the kubectl tool. In this example kubectl was installed on the Master Node via yum, so we also use yum to upgrade it; if kubectl was installed some other way, upgrade it with whatever method matches how it was installed:
[root@k8s-master01 ~]# yum update kubectl-1.17.4-0 --disableexcludes=kubernetes -y
Check the kubectl version again:
[root@k8s-master01 ~]# kubectl version --short
Client Version: v1.17.4
Server Version: v1.17.4
With that, we have finished upgrading every component of the Kubernetes Cluster to the desired version, 1.17.4 🤗.
2. Conclusion
There are many ways to upgrade a Kubernetes Cluster to a newer version. blogd.net has walked through the easiest and simplest one, using kubeadm, which is relatively straightforward and approachable for most people who are just getting started with Kubernetes 👍.