Upgrading Kubernetes to 1.15.0 with kubeadm

First thing at the office this morning, I saw in the daily news that Kubernetes 1.15.0 had been released with quite a few new features. For details, see the Kubernetes blog: https://kubernetes.io/blog/2019/06/19/kubernetes-1-15-release-announcement/

Updates worth highlighting:

  • kubeadm certificate management becomes more robust in 1.15: kubeadm can now seamlessly rotate all certificates before they expire (during an upgrade). See the kubeadm documentation for managing certificates, and the sketch after this list for a quick expiration check
  • The kubeadm config file API moves from v1beta1 to v1beta2 in 1.15
  • Support for Go modules in Kubernetes core
  • Continued preparation for cloud-provider extraction and code organization; cloud-provider code has been moved to kubernetes/legacy-cloud-providers to make later removal and external consumption easier
  • kubectl get and describe now work well with extensions
  • Nodes now support third-party monitoring plugins
  • A new alpha scheduling framework for building scheduler plugins
  • The ExecutionHook API, for triggering hook commands in containers, is now in alpha
  • Continued deprecation of the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; these will be removed entirely in 1.16
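
Since certificate rotation is one of the highlights, here is a quick way to inspect certificate lifetimes; a minimal sketch, assuming a default kubeadm 1.15 install where these alpha subcommands are available:

$ kubeadm alpha certs check-expiration    # list each control-plane certificate and its expiry date
$ kubeadm alpha certs renew all           # renew all control-plane certificates if needed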

With the release out, I couldn't wait to upgrade my own Kubernetes environment.

Check the cluster

Check which versions the cluster can be upgraded to, and whether the current cluster is upgradable:

kubeadm upgrade plan

Note that kubeadm, kubelet, and kubectl themselves need to be upgraded first.
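
Before installing, you can confirm that a 1.15.0 build is actually visible to yum (assuming the official Kubernetes yum repository is configured):

$ yum list --showduplicates kubeadm    # look for a 1.15.0 build among the candidates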

Upgrade kubelet, kubeadm, and kubectl

yum clean all    # if yum cannot find the 1.15.0 packages, clear the local yum cache first
yum install -y kubelet kubeadm kubectl

The other nodes need to run this as well.
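
If you would rather not jump to whatever version is latest, the packages can be pinned explicitly; a sketch, assuming the version naming used by the official Kubernetes yum repo:

$ yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0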

Pull the required images

  • kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: "v1.15.0"
...
imageRepository: registry.aliyuncs.com/google_containers

The kubeadm configuration file specifies the target version to upgrade to and the image repository to pull from.
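
To preview which images the config resolves to before pulling anything, kubeadm can print the list from the same file:

$ kubeadm config images list --config=kubeadm-config.yaml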

  • Pull the images
$ kubeadm config images pull --config=kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.3.1

The images were pulled successfully.
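
To confirm the images landed locally, a quick filter over the local image cache (assuming Docker is the container runtime here):

$ docker images | grep google_containers    # all seven images above should appear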

Upgrade the cluster components

kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.0"...
Static pod: kube-apiserver-k8s-11 hash: 2e138075197b77cbc857ed6c45d3e0a3
Static pod: kube-controller-manager-k8s-11 hash: d4e699449cae3b28f9f657d0eabfef0e
Static pod: kube-scheduler-k8s-11 hash: a29556bf1d34f898bf5d0ce3c15a5948
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests653407144"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-11 hash: 2e138075197b77cbc857ed6c45d3e0a3
Static pod: kube-apiserver-k8s-11 hash: a0b1f68dcbfbbb58b72942275ea6e8c8
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-11 hash: d4e699449cae3b28f9f657d0eabfef0e
Static pod: kube-controller-manager-k8s-11 hash: e421c8900f2987ad26251124112ccba8
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-11 hash: a29556bf1d34f898bf5d0ce3c15a5948
Static pod: kube-scheduler-k8s-11 hash: b778c0dffa2d3c4049df6a82b96ea2c4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

The output confirms the cluster was successfully upgraded to v1.15.0.

  • Restart kubelet
systemctl daemon-reload
systemctl restart kubelet

kubelet needs to be restarted on every node after it is upgraded.
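
It is worth confirming that kubelet came back healthy after the restart:

$ systemctl status kubelet --no-pager    # should report active (running)
$ journalctl -u kubelet -n 20            # recent log lines, in case it did not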

  • Upgrade any other master nodes, if present; in 1.15 the same node-upgrade command detects control-plane nodes automatically
kubeadm upgrade node
  • Upgrade the worker nodes (see the drain/uncordon sketch after this list)
kubeadm upgrade node
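
Before upgrading each worker, it is common practice to drain it so workloads move elsewhere, then uncordon it afterwards; a minimal sketch, using k8s-12 (one of the workers in this cluster) as the node being upgraded:

$ kubectl drain k8s-12 --ignore-daemonsets    # evict Pods and mark the node unschedulable
$ # ... upgrade the packages and run kubeadm upgrade node on k8s-12 ...
$ kubectl uncordon k8s-12                     # allow Pods to schedule onto the node again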

Verify the cluster upgrade

$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k8s-11   Ready    master   27d   v1.15.0
k8s-12   Ready    <none>   27d   v1.15.0
k8s-13   Ready    <none>   27d   v1.15.0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
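
As a final check, make sure the kube-system Pods are all healthy after the upgrade:

$ kubectl get pods -n kube-system    # everything should be Running with no restart loops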

Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/
