
    Upgrading a Kubernetes cluster from 1.13.3 to 1.14.1 - 李永峰的博客 - 51CTO博客

    2019-05-07 18:11:58           


    The Kubernetes environment used in this post: https://blog.51cto.com/billy98/2350660


    I. Overview

    This post walks through upgrading a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x.

    You can only upgrade from one MINOR version to the next, or between PATCH versions within the same MINOR version. Skipping MINOR versions is not supported: you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
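The no-skip rule can be expressed as a small shell check. This is only a sketch; `can_upgrade` is a hypothetical helper for illustration, not part of kubeadm, and it assumes version strings of the form "1.13.3":

```shell
# Check that an upgrade does not skip a MINOR version.
# Allowed: same minor (patch upgrade) or exactly one minor ahead.
can_upgrade() {
  local cur_minor tgt_minor
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  [ $((tgt_minor - cur_minor)) -ge 0 ] && [ $((tgt_minor - cur_minor)) -le 1 ]
}

if can_upgrade 1.13.3 1.14.1; then echo "ok: 1.13 -> 1.14"; fi
if ! can_upgrade 1.13.3 1.15.0; then echo "blocked: 1.13 -> 1.15 skips a minor"; fi
```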

    The upgrade workflow is as follows:

    1. Upgrade the primary master node.
    2. Upgrade the other master nodes.
    3. Upgrade the worker nodes.

    Current Kubernetes version information:

    [root@node-01 ~]# kubectl get nodes
    NAME      STATUS   ROLES    AGE   VERSION
    node-01   Ready    master   99d   v1.13.3
    node-02   Ready    master   99d   v1.13.3
    node-03   Ready    master   99d   v1.13.3
    node-04   Ready    <none>   99d   v1.13.3
    node-05   Ready    <none>   99d   v1.13.3
    node-06   Ready    <none>   99d   v1.13.3

    II. Upgrade the primary master node

    1. On the first master node, upgrade kubeadm:

    yum install kubeadm-1.14.1  -y

    2. Verify that the download works and has the expected version:

    kubeadm version
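To make this check scriptable, the reported version can be compared against the target. A minimal sketch; `expect_kubeadm_version` is a hypothetical helper, and it assumes the short-form output of `kubeadm version -o short` (e.g. "v1.14.1"):

```shell
# Verify that the installed kubeadm reports the expected version.
expect_kubeadm_version() {
  local expected=$1 actual=$2
  if [ "$actual" = "$expected" ]; then
    echo "kubeadm is at $expected"
  else
    echo "version mismatch: wanted $expected, got $actual" >&2
    return 1
  fi
}

# On a real master you would run:
#   expect_kubeadm_version v1.14.1 "$(kubeadm version -o short)"
expect_kubeadm_version v1.14.1 v1.14.1
```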

    3. Run kubeadm upgrade plan.

    This command checks whether your cluster can be upgraded and fetches the versions it can be upgraded to. You should see output similar to:

    [root@node-01 ~]# kubeadm upgrade plan
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.13.3
    [upgrade/versions] kubeadm version: v1.14.1
    I0505 13:55:58.449783   12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0505 13:55:58.449867   12871 version.go:97] falling back to the local client version: v1.14.1
    [upgrade/versions] Latest stable version: v1.14.1
    I0505 13:56:08.645796   12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.13.txt": Get https://dl.k8s.io/release/stable-1.13.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0505 13:56:08.645861   12871 version.go:97] falling back to the local client version: v1.14.1
    [upgrade/versions] Latest version in the v1.13 series: v1.14.1
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     6 x v1.13.3   v1.14.1
    
    Upgrade to the latest version in the v1.13 series:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.13.3   v1.14.1
    Controller Manager   v1.13.3   v1.14.1
    Scheduler            v1.13.3   v1.14.1
    Kube Proxy           v1.13.3   v1.14.1
    CoreDNS              1.2.6     1.3.1
    Etcd                 3.2.24    3.3.10
    
    You can now apply the upgrade by executing the following command:
    
            kubeadm upgrade apply v1.14.1
    
    _____________________________________________________________________

    4. Run the upgrade command:

    kubeadm upgrade apply v1.14.1

    You should see output similar to:

    [root@node-01 ~]# kubeadm upgrade apply v1.14.1
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade/version] You have chosen to change the cluster version to "v1.14.1"
    [upgrade/versions] Cluster version: v1.13.3
    [upgrade/versions] kubeadm version: v1.14.1
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.1"...
    Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
    Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
    Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
    Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
    Static pod: etcd-node-01 hash: 17ddbcfb2ddf1d447ceec2b52c9faa96
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests940835611"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
    Static pod: kube-apiserver-node-01 hash: ff2267bcddb83b815efb49ff766ad897
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
    Static pod: kube-controller-manager-node-01 hash: ff8be061048a4660a1fbbf72db229d0d
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
    Static pod: kube-scheduler-node-01 hash: 959a5cdf1468825401daa8d35329351e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.1". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

    5. Manually upgrade your CNI provider plugin.

    Your Container Network Interface (CNI) provider may have its own upgrade instructions. Check the plugins page to find your CNI provider and see whether any additional upgrade steps are required.

    6. Upgrade kubelet and kubectl on the first master node:

    yum install kubectl-1.14.1 kubelet-1.14.1 -y

    Restart the kubelet:

    systemctl daemon-reload
    systemctl restart kubelet

    III. Upgrade the other master nodes

    1. Upgrade kubeadm:

    yum install kubeadm-1.14.1 -y          

    2. Upgrade the static pods:

    kubeadm upgrade node experimental-control-plane

    You should see output similar to:

    [root@node-02 ~]# kubeadm upgrade node experimental-control-plane
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.1"...
    Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
    Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
    Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
    Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
    Static pod: etcd-node-02 hash: 4710a34897e7838519a1bf8fe4dccf07
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests483113569"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
    Static pod: kube-apiserver-node-02 hash: fe1005f40c3f390280358921c3073223
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
    Static pod: kube-controller-manager-node-02 hash: ff8be061048a4660a1fbbf72db229d0d
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
    Static pod: kube-scheduler-node-02 hash: 959a5cdf1468825401daa8d35329351e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upgrade] The control plane instance for this node was successfully updated!

    3. Upgrade kubelet and kubectl:

    yum install kubectl-1.14.1 kubelet-1.14.1 -y

    4. Restart the kubelet:

    systemctl daemon-reload
    systemctl restart kubelet

    IV. Upgrade the worker nodes

    1. Upgrade kubeadm on the worker node:

    yum install -y kubeadm-1.14.1

    2. Adjust scheduling.

    Prepare the node for maintenance by marking it unschedulable and evicting its pods (run this step on a master):

    kubectl drain $NODE --ignore-daemonsets
    [root@node-01 ~]# kubectl drain node-04 --ignore-daemonsets
    node/node-04 already cordoned
    WARNING: ignoring DaemonSet-managed Pods: cattle-system/cattle-node-agent-h555m, default/glusterfs-vhdqv, kube-system/canal-mbwvf, kube-system/kube-flannel-ds-amd64-zdfn8, kube-system/kube-proxy-5d64l
    evicting pod "coredns-55696d4b79-kfcrh"
    evicting pod "cattle-cluster-agent-66bd75c65f-k7p6n"
    pod/cattle-cluster-agent-66bd75c65f-k7p6n evicted
    pod/coredns-55696d4b79-kfcrh evicted
    node/node-04 evicted

    3. Update the node configuration:

    [root@node-04 ~]# kubeadm upgrade node config --kubelet-version v1.14.1
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

    4. Upgrade kubelet and kubectl:

    yum install kubectl-1.14.1 kubelet-1.14.1 -y

    5. Restart the kubelet:

    systemctl daemon-reload
    systemctl restart kubelet

    6. Restore scheduling (run on a master):

    kubectl uncordon $NODE

    Repeat the steps above on each remaining worker node.
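The per-worker steps can be collected into a small wrapper script. This is only a sketch under stated assumptions: `ssh` access from the master to each worker, the node names, and the `DRY_RUN` switch are all illustrative additions, not part of the original post.

```shell
# Run a command, or just print it when DRY_RUN is set.
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

# Repeat the worker-upgrade steps for one node.
upgrade_worker() {
  local node=$1
  run ssh "$node" yum install -y kubeadm-1.14.1                           # step 1
  run kubectl drain "$node" --ignore-daemonsets                           # step 2, on a master
  run ssh "$node" kubeadm upgrade node config --kubelet-version v1.14.1   # step 3
  run ssh "$node" yum install -y kubectl-1.14.1 kubelet-1.14.1            # step 4
  run ssh "$node" systemctl daemon-reload                                 # step 5
  run ssh "$node" systemctl restart kubelet
  run kubectl uncordon "$node"                                            # step 6, on a master
}

DRY_RUN=1                     # unset to actually execute on a real cluster
for node in node-05 node-06; do
  upgrade_worker "$node"
done
```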

    V. Verify the cluster status

    After upgrading the kubelet on all nodes, verify that all nodes are available again by running the following command from anywhere that kubectl can reach the cluster:

    [root@node-01 ~]# kubectl get nodes
    NAME      STATUS   ROLES    AGE   VERSION
    node-01   Ready    master   99d   v1.14.1
    node-02   Ready    master   99d   v1.14.1
    node-03   Ready    master   99d   v1.14.1
    node-04   Ready    <none>   99d   v1.14.1
    node-05   Ready    <none>   99d   v1.14.1
    node-06   Ready    <none>   99d   v1.14.1

    The STATUS column should show Ready for every node, and the VERSION column should show the updated version number.
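This manual check can also be scripted. A sketch; `all_at_version` is a hypothetical helper, and on a real cluster the version list would come from the jsonpath query shown in the comment:

```shell
# Check that every node reports the target kubelet version.
# On a real cluster the versions would come from:
#   kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'
all_at_version() {
  local target=$1; shift
  local v
  for v in "$@"; do
    if [ "$v" != "$target" ]; then
      echo "found node still at $v" >&2
      return 1
    fi
  done
  echo "all nodes at $target"
}

all_at_version v1.14.1 v1.14.1 v1.14.1 v1.14.1
```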

    VI. How it works

    kubeadm upgrade apply does the following:

    - Checks whether your cluster is in an upgradeable state: the API server is reachable, all nodes are in the Ready state, and the control plane is healthy.
    - Enforces the version skew policy.
    - Makes sure the control plane images are available or can be pulled to the machine.
    - Upgrades the control plane components, or rolls back if any of them fails to come up.
    - Applies the new kube-dns and kube-proxy manifests and makes sure all required RBAC rules are created.
    - Creates new certificate and key files for the API server, backing up the old files if they are about to expire in 180 days.

    kubeadm upgrade node experimental-control-plane does the following on the other control-plane nodes:

    - Fetches the kubeadm ClusterConfiguration from the cluster.
    - Optionally backs up the kube-apiserver certificate.
    - Upgrades the static pod manifests for the control plane components.

    That's it for this post. If you have any questions, feel free to leave a comment below. Thanks for reading, and please like and follow!
