Kubernetes Resource Scheduling

Replication Controller and ReplicaSet

Replication Controller (RC) and ReplicaSet (RS) are two simple ways to deploy Pods. In production, Pods are mainly managed and deployed through higher-level resources such as Deployment, so a basic understanding of RC and RS is sufficient.

A Replication Controller (RC) ensures that the number of Pod replicas reaches the desired value defined by the RC. In other words, a Replication Controller ensures that a Pod, or a homogeneous set of Pods, is always available. If there are more Pods than the configured value, the Replication Controller terminates the extra Pods; if there are too few, it starts more Pods to reach the desired value. Unlike manually created Pods, Pods maintained by a Replication Controller are automatically replaced when they fail, are deleted, or are terminated. Therefore, even an application that needs only a single Pod should be managed by a Replication Controller or a similar mechanism. A Replication Controller is similar to a process supervisor, except that instead of watching individual processes on a single node, it watches multiple Pods across multiple nodes.

Creating and deleting a Replication Controller or ReplicaSet is not very different from creating and deleting a Pod. Replication Controller is almost never used in production any more, and ReplicaSet is rarely used on its own; instead, Pods are managed through the higher-level resources Deployment, DaemonSet, and StatefulSet.

An example Replication Controller definition follows.

[root@master-01 ~]# vim /k8spod/controllers/replication.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3 # 3 replicas
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
        ports:
        - containerPort: 80

Create the resource and check its status.

[root@master-01 ~]# kubectl apply -f /k8spod/controllers/replication.yaml
replicationcontroller/nginx created
[root@master-01 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS      AGE
cluster-test-665f554bcc-bcw5v   1/1     Running   2 (16m ago)   136m
nginx-cbp45                     1/1     Running   0             24s
nginx-lbdgg                     1/1     Running   0             24s
nginx-vb6km                     1/1     Running   0             24s

Delete nginx-cbp45 and observe that the Replication Controller creates a new Pod to keep the replica count at 3.

[root@master-01 ~]# kubectl delete pod nginx-cbp45
pod "nginx-cbp45" deleted
[root@master-01 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS      AGE
cluster-test-665f554bcc-bcw5v   1/1     Running   2 (18m ago)   138m
nginx-cw7sp                     1/1     Running   0             8s
nginx-lbdgg                     1/1     Running   0             2m40s
nginx-vb6km                     1/1     Running   0             2m40s

ReplicaSet is the next-generation Replication Controller, supporting set-based label selectors. It is mainly used by Deployment to coordinate Pod creation, deletion, and updates. The only difference from Replication Controller is that ReplicaSet supports set-based selectors, whereas Replication Controller only supports equality-based selectors. In practice, although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically, unless the Pods require no updates or other orchestration.
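A set-based selector is expressed with matchExpressions in the ReplicaSet spec; a minimal sketch (the keys and values below are illustrative, not from the example that follows):

```yaml
selector:
  matchExpressions:
  - key: tier
    operator: In        # label value must be one of the listed values
    values:
    - frontend
    - cache
  - key: environment
    operator: NotIn     # label value must not be one of the listed values
    values:
    - dev
```

Equality-based selectors (matchLabels) can only express exact key=value matches, which is all a Replication Controller supports.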

An example ReplicaSet definition follows:

[root@master-01 ~]# cat /k8spod/controllers/replicaset-frontend.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: m.daocloud.io/docker.io/library/nginx

Create the resource and check its status.

[root@master-01 ~]# kubectl apply -f /k8spod/controllers/replicaset-frontend.yaml
replicaset.apps/frontend created

[root@master-01 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS      AGE
cluster-test-665f554bcc-bcw5v   1/1     Running   2 (21m ago)   141m
frontend-gvst7                  1/1     Running   0             10s
frontend-hn8nr                  1/1     Running   0             10s
frontend-k4r6l                  1/1     Running   0             10s
[root@master-01 ~]# kubectl get replicaset
NAME                      DESIRED   CURRENT   READY   AGE
cluster-test-665f554bcc   1         1         1       15d
frontend                  3         3         3       37s
[root@master-01 ~]# kubectl describe rs frontend
Name:         frontend
Namespace:    default
Selector:     tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  tier=frontend
  Containers:
   nginx:
    Image:         m.daocloud.io/docker.io/library/nginx
    Port:          <none>
    Host Port:     <none>
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  2m14s  replicaset-controller  Created pod: frontend-gvst7
  Normal  SuccessfulCreate  2m14s  replicaset-controller  Created pod: frontend-hn8nr
  Normal  SuccessfulCreate  2m14s  replicaset-controller  Created pod: frontend-k4r6l
Managing Stateless Applications: Deployment

The Deployment controller is a higher-level controller for deploying and rolling-updating stateless applications. It provides a declarative way to update applications and ensures that the application always runs in the desired state. A Deployment achieves declarative deployment and rolling updates by managing ReplicaSets.

An example Deployment definition follows:

[root@master-01 ~]# vim /k8spod/controllers/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment # name of the Deployment
  labels:
    app: nginx
spec:
  replicas: 3 # number of replicas to create
  selector:
    matchLabels: # how the Deployment finds the Pods it manages; must match the template labels; required when apiVersion is apps/v1
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
        ports:
        - containerPort: 80

Create the resource and check its status.

[root@master-01 ~]# kubectl create -f /k8spod/controllers/deployment.yaml
deployment.apps/nginx-deployment created
[root@master-01 ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           6s
Field       Meaning
NAME        Name of the Deployment in the cluster
READY       Number of ready Pods / total replicas
UP-TO-DATE  Number of replicas that have been updated to reach the desired state
AVAILABLE   Number of application replicas available to users; 0 means the desired Pods have not yet been reached
AGE         How long the application has been running

Use the rollout command to check the Deployment's rollout status.

[root@master-01 ~]# kubectl rollout status deployment nginx-deployment
deployment "nginx-deployment" successfully rolled out

View the ReplicaSet that corresponds to this Deployment.

[root@master-01 ~]# kubectl get rs -l app=nginx
NAME                         DESIRED   CURRENT   READY   AGE
nginx-deployment-76649d58b   3         3         3       6m57s
Field    Meaning
DESIRED  Desired number of replicas for the application
CURRENT  Number of replicas currently running

When a Deployment has been updated, it may have more than one ReplicaSet. You can find the current one with -o yaml (by matching labels), or by checking the ReplicaSet's ownerReferences field; the remaining ReplicaSets are retained historical revisions, used for rollbacks and similar operations.

[root@master-01 ~]# kubectl get deployment nginx-deployment -o jsonpath='{.spec.selector.matchLabels}'
{"app":"nginx"}
[root@master-01 ~]# kubectl get rs nginx-deployment-76649d58b -oyaml | grep ownerReferences -A 5
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deployment

Updating a Deployment: a rollout is triggered if and only if the Deployment's Pod template (.spec.template) changes, for example changing memory or CPU settings, or the container image.

Update the Nginx Pod image to nginx:alpine, using --record to record the command so the corresponding information can be viewed later during rollbacks (note: the --record flag is deprecated and may be removed in the future).

[root@master-01 ~]# kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated

Editing the Deployment directly with the edit command has the same effect.

[root@master-01 ~]# kubectl edit deployments.apps nginx-deployment
deployment.apps/nginx-deployment edited

Use kubectl rollout status to watch the update.

[root@master-01 ~]# kubectl rollout status deployment.apps/nginx-deployment     
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out

The update alternates between new and old Pods: first a new Pod is created; once it is Running, an old Pod is deleted while another new Pod is created. When an update is triggered, a new ReplicaSet is created and the old one is retained. Listing the ReplicaSets at this point, the new and old ones can be told apart by AGE or READY.

[root@master-01 ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-7b4c78679f   3         3         3       2m40s
nginx-deployment-85b9d9f677   0         0         0       6m19s

View the Deployment's details with describe.

[root@master-01 ~]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sat, 15 Jun 2024 04:05:25 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  nginx-deployment-76649d58b (0/0 replicas created)
NewReplicaSet:   nginx-deployment-85b9d9f677 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  54s   deployment-controller  Scaled up replica set nginx-deployment-76649d58b to 3
  Normal  ScalingReplicaSet  12s   deployment-controller  Scaled up replica set nginx-deployment-85b9d9f677 to 1
  Normal  ScalingReplicaSet  11s   deployment-controller  Scaled down replica set nginx-deployment-76649d58b to 2 from 3
  Normal  ScalingReplicaSet  11s   deployment-controller  Scaled up replica set nginx-deployment-85b9d9f677 to 2 from 1
  Normal  ScalingReplicaSet  9s    deployment-controller  Scaled down replica set nginx-deployment-76649d58b to 1 from 2
  Normal  ScalingReplicaSet  9s    deployment-controller  Scaled up replica set nginx-deployment-85b9d9f677 to 3 from 2
  Normal  ScalingReplicaSet  7s    deployment-controller  Scaled down replica set nginx-deployment-76649d58b to 0 from 1

The describe output shows that on first creation, a ReplicaSet named nginx-deployment-76649d58b was created and scaled directly to 3 replicas. When the Deployment was updated, it created a new ReplicaSet named nginx-deployment-85b9d9f677, scaled it up to 1, then scaled the old ReplicaSet down to 2, so that at least 2 Pods were available and at most 4 Pods existed at any time. It continues scaling the new and old ReplicaSets up and down with the same rolling update strategy until the new ReplicaSet has all 3 replicas and the old one has been scaled down to 0.

Run another command that updates the Pod image in the Deployment, again recorded with --record, but this time pointing at an image that does not exist, to simulate an unstable update or a misconfiguration. The Deployment update fails: UP-TO-DATE should be 3 but stays at 1.

[root@master-01 ~]# kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated
[root@master-01 ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     1            3           16m

Use kubectl rollout history to view the update history.

[root@master-01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record=true
5         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1 --record=true

View the details of a particular Deployment revision, specifying the revision number with --revision.

[root@master-01 ~]# kubectl rollout history deployment nginx-deployment --revision=5
deployment.apps/nginx-deployment with revision #5
Pod Template:
  Labels:       app=nginx
        pod-template-hash=859dff545
  Annotations:  kubernetes.io/change-cause:
          kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1 --record=true
  Containers:
   nginx:
    Image:      registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
  Node-Selectors:       <none>
  Tolerations:  <none>

Use kubectl rollout undo to roll back to the previous stable revision.

[root@master-01 ~]# kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back

Check the rollout history and the Deployment status; the Pod image has been switched back to nginx:alpine.

[root@master-01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1 --record=true
6         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record=true
[root@master-01 ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           18m

To roll back to a specific revision, use the --to-revision parameter (here the current template already matches revision 6, so no update is triggered).

[root@master-01 ~]# kubectl rollout undo deployment nginx-deployment --to-revision 6
deployment.apps/nginx-deployment skipped rollback (current template already matches revision 6)

When traffic grows, or a planned event (such as a Double Eleven sale) is expected and the current number of Pods can no longer handle the load, the Pods can be scaled out in advance.

Use kubectl scale to adjust the number of Pods dynamically, for example increasing them to 6 (two equivalent command forms are shown; pick whichever you prefer).

[root@master-01 ~]# kubectl scale deployment nginx-deployment --replicas 5
deployment.apps/nginx-deployment scaled
[root@master-01 ~]# kubectl scale deployment nginx-deployment --replicas=6
deployment.apps/nginx-deployment scaled

Check the Pods; the count has increased.

[root@master-01 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS         AGE
nginx-deployment-85b9d9f677-gk82c   1/1     Running   0                17m
nginx-deployment-85b9d9f677-mcv8r   1/1     Running   0                62s
nginx-deployment-85b9d9f677-qwqxj   1/1     Running   0                62s
nginx-deployment-85b9d9f677-rs5fm   1/1     Running   0                17m
nginx-deployment-85b9d9f677-tzd4n   1/1     Running   0                17m
nginx-deployment-85b9d9f677-zhjx2   1/1     Running   0                50s

The examples above each change a single setting, triggering an update immediately afterwards. In many cases you need to change several parts of a resource without triggering an update for each change. The Deployment pause feature temporarily disables updates so that multiple modifications can be made before rolling them out together. Use kubectl rollout pause to pause Deployment updates.

[root@master-01 ~]# kubectl rollout pause deployment nginx-deployment
deployment.apps/nginx-deployment paused

Then make the desired changes to the Deployment, for example first updating the image and then setting resource limits (if you use kubectl edit you can make several changes in one go without pausing; kubectl set commands are usually integrated into CI/CD pipelines).

[root@master-01 ~]# kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:1.15
deployment.apps/nginx-deployment image updated
[root@master-01 ~]# kubectl set resources deployment nginx-deployment -c nginx --limits=cpu=200m,memory=512Mi
deployment.apps/nginx-deployment resource requirements updated

The rollout history shows that no update has been triggered.

[root@master-01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1 --record=true
6         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record=true

After all configuration changes are made, resume Deployment updates with kubectl rollout resume.

[root@master-01 ~]# kubectl rollout resume deployment nginx-deployment
deployment.apps/nginx-deployment resumed

Running rollout history again shows that an update has been triggered (note a quirk: the image in the new revision 7's CHANGE-CAUSE is unchanged, still alpine, when it should be 1.15; perhaps a bug, perhaps something else. If, while the Deployment is paused, you append --record=true to the image-update command, the record is correct after the rollout resumes).

[root@master-01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:15.1 --record=true
6         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record=true
7         kubectl set image deployment nginx-deployment nginx=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record=true

Check whether the Deployment's image has changed to nginx:1.15.

[root@master-01 ~]# kubectl describe deployments.apps nginx-deployment | grep Image
    Image:      registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:1.15

Check whether the Deployment now limits CPU and memory.

[root@master-01 ~]# kubectl get deployments.apps nginx-deployment -oyaml | grep limits -A 2
          limits:
            cpu: 200m
            memory: 512Mi

Note: by default, 10 old ReplicaSets are kept as revision history; the rest are garbage-collected in the background. The number of retained ReplicaSets can be set with .spec.revisionHistoryLimit; setting it to 0 keeps no history.

Update strategy  Meaning
.spec.strategy.type==Recreate  Recreate: old Pods are deleted first, then new Pods are created
.spec.strategy.type==RollingUpdate  Rolling update: maxUnavailable and maxSurge control the rollout process
.spec.strategy.rollingUpdate.maxUnavailable  Maximum number of Pods that may be unavailable during a rolling update. Optional; default 25%; a number or a percentage; must not be 0 when maxSurge is 0
.spec.strategy.rollingUpdate.maxSurge  Maximum number of Pods allowed above the desired replica count. Optional; default 25%; a number or a percentage; must not be 0 when maxUnavailable is 0
.spec.minReadySeconds  Optional. Minimum number of seconds a newly created Pod must run without any container crashing before it is considered Ready. Default 0 (a Pod counts as available as soon as it is created); usually used together with container probes. Its main purpose is to keep new Pods from being considered ready too quickly, which helps detect and filter out transient startup problems.
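The note and table above map onto a Deployment spec like this; a minimal sketch (the concrete values are illustrative, not taken from the running example):

```yaml
spec:
  replicas: 3
  revisionHistoryLimit: 10   # number of old ReplicaSets kept for rollback (default 10)
  minReadySeconds: 5         # a new Pod must stay ready this long before it counts as available
  strategy:
    type: RollingUpdate      # or Recreate
    rollingUpdate:
      maxUnavailable: 25%    # at most this fraction of Pods may be unavailable during the update
      maxSurge: 25%          # at most this fraction of extra Pods may be created during the update
```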
Managing Stateful Applications: StatefulSet

In Kubernetes, StatefulSet is a workload API object for managing stateful applications. Compared with the stateless Deployment, StatefulSet provides additional guarantees and features to support stateful applications.

Features and advantages of StatefulSet

  1. Stable, unique network identities

    • Each Pod gets a stable, unique DNS name that does not change even when the Pod is rescheduled. This is particularly useful for applications that need stable network identities, such as databases.
  2. Stable storage

    • Each Pod gets a stable, unique persistent volume; even if the Pod is deleted and recreated, the volume is reattached to the same Pod. This ensures data persistence.
  3. Ordered deployment and scaling

    • Pods are deployed and scaled in order, one at a time. A new Pod is created only after the previous Pod is Running and Ready. This helps applications that depend on ordered startup.
  4. Ordered rolling updates

    • During a rolling update, Pods are updated one by one in order, so that only one Pod is being updated at any time. This is important for minimizing disruption.

An example StatefulSet definition follows:

[root@master-01 ~]# vim /k8spod/controllers/statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "headless"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
        ports:
        - containerPort: 80
          name: web

kind: Service defines a headless Service named headless (covered in a later lesson; think of it as giving each Pod a stable DNS identity). The DNS records created for the Pods take the form web-0.headless.default.svc.cluster.local, and similarly for the others. Since no namespace is specified, everything is deployed in the default namespace.

kind: StatefulSet defines a StatefulSet named web; replicas is the number of Pod replicas to deploy, 3 in this example.

A StatefulSet must set a Pod selector (.spec.selector) matching its labels (.spec.template.metadata.labels). Before version 1.8, this field was given a default value if unset; since 1.8, omitting the Pod selector causes StatefulSet creation to fail.

When the StatefulSet controller creates a Pod, it adds a label statefulset.kubernetes.io/pod-name whose value is the Pod's name, used for matching Services.

[root@master-01 ~]# kubectl describe pod web-0 | grep Labels -A 3
Labels:           app=nginx
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=web-74895c745b
                  statefulset.kubernetes.io/pod-name=web-0

Create the resource and check its status.

[root@master-01 ~]# kubectl create -f /k8spod/controllers/statefulset.yaml
service/headless created
statefulset.apps/web created
[root@master-01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless     ClusterIP   None         <none>        80/TCP    13s
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21d
[root@master-01 ~]# kubectl get statefulset
NAME   READY   AGE
web    3/3     25s
[root@master-01 ~]# kubectl get sts
NAME   READY   AGE
web    3/3     35s
[root@master-01 ~]# kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          46s
web-1   1/1     Running   0          44s
web-2   1/1     Running   0          42s

Check the DNS records the headless Service creates for the Pods, using a temporary container for verification.

[root@master-01 ~]# kubectl run -i --tty --image=m.daocloud.io/docker.io/busybox my-test-pod --restart=Never --rm -- nslookup web-0.headless.default.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53

Name:   web-0.headless.default.svc.cluster.local
Address: 172.16.190.51
[root@master-01 ~]# kubectl run -i --tty --image=m.daocloud.io/docker.io/busybox my-test-pod --restart=Never --rm -- nslookup web-1.headless.default.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53

Name:   web-1.headless.default.svc.cluster.local
Address: 172.16.133.163

Comparing with the Pod addresses shows that the resolved addresses are exactly those of web-0 and web-1.

[root@master-01 ~]# kubectl get pods -owide | grep web-* | awk '{print $1 ":" $6}'
web-0:172.16.190.51
web-1:172.16.133.163
web-2:172.16.222.34

StatefulSet manages Pod deployment and scaling with the following rules:

  1. For a StatefulSet with N replicas, Pods are created in order from 0 to N-1;

  2. When Pods are deleted, they are terminated in reverse order, from N-1 to 0;

  3. Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready;

  4. Before a Pod is terminated, all of its successors must be completely shut down.

A StatefulSet's pod.Spec.TerminationGracePeriodSeconds (the grace period for terminating a Pod) should never be set to 0; doing so is extremely unsafe for StatefulSet Pods. Gracefully deleting StatefulSet Pods is both necessary and safe, because it allows the Pod to shut down cleanly before the Kubelet deletes it from the API server.
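The grace period sits in the Pod template of the StatefulSet; a minimal sketch (30 seconds is the Kubernetes default and is shown here only for illustration):

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30  # never set this to 0 for StatefulSet Pods
      containers:
      - name: nginx
        image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
```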

When the Nginx example above is created, the Pods are deployed in the order web-0, web-1, web-2. web-1 is not deployed before web-0 is Running or Ready, and likewise web-2 is not deployed before web-1 is Running and Ready. If web-0 enters the Failed state while web-1 is Running and Ready, web-2 will not be started until web-0 returns to Running and Ready.

If the StatefulSet's replicas is set to 1, web-2 is terminated first; web-1 is not deleted until web-2 has fully shut down and been removed. If web-0 fails after web-2 has terminated and fully shut down, web-1 will not be deleted until web-0 is Running or Ready again.

Scaling a StatefulSet

As with a Deployment, a StatefulSet can be scaled up or down by updating the replicas field, or with kubectl scale, kubectl edit, and kubectl patch.

Increase the number of Pods managed by the StatefulSet named web to 5.

[root@master-01 ~]# kubectl scale sts web --replicas=5
statefulset.apps/web scaled

Check the Pod status after scaling up; you can also use the -w parameter to watch the creation process live.

[root@master-01 ~]# kubectl get pods
NAME                            READY   STATUS              RESTARTS       AGE
cluster-test-665f554bcc-bcw5v   1/1     Running             58 (51m ago)   5d1h
web-0                           1/1     Running             0              21m
web-1                           1/1     Running             0              21m
web-2                           1/1     Running             0              21m
web-3                           1/1     Running             0              8s
web-4                           0/1     ContainerCreating   0              7s

For scaling down, it is recommended to open a second terminal and run a watch command to observe the process.

[root@master-01 ~]# kubectl get pods -w
NAME                            READY   STATUS    RESTARTS       AGE
cluster-test-665f554bcc-bcw5v   1/1     Running   58 (56m ago)   5d1h
web-0                           1/1     Running   0              26m
web-1                           1/1     Running   0              26m
web-2                           1/1     Running   0              26m
web-3                           1/1     Running   0              24s
web-4                           1/1     Running   0              23s

The scale command can also scale down; another method is demonstrated below.

[root@master-01 ~]# kubectl patch sts web -p '{"spec":{"replicas":3}}'
statefulset.apps/web patched

Observe web-4 and web-3 being deleted in order (web-4 is deleted first; only after that succeeds is web-3 deleted).

[root@master-01 ~]# kubectl get pods -w
NAME                            READY   STATUS    RESTARTS       AGE
cluster-test-665f554bcc-bcw5v   1/1     Running   58 (56m ago)   5d1h
web-0                           1/1     Running   0              26m
web-1                           1/1     Running   0              26m
web-2                           1/1     Running   0              26m
web-3                           1/1     Running   0              24s
web-4                           1/1     Running   0              23s
web-4                           1/1     Terminating   0              61s
web-4                           1/1     Terminating   0              61s
web-4                           0/1     Terminating   0              62s
web-4                           0/1     Terminating   0              62s
web-4                           0/1     Terminating   0              62s
web-3                           1/1     Terminating   0              63s
web-3                           1/1     Terminating   0              63s
web-3                           0/1     Terminating   0              64s
web-3                           0/1     Terminating   0              65s
web-3                           0/1     Terminating   0              65s
web-3                           0/1     Terminating   0              65s
StatefulSet Update Strategies

1. OnDelete strategy

The OnDelete update strategy implements the legacy (pre-1.7) behavior (since 1.7 the default strategy is RollingUpdate). With this strategy selected, modifying the StatefulSet's .spec.template field does not cause the controller to update Pods automatically; Pods must be deleted manually for the controller to create new ones.

2. RollingUpdate strategy

The RollingUpdate strategy automatically updates all Pods in a StatefulSet, rolling through them in reverse ordinal order.

Change the StatefulSet named web to use the RollingUpdate strategy.

[root@master-01 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
statefulset.apps/web patched

Change the container image to trigger a rolling update (a JSON patch is used here to modify the resource; kubectl set or kubectl edit are less complex alternatives).

[root@master-01 ~]# kubectl patch statefulset web --type='json' -p='[{"op": "replace","path": "/spec/template/spec/containers/0/image","value":"registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine"}]'
statefulset.apps/web patched

During the update, use kubectl rollout status sts/<sts_name> to watch the rolling update.

[root@master-01 ~]# kubectl rollout status sts/web
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 3 pods at revision web-b4d4549f7...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 4 pods at revision web-b4d4549f7...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 5 pods at revision web-b4d4549f7...

Check the updated images.

[root@master-01 ~]# for p in {0..4}; do kubectl get po web-$p --template '{{range $i,$c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
Staged Updates

Staged updates of a StatefulSet are configured with the .spec.updateStrategy.rollingUpdate.partition field. This field is an integer: Pods with an ordinal greater than or equal to the partition value are updated. For example, with partition set to 3, only Pods with ordinal 3 or higher are updated. This enables phased rollouts, similar to gray/canary releases.

Define a partition with value 5; it can be set directly on the StatefulSet with patch or edit.

[root@master-01 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":5}}}}'
statefulset.apps/web patched

The current Pod image is nginx:alpine; modify the container image again.

[root@master-01 ~]# kubectl patch statefulset web --type='json' -p='[{"op": "replace","path": "/spec/template/spec/containers/0/image","value":"registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1"}]'
statefulset.apps/web patched 

Delete the Pods to trigger the update (consider: why does the update require deleting the Pods when the strategy is RollingUpdate?).

[root@master-01 ~]# kubectl delete pod web-{0..4}
pod "web-0" deleted
pod "web-1" deleted
pod "web-2" deleted
pod "web-3" deleted
pod "web-4" deleted

Check the images: because every Pod's ordinal is less than the partition value 5, none of the Pods are updated; they are recreated with the previous image.

[root@master-01 ~]# for p in {0..4}; do kubectl get po web-$p --template '{{range $i,$c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine

Change the partition to 3, update the image, and delete the Pods to trigger the update.

[root@master-01 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
statefulset.apps/web patched
[root@master-01 ~]# kubectl patch statefulset web --type='json' -p='[{"op": "replace","path": "/spec/template/spec/containers/0/image","value":"registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1"}]'
statefulset.apps/web patched (no change)
[root@master-01 ~]# kubectl delete pod web-{0..4}
pod "web-0" deleted
pod "web-1" deleted
pod "web-2" deleted
pod "web-3" deleted
pod "web-4" deleted

Check the images: because web-3 and web-4 have ordinals greater than or equal to the partition value 3, the images of those two Pods have been updated.

[root@master-01 ~]# for p in {0..4}; do kubectl get po web-$p --template '{{range $i,$c := .spec.containers}}{{$c.image}}{{end}}'; echo; done
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
Deleting a StatefulSet

There are two ways to delete a StatefulSet: cascading and non-cascading. With a non-cascading delete, the StatefulSet's Pods are not deleted; with a cascading delete, both the StatefulSet and its Pods are deleted.

1. Non-cascading deletion

When deleting a StatefulSet with kubectl delete sts <sts_name>, add the --cascade=orphan parameter (the older --cascade=false form is deprecated) to perform a non-cascading delete; the StatefulSet is removed but its Pods are not.

[root@master-01 ~]# kubectl delete sts web --cascade=orphan
statefulset.apps "web" deleted

Check the other resources: the Service and Pods have not been deleted, but the StatefulSet controller has.

[root@master-01 ~]# kubectl get sts
No resources found in default namespace.
[root@master-01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless     ClusterIP   None         <none>        80/TCP    166m
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21d
[root@master-01 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS       AGE
web-0                           1/1     Running   0              100s
web-1                           1/1     Running   0              98s
web-2                           1/1     Running   0              97s

Because the StatefulSet has been deleted, the Pods it managed have become orphan Pods; deleting one of these Pods individually will not recreate it.

Create the StatefulSet resource again; omitting the --cascade=orphan parameter performs a cascading delete.

[root@master-01 ~]# kubectl create -f /k8spod/controllers/statefulset.yaml
statefulset.apps/web created
Error from server (AlreadyExists): error when creating "/k8spod/controllers/statefulset.yaml": services "headless" already exists
[root@master-01 ~]# kubectl delete sts web
statefulset.apps "web" deleted

Check the other resources.

[root@master-01 ~]# kubectl get sts
No resources found in default namespace.
[root@master-01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless     ClusterIP   None         <none>        80/TCP    170m
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21d
[root@master-01 ~]# kubectl get pods
No resources found in default namespace.

Use -f to point at the YAML file that created the StatefulSet and Service and delete both directly (the StatefulSet was not recreated here, hence the "not found" error, which can be ignored).

[root@master-01 ~]# kubectl delete -f /k8spod/controllers/statefulset.yaml
service "headless" deleted
Error from server (NotFound): error when deleting "/k8spod/controllers/statefulset.yaml": statefulsets.apps "web" not found
Node-Level Daemons: DaemonSet

DaemonSet is a core Kubernetes workload whose purpose is to ensure that a copy of a Pod runs on every node of the cluster (or on every node matching certain conditions). This is very useful for daemons that must run on each node.

Concretely, the value of a DaemonSet shows in the following areas:

  1. Automated deployment of node-level tasks: for services that must run on every node, such as network plugins, storage plugins, and monitoring agents, a DaemonSet automatically deploys and manages a Pod on each node, greatly simplifying deployment and management.
  2. Cluster health monitoring: monitoring agents deployed via a DaemonSet collect status information on every node, helping administrators watch cluster health and detect and resolve problems in time.
  3. Efficient resource usage: because a DaemonSet guarantees that the specified Pod runs on every node, every node in the cluster is put to use and none is wasted.
  4. Scalability and high availability: when a new node joins the cluster, the DaemonSet automatically deploys the Pod on it; when a node is removed, the DaemonSet deletes that node's Pod, keeping the service consistent and available.

An example definition of a DaemonSet controller:

[root@master-01 ~]# vim /k8spod/controllers/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: m.daocloud.io/quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        imagePullPolicy: IfNotPresent

Note: if .spec.template.spec.nodeSelector is specified, the DaemonSet controller only creates Pods on nodes matching that node selector, for example only on nodes whose disk type is ssd (the label must be added to those nodes in advance):

nodeSelector:
  disktype: ssd
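The disktype=ssd label referenced by the selector has to exist on the node first. A sketch of adding it, using node-01 from this cluster as the example node:

```shell
# Label the node, then list nodes matching the label to verify it took effect
kubectl label node node-01 disktype=ssd
kubectl get nodes -l disktype=ssd
```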

Create the resource and check its status

[root@master-01 ~]# kubectl create -f /k8spod/controllers/daemonset.yaml
daemonset.apps/fluentd-elasticsearch created
[root@master-01 ~]# kubectl get daemonset
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   5         5         5       5            5           <none>          47s
[root@master-01 ~]# kubectl get ds
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   5         5         5       5            5           <none>          93s
[root@master-01 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS       AGE    IP               NODE        NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-5dct4     1/1     Running   0              107s   172.16.190.11    node-01     <none>           <none>
fluentd-elasticsearch-8jrrq     1/1     Running   0              107s   172.16.184.109   master-01   <none>           <none>
fluentd-elasticsearch-kg2zr     1/1     Running   0              107s   172.16.222.53    master-02   <none>           <none>
fluentd-elasticsearch-nwhc8     1/1     Running   0              107s   172.16.133.182   master-03   <none>           <none>
fluentd-elasticsearch-zxxsc     1/1     Running   0              107s   172.16.184.43    node-02     <none>           <none>
Updating and Rolling Back a DaemonSet

If a new node is added or a node's labels are modified, the DaemonSet immediately adds Pods to the newly matching nodes and removes Pods from nodes that no longer match.

A DaemonSet's update strategy is similar to a StatefulSet's: it also supports the OnDelete and RollingUpdate strategies. They are not explained again here; only the commands are given.
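The strategy can also be set declaratively in the manifest instead of patched afterwards; a minimal fragment, with field names per the apps/v1 API:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # default: replace at most one node's Pod at a time
```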

Check the DaemonSet's update strategy

[root@master-01 ~]# kubectl get ds fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
RollingUpdate

Change the DaemonSet's update strategy to OnDelete

[root@master-01 ~]# kubectl patch ds fluentd-elasticsearch -p='{"spec":{"updateStrategy":{"type":"OnDelete"}}}'
daemonset.apps/fluentd-elasticsearch patched

Update the image

[root@master-01 ~]#  kubectl patch ds fluentd-elasticsearch --type='json' -p='[{"op": "replace","path": "/spec/template/spec/containers/0/image","value":"registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1"}]'
daemonset.apps/fluentd-elasticsearch patched
[root@master-01 ~]# kubectl set image daemonset/fluentd-elasticsearch  fluentd-elasticsearch=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:1.15 --record=true
Flag --record has been deprecated, --record will be removed in the future

Check the rollout status (this reflects a rolling update; with the OnDelete strategy set above, existing Pods only pick up the new image after they are deleted)

[root@master-01 ~]# kubectl rollout status daemonset fluentd-elasticsearch
daemon set "fluentd-elasticsearch" successfully rolled out

View the revision history

[root@master-01 ~]# kubectl rollout history daemonset/fluentd-elasticsearch
daemonset.apps/fluentd-elasticsearch
REVISION  CHANGE-CAUSE
1         <none>
4         <none>
5         <none>
6         kubectl set image daemonset/fluentd-elasticsearch fluentd-elasticsearch=registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine --record=true

Roll back to a specific revision

[root@master-01 ~]# kubectl rollout undo daemonset fluentd-elasticsearch --to-revision=6
daemonset.apps/fluentd-elasticsearch skipped rollback (current template already matches revision 6)
Horizontal Pod Autoscaling (HPA)

In Kubernetes, the role of the HPA (Horizontal Pod Autoscaler) is to automatically adjust the number of Pod replicas based on the cluster's real-time resource utilization or on custom metrics.

Its value lies in:

  1. Better resource utilization: when application load rises, the HPA automatically increases the number of Pod replicas to meet demand; when load falls, it reduces the replica count to save resources.
  2. Greater application elasticity: through automatic scaling, the HPA lets an application adapt dynamically to varying load, improving responsiveness and reliability.
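The scaling decision itself follows the simple rule documented for the HPA: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the configured min/max. A minimal sketch:

```python
import math

def desired_replicas(current: int, current_metric: float, target_metric: float) -> int:
    """Core HPA scaling rule: ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current * current_metric / target_metric)

# With the example later in this section: 1 replica at 121% CPU against a
# 10% target wants ceil(1 * 121 / 10) = 13 replicas, clamped to --max=10.
raw = desired_replicas(1, 121, 10)
print(raw)            # -> 13
print(min(raw, 10))   # -> 10, after clamping to the HPA maximum
```

This also explains why the watch output later shows the replica count climbing in steps rather than jumping straight to the target: the controller applies rate limits on top of this formula.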

Note: this chapter only demonstrates the simple version of the HPA. The autoscaling/v2 version involves Ingress, so it is not demonstrated here but deferred to a later chapter; readers can consult kubernetes.io for details.

An example Deployment to be managed by the HPA is shown below. Note that the replicas field must be removed, in which case it defaults to 1.

[root@master-01 ~]# vim /k8spod/controllers/hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache

Create the resources and check their status

[root@master-01 ~]# kubectl create -f /k8spod/controllers/hpa.yaml
deployment.apps/php-apache created
service/php-apache created
[root@master-01 ~]# kubectl get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
php-apache     1/1     1            1           172m
[root@master-01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   22d
php-apache   ClusterIP   10.96.12.63   <none>        80/TCP    172m

Create the HorizontalPodAutoscaler

[root@master-01 ~]# kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled

Check the created resource

[root@master-01 ~]# kubectl get hpa
NAME         REFERENCE               TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: 0%/10%   1         10        1          172m

Increase the container load with a while loop and curl (this command can also be run from another worker node)

[root@master-01 ~]# while true;do curl -s -o /dev/null  http://10.96.12.63;done
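If curl is not available on a node, the Kubernetes documentation instead uses a throwaway busybox Pod as the load generator; an equivalent in-cluster variant that reaches the Service by name rather than by ClusterIP:

```shell
# Temporary Pod that loops wget requests against the php-apache Service;
# --rm deletes it as soon as the shell exits (Ctrl+C to stop)
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```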

Observe the container load: as it rises, more containers are created to share it

[root@master-01 ~]# kubectl get hpa
NAME         REFERENCE               TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: 12%/10%   1         10        6          174m

After the while loop is stopped, the load drops and the HPA automatically reclaims the surplus containers once the downscale stabilization window has passed (about 5 minutes by default). The -w (watch) flag shows the full process:

[root@master-01 ~]# kubectl get hpa -w
NAME         REFERENCE               TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   cpu: 0%/10%   1         10        1          3m26s
php-apache   Deployment/php-apache   cpu: 1%/10%   1         10        1          4m31s
php-apache   Deployment/php-apache   cpu: 0%/10%   1         10        1          4m46s
php-apache   Deployment/php-apache   cpu: 3%/10%   1         10        1          6m1s
php-apache   Deployment/php-apache   cpu: 58%/10%   1         10        1          6m16s
php-apache   Deployment/php-apache   cpu: 121%/10%   1         10        4          6m31s
php-apache   Deployment/php-apache   cpu: 150%/10%   1         10        8          6m46s
php-apache   Deployment/php-apache   cpu: 35%/10%    1         10        10         7m1s
php-apache   Deployment/php-apache   cpu: 23%/10%    1         10        10         7m16s
php-apache   Deployment/php-apache   cpu: 7%/10%     1         10        10         7m31s
php-apache   Deployment/php-apache   cpu: 1%/10%     1         10        10         7m46s
php-apache   Deployment/php-apache   cpu: 0%/10%     1         10        10         8m1s
php-apache   Deployment/php-apache   cpu: 0%/10%     1         10        10         12m
php-apache   Deployment/php-apache   cpu: 0%/10%     1         10        7          12m
php-apache   Deployment/php-apache   cpu: 0%/10%     1         10        1          12m