K8S CKA Mock Exam
- Kubernetes
- 2024-12-11
K8S CKA Question Walkthrough
Exam Curriculum
Cluster Architecture, Installation & Configuration: 25%
• Manage role-based access control (RBAC)
• Use kubeadm to install a basic cluster
• Manage a highly-available Kubernetes cluster
• Provision underlying infrastructure to deploy a Kubernetes cluster
• Perform a version upgrade on a Kubernetes cluster using kubeadm
• Implement etcd backup and restore
Workloads & Scheduling: 15%
• Understand Deployments and how to perform rolling updates and rollbacks
• Use ConfigMaps and Secrets to configure applications
• Know how to scale applications
• Understand the primitives used to create robust, self-healing application deployments
• Understand how resource limits affect Pod scheduling
• Be aware of manifest management and common templating tools
Services & Networking: 20%
• Understand host networking configuration on the cluster nodes
• Understand connectivity between Pods
• Understand ClusterIP, NodePort, LoadBalancer service types and endpoints
• Know how to use Ingress controllers and Ingress resources
• Know how to configure and use CoreDNS
• Choose an appropriate container network interface plugin
Storage: 10%
• Understand storage classes and persistent volumes
• Understand volume modes, access modes and volume reclaim policies
• Understand the persistent volume claim primitive
• Know how to configure applications with persistent storage
Troubleshooting: 30%
• Evaluate cluster and node logging
• Understand how to monitor applications
• Manage container stdout and stderr logs
• Troubleshoot application failure
• Troubleshoot cluster component failure
• Troubleshoot networking
Notes:
1. If a question says "Set configuration context", switch to the indicated context before doing anything else:
kubectl config use-context $context
2. Before starting, check whether kubectl tab completion is enabled; if it is not, enable it with:
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
1. Monitor Pod Logs
Question: Monitor the logs of pod foobar and:
Extract log lines corresponding to error unable-to-access-website
Write them to /opt/KUTR00101/foobar
Translation: Monitor the logs of the Pod named foobar, filter the lines containing unable-to-access-website, and write the output to /opt/KUTR00101/foobar.
Answer:
kubectl config use-context k8s
kubectl logs foobar | grep unable-to-access-website > /opt/KUTR00101/foobar
Verify that the file now contains the filtered output.
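A quick way to check, as a minimal sketch (an empty result simply means no matching log lines existed):
cat /opt/KUTR00101/foobar
wc -l /opt/KUTR00101/foobar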
2. Monitor Pod Metrics
Question: From the pod label name=cpu-user, find pods running high CPU workloads and write the name of the pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists)
Translation: Among the pods labeled name=cpu-user, find the one consuming the most CPU and write its name to /opt/KUTR00401/KUTR00401.txt (the file already exists).
Note: the question does not say which namespace to use, so search all namespaces with -A.
Answer:
kubectl config use-context k8s
kubectl top pods -A -l name=cpu-user
# Use the actual pod names shown and pick the pod with the largest value in the CPU column. A bare number such as 1, 2 or 3 means whole CPUs and is therefore larger than values with an m suffix (1 CPU = 1000m). Because the target file already exists, append with >> rather than overwriting with >.
echo "$Pod_Name" >> /opt/KUTR00401/KUTR00401.txt
3. Scale a Deployment
Question: Scale the deployment loadbalancer to 6 pods
Translation: Scale the deployment named loadbalancer to 6 replicas.
Answer:
kubectl config use-context k8s
kubectl scale --replicas=6 deployment loadbalancer
# If the question targets a different controller type, simply swap the resource type in the command.
4. Check Node Health
Question: Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUTR00401/KUTR00402.txt
Translation: Count the nodes in Ready state, excluding any that carry a NoSchedule taint, then write that number to /opt/KUSC00402/kusc00402.txt.
Answer:
kubectl config use-context k8s
kubectl get node | grep -i ready
# Note the number of Ready nodes
kubectl describe node | grep Taint | grep NoSchedule
# Note how many of those carry a NoSchedule taint
echo $Num >> /opt/KUSC00402/kusc00402.txt
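The two counts can also be combined in a small script. This is a rough sketch that assumes each node's NoSchedule taint shows up on the Taints: line printed by kubectl describe; always eyeball the intermediate numbers before trusting the result:
READY_NODES=$(kubectl get nodes --no-headers | grep -w Ready | awk '{print $1}')
READY=$(echo "$READY_NODES" | wc -l)
TAINTED=$(kubectl describe node $READY_NODES | grep -i taints | grep -c NoSchedule)
echo $((READY - TAINTED)) >> /opt/KUSC00402/kusc00402.txt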
5. Node Maintenance
Question: Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it
Translation: Mark the node ek8s-node-1 as unschedulable, then evict all Pods running on it so they are rescheduled elsewhere.
Answer:
kubectl config use-context k8s
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-emptydir-data --ignore-daemonsets --force
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain
6. Schedule a Pod to a Specific Node
Question: Schedule a pod as follows:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=spinning
Translation: Create a Pod named nginx-kusc00401 with image nginx and schedule it onto a node that has the label disk=spinning.
Answer:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning
kubectl create -f $File
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/
# To pin a Pod directly to a specific node, use nodeName: foo-node instead of a selector
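For reference, a minimal sketch of the nodeName variant (foo-node is a placeholder node name, not part of the exam task):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  nodeName: foo-node   # bypasses the scheduler and binds the Pod to this node
  containers:
  - name: nginx
    image: nginx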
7. Multiple Containers in One Pod
Question: Create a pod named kucc1 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcache + consul.
Translation: Create a Pod named kucc1 that may contain 1-4 containers; in this case four: nginx + redis + memcached + consul.
Answer:
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
kubectl create -f $File
8. Service
Question: Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.
Create a new service named front-end-svc exposing the container port http .
Config the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled .
Translation: Edit the existing deployment front-end and, in the container named nginx, add a port entry named http exposing port 80. Then create a service named front-end-svc that exposes the deployment's http port, with the service type set to NodePort.
Note: the service's target port should reference the port name http rather than 80, so when editing the deployment add both the name and protocol fields.
Answer:
kubectl edit deployment front-end
ports:
- containerPort: 80
  name: http
  protocol: TCP
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
https://kubernetes.io/zh-cn/docs/tutorials/services/connect-applications-service/
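A quick way to confirm the port entry and the NodePort allocation (minimal check):
kubectl get deploy front-end -o jsonpath='{.spec.template.spec.containers[0].ports}'
kubectl get svc front-end-svc
kubectl get endpoints front-end-svc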
9. Ingress
Question: Create a new nginx Ingress resource as follows:
Name: pong
Namespace: ing-internal
Exposing service hi on path /hi using service port 5678
Translation: In the namespace ing-internal, create an Ingress named pong that routes the path /hi to the service hi on port 5678.
Note: look up the ingressClassName first; verify afterwards with curl -kL
kubectl get ingressclass
Answer:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
kubectl create -f $File
https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/
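A possible verification sketch (<INGRESS_IP> is a hypothetical placeholder; in the exam, curl the node or ingress controller address the environment provides):
kubectl get ingress pong -n ing-internal
curl -kL http://<INGRESS_IP>/hi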
10. Sidecar
Question: Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command:
/bin/sh -c tail -n+1 -f /var/log/legacy-app.log
Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container .
Don't modify the existing container.
Don't modify the path of the log file,both containers must access it at /var/log/legacy-app.log .
Translation: Add a sidecar container named busybox (image busybox) to the existing Pod legacy-app; its command is /bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log'.
The sidecar and the original container must share a volume named logs mounted at /var/log.
Note: when exporting the Pod's configuration, strip the fields that are not needed (status and other runtime metadata), otherwise the Pod cannot be recreated from the file.
Answer:
# Export the original configuration
kubectl get po legacy-app -oyaml > c-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
# Edit the exported configuration file
vim c-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - 'tail -n+1 -f /var/log/legacy-app.log'
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
kubectl delete -f c-sidecar.yaml ; kubectl create -f c-sidecar.yaml
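To confirm the sidecar is streaming the shared log file (a minimal check):
kubectl logs legacy-app -c busybox --tail=5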
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/
11. RBAC
Question: Create a new ClusterRole named deployment-clusterrole, which only allows to create the following resource types:
Deployment
StatefulSet
DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1 .
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1
Translation: Create a ClusterRole named deployment-clusterrole that only allows the create verb on Deployment, DaemonSet and StatefulSet resources. In the namespace app-team1, create a ServiceAccount named cicd-token and bind the ClusterRole to it, scoped to that namespace.
Note: because the binding must be limited to a single namespace, a RoleBinding (not a ClusterRoleBinding) is required.
Answer:
vim dp-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" is omitted because ClusterRoles are not namespaced
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  # at the HTTP level these resources are addressed by their plural names
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
kubectl create -f $File
kubectl create sa cicd-token -n app-team1
kubectl create rolebinding deployment-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
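The binding can be sanity-checked with kubectl auth can-i (a quick check):
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect: yes
kubectl auth can-i delete deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect: no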
12. NetworkPolicy
Question: Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 9000 of other Pods in the same namespace.
Ensure that the new NetworkPolicy:
does not allow access to Pods not listening on port 9000
does not allow access from Pods not in namespace internal
Translation: Create a NetworkPolicy named allow-port-from-namespace that allows Pods in the internal namespace to access port 9000 of other Pods in the same namespace. Pods outside the internal namespace must not have access, and Pods not listening on port 9000 must not be reachable.
Answer:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 9000
      protocol: TCP
kubectl create -f $File
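To review what was actually applied (minimal check):
kubectl describe networkpolicy allow-port-from-namespace -n internal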
https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/
13. NetworkPolicy-2
Question: In the my-app namespace, create a NetworkPolicy named allow-port-from-namespace that allows Pods in the my-app namespace to connect to port 8080 of Pods in the big-corp namespace. Pods outside the my-app namespace must not have access, and Pods not listening on port 8080 must not be reachable.
Note: mind the labels used in the namespaceSelector. First check whether the big-corp namespace already has a label with kubectl get ns big-corp --show-labels; if it does, change name: big-corp to the label you actually see. If it does not, add one with kubectl label ns big-corp name=big-corp.
Answer:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: big-corp
    ports:
    - protocol: TCP
      port: 8080
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 8080
      protocol: TCP
14. NetworkPolicy-3
Question: In the big-corp namespace, create a NetworkPolicy named allow-port-from-namespace that allows Pods in the internal namespace to connect to port 9200 of Pods in the big-corp namespace. Pods outside the internal namespace must not have access, and Pods not listening on port 9200 must not be reachable.
Answer:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: big-corp
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
    ports:
    - port: 9200
      protocol: TCP
15. PersistentVolume
Question: Create a persistent volume with name app-config, of capacity 2Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-config.
Translation: Create a PV named app-config with a capacity of 2Gi and access mode ReadWriteMany; the volume type is hostPath and the path is /srv/app-config.
Answer:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
kubectl create -f $File
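Quick check that the PV registered as intended:
kubectl get pv app-config
# Expect CAPACITY 2Gi, ACCESS MODES RWX, and STATUS Available until a claim binds it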
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
16. CSI & PersistentVolumeClaim
Question: Create a new PersistentVolumeClaim:
Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi
Create a new Pod which mounts the PersistentVolumeClaim as a volume
Name: web-server
Image: nginx
Mount path: /usr/share/nginx/html
Configure the new Pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.
Translation: Create a PVC named pv-volume with storageClass csi-hostpath-sc and size 10Mi. Then create a Pod named web-server, image nginx, that mounts the PVC at /usr/share/nginx/html with access mode ReadWriteOnce. Finally, use kubectl edit or kubectl patch to expand the PVC to 70Mi and record the change.
Answer:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl create -f $PVC_File
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
kubectl create -f $Pod_File
# Expansion option 1
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record
# Expansion option 2
kubectl edit pvc pv-volume
# Add an annotation
kubernetes.io/change-cause: 'resize'
# Change the capacity, then save and exit
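To confirm the resize was requested (a quick check; whether the reported capacity actually grows depends on the CSI driver in the exam environment):
kubectl get pvc pv-volume
kubectl get pvc pv-volume -o jsonpath='{.spec.resources.requests.storage}{"\n"}'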
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
17. etcd Backup and Restore
Question: First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db
Creating a snapshot of the given instance is expected to complete in seconds. If the operation seems to hang, something's likely wrong with your command. Use CTRL + C to cancel the operation and try again.
Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key : /opt/KUIN00601/etcd-client.key
Translation: Create a snapshot of the etcd instance at https://127.0.0.1:2379 and save it to /srv/data/etcd-snapshot.db. If the snapshot operation appears to hang, press Ctrl+C and retry.
Then restore the existing snapshot located at /var/lib/backup/etcd-snapshot-previous.db.
The certificates for running etcdctl are stored at:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key
Note: determine how etcd was installed (static Pod vs. binary/systemd) before restoring.
Answer:
# Backup
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key \
snapshot save /srv/data/etcd-snapshot.db
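The snapshot can be inspected before moving on (a quick check; newer etcd releases also provide etcdutl snapshot status):
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db --write-out=table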
# In the exam etcd may be deployed as a binary/systemd service; stop etcd before restoring, then move the old data directory aside
systemctl stop etcd
# Find the data directory first: ps aux | grep etcd | grep data-dir
mv /var/lib/etcd /var/lib/etcd.bak
# Restore
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key \
--data-dir=/var/lib/etcd snapshot restore /var/lib/backup/etcd-snapshot-previous.db
systemctl restart etcd
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/
18. Cluster Upgrade
Question: Given an existing Kubernetes cluster running version 1.18.8, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.19.0.
You are also expected to upgrade kubelet and kubectl on the master node .
Be sure to drain the master node before upgrading it and uncordon it after the upgrade .
Do not upgrade the worker nodes, etcd , the container manager, the CNI plugin , the DNS service or any other addons
Translation: Upgrade the existing cluster. Assuming it runs Kubernetes 1.18.8, upgrade all control-plane and node components on the master node only to 1.19.0, and also upgrade kubelet and kubectl on the master. Be sure to drain the master node before upgrading and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.
Answer:
# Mark the node as unschedulable
kubectl cordon k8s-master
# Evict the Pods
kubectl drain k8s-master --ignore-daemonsets
# ssh to the master node indicated in the question
apt update
apt-cache madison kubeadm | grep 1.31.1 # (the target version differs per exam; 1.31.1 here is only an example — use the version from the question)
sudo apt-mark unhold kubeadm && \
sudo apt-get install -y kubeadm='1.31.1-*' && \
sudo apt-mark hold kubeadm
kubeadm version
# Verify the upgrade plan
kubeadm upgrade plan
# Upgrade the master node; check whether the question asks for etcd to be upgraded as well
kubeadm upgrade apply v1.31.1 --etcd-upgrade=false -f
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get install -y kubelet='1.31.1-*' kubectl='1.31.1-*' && \
sudo apt-mark hold kubelet kubectl
systemctl daemon-reload
systemctl restart kubelet
# Uncordon the node
kubectl uncordon k8s-master
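A final check that the control-plane node reports the new version (sketch):
kubectl get nodes -o wide
kubelet --version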
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
19. Cluster Troubleshooting – kubelet Failure
Question: A Kubernetes worker node named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent
Translation: A node named wk8s-node-0 is in NotReady state; restore it to Ready and make sure the fix persists across reboots.
Answer:
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
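If starting kubelet is not enough, its logs usually point at the root cause, and the node state should be re-checked from the student terminal (sketch):
journalctl -u kubelet --no-pager | tail -n 30
# back on the student terminal
kubectl get node wk8s-node-0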