K8S Service Publishing
- Kubernetes
- 2024-08-02
In Kubernetes (K8s), Label and Selector are two core concepts that play an important role in organizing, managing, and selecting cluster resources. They are explained in detail below.
Functions of Labels
1. Tagging resource objects
- A Label is a key-value pair attached to a Kubernetes resource object (such as a Pod, Node, or Service) and is used to classify and group resources.
- By labeling resource objects, users can manage resources at a finer granularity, for example classifying them by application type, environment, or version.
2. Filtering resource objects
- Labels let users filter resource objects with a label selector (Selector), so that operations can target specific resources.
- This filtering mechanism applies to many scenarios, such as querying, deleting, and updating, and greatly improves the flexibility of resource management.
3. Organizing resource objects
- Grouping resource objects that carry the same label makes unified management easier.
- For example, all Pods belonging to the same application can be grouped under one Deployment and managed together.
4. Monitoring resource objects
- Monitoring based on labels makes it possible to track changes to resource objects, such as Pod creation, deletion, and updates.
- This matters for automated operations and troubleshooting, as the short kubectl example below shows.
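A quick illustration (the label key app=nginx is only an assumed example): resources carrying a given label can be listed, and watched for changes, directly with kubectl:
# List only the Pods that carry the label app=nginx
kubectl get pods -l app=nginx
# Watch those Pods for creations, deletions, and updates
kubectl get pods -l app=nginx --watch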
Functions of Selectors
1. Identifying and selecting resource objects
- A Selector is the Kubernetes mechanism for identifying and selecting resource objects that carry specific labels.
- It matches labels to select resources either exactly or by condition.
2. Exact matching
- A selector can match an object's labels exactly by key-value pair, for example selecting all Pods that carry a particular label.
3. Condition-based selection
- For more complex filtering, selectors support set-based operators such as In, NotIn, Exists, and DoesNotExist, which allow much more flexible matching (see the sketch after this list).
4. Managing associations between resource objects
- Selectors play a crucial role in keeping resource objects associated and consistent.
- For example, in a Deployment or ReplicaSet, the selector manages the group of Pods with matching labels, so that updates or scaling only affect the Pods that satisfy it.
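A minimal sketch of a set-based selector, assuming an env label with values such as prod and staging (the names and image tag here are illustrative and not taken from the examples later in this post):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selector-demo            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx                 # equality-based match
    matchExpressions:
      - key: env
        operator: In             # env must be one of the listed values
        values: ["prod", "staging"]
  template:
    metadata:
      labels:
        app: nginx
        env: prod                # satisfies the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.25        # illustrative image
The Pod template's labels must satisfy both matchLabels and matchExpressions, otherwise the API server rejects the Deployment.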
Summary
- Labels and Selectors are the two core concepts Kubernetes uses to organize, manage, and select resource objects.
- A Label attaches key-value pairs to a resource object to classify and group it; a Selector matches labels to filter resource objects, enabling precise selection and management.
- Used together, they make resource management in a Kubernetes cluster more efficient and straightforward, and they form the basis for automated operations, resource scheduling, and troubleshooting.
Labels
An example to make this concrete: the Core喵 company needs to store user data, and data that is read and written frequently should live on nodes with SSD disks. Those nodes therefore need an SSD label; node-02 is used as the example here.
[root@master-01 ~]# kubectl label node node-02 disk=ssd
node/node-02 labeled
Use a selector to filter nodes by the label
[root@master-01 ~]# kubectl get nodes -l disk=ssd
NAME STATUS ROLES AGE VERSION
node-02 Ready <none> 25d v1.30.1
Create a Deployment and pin its Pods to that node
[root@master-01 ~]# vim /k8spod/controllers/deployment-label.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
nodeSelector:
disk: ssd
Check which node the Pods were scheduled to
[root@master-01 ~]# kubectl get pods -owide | awk '{print $1":"$7}'
NAME:NODE
nginx-deployment-6fc8d5d557-7mg7l:node-02
nginx-deployment-6fc8d5d557-l9lfm:node-02
nginx-deployment-6fc8d5d557-x9lx7:node-02
Delete the node label (make sure the label is no longer referenced by any resource; otherwise, when Pods are recreated they will fail to schedule because the label can no longer be matched)
[root@master-01 ~]# kubectl label node node-02 disk-
node/node-02 unlabeled
Selectors
Use --show-labels to view the labels a resource currently carries
[root@master-01 ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master-01 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-01,kubernetes.io/os=linux,node.kubernetes.io/node=
master-02 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-02,kubernetes.io/os=linux,node.kubernetes.io/node=
master-03 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-03,kubernetes.io/os=linux,node.kubernetes.io/node=
node-01 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-01,kubernetes.io/os=linux,node.kubernetes.io/node=
node-02 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-02,kubernetes.io/os=linux,node.kubernetes.io/node=
Select nodes whose kubernetes.io/hostname is node-01 or node-02
[root@master-01 ~]# kubectl get nodes -l 'kubernetes.io/hostname in (node-01,node-02)' --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node-01 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-01,kubernetes.io/os=linux,node.kubernetes.io/node=
node-02 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-02,kubernetes.io/os=linux,node.kubernetes.io/node=
Select nodes whose beta.kubernetes.io/arch is amd64, excluding those whose kubernetes.io/hostname is master-01 or master-03
[root@master-01 ~]# kubectl get nodes -l 'beta.kubernetes.io/arch=amd64,kubernetes.io/hostname notin (master-01,master-03)' --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master-02 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-02,kubernetes.io/os=linux,node.kubernetes.io/node=
node-01 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-01,kubernetes.io/os=linux,node.kubernetes.io/node=
node-02 Ready <none> 25d v1.30.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-02,kubernetes.io/os=linux,node.kubernetes.io/node=
Select Pods that have a label whose key is app
[root@master-01 ~]# kubectl get pods -l app --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-794cb4fc9f-99s5r 1/1 Running 0 118s app=nginx,pod-template-hash=794cb4fc9f
nginx-deployment-794cb4fc9f-lsw5w 1/1 Running 0 118s app=nginx,pod-template-hash=794cb4fc9f
nginx-deployment-794cb4fc9f-mp9r7 1/1 Running 0 118s app=nginx,pod-template-hash=794cb4fc9f
Use -L with the key app to show that label's value as an extra column (in plain terms: a new column named APP is added)
[root@master-01 ~]# kubectl get pods -Lapp
NAME READY STATUS RESTARTS AGE APP
nginx-deployment-794cb4fc9f-99s5r 1/1 Running 0 12m nginx
nginx-deployment-794cb4fc9f-lsw5w 1/1 Running 0 12m nginx
nginx-deployment-794cb4fc9f-mp9r7 1/1 Running 0 12m nginx
Modifying labels
Add the label version=v1 to the nginx-deployment Deployment
[root@master-01 ~]# kubectl label deployments nginx-deployment version=v1
deployment.apps/nginx-deployment labeled
Change the label value to v2
[root@master-01 ~]# kubectl label deployments nginx-deployment version=v2 --overwrite
deployment.apps/nginx-deployment labeled
View the current labels
[root@master-01 ~]# kubectl get deployments nginx-deployment --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
nginx-deployment 3/3 3 3 45m app=nginx,version=v2
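For completeness, a label can also be removed by appending a hyphen to its key; for the same Deployment that would be:
kubectl label deployments nginx-deployment version-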
Service
A Kubernetes Service plays a crucial role in the cluster. Its purpose and value show up mainly in the following areas:
- Service discovery and load balancing
- A Service gives Pods a single, stable entry point, so clients can reach the backend application without caring about how the Pods behind it change.
- Through a Service, Kubernetes automatically load-balances client requests across the backend Pods, improving availability and throughput.
- The load-balancing behavior depends on the kube-proxy mode: iptables mode distributes requests roughly at random, while IPVS mode supports strategies such as round-robin and least connections that can be configured as needed.
- Hiding changes to backend Endpoints
- Pod IP addresses are assigned dynamically and can change for many reasons. A Service is associated with Pods through a label selector, so as long as the Pods keep their labels, the Service keeps serving clients no matter how the Pods change.
- Clients therefore do not need to know the backend Pods' IP addresses and ports; they only access the Service's address and port.
- Providing a stable access address
- When a Service is created, Kubernetes assigns it a ClusterIP (a virtual IP) that is unique within the cluster and reachable from inside it.
- If the cluster runs a DNS service (such as CoreDNS), clients can also access the Service by name; DNS resolves the Service name to the corresponding ClusterIP.
- Supporting multiple access modes
- Kubernetes supports several Service types, including ClusterIP, NodePort, LoadBalancer, and ExternalName, so the access mode can be chosen per use case.
- For example, a ClusterIP Service is reachable only inside the cluster; a NodePort Service is reachable through a node's IP address and port; a LoadBalancer Service is exposed externally through a cloud provider's load balancer (a minimal sketch follows after this list).
- Supporting multiple port definitions
- A Service that needs to expose several ports can define multiple port entries, distinguished by name, which supports more complex networking requirements.
- Enabling microservice architectures
- In a microservice architecture, Services are the key component for inter-service communication and invocation: microservices discover and call each other through Services, giving a loosely coupled, highly cohesive architecture.
In short, a Kubernetes Service provides service discovery, load balancing, insulation from backend Endpoint changes, a stable access address, multiple access modes, and the plumbing for microservice architectures. It is one of the key building blocks for microservice and cloud-native applications on Kubernetes.
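The LoadBalancer type is mentioned above but not demonstrated later in this post. A minimal sketch, assuming the cluster runs on a cloud provider (or has an implementation such as MetalLB) that can actually provision the external load balancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb              # illustrative name
spec:
  type: LoadBalancer          # ask the provider for an external load balancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80                  # port exposed by the load balancer
    targetPort: 80            # port on the backend Pods
Once the load balancer has been provisioned, kubectl get svc nginx-lb shows the assigned address in the EXTERNAL-IP column.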
Define a Service and the backend Pods it fronts
[root@master-01 ~]# vim /k8spod/service/service.yaml
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
ports:
- containerPort: 80
View the created Service
[root@master-01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25d
my-service ClusterIP 10.96.231.131 <none> 80/TCP 34m
From a container in the same namespace inside the cluster, check how my-service resolves
[root@master-01 ~]# kubectl exec cluster-test-665f554bcc-bcw5v -- nslookup my-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: my-service.default.svc.cluster.local
Address: 10.96.231.131
Access my-service from a container: the Service proxies to the backend Pods and spreads requests across them (the responses below come from different Pods)
[root@master-01 ~]# kubectl exec -it cluster-test-665f554bcc-bcw5v -- curl my-service
2
[root@master-01 ~]# kubectl exec -it cluster-test-665f554bcc-bcw5v -- curl my-service
3
[root@master-01 ~]# kubectl exec -it cluster-test-665f554bcc-bcw5v -- curl my-service
1
NodePort
If a Service's type field is set to NodePort, Kubernetes automatically allocates a port from the range given by the --service-node-port-range flag (30000-32767 by default); the nodePort can also be set manually. Once the Service is created, every node in the cluster exposes that port, and the backend application can be reached through any node's IP plus the port.
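To check or change that range on a kubeadm-managed cluster (an assumption; other installers configure the API server differently), the flag lives in the kube-apiserver static Pod manifest:
# Inspect the current setting; if the flag is absent, the default 30000-32767 applies
grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml
# The flag would appear in the manifest's command list as, for example:
#   - --service-node-port-range=30000-32767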
A NodePort Service is defined as follows:
[root@master-01 ~]# vim /k8spod/service/NodePort.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service-nodeport
spec:
type: NodePort
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 32333
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
ports:
- containerPort: 80
Create the resources and check the port mapping
[root@master-01 service]# kubectl apply -f NodePort.yaml
service/my-service-nodeport created
deployment.apps/nginx-deployment unchanged
[root@master-01 service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 65d
my-service-nodeport NodePort 10.96.24.163 <none> 80:32333/TCP 5s
Access the application through a node IP plus the NodePort
[root@master-01 service]# curl 192.168.132.169:32333
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...output truncated...
Using a Service to proxy a service outside the cluster
Use cases: you want production workloads to reach an external middleware service by a fixed name rather than an IP address; a Service needs to point to a service in another namespace or another cluster; or you are migrating workloads into Kubernetes while part of the backend still runs outside the cluster.
A Service plus a manually defined Endpoints object looks like this:
[root@master-01 ~]# cat /k8spod/service/nginx-svc-ep-external.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-svc-external
name: nginx-svc-external
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: 443
type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
labels:
app: nginx-svc-external
name: nginx-svc-external
subsets:
- addresses:
- ip: 8.138.107.10
ports:
- name: https
port: 443
protocol: TCP
Create the resources and check the mapping
[root@master-01 ~]# kubectl apply -f /k8spod/service/nginx-svc-ep-external.yaml
service/nginx-svc-external created
endpoints/nginx-svc-external created
[root@master-01 service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 65d
nginx-svc-external ClusterIP 10.96.9.172 <none> 443/TCP 5m57s
[root@master-01 service]# kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.132.169:6443,192.168.132.170:6443,192.168.132.171:6443 65d
nginx-svc-external 8.138.107.10:443 7m9s
Access the Service directly from master-01
[root@master-01 ~]# curl -I -k https://10.96.9.172
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 31 Jul 2024 22:16:50 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
Link: <https://blog.caijxlinux.work/wp-json/>; rel="https://api.w.org/"
Strict-Transport-Security: max-age=31536000
Exec into a container and access the external service through the Service name
[root@master-01 ~]# kubectl exec -ti cluster-test-665f554bcc-bcw5v -- sh
# curl -I -k https://nginx-svc-external
HTTP/2 200
server: nginx
date: Wed, 31 Jul 2024 22:18:47 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
link: <https://blog.caijxlinux.work/wp-json/>; rel="https://api.w.org/"
strict-transport-security: max-age=31536000
# exit
ExternalName
An ExternalName Service is a special kind of Service: it has no selector and no Endpoints; instead of proxying traffic, it serves requests by returning an alias for an external service. For example, a Service can be defined whose backend is an external domain name, so that the domain can be reached through the Service name. When the Service defined in the file below is resolved with nslookup, the cluster DNS returns a CNAME record whose value is the configured externalName (blog.caijxlinux.work in this example):
[root@master-01 ~]# cat /k8spod/service/exter-name.yaml
apiVersion: v1
kind: Service
metadata:
name: exter-service
spec:
type: ExternalName
externalName: blog.caijxlinux.work
ports:
- name: https
port: 443
Create the resource and check its status; note that this Service has no ClusterIP
[root@master-01 ~]# kubectl create -f /k8spod/service/exter-name.yaml
service/exter-service created
[root@master-01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exter-service ExternalName <none> blog.caijxlinux.work 443/TCP 5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 66d
Enter the test container and use nslookup and dig to confirm that the Service resolves to the backend domain
[root@master-01 ~]# kubectl exec -it cluster-test-665f554bcc-bcw5v -- sh
# nslookup exter-service
Server: 10.96.0.10
Address: 10.96.0.10#53
exter-service.default.svc.cluster.local canonical name = blog.caijxlinux.work.
Name: blog.caijxlinux.work
Address: 8.138.107.10
# dig exter-service.default.svc.cluster.local +short
blog.caijxlinux.work.
8.138.107.10
Access the backend domain through the Service name
# curl -kI exter-service
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 01 Aug 2024 18:36:23 GMT
Content-Type: text/html
Content-Length: 138
Last-Modified: Thu, 29 Feb 2024 14:59:27 GMT
Connection: keep-alive
ETag: "65e09bcf-8a"
Accept-Ranges: bytes
Multi-port Services
A multi-port Service is a Service object configured with several ports, so that external access can be mapped to different ports on the backend Pods. This lets a single application expose several endpoints through one Service, each reached via its own port. In the Service definition, multiple ports are listed under the ports field; each entry can carry a name, which identifies the service port, and a targetPort, which is the port on the backend Pod.
[root@master-01 ~]# vim /k8spod/service/muti-ports.yaml
apiVersion: v1
kind: Service
metadata:
name: muti-ports
spec:
selector:
app: my-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: custom
protocol: TCP
port: 9000
targetPort: 9000
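The selector above expects Pods labeled app: my-app, which are not created anywhere else in this post. A minimal sketch of a matching backend (the name, image, and ports are illustrative assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app            # must match the Service selector
    spec:
      containers:
      - name: my-app
        image: nginx:1.25      # illustrative image
        ports:
        - containerPort: 8080  # receives traffic for the Service's http port
        - containerPort: 9000  # receives traffic for the Service's custom port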
Create the resource and check its status; the Service now exposes both ports
[root@master-01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 66d
muti-ports ClusterIP 10.96.61.162 <none> 80/TCP,9000/TCP 55s
Ingress
A Kubernetes Ingress is an API object that manages external access to services in the cluster. It provides a way to define HTTP routing rules for that access. The Ingress controller is the component that implements those rules: it watches Ingress resources and configures a load balancer or proxy accordingly, so that external traffic is forwarded to the right backends.
The ingress-nginx controller manifests are maintained at https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters (the image addresses need to be changed to a reachable registry).
The Ingress controller configuration used here is as follows:
[root@master-01 ~]# cat /k8spod/service/deploy-nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resourceNames:
- ingress-nginx-leader
resources:
- leases
verbs:
- get
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
minReadySeconds: 0
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
spec:
containers:
- args:
- /nginx-ingress-controller
- --election-id=ingress-nginx-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: registry.cn-beijing.aliyuncs.com/dotbalo/ingress-nginx-controller:v1.7.1
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- containerPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.cn-beijing.aliyuncs.com/dotbalo/kube-webhook-certgen:v20230312
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.cn-beijing.aliyuncs.com/dotbalo/kube-webhook-certgen:v20230312
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.7.1
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
Create the Ingress controller
[root@master-01 ~]# kubectl create -f /k8spod/service/deploy-nginx.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
Expose the service externally through a domain name plus Ingress. The configuration is below; the Pod, Service, and Ingress definitions are kept in one YAML file so that the relationship between the three is easy to follow.
[root@master-01 service]# cat /k8spod/service/ingress-domain.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: back-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
spec:
ingressClassName: nginx
rules:
- host: nginx.test.com
http:
paths:
- backend:
service:
name: back-service
port:
number: 80
path: /
pathType: ImplementationSpecific
Parameter | Description |
---|---|
pathType | How the path is matched; ImplementationSpecific, Exact, and Prefix are currently supported |
Exact | Exact match: if the configured path is /bar, then /bar/ is not routed |
Prefix | Prefix match on URL path elements split by /: with a path of /abc, requests such as /abc/bbb also match; this is the most commonly used option |
ImplementationSpecific | Matching is left to the IngressClass; an implementation may treat it as its own pathType or handle it the same as Prefix or Exact |
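As a quick illustration of the difference between Exact and Prefix (the host and service names here are made up), a single Ingress can mix both match types:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pathtype-demo          # illustrative
spec:
  ingressClassName: nginx
  rules:
  - host: demo.test.com        # illustrative host
    http:
      paths:
      - path: /login           # only /login matches; /login/ does not
        pathType: Exact
        backend:
          service:
            name: login-service
            port:
              number: 80
      - path: /api             # /api, /api/v1, /api/v1/users all match
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80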
Create the resources and check their status
[root@master-01 ~]# kubectl create -f /k8spod/service/ingress-domain.yaml
deployment.apps/nginx created
service/back-service created
ingress.networking.k8s.io/nginx-ingress created
[root@master-01 ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress nginx nginx.test.com 80 16s
[root@master-01 service]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.155.204 <none> 80:31058/TCP,443:31804/TCP 21m
ingress-nginx-controller-admission ClusterIP 10.96.239.106 <none> 443/TCP 21m
[root@master-01 ~]# kubectl describe ingress nginx-ingress
Name: nginx-ingress
Labels: <none>
Namespace: default
Address: 192.168.132.172
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
nginx.test.com
/ back-service:80 (172.16.133.138:80,172.16.190.29:80,172.16.222.7:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m50s (x2 over 10m) nginx-ingress-controller Scheduled for sync
Since this is a lab environment without a DNS server, the hosts file is edited directly to map the domain name to any one of the nodes
[root@master-01 service]# cat /etc/hosts | grep nginx
192.168.132.173 nginx.test.com
Accessing the domain name, traffic is forwarded to the Pods as expected, completing external publishing via domain name plus Ingress
[root@master-01 service]# curl nginx.test.com:31058
pod-3
[root@master-01 service]# curl nginx.test.com:31058
pod-2
[root@master-01 service]# curl nginx.test.com:31058
pod-2
[root@master-01 service]# curl nginx.test.com:31058
pod-1
Ingress special case: publishing a service without a domain name
The configuration is as follows:
[root@master-01 ~]# vim /k8spod/service/ingress-nodomain.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-guangzhou.aliyuncs.com/caijxlinux/nginx:v1.15.1
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: back-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
spec:
ingressClassName: nginx
rules:
- http:
paths:
- backend:
service:
name: back-service
port:
number: 80
path: /
pathType: ImplementationSpecific
Create the resources and check their status
[root@master-01 service]# kubectl create -f ingress-nodomain.yaml
deployment.apps/nginx created
service/back-service created
ingress.networking.k8s.io/nginx-ingress created
[root@master-01 service]# kubectl get pod -n ingress-nginx -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-8pmqn 0/1 Completed 0 97m 172.16.133.136 master-03 <none> <none>
ingress-nginx-admission-patch-pcvpq 0/1 Completed 0 97m 172.16.190.26 node-01 <none> <none>
ingress-nginx-controller-6b7b45b68f-mkh6h 1/1 Running 0 97m 172.16.190.27 node-01 <none> <none>
[root@master-01 ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress nginx * 192.168.132.172 80 7m5s
Access the service through the NGINX instance run by the Ingress controller; requests are proxied to the backend Pod successfully
[root@master-01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster-test-665f554bcc-bcw5v 1/1 Running 116 (14m ago) 50d
nginx-76649d58b-svq5h 1/1 Running 0 7s
[root@master-01 ~]# kubectl exec -it nginx-76649d58b-svq5h -- sh
# echo 1 > /usr/share/nginx/html/index.html
# exit
[root@master-01 ~]# curl 172.16.190.27
1
[root@master-01 ~]# curl 192.168.132.172:31058
1
Log in to the Ingress controller's NGINX container; nginx.conf shows how the proxying is implemented
[root@master-01 ~]# kubectl exec -it ingress-nginx-controller-6b7b45b68f-mkh6h -n ingress-nginx -- sh
/etc/nginx $ cat nginx.conf | grep -A 5 back-service
set $service_name "back-service";
set $service_port "80";
set $location_path "/";
set $global_rate_limit_exceeding n;
rewrite_by_lua_block {
--
set $proxy_upstream_name "default-back-service-80";
set $proxy_host $proxy_upstream_name;
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;