Using a Service to Proxy Pods

Software versions:

  • Docker (latest)
  • Kubernetes 1.23.1
  • Calico 3.25

Node setup:

Node    IP           OS          Role        CPU      Memory  Disk
node1   10.80.10.1   CentOS 7.9  k8s-master  4 cores  8 GB    20 GB
node2   10.80.10.2   CentOS 7.9  k8s-node    4 cores  8 GB    20 GB

Why Services exist:

In Kubernetes, pods have a life cycle: when a pod is restarted, its IP address is very likely to change. If our services hard-code pod IP addresses, then whenever a pod dies or restarts, every other service that depended on it can no longer find it. To solve this problem, Kubernetes defines the Service resource. A Service defines a fixed entry point for accessing a piece of functionality: clients connect to the entry point, and through it reach the application instances behind it. Concretely, a Service is a logical set of pods, and the set of pods the Service can reach is normally determined by a label selector.
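
The churn is easy to see on a running cluster (the pod name below is just an example; Deployment-managed pods get random suffixes): delete a pod and its replacement usually comes back with a different IP.

# kubectl get pods -o wide                        # note the pod's current IP
# kubectl delete pod my-nginx-5fff879697-2rmvj    # the Deployment starts a replacement
# kubectl get pods -o wide                        # the replacement usually has a new IP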

Service overview:

A Service is a fixed access layer: clients reach the backend pods associated with a Service by connecting to the Service's IP and port. Services depend on an add-on deployed on top of the Kubernetes cluster, the cluster DNS service (different Kubernetes versions ship different defaults: versions before 1.11 used kube-dns, newer versions use CoreDNS). Because Service name resolution relies on this DNS add-on, it has to be deployed after the cluster itself. Kubernetes also depends on a third-party network plugin (Flannel, Calico, and so on) to provide pod networking.

Every Kubernetes node runs a component called kube-proxy. kube-proxy constantly monitors the apiserver for changes to Service resources: it stays connected to the apiserver on the master and picks up any change of state related to Services, using watch, a request method built into Kubernetes. As soon as a Service changes (for example, it is created or deleted), kube-proxy converts the change into rules on its own node that implement the Service's scheduling and steer requests to the particular backend pods. Depending on how the Service is implemented, these rules may be iptables rules or IPVS rules.
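
On a kubeadm-installed cluster like this one, the proxy mode can be checked in the kube-proxy ConfigMap (an empty mode value means the iptables default), and in iptables mode the generated rules are visible on each node:

# kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
# iptables -t nat -L KUBE-SERVICES -n | head      # run on a node; iptables mode only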

How a Service works:

When Kubernetes creates a Service, it looks up pods using the Service's label selector and, based on the result, creates an Endpoints object with the same name as the Service. When pod addresses change, the Endpoints object changes with them. When the Service receives a request from a front-end client, it uses the Endpoints object to find the pod address to forward the request to (which node's pod the request lands on is decided by kube-proxy's load balancing).
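
This bookkeeping is easy to observe with the my-nginx Service created later in this post: watch its Endpoints object while deleting its pods in another terminal, and the addresses swap out as replacements start.

# kubectl get endpoints my-nginx --watch          # leave running
# kubectl delete pod -l run=my-nginx              # in a second terminal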

There are three classes of IP address in a Kubernetes cluster (illustrated with this cluster's values just after the list):

  • node network: the IP addresses of the nodes themselves.

  • pod network: the IP addresses assigned to pods.

  • cluster network: the virtual IP addresses assigned to Services (also called the service network).
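
With the values that appear later in this post, the three networks look like this:

node network:    10.80.10.1, 10.80.10.2           (the hosts themselves)
pod network:     10.244.36.96, 10.244.36.97, ...  (assigned by Calico)
cluster network: 10.98.84.78                      (virtual IPs held by Services)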

All of the following commands are run on node1 (the master):

Inspect the Service fields:

# kubectl explain service
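
kubectl explain can also drill into nested fields, which helps when writing the manifests below:

# kubectl explain service.spec
# kubectl explain service.spec.ports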

The four Service types (a LoadBalancer sketch follows this list):

  • ExternalName: for containers inside the cluster that need to reach a resource outside it by name; it has no selector and no Endpoints object of its own.

  • ClusterIP: exposes the service on a cluster-internal IP. With this type the service is only reachable from within the cluster; this is the default Service type.

  • NodePort: exposes the service on each node's IP at a static port. A request to <nodeip>:<nodeport> is proxied to the backend pods: client -> nodeip:nodeport -> service ip:serviceport -> podip:containerport.

  • LoadBalancer: exposes the service externally using a cloud provider's load balancer, which routes to the underlying NodePort and ClusterIP services.
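
ClusterIP, NodePort, and ExternalName are each demonstrated below. LoadBalancer only provisions anything on a cloud provider (or with an add-on such as MetalLB), but for completeness a minimal manifest would look like the following sketch, reusing the run=my-nginx pods created below; the Service name is illustrative and is not used elsewhere in this post:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-lb        # illustrative name
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx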

Create the Deployment for the ClusterIP example:

# mkdir /root/service && cd /root/service
# vim pod_test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
# kubectl apply -f pod_test.yaml
# kubectl get pods -l run=my-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-5fff879697-2rmvj 1/1 Running 0 45s 10.244.36.96 k8s-node1 <none> <none>
my-nginx-5fff879697-r6b22 1/1 Running 0 45s 10.244.36.97 k8s-node1 <none> <none>

Test access:

# curl 10.244.36.96
# curl 10.244.36.97
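
Both pod IPs should return the default nginx welcome page (abridged below); note that pod-network addresses like these are only reachable from a cluster node or from another pod:

# curl 10.244.36.96
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
...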

Create the Service (type: ClusterIP):

# vim service_test.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
# kubectl apply -f service_test.yaml
# kubectl get svc -l run=my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx ClusterIP 10.98.84.78 <none> 80/TCP 48s

Test access:

# curl 10.98.84.78
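
Because the DNS add-on registers a record for every Service, the same Service is also reachable by name from inside any pod. A throwaway busybox pod is an easy way to check (it is removed again on exit):

# kubectl run dns-test --image=busybox --rm -it --restart=Never -- wget -q -O - my-nginx.default.svc.cluster.local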

View the Service details:

# kubectl describe svc my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: <none>
Selector: run=my-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.84.78
IPs: 10.98.84.78
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.36.96:80,10.244.36.97:80
Session Affinity: None
Events: <none>
# kubectl get ep my-nginx
NAME ENDPOINTS AGE
my-nginx 10.244.36.96:80,10.244.36.97:80 115s
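
The Endpoints object tracks the selector continuously: scaling the Deployment adds or removes addresses right away (the new pod's IP will differ on your cluster):

# kubectl scale deployment my-nginx --replicas=3
# kubectl get ep my-nginx                          # now lists three addresses
# kubectl scale deployment my-nginx --replicas=2   # scale back down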

Create the Deployment for the NodePort example:

# vim pod_nodeport.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-nodeport
spec:
  selector:
    matchLabels:
      run: my-nginx-nodeport
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx-nodeport
    spec:
      containers:
      - name: my-nginx-nodeport-container
        image: nginx:1.19.10
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
# kubectl apply -f pod_nodeport.yaml
# kubectl get pods -l run=my-nginx-nodeport
NAME READY STATUS RESTARTS AGE
my-nginx-nodeport-644bd8789-qdjnk 1/1 Running 0 11s
my-nginx-nodeport-644bd8789-wglc9 1/1 Running 0 11s

Create the Service (type: NodePort):

# vim service_nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport
  labels:
    run: my-nginx-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30380
  selector:
    run: my-nginx-nodeport
# kubectl apply -f service_nodeport.yaml
# kubectl get svc -l run=my-nginx-nodeport
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-nodeport NodePort 10.103.132.197 <none> 80:30380/TCP 13s

Browser test: http://10.80.10.1:30380/
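
kube-proxy opens the node port on every node, so the service answers on either node's IP even if all the pods sit on one node. Note that nodePort must fall within the apiserver's service-node-port-range, which defaults to 30000-32767:

# curl http://10.80.10.1:30380/
# curl http://10.80.10.2:30380/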

# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
my-nginx ClusterIP 10.98.84.78 <none> 80/TCP 12m
my-nginx-nodeport NodePort 10.103.132.197 <none> 80:30380/TCP 2m13s
# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.80.10.1:6443 2d20h
my-nginx 10.244.36.96:80,10.244.36.97:80 12m
my-nginx-nodeport 10.244.36.98:80,10.244.36.99:80 2m18s

Create the client Deployment for the ExternalName example:

# vim client.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh","-c","sleep 36000"]
# kubectl apply -f client.yaml

Create the Service (type: ExternalName):

# vim client_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: client-svc
spec:
  type: ExternalName
  externalName: nginx-svc.nginx-ns.svc.cluster.local
  ports:
  - name: http
    port: 80
    targetPort: 80
# kubectl apply -f client_svc.yaml
# kubectl describe svc client-svc
Name: client-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ExternalName
IP Families: <none>
IP:
IPs: <none>
External Name: nginx-svc.nginx-ns.svc.cluster.local
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
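
The more typical use of ExternalName is to alias a name that lives entirely outside the cluster, so that in-cluster clients keep using one stable Service name while the real target can be swapped by editing a single object. A minimal sketch, with a placeholder hostname:

apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder for a real external hostname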

Create the namespace:

# kubectl create ns nginx-ns

Create the nginx Deployment and Service:

# vim server_nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.10
        imagePullPolicy: IfNotPresent
# kubectl apply -f server_nginx.yaml
# kubectl get pods -n nginx-ns
NAME READY STATUS RESTARTS AGE
nginx-596d77d94b-98zxk 1/1 Running 0 29s
# vim nginx_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
# kubectl apply -f nginx_svc.yaml
# kubectl get svc -n nginx-ns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc ClusterIP 10.109.186.244 <none> 80/TCP 15m

Exec into the client pod and test; both names return the same result:

# kubectl get pod | grep client
client-5c75576967-gd6jl 1/1 Running 0 23m
# kubectl exec -it client-5c75576967-gd6jl -- /bin/sh
# wget -q -O - client-svc.default.svc.cluster.local
# wget -q -O - nginx-svc.nginx-ns.svc.cluster.local
# exit
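
Both names return the same page because the DNS add-on answers the client-svc lookup with a CNAME record pointing at nginx-svc.nginx-ns.svc.cluster.local. This can be checked from inside the client pod (busybox's nslookup output format varies by image version):

# kubectl exec -it client-5c75576967-gd6jl -- nslookup client-svc.default.svc.cluster.local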

Clean up:

# cd /root/service && for i in *.yaml; do kubectl delete -f ${i}; done