The pod creation process is shown in the figure above:

1. The user sends a create-pod request to the apiserver. The apiserver first authenticates the user (is this a legitimate user?) and authorizes the request (does the user have permission to create pods?). If that passes, the submitted resource goes through admission control, which validates its format and syntax against the corresponding API definition in the apiserver.
2. Only when authentication/authorization and admission control succeed does the apiserver write the pod object to etcd; otherwise the request is rejected. Once etcd has stored the data, it returns an event to the apiserver confirming the write, and the apiserver replies to the user that the pod has been created.
3. The scheduler, watching the apiserver for resource changes, picks up the new pod and schedules it: first the predicates phase (filter out nodes that cannot run the pod), then the priorities phase (score the remaining nodes; the highest-scoring node gets the pod).
4. The scheduler returns the scheduling result (which node will run the pod) to the apiserver, which stores it in etcd. After etcd confirms the update, the apiserver acknowledges the new pod status to the scheduler.
5. The kubelet on the chosen node, also watching the apiserver, sees an event that concerns it and calls the local container engine to start the pod.
6. When the container engine has the pod running, it reports this to the kubelet, the kubelet reports it to the apiserver, and the apiserver stores the updated pod status in etcd; etcd's update-complete event is relayed back to the kubelet via the apiserver.
7. If the user now queries the pod, its status is retrieved from etcd through the apiserver.

That is the rough process of creating a pod.
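The same flow can be observed from the command line. A minimal sketch (the pod name test-pod is an arbitrary example, run in the default namespace):

# Create a simple pod and watch the events emitted at each step
kubectl run test-pod --image=nginx:1.20.2

# The event list mirrors the flow described above:
# Scheduled (scheduler), Pulling/Pulled, Created, Started (kubelet)
kubectl get events --field-selector involvedObject.name=test-pod --sort-by=.metadata.creationTimestamp

# The status persisted in etcd is what the user sees via the apiserver
kubectl get pod test-pod -o wide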
The Pause container, also called the Infra container, is the base container of a pod. Its image is only a few hundred KB, it is configured in the kubelet, and its main function is to provide network communication among the containers in a pod.
After the Infra container is created, it initializes the Network Namespace; the other containers then join it and share its network. So for two containers A and B in the same Pod: they see the same network devices and share the Pod's single IP (the Infra container's IP), they can reach each other directly over localhost, and the Pod's network lifetime is tied to the Infra container rather than to A or B.
1. Run a pod and exec into the container to check the iflink number of its interface.
2. On the host node running the pod, verify the matching veth network interface (see the sketch below).
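A minimal sketch of these two steps (the pod name and the returned index are illustrative placeholders):

# 1. Inside the container, read the peer interface index of eth0
kubectl exec -it <pod-name> -- cat /sys/class/net/eth0/iflink
# suppose it prints 12

# 2. On the node that runs the pod, find the veth device with that index
ip link show | grep -E '^12:'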
error_log stderr;
events {
  worker_connections 1024;
}
http {
  access_log /dev/stdout;
  server {
    listen 80 default_server;
    server_name www.mysite.com;
    location / {
      index index.html index.php;
      root /usr/share/nginx/html;
    }
    location ~ \.php$ {
      root /usr/share/nginx/html;
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
    }
  }
}
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
nerdctl run -d -p 80:80 --name pause-container-test registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
root@deploy:~# mkdir html
root@deploy:~# cd html
root@deploy:~/html# echo "<h1>pause container web test</h1>" >index.html
root@deploy:~/html# cat >> index.php << EOF
> <?php
> phpinfo();
> ?>
> EOF
root@deploy:~/html# ll
total 16
drwxr-xr-x 2 root root 4096 May 26 00:03 ./
drwxr-xr-x 9 root root 4096 May 26 00:02 ../
-rw-r--r-- 1 root root 34 May 26 00:02 index.html
-rw-r--r-- 1 root root 25 May 26 00:03 index.php
root@deploy:~/html# cat index.html
<h1>pause container web test</h1>
root@deploy:~/html# cat index.php
<?php
phpinfo();
?>
root@deploy:~/html#
nerdctl run -d --name nginx-container-test \
  -v `pwd`/nginx.conf:/etc/nginx/nginx.conf \
  -v `pwd`/html:/usr/share/nginx/html \
  --net=container:pause-container-test \
  nginx:1.20.2
nerdctl run -d --name php-container-test \
  -v `pwd`/html:/usr/share/nginx/html \
  --net=container:pause-container-test \
  php:5.6.40-fpm
Access index.php on port 80 of the host to check whether the PHP page is served through nginx and php-fpm.
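A quick check, assuming you run it on the host itself (adjust the address for remote access):

curl http://127.0.0.1/index.html
curl -I http://127.0.0.1/index.php   # should be answered by php-fpm behind nginx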
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-container
        image: nginx:1.20.0
        #imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/usr/share/nginx/html/myserver"
          name: myserver-data
        - name: tz-config
          mountPath: /etc/localtime
      initContainers:
      - name: init-web-data
        image: centos:7.9.2009
        command: ['/bin/bash','-c',"for i in `seq 1 10`;do echo '<h1>'$i web page at $(date +%Y%m%d%H%M%S) '<h1>' >> /data/nginx/html/myserver/index.html;sleep 1;done"]
        volumeMounts:
        - mountPath: "/data/nginx/html/myserver"
          name: myserver-data
        - name: tz-config
          mountPath: /etc/localtime
      - name: change-data-owner
        image: busybox:1.28
        command: ['/bin/sh','-c',"/bin/chmod 644 /data/nginx/html/myserver/* -R"]
        volumeMounts:
        - mountPath: "/data/nginx/html/myserver"
          name: myserver-data
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: myserver-data
        hostPath:
          path: /tmp/data/html
      - name: tz-config
        hostPath:
          path: /etc/localtime
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend
The manifest above uses two init containers to prepare the nginx main container: one generates the web page data, the other fixes the file permissions. Init containers are defined with the initContainers field under spec.template.spec.
Apply the manifest.
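For example, assuming the manifest above is saved as myserver-myapp-init-container.yaml:

kubectl apply -f myserver-myapp-init-container.yaml
# Watch the init containers run to completion before the nginx container starts
kubectl get pods -n myserver -w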
Access the nginx service and check whether the data has been generated.
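A quick check against the NodePort defined in the Service above (replace <node-ip> with the address of one of your nodes):

curl http://<node-ip>:30080/myserver/index.html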
A health check is a check of a container's health status; it is mainly used to confirm whether the services inside the container are healthy. The check is periodic, i.e. it runs every few seconds or at a specified interval.
version: '3.6'
services:
  nginx-service:
    image: nginx:1.20.2
    container_name: nginx-web1
    expose:
      - 80
      - 443
    ports:
      - "80:80"
      - "443:443"
    restart: always
    healthcheck: # add a health check for the service
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 5s # interval between health checks, default 30s
      timeout: 5s # timeout for a single check, default 30s
      retries: 3 # default 3; after this many consecutive failures the container is marked unhealthy
      start_period: 60s # startup grace period: checks still run every interval, and only after the period do retries consecutive failures mark the container unhealthy; a successful check during the period immediately marks it healthy
Apply the configuration.
docker-compose -f docker-compose-demo.yaml up -d
Verify the container health status.
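For example, using the container name nginx-web1 defined in the compose file:

docker ps   # the STATUS column shows (health: starting) / (healthy) / (unhealthy)
docker inspect --format '{{.State.Health.Status}}' nginx-web1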
FROM nginx:1.20.2

HEALTHCHECK --interval=5s --timeout=2s --retries=3 \
  CMD curl --silent --fail localhost:80 || exit 1
Build the image.
docker build -t mynginx:1.20.2 -f ./dockerfile .
Run the container.
docker run -it -d -p 80:80 mynginx:1.20.2
Verify the health check.
root@k8s-deploy:/compose# docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS                            PORTS                               NAMES
c3af9bdd5a41   mynginx:1.20.2   "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds (health: starting)   0.0.0.0:80->80/tcp, :::80->80/tcp   keen_brown
root@k8s-deploy:/compose# docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS                   PORTS                               NAMES
c3af9bdd5a41   mynginx:1.20.2   "/docker-entrypoint.…"   9 seconds ago   Up 8 seconds (healthy)   0.0.0.0:80->80/tcp, :::80->80/tcp   keen_brown
root@k8s-deploy:/compose#
Until the first check passes, the container is in the starting state; once a check succeeds (the check command exits with status 0) it becomes healthy; after the configured number of failed checks (exit status 1) it becomes unhealthy.
Pod lifecycle: a postStart hook can be configured to run when the pod starts, livenessProbe and readinessProbe can be configured for the running phase, and a preStop action can be configured to run before the pod stops.
A probe is a periodic diagnostic performed by the kubelet on a container to keep the Pod in a running state. To perform a diagnostic, the kubelet calls a Handler implemented by the container (also called a Hook). There are three types of handlers: ExecAction (run a command inside the container; success means exit status 0), TCPSocketAction (attempt a TCP connection to a container port; success means the port is open), and HTTPGetAction (send an HTTP GET request to a container port and path; success means a 2xx or 3xx response).
Each probe returns one of three results: Success (the container passed the diagnostic), Failure (the container failed the diagnostic), or Unknown (the diagnostic itself failed, so no action is taken).
Pod restart policy: once a Pod has probes configured, a failed check is handled according to restartPolicy: Always (the default: always restart the container), OnFailure (restart only when the container exits abnormally), or Never (never restart).
Probes have many configuration fields that allow precise control over liveness and readiness checking, such as initialDelaySeconds, periodSeconds, timeoutSeconds, successThreshold, and failureThreshold. Official documentation: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
An HTTP probe can configure additional fields on httpGet, such as host, scheme, path, port, and httpHeaders.
httpGet
Implement Pod liveness probing with httpGet:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: myserver-myapp-frontend-label
    #matchExpressions:
    #  - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend-label
    spec:
      containers:
      - name: myserver-myapp-frontend-label
        image: nginx:1.20.2
        ports:
        - containerPort: 80
        #readinessProbe:
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend-service
  namespace: myserver
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend-label
The manifest above uses an httpGet probe to check the liveness of the nginx pod: every 3 seconds it requests /index.html on port 80 of the container with a 1-second timeout. If 3 consecutive requests fail, the liveness check is considered failed; a single successful request is enough for the check to pass.
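To see the probe in action, a sketch assuming the manifest above is saved as nginx-liveness-httpget.yaml: remove /index.html inside the container and the kubelet restarts it after three consecutive failures.

kubectl apply -f nginx-liveness-httpget.yaml
# Break the probe target, then watch RESTARTS increase and the Liveness events appear
kubectl exec -n myserver $(kubectl get pod -n myserver -l app=myserver-myapp-frontend-label -o name | head -n1) -- mv /usr/share/nginx/html/index.html /tmp/
kubectl get pods -n myserver -w
kubectl describe pod -n myserver -l app=myserver-myapp-frontend-label | grep -i liveness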
tcpSocket
Implement Pod liveness probing with tcpSocket:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: myserver-myapp-frontend-label
    #matchExpressions:
    #  - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend-label
    spec:
      containers:
      - name: myserver-myapp-frontend-label
        image: nginx:1.20.2
        ports:
        - containerPort: 80
        livenessProbe:
        #readinessProbe:
          tcpSocket:
            port: 80
            #port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend-service
  namespace: myserver
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend-label
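The manifest above checks liveness by attempting a TCP connection to port 80 of the container every 3 seconds; switching to the commented-out port 8080 makes every check fail, which is a convenient way to watch the kubelet restart the container. A sketch, assuming the manifest is saved as nginx-liveness-tcpsocket.yaml:

kubectl apply -f nginx-liveness-tcpsocket.yaml
kubectl describe pod -n myserver -l app=myserver-myapp-frontend-label | grep -iA2 liveness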
exec
Implement Pod liveness probing by executing a command (exec):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-redis-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: myserver-myapp-redis-label
    #matchExpressions:
    #  - {key: app, operator: In, values: [myserver-myapp-redis,ng-rs-81]}
  template:
    metadata:
      labels:
        app: myserver-myapp-redis-label
    spec:
      containers:
      - name: myserver-myapp-redis-container
        image: redis
        ports:
        - containerPort: 6379
        livenessProbe:
        #readinessProbe:
          exec:
            command:
            #- /apps/redis/bin/redis-cli
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-redis-service
  namespace: myserver
spec:
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 40016
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-redis-label
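This probe runs /usr/local/bin/redis-cli quit inside the redis container; a non-zero exit code marks the check as failed. A quick sanity check, assuming the manifest is saved as redis-liveness-exec.yaml:

kubectl apply -f redis-liveness-exec.yaml
kubectl get pods -n myserver -l app=myserver-myapp-redis-label
# Talk to redis the same way the probe does, via redis-cli
kubectl exec -n myserver $(kubectl get pod -n myserver -l app=myserver-myapp-redis-label -o name | head -n1) -- /usr/local/bin/redis-cli ping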
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: myserver-myapp-frontend-label
    #matchExpressions:
    #  - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend-label
    spec:
      containers:
      - name: myserver-myapp-frontend-label
        image: nginx:1.20.2
        ports:
        - containerPort: 80
        startupProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 5 # delay before the first check
          failureThreshold: 3    # consecutive failures before the probe is considered failed
          periodSeconds: 3       # interval between checks
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend-service
  namespace: myserver
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend-label
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 3
  selector:
    matchLabels: #rs or deployment
      app: myserver-myapp-frontend-label
    #matchExpressions:
    #  - {key: app, operator: In, values: [myserver-myapp-frontend,ng-rs-81]}
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend-label
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: myserver-myapp-frontend-label
        image: nginx:1.20.2
        ports:
        - containerPort: 80
        startupProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 5 # delay before the first check
          failureThreshold: 3    # consecutive failures before the probe is considered failed
          periodSeconds: 3       # interval between checks
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend-service
  namespace: myserver
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend-label
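In this combined example the startupProbe runs first; liveness and readiness checks are disabled until it succeeds, and the readiness result decides whether a pod is listed as a Service endpoint. A sketch to observe that, assuming the manifest is saved as nginx-probes-combined.yaml:

kubectl apply -f nginx-probes-combined.yaml
# Only pods whose readiness probe passes show up as endpoints of the Service
kubectl get endpoints -n myserver myserver-myapp-frontend-service
kubectl get pods -n myserver -l app=myserver-myapp-frontend-label -o wide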
Official documentation: https://kubernetes.io/zh/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
postStart and preStop handler functions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp1-lifecycle
  labels:
    app: myserver-myapp1-lifecycle
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp1-lifecycle-label
  template:
    metadata:
      labels:
        app: myserver-myapp1-lifecycle-label
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: myserver-myapp1-lifecycle-label
        image: tomcat:7.0.94-alpine
        lifecycle:
          postStart:
            exec:
              #command: e.g. register this instance with a service registry
              command: ["/bin/sh", "-c", "echo 'Hello from the postStart handler' >> /usr/local/tomcat/webapps/ROOT/index.html"]
            #httpGet:
            #  #path: /monitor/monitor.html
            #  host: www.magedu.com
            #  port: 80
            #  scheme: HTTP
            #  path: index.html
          preStop:
            exec:
              #command: e.g. deregister this instance from the service registry
              command:
              - /bin/bash
              - -c
              - 'sleep 10000000'
              #command: ["/usr/local/tomcat/bin/catalina.sh","stop"]
              #command: ['/bin/sh','-c','/path/preStop.sh']
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp1-lifecycle-service
  namespace: myserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
    nodePort: 30012
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp1-lifecycle-label
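To verify the handlers, the line appended by postStart can be read back over the Service's NodePort (30012); a sketch assuming the manifest is saved as tomcat-lifecycle.yaml and <node-ip> is one of your nodes:

kubectl apply -f tomcat-lifecycle.yaml
# postStart appended a line to ROOT/index.html, so it is visible via the NodePort
curl http://<node-ip>:30012/index.html
# preStop runs before the TERM signal is sent; with the long sleep above, deleting the
# pod blocks for about terminationGracePeriodSeconds (60s) before it is force-killed,
# and the Deployment then creates a replacement pod
kubectl delete pod -n myserver -l app=myserver-myapp1-lifecycle-label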