Kubernetes

Deploying k8s as native processes

This article walks through a process-based (non-containerized) deployment of k8s, installed from yum packages; hopefully it is a useful reference for anyone building a similar environment.

Environment

192.168.102.53 k8s-master etcd registry
192.168.102.54 k8s-node1
192.168.102.55 k8s-node2
Disable the firewall and SELinux on all machines:

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
getenforce

Install the epel-release repository on all machines:

yum -y install epel-release

Add hosts entries on all machines (the registry name also needs to resolve, since the docker and kubernetes configuration below refers to registry:5000):

vim /etc/hosts
192.168.102.53 k8s-master
192.168.102.53 etcd
192.168.102.53 registry
192.168.102.54 k8s-node1
192.168.102.55 k8s-node2
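
A quick way to confirm the entries resolve from every machine (a simple sanity check, not part of the original steps):

ping -c 1 k8s-master
ping -c 1 etcd
ping -c 1 registry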

Deploy the master

Install etcd:

[root@k8s-master ~]# yum -y install etcd
[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"

Start and verify

[root@k8s-master ~]# systemctl start etcd 
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# systemctl status etcd

Test that etcd reads and writes work

[root@k8s-master ~]# etcdctl get testdir/testkey0                  // read back a value
[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health    // check cluster health
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
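
Note that the get above only succeeds if the key already exists; to exercise a full write/read round trip you can seed a test key first (a small sanity check using the etcdctl v2 commands):

[root@k8s-master ~]# etcdctl set testdir/testkey0 "hello"    // write a test value
[root@k8s-master ~]# etcdctl get testdir/testkey0            // should print hello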

Install docker

[root@k8s-master ~]# yum -y install docker

Edit the docker configuration so that images can be pulled from the local registry:

[root@k8s-master ~]# cp /etc/sysconfig/docker /etc/sysconfig/docker.bak
[root@k8s-master ~]# vim /etc/sysconfig/docker
OPTIONS="--insecure-registry registry:5000"
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker
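
To confirm docker picked up the insecure-registry option, recent docker versions list it in docker info (the exact output format varies by version):

[root@k8s-master ~]# docker info | grep -A 2 -i 'insecure'    // should mention registry:5000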

Install kubernetes

[root@k8s-master ~]# yum -y install kubernetes

Configure and start kubernetes. The following components need to run on the kubernetes master: kube-apiserver, kube-controller-manager, and kube-scheduler.

[root@k8s-master ~]# cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    // listen on all local interfaces
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"    // admission controllers
[root@k8s-master ~]# cp /etc/kubernetes/config /etc/kubernetes/config.bak
[root@k8s-master ~]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"

Start the services and enable them at boot

[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
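
A quick check that the apiserver is answering on the insecure port (port 8080 serves /version without authentication):

[root@k8s-master ~]# curl http://k8s-master:8080/version    // should return a small JSON block with the kubernetes version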

Deploy the nodes

Install docker

Install, configure, and start docker (the same on both nodes):

yum -y install docker
cp /etc/sysconfig/docker /etc/sysconfig/docker.bak
vim /etc/sysconfig/docker
OPTIONS="--insecure-registry registry:5000"
systemctl enable docker
systemctl start docker

Install kubernetes

Install, configure, and start kubernetes (the same on both nodes):

yum -y install kubernetes

The following components need to run on each kubernetes node: kubelet and kube-proxy.

cp /etc/kubernetes/config /etc/kubernetes/config.bak
vim /etc/kubernetes/config
KUBE_MASTER="--master=http://k8s-master:8080"
cp /etc/kubernetes/kubelet /etc/kubernetes/kubelet.bak
vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=<the node's hostname>"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
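
For example, on k8s-node1 these three values would read as follows (k8s-node2 substitutes its own hostname):

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node1"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"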

Start the services and enable them at boot:

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service

Check the status: on the master, list the cluster nodes and their state.

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
[root@k8s-master ~]# kubectl get nodes
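
With both nodes registered, the output looks roughly like this (illustrative only; the AGE column will differ):

NAME        STATUS    AGE
k8s-node1   Ready     1m
k8s-node2   Ready     1m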

Install flannel

Run the following on the master and on every node to install it:

yum -y install flannel

Configure flannel on the master and on every node:

cp /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

Configure the flannel key in etcd. flannel stores its configuration in etcd so that all flannel instances stay consistent, so the administrator defines the network flannel will use and writes it into etcd:

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.17.0.0/16" }'
[root@k8s-master ~]# etcdctl update /atomic.io/network/config '{ "Network": "172.17.0.0/16" }'    // use this if the key already exists (e.g. when troubleshooting)
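
To confirm the key was written as expected:

[root@k8s-master ~]# etcdctl get /atomic.io/network/config    // should print the Network JSON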

When flannel starts on each node it reads the network configuration from etcd, allocates a subnet for the node (also recorded in etcd), and writes /run/flannel/subnet.env:

cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16    // the global flannel network
FLANNEL_SUBNET=172.17.78.1/24    // this node's flannel subnet
FLANNEL_MTU=1400                 // this node's flannel MTU
FLANNEL_IPMASQ=false
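
Once flanneld is running you can also check that the node has a flannel interface carrying an address from its subnet; the interface name depends on the backend (flannel0 for the default udp backend, flannel.1 for vxlan):

ip addr | grep -A 2 flannel    // should show an address inside FLANNEL_SUBNET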

Start flannel, then restart the dependent components in order.
On the master:

[root@k8s-master ~]# systemctl enable flanneld.service
[root@k8s-master ~]# systemctl start flanneld.service 
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

On each node:

systemctl enable flanneld.service
systemctl start flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service

Deploy a local image registry

A dedicated host could be used for this; here the docker registry is deployed on the master.
Pull the registry image:

[root@k8s-master ~]# docker pull docker.io/registry
[root@k8s-master ~]# docker images

Start the registry:

[root@k8s-master ~]# docker run -d -p 5000:5000 --name=registry --restart=always --privileged=true --log-driver=none -v /home/data/registrydata:/tmp/registry registry

Here /home/data/registrydata sits on a large partition; all of the registry's data is kept in this bind-mounted directory.
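
Before pushing anything it is worth confirming the registry is reachable (the /v2/ check assumes the image pulled above is a v2 registry, which the current docker.io/registry image is):

[root@k8s-master ~]# docker ps | grep registry        // the container should be Up
[root@k8s-master ~]# curl http://registry:5000/v2/    // a v2 registry answers with {}
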
Retag the images and push them:

[root@k8s-master ~]# docker pull nginx
[root@k8s-master ~]# docker pull centos
[root@k8s-master ~]# docker tag  docker.io/nginx:latest registry:5000/nginx:v1 
[root@k8s-master ~]# docker tag docker.io/centos:latest registry:5000/centos:v1
[root@k8s-master ~]# docker push registry:5000/nginx:v1 
[root@k8s-master ~]# docker push registry:5000/centos:v1 
[root@k8s-master ~]# curl -XGET http://192.168.102.53:5000/v2/_catalog          // list repositories in the registry
[root@k8s-master ~]# curl -XGET http://192.168.102.53:5000/v2/centos/tags/list  // list the tags of an image
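
Sample responses after the two pushes above (illustrative only):

{"repositories":["centos","nginx"]}
{"name":"centos","tags":["v1"]}
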
That wraps up this walkthrough of a process-based k8s deployment; hopefully it serves as a useful reference.