Hostname | OS version | IP address | CPU/memory/disk | Role | Software version |
---|---|---|---|---|---|
k8s_nfs | CentOS 7.5 | 172.16.1.60 | 2 cores/2GB/60GB | NFS storage | nfs-utils-1.3.0-0.68 |
k8s-master1 | CentOS 7.5 | 172.16.1.81 | 2 cores/4GB/60GB | Kubernetes master1 node | k8s v1.20.0 |
k8s-master2 | CentOS 7.5 | 172.16.1.82 | 2 cores/4GB/60GB | Kubernetes master2 node | k8s v1.20.0 |
k8s-node1 | CentOS 7.5 | 172.16.1.83 | 4 cores/8GB/60GB | Kubernetes node1 node | k8s v1.20.0 |
k8s-node2 | CentOS 7.5 | 172.16.1.84 | 4 cores/8GB/60GB | Kubernetes node2 node | k8s v1.20.0 |
Note: I have tainted the Kubernetes control-plane nodes so that pods cannot be scheduled onto them.
1 NFS service deployment

Node: k8s_nfs
Purpose: persistent storage for k8s pod data
Note: setting up the NFS service itself is not covered here.

Verify:

[root@k8s_nfs ~]# showmount -e 172.16.1.60
Export list for 172.16.1.60:
/ifs/kubernetes *

2 nfs-subdir-external-provisioner plugin deployment

Nodes: the Kubernetes cluster
Purpose: dynamic PVC provisioning for middleware pods
Note: deploying the NFS dynamic-provisioning plugin is not covered in detail here. In "deployment.yaml", set the address of the NFS service and the NFS export path; in "class.yaml", set the "archiveOnDelete" parameter (whether to archive on delete) to archiveOnDelete: "true" so that pod data is kept when a pod is deleted (the default, "false", discards the data).
Caution: before deploying, install the NFS client on every k8s node (yum install nfs-utils -y), otherwise the deployment will fail.
Addendum:
(1) GitHub project: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
(2) From the deploy directory, download the files class.yaml, deployment.yaml, and rbac.yaml.

Check:

[root@k8s-master1 nfs-subdir-external-provisioner-master]# ls | xargs -i kubectl apply -f {}
[root@k8s-master1 nfs-subdir-external-provisioner-master]# kubectl get deployment,pod,svc,sc -n default
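For reference, the parameters mentioned above live in the provisioner's Deployment spec. A minimal sketch of the part of "deployment.yaml" that must be adapted to this environment (NFS server 172.16.1.60, export /ifs/kubernetes); the surrounding field names follow the upstream file from memory and may differ between releases:

```yaml
# Fragment of deploy/deployment.yaml -- only the fields that need editing here;
# all other fields are kept as in the upstream file.
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.16.1.60        # NFS server address for this cluster
            - name: NFS_PATH
              value: /ifs/kubernetes    # NFS export shared with the cluster
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.1.60        # must match NFS_SERVER above
            path: /ifs/kubernetes      # must match NFS_PATH above
```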
[root@k8s-master1 ~]# mkdir -p mongodb-cluster/
[root@k8s-master1 ~]# cd mongodb-cluster/
[root@k8s-master1 mongodb-cluster]# kubectl create namespace mongodb-cluster
namespace/mongodb-cluster created
[root@k8s-master1 mongodb-cluster]#
The key file lets the mongodb replica-set members authenticate to each other without a password during replication; MongoDB uses this key for internal cluster communication.
kubectl create secret generic: creates a secret from a file, a directory, or a specified literal value.
[root@k8s-master1 mongodb-cluster]# openssl rand -base64 741 > ./key.txt
[root@k8s-master1 mongodb-cluster]# kubectl create secret generic mongodb-replica-sets-key -n mongodb-cluster \
--from-file=internal-auth-mongodb-keyfile=./key.txt
secret/mongodb-replica-sets-key created
[root@k8s-master1 mongodb-cluster]#
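As a quick sanity check before turning the key file into a secret: mongod accepts keyfiles of 6 to 1024 base64 characters, and `openssl rand -base64 741` encodes 741 random bytes into exactly 988 base64 characters. A small sketch (the file name is just the one used above):

```shell
# Generate a keyfile the same way as above and verify its size.
openssl rand -base64 741 > key.txt

# Count base64 characters excluding newlines:
# 741 bytes -> (741/3)*4 = 988 characters, within mongod's 6..1024 limit.
chars=$(tr -d '\n' < key.txt | wc -c)
echo "keyfile base64 characters: $chars"

# mongod also insists on restrictive permissions for keyfiles.
chmod 400 key.txt
```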
[root@k8s-master1 mongodb-cluster]# cat mongodb-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-cluster
  namespace: mongodb-cluster
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-cluster
  namespace: mongodb-cluster
spec:
  serviceName: mongodb-cluster
  replicas: 3
  selector:
    matchLabels:
      role: mongo
      environment: produce
      replicaset: MainRepSet
  template:
    metadata:
      labels:
        role: mongo
        environment: produce
        replicaset: MainRepSet
    spec:
      containers:
      - name: mongodb-container
        image: registry.cn-hangzhou.aliyuncs.com/k8s-image01/mongodb:4.2.21-bionic
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        resources:
          requests:
            cpu: 0.5
            memory: 500Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: secrets-volume
          readOnly: true
          mountPath: /etc/secrets-volume
        - name: mongodb-persistent-storage-claim
          mountPath: /data/db
      volumes:
      - name: secrets-volume
        secret:
          secretName: mongodb-replica-sets-key
          defaultMode: 256
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      #annotations:
      #  volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
[root@k8s-master1 mongodb-cluster]# kubectl apply -f mongodb-statefulset.yaml
service/mongodb-cluster created
statefulset.apps/mongodb-cluster created
[root@k8s-master1 mongodb-cluster]#
[root@k8s-master1 mongodb-cluster]# kubectl get statefulset/mongodb-cluster -n mongodb-cluster -o wide
[root@k8s-master1 mongodb-cluster]# kubectl get pod -n mongodb-cluster -o wide
[root@k8s-master1 mongodb-cluster]# kubectl get pvc -n mongodb-cluster -o wide
[root@k8s-master1 mongodb-cluster]# kubectl get pv -o wide
[root@k8s_nfs ~]# ls -l /ifs/kubernetes/
[root@k8s-master1 mongodb-cluster]# kubectl get svc/mongodb-cluster -n mongodb-cluster -o wide
[root@k8s-master1 mongodb-cluster]# kubectl get ep/mongodb-cluster -n mongodb-cluster -o wide
# kubectl run -i --tty --image busybox:1.28.4 dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup mongodb-cluster.mongodb-cluster.svc.cluster.local
Server:    172.28.0.2
Address 1: 172.28.0.2 kube-dns.kube-system.svc.cluster.local

Name:      mongodb-cluster.mongodb-cluster.svc.cluster.local
Address 1: 172.27.169.165 mongodb-cluster-1.mongodb-cluster.mongodb-cluster.svc.cluster.local
Address 2: 172.27.169.166 mongodb-cluster-2.mongodb-cluster.mongodb-cluster.svc.cluster.local
Address 3: 172.27.36.97 mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local
/ # exit
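The addresses above follow the StatefulSet naming rule `<pod-name>.<headless-service>.<namespace>.svc.<cluster-domain>`. A small sketch that derives the per-member FQDNs from the names used in this deployment (cluster domain cluster.local assumed):

```shell
# Build the stable DNS name of each replica-set member:
# <statefulset>-<ordinal>.<headless-service>.<namespace>.svc.cluster.local
sts=mongodb-cluster    # StatefulSet name
svc=mongodb-cluster    # headless Service name
ns=mongodb-cluster     # namespace
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local:27017"
done
```

These names stay stable across pod restarts, which is why they can safely be hard-coded into the rs.initiate() call below.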
We need to connect to one of the "mongodb" container processes to configure the replica set. Run the following to connect to the first container and initiate the replica set from its shell; because a StatefulSet is used, we can rely on the hostnames always staying the same.

[root@k8s-master1 mongodb-cluster]# kubectl exec -it pod/mongodb-cluster-0 -n mongodb-cluster -- bash
root@mongodb-cluster-0:/# mongo
MongoDB shell version v4.2.21
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("cf61cdba-55f9-4720-8c82-d9f2ab2cf6cd") }
MongoDB server version: 4.2.21
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
    https://community.mongodb.com
> use admin
switched to db admin
> rs.initiate({_id: "MainRepSet", version: 1, members: [
    { _id: 0, host : "mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017" },
    { _id: 1, host : "mongodb-cluster-1.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017" },
    { _id: 2, host : "mongodb-cluster-2.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017" }
]});
{ "ok" : 1 }
MainRepSet:SECONDARY>
MainRepSet:PRIMARY>
Check the replica-set status until it is fully initialized and there is one primary and two secondaries.
MainRepSet:PRIMARY> rs.status();
mongodb-cluster-0 has become PRIMARY; mongodb-cluster-1 and mongodb-cluster-2 have become SECONDARY.
Performing this step automatically and permanently disables mongodb "anonymous login":
MainRepSet:PRIMARY> db.getSiblingDB("admin").createUser({
  user : "root",
  pwd  : "liuchang123456",
  roles: [ { role: "root", db: "admin" } ]
});
Successfully added user: {
  "user" : "root",
  "roles" : [ { "role" : "root", "db" : "admin" } ]
}
MainRepSet:PRIMARY> rs.status();
{
  "operationTime" : Timestamp(1656255867, 4),
  "ok" : 0,
  "errmsg" : "command replSetGetStatus requires authentication",
  "code" : 13,
  "codeName" : "Unauthorized",
  "$clusterTime" : {
    "clusterTime" : Timestamp(1656255951, 1),
    "signature" : {
      "hash" : BinData(0,"hjfd5q6urr0hk0ldnWwQzliMAIE="),
      "keyId" : NumberLong("7113556038019710977")
    }
  }
}
MainRepSet:PRIMARY> exit
root@mongodb-cluster-0:/# mongo --host mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local \
--port 27017 -uroot -p'liuchang123456' admin
MongoDB shell version v4.2.21
connecting to: mongodb://mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("dc18d908-e263-498f-adbe-dd3ca1d41659") }
MongoDB server version: 4.2.21
Server has startup warnings:
2022-06-26T14:23:53.744+0000 I  CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten]
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten]
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten]
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2022-06-26T14:23:53.745+0000 I  CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
MainRepSet:PRIMARY> db.getName();
admin
MainRepSet:PRIMARY> exit
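Client applications normally connect with a single replica-set URI listing all members rather than with one pod's address. A sketch that assembles such a URI from the names used in this deployment; the `build_mongo_uri` helper is hypothetical, while the replicaSet name MainRepSet and the credentials come from the steps above:

```shell
# build_mongo_uri: assemble a mongodb:// replica-set connection string.
# Arguments: user password replica-set-name, then member host:port entries.
build_mongo_uri() {
  user=$1; pass=$2; rs=$3; shift 3
  hosts=$(printf '%s,' "$@"); hosts=${hosts%,}   # join members with commas
  echo "mongodb://${user}:${pass}@${hosts}/admin?replicaSet=${rs}"
}

build_mongo_uri root liuchang123456 MainRepSet \
  mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017 \
  mongodb-cluster-1.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017 \
  mongodb-cluster-2.mongodb-cluster.mongodb-cluster.svc.cluster.local:27017
```

With `replicaSet` set, the driver discovers the current PRIMARY on its own, so failover inside the cluster is transparent to the application.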
Addendum: verifying the login methods
root@mongodb-cluster-0:/# mongo
MongoDB shell version v4.2.21
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("0cd67115-a016-4a69-82a6-99ba3fa74b54") }
MongoDB server version: 4.2.21
MainRepSet:PRIMARY> db.getSiblingDB('admin').auth("root", "liuchang123456");
MainRepSet:PRIMARY> db.getName();
test
MainRepSet:PRIMARY> exit
root@mongodb-cluster-0:/# mongo --host mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local \
--port 27017 -uroot -p'liuchang123456' admin
MainRepSet:PRIMARY> use master_slave_test
switched to db master_slave_test
MainRepSet:PRIMARY> function add(){var i = 0;for(;i<7;i++){db.persons.insert({"name":"master_slave_test"+i})}}
MainRepSet:PRIMARY> add()
MainRepSet:PRIMARY> db.persons.find()
{ "_id" : ObjectId("62b87c308c229e35afd84e63"), "name" : "master_slave_test0" }
{ "_id" : ObjectId("62b87c308c229e35afd84e64"), "name" : "master_slave_test1" }
{ "_id" : ObjectId("62b87c308c229e35afd84e65"), "name" : "master_slave_test2" }
{ "_id" : ObjectId("62b87c308c229e35afd84e66"), "name" : "master_slave_test3" }
{ "_id" : ObjectId("62b87c308c229e35afd84e67"), "name" : "master_slave_test4" }
{ "_id" : ObjectId("62b87c308c229e35afd84e68"), "name" : "master_slave_test5" }
{ "_id" : ObjectId("62b87c308c229e35afd84e69"), "name" : "master_slave_test6" }
MainRepSet:PRIMARY> exit
root@mongodb-cluster-0:/# mongo --host mongodb-cluster-1.mongodb-cluster.mongodb-cluster.svc.cluster.local \
--port 27017 -uroot -p'liuchang123456' admin
MainRepSet:SECONDARY> db.getMongo().setSecondaryOk()
MainRepSet:SECONDARY> use master_slave_test
switched to db master_slave_test
MainRepSet:SECONDARY> db.persons.find()
{ "_id" : ObjectId("62b87c308c229e35afd84e63"), "name" : "master_slave_test0" }
{ "_id" : ObjectId("62b87c308c229e35afd84e64"), "name" : "master_slave_test1" }
{ "_id" : ObjectId("62b87c308c229e35afd84e67"), "name" : "master_slave_test4" }
{ "_id" : ObjectId("62b87c308c229e35afd84e66"), "name" : "master_slave_test3" }
{ "_id" : ObjectId("62b87c308c229e35afd84e69"), "name" : "master_slave_test6" }
{ "_id" : ObjectId("62b87c308c229e35afd84e65"), "name" : "master_slave_test2" }
{ "_id" : ObjectId("62b87c308c229e35afd84e68"), "name" : "master_slave_test5" }
MainRepSet:SECONDARY> exit
root@mongodb-cluster-0:/# exit
1 Delete all pods of the mongodb replica set
[root@k8s-master1 mongodb-cluster]# kubectl delete -f mongodb-statefulset.yaml
service "mongodb-cluster" deleted
statefulset.apps "mongodb-cluster" deleted
[root@k8s-master1 mongodb-cluster]# kubectl get all -n mongodb-cluster -o wide
No resources found in mongodb-cluster namespace.
Note:
This only deletes the replica set's pods and service; the corresponding PVCs, PVs, and data on storage are not removed.
When the pods are recreated they reclaim their original PVCs, because each PVC name embeds the pod name and a StatefulSet keeps pod names stable.
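The reclaim works because PVCs created from a volumeClaimTemplate are named `<template-name>-<pod-name>`, and pod names are `<statefulset-name>-<ordinal>`. A sketch deriving the PVC names this deployment produces:

```shell
# StatefulSet volumeClaimTemplates yield PVCs named
# <template-name>-<statefulset-name>-<ordinal>, stable across pod recreation.
tmpl=mongodb-persistent-storage-claim   # volumeClaimTemplate name from the YAML
sts=mongodb-cluster                     # StatefulSet name
for i in 0 1 2; do
  echo "${tmpl}-${sts}-${i}"
done
```

Since the recreated mongodb-cluster-0 pod asks for the PVC with exactly its old name, it mounts the same NFS-backed volume and finds its previous data.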
2 Recreate the mongodb replica set pods
[root@k8s-master1 mongodb-cluster]# kubectl apply -f mongodb-statefulset.yaml
service/mongodb-cluster created
statefulset.apps/mongodb-cluster created
[root@k8s-master1 mongodb-cluster]# kubectl get all -n mongodb-cluster -o wide
3 Verify the mongodb replica set data
[root@k8s-master1 mongodb-cluster]# kubectl exec -it pod/mongodb-cluster-0 -n mongodb-cluster -- bash
root@mongodb-cluster-0:/# mongo --host mongodb-cluster-0.mongodb-cluster.mongodb-cluster.svc.cluster.local \
--port 27017 -uroot -p'liuchang123456' admin
MainRepSet:PRIMARY> use master_slave_test
switched to db master_slave_test
MainRepSet:PRIMARY> db.persons.find()
{ "_id" : ObjectId("62b87c308c229e35afd84e63"), "name" : "master_slave_test0" }
{ "_id" : ObjectId("62b87c308c229e35afd84e64"), "name" : "master_slave_test1" }
{ "_id" : ObjectId("62b87c308c229e35afd84e65"), "name" : "master_slave_test2" }
{ "_id" : ObjectId("62b87c308c229e35afd84e66"), "name" : "master_slave_test3" }
{ "_id" : ObjectId("62b87c308c229e35afd84e67"), "name" : "master_slave_test4" }
{ "_id" : ObjectId("62b87c308c229e35afd84e68"), "name" : "master_slave_test5" }
{ "_id" : ObjectId("62b87c308c229e35afd84e69"), "name" : "master_slave_test6" }
MainRepSet:PRIMARY> exit
bye
root@mongodb-cluster-0:/# exit
Delete the mongodb-cluster-0 pod and keep checking rs.status(); eventually one of the two remaining members becomes PRIMARY.
Connecting to any pod of the mongodb replica set is enough to back up its data. So how can an outside client connect? Two approaches:
(1) Pick a node in the k8s cluster and edit its "/etc/resolv.conf" so that the cluster's DNS server is listed first.
(2) Add a Service of type NodePort, so that every k8s node gets a listening port forwarding to "27017".
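For approach (2), a minimal sketch of such a NodePort service, assuming the same namespace and pod selector as the StatefulSet above; the service name and the nodePort value are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-cluster-nodeport   # illustrative name
  namespace: mongodb-cluster
spec:
  type: NodePort
  selector:
    role: mongo                    # matches the pod labels in the StatefulSet
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30017                # illustrative; must fall in the default 30000-32767 range
```

Note that a NodePort service load-balances across all members, so a backup client reaching it may land on any replica rather than a specific one.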