Table of Contents
Overview
Pre-installation Notes
Installing Docker
Installing kubeadm
Installing the Kubernetes Cluster Master Node
Installing the kubeadm/kubectl/kubelet Components
Installing the Kubernetes Master Node
Installing the CNI Network Plugin
Deploying the Cluster Worker Node
Installing the Dashboard
Summary
References
kubeadm is a very easy-to-use tool for deploying a Kubernetes cluster: just two commands, kubeadm init and kubeadm join, are enough to bring one up. Its approach is to run the kubelet directly on the host, while the other components run in containers.
Machine resources used in this article: two VMware VMs running CentOS 7, each with 2 CPU cores and 4 GB of memory.

| hostname | Memory | CPU | OS | Virtualization |
| -------- | ------ | ------- | -------- | -------------- |
| master | 4G | 2 cores | CentOS 7 | VMware |
| worker1 | 4G | 2 cores | CentOS 7 | VMware |
1. When installing the OS on the VMs, be sure to set the hostname; otherwise both VMs get the same default hostname, and after the worker node joins the cluster it will not show up. Changing the hostname after Kubernetes is installed causes all kinds of problems; I could not resolve them myself, so I ended up reinstalling the OS. The hostname is set on the network screen of the OS installer, see the screenshot below.
2. The VM must have at least 2 CPU cores, otherwise the installation fails with the following error:
3. When installing the OS, select "connection" for ens33 on the network screen, as shown below. Otherwise ens33 will have no IP address after installation, and you will have to edit /etc/sysconfig/network-scripts/ifcfg-ens33 to set ONBOOT=yes, then run service network restart.
4. When installing the kubelet, kubeadm, and kubectl components, it is best to pin the version; the Aliyun mirror may not carry the latest release. This article uses v1.17.3, see the later sections for details.
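If you missed setting the hostname in the installer, note 1 above can also be handled from the shell before installing Kubernetes. A minimal sketch, assuming CentOS 7's hostnamectl and the node names used in this article:

```shell
# Give each VM a unique hostname before installing Kubernetes
# (run as root; use "worker1" on the other VM)
hostnamectl set-hostname master
# Verify the static hostname took effect
hostnamectl status
```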
Here we install version 17.03.2, using the Aliyun mirror.
I chose the offline installation.
Download the following two files from https://download.docker.com/linux/centos/7/x86_64/stable/Packages/:
docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
Copy them to the VM and run the following commands to install:
rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
or
yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
The process looks like this:
[root@worker1 docker]# rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
warning: docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:docker-ce-selinux-17.03.2.ce-1.el################################# [100%]
Re-declaration of type docker_t
Failed to create node
Bad type declaration at /etc/selinux/targeted/tmp/modules/400/docker/cil:1
/usr/sbin/semodule: Failed!
restorecon: lstat(/var/lib/docker) failed: No such file or directory
warning: %post(docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch) scriptlet failed, exit status 255
[root@worker1 docker]# rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
warning: docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:docker-ce-17.03.2.ce-1.el7.centos################################# [100%]
[root@worker1 docker]# docker -v
Docker version 17.03.2-ce, build f5ec1e2
Start the Docker service:
systemctl start docker.service
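It is also worth enabling Docker to start on boot at this point; otherwise kubeadm's preflight checks later warn that the docker service is not enabled. A small operational sketch:

```shell
# Enable Docker to start on boot and confirm it is running
systemctl enable docker.service
systemctl status docker.service --no-pager
```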
There are three installation methods: binary installation, *** installation, and installation from the Aliyun mirror. The first two are not covered here; you can search for them online. The Aliyun mirror is the simplest way.
Configure the repository file with the following:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then run the following command to install:
yum install -y kubelet kubeadm kubectl
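After the install it helps to enable the kubelet and confirm which versions actually landed; as noted in the pre-installation notes, pinning the version (e.g. kubelet-1.17.3) avoids surprises. A sketch:

```shell
# Enable kubelet so it comes up on boot (kubeadm starts it during init)
systemctl enable kubelet.service
# Confirm the installed versions before running kubeadm init
kubeadm version -o short
kubectl version --client --short
```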
With kubeadm installed, we can now use it to deploy the Kubernetes cluster. First, deploy the master node with the following command:
kubeadm init
Running the kubeadm init command reports the following error:
The problem can be identified from the error message; run the following two commands to fix it:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a
Check again whether swap is off; as shown, swap is now 0.
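Both fixes above are lost on reboot. A hedged sketch of making them persistent, assuming the standard /etc/fstab and sysctl.d layout on CentOS 7:

```shell
# Persist the bridge netfilter setting across reboots
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Disable swap permanently by commenting out swap entries in /etc/fstab
sed -i '/ swap / s/^/#/' /etc/fstab

# Verify: the Swap line should show 0 total
free -m | grep Swap
```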
Trying again, the error is different this time: the Kubernetes component images cannot be pulled from their overseas registries.
Use the Aliyun mirror to pull the Kubernetes components:
kubeadm init --image-repository registry.aliyuncs.com/google_containers
It fails again: version v1.18.3 cannot be found, so we drop down a version to v1.17.3. This requires reinstalling the kubeadm components; first remove the previous ones:
yum remove kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64
Run the command to install version v1.17.3:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
The installation succeeds. Then run init again:
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
Surprisingly, this time it defaults to installing v1.17.6, which again cannot be found. Let's just pin the version to v1.17.3:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.3 --pod-network-cidr=192.168.59.0/16
This time everything goes through; the final log is as follows:
[root@master docker]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.3 --pod-network-cidr=192.168.59.0/16
W0528 09:08:40.669505   24739 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0528 09:08:40.669664   24739 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "master" could not be reached
	[WARNING Hostname]: hostname "master": lookup master on 192.168.59.2:53: server misbehaving
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.59.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.59.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.59.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0528 09:08:55.439922   24739 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0528 09:08:55.442026   24739 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 217.504294 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: kw377c.z478de8wq0i41ksq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq \
    --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f
The installation succeeded. Let's check the pod status:
[root@localhost docker]# kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running the following commands fixes this:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The reason for these three commands: a Kubernetes cluster requires authenticated access by default. They copy the security configuration file generated during deployment into the current user's .kube directory, which is where kubectl looks for credentials by default when accessing the cluster.
If you hit the error: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), the same commands resolve it.
Check the pod status again:
[root@localhost docker]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-c9vg5                         0/1     Pending   0          19h
kube-system   coredns-9d85f5447-w4w9n                         0/1     Pending   0          19h
kube-system   etcd-localhost.localdomain                      1/1     Running   0          19h
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          19h
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   1          19h
kube-system   kube-proxy-zvq6z                                1/1     Running   0          19h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   1          19h
Next, let's look at the node status:
[root@localhost docker]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   20h   v1.17.3
As shown above, the master node is not ready yet. The reason is that the network plugin has not been deployed, so let's deploy it now. Kubernetes networking solutions are mainly CNI implementations such as Flannel, Calico, Canal, and Romana; here we use Flannel. The command is:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The yaml file I used:
https://github.com/jinjunzhu/kubernete/blob/master/kube-flannel.yml
Note: this step can be slow depending on your local network; be patient.
Then check the pod status again:
[root@localhost k8s]# kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-lcs5s                         0/1     Running   0          5m31s
coredns-9d85f5447-wllzv                         0/1     Running   0          5m31s
etcd-localhost.localdomain                      1/1     Running   0          5m40s
kube-apiserver-localhost.localdomain            1/1     Running   0          5m40s
kube-controller-manager-localhost.localdomain   1/1     Running   0          5m40s
kube-flannel-ds-amd64-9vv4m                     1/1     Running   0          38s
kube-proxy-qv6z5                                1/1     Running   0          5m31s
kube-scheduler-localhost.localdomain            1/1     Running   0          5m40s
Check the node status again:
[root@localhost flannel]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3m20s   v1.17.3
At this point the master node is up and running.
Note: during deployment you may run into network failures. If no command seems to fix them, you can run kubeadm reset and redo the init process.
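A hedged sketch of redoing the init from a clean slate; note that kubeadm reset does not remove the kubectl credentials copied to $HOME/.kube, so that cleanup is done by hand here:

```shell
# Tear down whatever kubeadm set up on this node, without prompting
kubeadm reset -f
# Remove the stale kubectl credentials left behind by the reset
rm -rf $HOME/.kube
# Rerun the init with the same flags as before
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version=v1.17.3 --pod-network-cidr=192.168.59.0/16
```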
Deploying a worker node is simpler than deploying the master node, since it does not run the three control-plane components kube-apiserver, kube-scheduler, and kube-controller-manager.
The following commands must also be run on the worker machine:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a
First install Docker and kubeadm as in the previous sections, then run the following command, which is taken from the output of the master deployment:
kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f
Note: the firewall must be stopped on the worker node:
systemctl stop firewalld
service iptables stop
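Stopping firewalld only lasts until the next reboot. A sketch of keeping it off permanently (the alternative, opening ports 6443 and 10250 in the firewall instead, is left out here):

```shell
# Stop firewalld now and keep it from starting on boot
systemctl stop firewalld
systemctl disable firewalld
```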
Running the command produces an error:
[root@localhost pki]# kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f
W0527 13:15:35.059010   18952 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
error execution phase kubelet-start: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
The log shows that the kubelet failed to start. The cause was that I had previously set up minikube on this VM. I tried several ways to remove the old environment, none of which worked, so I had to reinstall the OS. After reinstalling the system, Docker, and kubeadm on this VM, I ran the kubeadm join command again, with the following result:
[root@worker1 ~]# kubeadm join 192.168.59.132:6443 --token kw377c.z478de8wq0i41ksq --discovery-token-ca-cert-hash sha256:d32410d53b7b4546dd4cc4967b8e2c7779d5fd9244c8342b7f8ffa16e855a12f
W0528 21:57:11.003270   16810 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "worker1" could not be reached
	[WARNING Hostname]: hostname "worker1": lookup worker1 on 192.168.59.2:53: server misbehaving
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Running kubectl get nodes on the master node gives the following:

[root@master ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   12h   v1.17.3
worker1   Ready    <none>   29s   v1.17.3
Installing the dashboard is simple in theory but full of pitfalls in practice. The command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
After installation, the dashboard pod keeps failing to start, as shown below:
Check the logs with the following command:
[root@master kubernetes]# kubectl logs kubernetes-dashboard-64999dbccd-gmk5x --namespace=kubernetes-dashboard
Error from server: Get https://192.168.59.136:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-64999dbccd-gmk5x/kubernetes-dashboard: dial tcp 192.168.59.136:10250: connect: no route to host
According to some documentation online, pods are not scheduled onto the master by default; to run the dashboard on the master, the following 3 lines need to be commented out:
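As a commonly cited alternative to editing the yaml, scheduling on the master can be allowed by removing the master node's NoSchedule taint. This is a sketch under the assumption that the node is named "master" as in this article's setup; adjust the node name to match yours:

```shell
# Remove the NoSchedule taint so ordinary pods may run on the master node
# (the trailing "-" means "remove this taint")
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
```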
After that, this error appears:
initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.2.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.244.0.1:443: i/o timeout
A search online turns up the cause: the Dashboard is only reachable from inside the cluster by default; changing the Service type to NodePort exposes it externally. I made the fix following steps found online; the resulting source file is here:
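For reference, the same NodePort change can be sketched as a kubectl patch instead of editing the yaml by hand. The namespace and service name below match the defaults in recommended.yaml; the nodePort value 30443 is an arbitrary choice from the default NodePort range:

```shell
# Change the dashboard Service from ClusterIP to NodePort to expose it externally
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
    -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30443}]}}'
# The dashboard is then reachable at https://<node-ip>:30443
```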
https://github.com/jinjunzhu/kubernete/blob/master/recommended.yaml
After fixing the yaml file, run kubectl apply -f recommended.yaml, and the installation succeeds.
Open the dashboard login page, shown below; it requires a token.
Create a user with the following commands:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Part of the output of the last command:
Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkF5bjJEM3g0cjJCS282TlNCcjU0aVdTRE4wT0JqaE05LUxuODlTRFVkR1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLWs2ejd2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2OWMyN2NlYy04MDY5LTRkOWItOTdkNi1lZjVjMzk5NGI1Y2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.BFORGRlEK7i7kPvGnDQDZKr5ow6feuWWymhor_BPecd1YUUMXnwDy9JvPPUizHMQnRxmA4HO-WlRcAcYXOFsWBQ9fz3KqLQuSJEDICz128XyA5bUEesS_MKqGTh7p4drc2OuduW7EHm2_UEs8g9SUeogTrp9JksQlEXUoln5TnactpzMr2J6w3hPKO85z3eUv_14f240kfYgN0jR6Q9owlDEcG27onNlDHvT2hGNs-9IJaBFSuPobf7zuJLY4GR2qkLGclszgFKHGsl8NObrS2c5_Ep7iQBBfw4STTCzuW5tG9gNKWzwXKwAnJTM2wu6oePBJ34df6rGAjzjXNlvHg

Name:         coredns-token-z26jv
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: coredns
              kubernetes.io/service-account.uid: ddd1ae67-1045-4674-8277-11f4ccee2e65
Type:  kubernetes.io/service-account-token
Copy the token above into the token input box in the earlier screenshot, and the login succeeds. The cluster information looks like this:
Deploying a Kubernetes cluster still has a real barrier to entry. Start by understanding how Kubernetes works and what each component does; when an image cannot be pulled, try the Aliyun mirror first, which usually has it; and when you hit problems, read the issues on the official repositories, since many Chinese blog posts do not explain things thoroughly.
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11xCHTc0
https://www.kubernetes.org.cn/7189.html
https://github.com/kubernetes/kubernetes/issues/54542
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
https://github.com/kubernetes/dashboard/blob/master/docs/user/installation.md
https://github.com/kubernetes/dashboard/blob/master/README.md
http://www.mamicode.com/info-detail-2961782.html