No endpoints available for service "kubernetes-dashboard"

Date: 2018-10-19 03:29:49

Tags: kubernetes kubectl

I am trying to follow GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters

Deploy/Access:

# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
# kubectl proxy
Starting to serve on 127.0.0.1:8001

curl:

# curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}# 

Please advise.

Per @VKR:

$ kubectl get pods --all-namespaces 
NAMESPACE     NAME                                              READY   STATUS              RESTARTS   AGE
kube-system   coredns-576cbf47c7-56vg7                          0/1     ContainerCreating   0          57m
kube-system   coredns-576cbf47c7-sn2fk                          0/1     ContainerCreating   0          57m
kube-system   etcd-wcmisdlin02.uftwf.local                      1/1     Running             0          56m
kube-system   kube-apiserver-wcmisdlin02.uftwf.local            1/1     Running             0          56m
kube-system   kube-controller-manager-wcmisdlin02.uftwf.local   1/1     Running             0          56m
kube-system   kube-proxy-2hhf7                                  1/1     Running             0          6m57s
kube-system   kube-proxy-lzfcx                                  1/1     Running             0          7m35s
kube-system   kube-proxy-rndhm                                  1/1     Running             0          57m
kube-system   kube-scheduler-wcmisdlin02.uftwf.local            1/1     Running             0          56m
kube-system   kubernetes-dashboard-77fd78f978-g2hts             0/1     Pending             0          2m38s
$ 

logs:

$ kubectl logs kubernetes-dashboard-77fd78f978-g2hts -n kube-system
$ 

describe:

$ kubectl describe pod kubernetes-dashboard-77fd78f978-g2hts -n kube-system
Name:               kubernetes-dashboard-77fd78f978-g2hts
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=77fd78f978
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
  kubernetes-dashboard:
    Image:      k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gp4l7 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  kubernetes-dashboard-token-gp4l7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-gp4l7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                      From               Message
  ----     ------            ----                     ----               -------
  Warning  FailedScheduling  4m39s (x21689 over 20h)  default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
$ 

2 Answers:

Answer 0 (score: 3):

It looks like you are trying to deploy Kubernetes with kubeadm, but you skipped the step of installing a pod network add-on (CNI). Note the warning:

The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).

Once you do that, the CoreDNS pods should come up and run normally. You can verify this with:

kubectl -n kube-system get pods -l k8s-app=kube-dns

After that, the kubernetes-dashboard pod should come up healthy as well.
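
For reference, a minimal sketch of that missing step, assuming Calico as the CNI (any supported CNI plugin works; pick a manifest version that matches your cluster - the URL below is the v3.8 one used in the other answer):

# Install a pod network add-on (here Calico)
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

# Watch CoreDNS go from ContainerCreating/Pending to Running
kubectl -n kube-system get pods -l k8s-app=kube-dns -w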

Answer 1 (score: 1):

I ran into the same problem. In the end it turned out to be a Calico network configuration issue. But step by step...

First I checked whether the dashboard pod was running:

kubectl get pods --all-namespaces

My result was:

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            calico-kube-controllers-bcc6f659f-j57l9      1/1     Running            2          19h
kube-system            calico-node-hdxp6                            0/1     CrashLoopBackOff   13         15h
kube-system            calico-node-z6l56                            0/1     Running            68         19h
kube-system            coredns-74ff55c5b-8l6m6                      1/1     Running            2          19h
kube-system            coredns-74ff55c5b-v7pkc                      1/1     Running            2          19h
kube-system            etcd-got-virtualbox                          1/1     Running            3          19h
kube-system            kube-apiserver-got-virtualbox                1/1     Running            3          19h
kube-system            kube-controller-manager-got-virtualbox       1/1     Running            3          19h
kube-system            kube-proxy-q99s5                             1/1     Running            2          19h
kube-system            kube-proxy-vrpcd                             1/1     Running            1          15h
kube-system            kube-scheduler-got-virtualbox                1/1     Running            2          19h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-qc9ms   1/1     Running            0          28m
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-zrdk4        0/1     CrashLoopBackOff   9          28m

The last line shows that the dashboard pod could not start (status=CrashLoopBackOff). The second line shows that a calico-node pod has problems. The most likely root cause is Calico.

The next step is to look at the pod logs (change the namespace/name to match what is listed in your pod list):

kubectl logs kubernetes-dashboard-74d688b6bc-zrdk4 -n kubernetes-dashboard

My result was:

2021/03/05 13:01:12 Starting overwatch
2021/03/05 13:01:12 Using namespace: kubernetes-dashboard
2021/03/05 13:01:12 Using in-cluster config to connect to apiserver
2021/03/05 13:01:12 Using secret token for csrf signing
2021/03/05 13:01:12 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

Hmm - not very helpful. After searching for "dial tcp 10.96.0.1:443: i/o timeout" I found this piece of information, which says...


If you follow the kubeadm instructions... meaning you install docker, kubernetes (kubeadm, kubectl and kubelet) and calico per the kubeadm-hosted instructions... and your machine nodes are in the 192.168.x.x range, then you end up with the non-working dashboard mentioned above. This is because the node IP addresses clash with the internal Calico IP addresses.

https://github.com/kubernetes/dashboard/issues/1578#issuecomment-329904648

Yes, I do indeed have a physical IP in the 192.168.x.x range - just like many others probably do. I wish Calico checked for this during setup.
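
A quick way to check whether you are affected (a sketch, assuming a kubeadm-built cluster; the exact output format varies by version):

# Node addresses - anything in 192.168.x.x overlaps with Calico's default pool
kubectl get nodes -o wide

# Pod network CIDR the cluster was initialized with
kubectl cluster-info dump | grep -m 1 cluster-cidr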

So let's move the pod network to a different IP range:

You should use one of the classless reserved IP ranges for private networks, e.g. 10.0.0.0/8 (16,777,216 addresses), 172.16.0.0/12 (1,048,576 addresses), or 192.168.0.0/16 (65,536 addresses). Otherwise Calico terminates with the error message "Invalid CIDR specified in CALICO_IPV4POOL_CIDR"...

sudo kubeadm reset
sudo rm /etc/cni/net.d/10-calico.conflist
sudo rm /etc/cni/net.d/calico-kubeconfig

export CALICO_IPV4POOL_CIDR=172.16.0.0
export MASTER_IP=192.168.100.122
sudo kubeadm init --pod-network-cidr=$CALICO_IPV4POOL_CIDR/12 --apiserver-advertise-address=$MASTER_IP --apiserver-cert-extra-sans=$MASTER_IP

mkdir -p $HOME/.kube
sudo rm -f $HOME/.kube/config
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/kubelet.conf

wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O calico.yaml
sudo sed -i "s/192.168.0.0\/16/$CALICO_IPV4POOL_CIDR\/12/g" calico.yaml
sudo sed -i "s/192.168.0.0/$CALICO_IPV4POOL_CIDR/g" calico.yaml
kubectl apply -f calico.yaml
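
As an optional sanity check (assuming the manifest still defines the CALICO_IPV4POOL_CIDR environment variable, as the v3.8 one does), confirm the sed substitutions actually took effect:

# Should print 172.16.0.0/12, not the default 192.168.0.0/16
grep -A 1 CALICO_IPV4POOL_CIDR calico.yaml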

Now we test whether all the calico pods are running:

kubectl get pods --all-namespaces

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-bcc6f659f-ns7kz   1/1     Running   0          15m
kube-system   calico-node-htvdv                         1/1     Running   6          15m
kube-system   coredns-74ff55c5b-lqwpd                   1/1     Running   0          17m
kube-system   coredns-74ff55c5b-qzc87                   1/1     Running   0          17m
kube-system   etcd-got-virtualbox                       1/1     Running   0          17m
kube-system   kube-apiserver-got-virtualbox             1/1     Running   0          17m
kube-system   kube-controller-manager-got-virtualbox    1/1     Running   0          18m
kube-system   kube-proxy-6xr5j                          1/1     Running   0          17m
kube-system   kube-scheduler-got-virtualbox             1/1     Running   0          17m

Looks good. If not, check CALICO_IPV4POOL_CIDR by editing the calico-node DaemonSet:

KUBE_EDITOR="nano" kubectl edit -n kube-system ds calico-node

Now let's apply the kubernetes-dashboard manifest and start the proxy:

export KUBECONFIG=$HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl proxy

Now I can load http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
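
To actually log in at that URL, the v2.0.0 dashboard asks for a bearer token. A minimal sketch of creating one (the service account name is illustrative, and binding cluster-admin is convenient but far too broad for anything beyond a test cluster):

# Create a service account and bind it to cluster-admin (test clusters only)
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

# Print the token of the auto-generated secret and paste it into the login screen
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')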