Pods on different nodes can't ping each other

Date: 2018-08-11 01:00:33

Tags: kubernetes

I set up a k8s cluster with 1 master and 2 worker nodes following the documentation. A pod can ping another pod on the same node, but cannot ping a pod on the other node.

To demonstrate the problem, I deployed the workload below with 3 replicas. Two of them sit on the same node, while the third pod runs on the other node.


    $ cat nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nginx-svc
    spec:
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80

    $ kubectl get nodes
    NAME                                          STATUS    ROLES     AGE       VERSION
    ip-172-31-21-115.us-west-2.compute.internal   Ready     master    20m       v1.11.2
    ip-172-31-26-62.us-west-2.compute.internal    Ready     <none>    19m       v1.11.2
    ip-172-31-29-204.us-west-2.compute.internal   Ready     <none>    14m       v1.11.2

    $ kubectl get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP           NODE                                          NOMINATED NODE
    nginx-deployment-966857787-22qq7   1/1       Running   0          11m       10.244.2.3   ip-172-31-29-204.us-west-2.compute.internal   <none>
    nginx-deployment-966857787-lv7dd   1/1       Running   0          11m       10.244.1.2   ip-172-31-26-62.us-west-2.compute.internal    <none>
    nginx-deployment-966857787-zkzg6   1/1       Running   0          11m       10.244.2.2   ip-172-31-29-204.us-west-2.compute.internal   <none>

    $ kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   21m
    nginx-svc    ClusterIP   10.105.205.10   <none>        80/TCP    11m

Everything looks fine so far.
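Note the pod IPs in the output above: with kubeadm plus flannel, the cluster CIDR is typically 10.244.0.0/16 and each node is assigned its own /24 out of it, so the two 10.244.2.x pods share a node while 10.244.1.2 lives on the other one. Any ping between those groups must cross the flannel overlay. A quick sanity check of that layout with Python's `ipaddress` module (the subnet boundaries here are inferred from the output, not queried from the cluster):

```python
import ipaddress

# flannel's conventional cluster CIDR with kubeadm; each node owns a /24
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_204_subnet = ipaddress.ip_network("10.244.2.0/24")  # ip-172-31-29-204
node_62_subnet = ipaddress.ip_network("10.244.1.0/24")   # ip-172-31-26-62

pods = {
    "nginx-deployment-966857787-22qq7": ipaddress.ip_address("10.244.2.3"),
    "nginx-deployment-966857787-lv7dd": ipaddress.ip_address("10.244.1.2"),
    "nginx-deployment-966857787-zkzg6": ipaddress.ip_address("10.244.2.2"),
}

for name, ip in pods.items():
    # pods in different node subnets can only reach each other via the overlay
    node = "ip-172-31-29-204" if ip in node_204_subnet else "ip-172-31-26-62"
    print(f"{name} -> {ip} on {node}")
```

This is why the failing pings below all involve exactly one pod from each /24.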

Let me show you the containers.


    # docker exec -it 489b180f512b /bin/bash
    root@nginx-deployment-966857787-zkzg6:/# ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.2.2  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::cc4d:61ff:fe8a:5aeb  prefixlen 64  scopeid 0x20<link>

    root@nginx-deployment-966857787-zkzg6:/# ping 10.244.2.3
    PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
    64 bytes from 10.244.2.3: icmp_seq=1 ttl=64 time=0.066 ms
    64 bytes from 10.244.2.3: icmp_seq=2 ttl=64 time=0.055 ms
    ^C

So it can ping its neighbor pod on the same node.


    root@nginx-deployment-966857787-zkzg6:/# ping 10.244.1.2
    PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
    ^C
    --- 10.244.1.2 ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1059ms

But it cannot ping its replica on the other node.

Here are the host interfaces:


    # ifconfig
    cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.29.204  netmask 255.255.240.0  broadcast 172.31.31.255

    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.2.0  netmask 255.255.255.255  broadcast 0.0.0.0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0

    veth09fb984a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::d819:14ff:fe06:174c  prefixlen 64  scopeid 0x20<link>

    veth87b3563e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::d09c:d2ff:fe7b:7dd7  prefixlen 64  scopeid 0x20<link>

    # ifconfig
    cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.26.62  netmask 255.255.240.0  broadcast 172.31.31.255

    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0

    veth9733e2e6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::8003:46ff:fee2:abc2  prefixlen 64  scopeid 0x20<link>
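One thing these dumps do show working correctly is the MTU configuration, so that can be ruled out as the cause. flannel's VXLAN backend adds 50 bytes of encapsulation (inner Ethernet header, VXLAN header, outer UDP and IPv4 headers), and the overlay interfaces are sized accordingly against the 9001-byte jumbo MTU of the AWS instance's eth0. A minimal arithmetic check:

```python
ETH0_MTU = 9001      # jumbo frames on the EC2 instance's primary interface
VXLAN_OVERHEAD = 50  # 14 (inner Ethernet) + 8 (VXLAN) + 8 (UDP) + 20 (outer IPv4)

overlay_mtu = ETH0_MTU - VXLAN_OVERHEAD
print(overlay_mtu)  # matches the 8951 shown on cni0 and flannel.1 above
```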

Processes on the node:


    # ps auxww|grep kube
    root      4059  0.1  2.8  43568 28316 ?        Ssl  00:31   0:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
    root      4260  0.0  3.4 358984 34288 ?        Ssl  00:31   0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
    root      4455  1.1  9.6 760868 97260 ?        Ssl  00:31   0:14 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni

Because of this network problem, the clusterIP is also unreachable:

    $ curl 10.105.205.10:80

Any suggestions?

Thanks.

2 answers:

Answer 0 (score: 1)

The docker virtual bridge interface docker0 currently has the IP 172.17.0.1 on both hosts.

But according to the docker/flannel integration guide, the docker0 virtual bridge should be inside the flannel network on each host.

Below is the high-level workflow of the flannel/docker network integration:

  • Flannel generates /run/flannel/subnet.env at startup, based on the network configuration stored in etcd
  • Docker reads /run/flannel/subnet.env at startup and sets the dockerd --bip flag accordingly, assigning docker0 an IP from the flannel network

For more details, see the docker/flannel integration documentation: http://docker-k8s-lab.readthedocs.io/en/latest/docker/docker-flannel.html#restart-docker-daemon-with-flannel-network
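As an illustration of that workflow, the subnet file on the ip-172-31-29-204 node would look roughly like this (the values are inferred from the ifconfig output in the question, not copied from a real file), and dockerd would derive its bridge settings from it. Note that in the cluster shown the kubelet runs with the CNI plugin, so pods actually attach to cni0 rather than docker0:

```shell
# /run/flannel/subnet.env, as written by flanneld (values inferred)
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true

# dockerd would then be restarted with matching bridge settings, e.g.:
# dockerd --bip=10.244.2.1/24 --mtu=8951
```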

Answer 1 (score: 1)

I found the problem.

Flannel uses UDP ports 8285 and 8472, which were being blocked by the AWS security group. I had only opened the TCP ports.

I enabled UDP ports 8285 and 8472, along with TCP 6443, 10250, and 10256.
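For reference, a sketch of opening those ports with the AWS CLI; the security group ID and CIDR below are placeholders and must be replaced with your own node security group and VPC subnet range (port 8285 is used by flannel's udp backend and 8472 by its vxlan backend, so strictly only the one matching your backend is required):

```shell
# Placeholder values -- substitute your own security group and VPC CIDR
SG=sg-0123456789abcdef0
CIDR=172.31.16.0/20

# flannel overlay traffic between nodes
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 8285 --cidr "$CIDR"
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 8472 --cidr "$CIDR"

# API server (6443), kubelet (10250), kube-proxy health (10256)
for p in 6443 10250 10256; do
  aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port "$p" --cidr "$CIDR"
done
```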