kubernetes pod can not connect to service

Date: 2018-03-25 19:14:39

Tags: kubernetes

I am running a 4-node cluster (on datacenter VMs) with 2 pods exposed via 2 services:

  • 1st service: PostgreSQL, running fine and exposed via the service postgresql-k8s-service on port 5432.
  • 2nd service: Artifactory, which is basically a Tomcat container trying to connect to that PostgreSQL. Unfortunately the pod cannot connect to the service, and I am not sure what is going on.

ERROR: Waiting for DB postgresql to be ready on postgresql-k8s-service/5432 within 30 seconds

Logging in to the Artifactory pod and running "ping postgresql-k8s-service":
PING postgresql-k8s-service.default.svc.cluster.local (10.102.108.132): 56 data bytes
^C--- postgresql-k8s-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss    
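
For completeness, a TCP-level check of the same service port from inside the Artifactory pod (a rough sketch, assuming a netcat binary is available in the image):

    # Test the service port over TCP rather than ICMP
    nc -zv -w 3 postgresql-k8s-service 5432
    # Check DNS resolution of the service name separately
    nslookup postgresql-k8s-service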

The service works just fine if I run the Artifactory pod on the same node as PostgreSQL, which makes me believe something is off in iptables on the nodes.

Setup: Kubernetes installed with kubeadm, using flannel as the network provider.
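
As a baseline sanity check (a sketch, assuming flannel and kube-proxy run as pods in kube-system, as a kubeadm setup normally does), I can confirm that one flannel pod and one kube-proxy pod are Running on every node:

    kubectl -n kube-system get pods -o wide | grep -E 'flannel|kube-proxy'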

What have I tried?

  • Running both pods on the same node: everything works great.
  • Ran iptables -P FORWARD ACCEPT on all nodes.

    root@osl-p10y:~# cat /var/run/flannel/subnet.env
    FLANNEL_NETWORK=10.244.0.0/16
    FLANNEL_SUBNET=10.244.1.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

    kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
    10.244.2.0/24 10.244.3.0/24 10.244.1.0/24 10.244.0.0/24
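
With a healthy flannel vxlan backend, each node should also carry a route to every other node's pod CIDR via the flannel.1 device. A minimal per-node check (a sketch; the device name assumes the default vxlan backend) would be:

    # Expect one route per remote pod CIDR, e.g. "10.244.2.0/24 via ... dev flannel.1"
    ip route | grep 10.244
    # The MTU on flannel.1 should match FLANNEL_MTU above (1450)
    ip -d link show flannel.1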
    

On the Postgres node:

    iptables -t nat -S
    -P PREROUTING ACCEPT
    -P INPUT ACCEPT
    -P OUTPUT ACCEPT
    -P POSTROUTING ACCEPT
    -N DOCKER
    -N KUBE-MARK-DROP
    -N KUBE-MARK-MASQ
    -N KUBE-NODEPORTS
    -N KUBE-POSTROUTING
    -N KUBE-SEP-IT2ZTR26TO4XFPTO
    -N KUBE-SEP-R6ZMYJ3DNNU76P45
    -N KUBE-SEP-SDMS26WNQN2B6OVJ
    -N KUBE-SEP-YIL6JZP7A3QYXJU2
    -N KUBE-SERVICES
    -N KUBE-SVC-6BVLUYEF2BUG3NBU
    -N KUBE-SVC-D57225OKWQOKDCSS
    -N KUBE-SVC-ERIFXISQEP7F7OF4
    -N KUBE-SVC-NPX46M4PTMTKRN6Y
    -N KUBE-SVC-TCOU7JCQXEZGVUNU
    -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
    -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
    -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
    -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
    -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
    -A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
    -A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
    -A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
    -A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
    -A DOCKER -i docker0 -j RETURN
    -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
    -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
    -A KUBE-NODEPORTS -p tcp -m comment --comment "default/postgresql-k8s-service:" -m tcp --dport 30197 -j KUBE-MARK-MASQ
    -A KUBE-NODEPORTS -p tcp -m comment --comment "default/postgresql-k8s-service:" -m tcp --dport 30197 -j KUBE-SVC-D57225OKWQOKDCSS
    -A KUBE-NODEPORTS -p tcp -m comment --comment "default/artifactory:" -m tcp --dport 30419 -j KUBE-MARK-MASQ
    -A KUBE-NODEPORTS -p tcp -m comment --comment "default/artifactory:" -m tcp --dport 30419 -j KUBE-SVC-6BVLUYEF2BUG3NBU
    -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
    -A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
    -A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
    -A KUBE-SEP-R6ZMYJ3DNNU76P45 -s 10.5.12.113/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
    -A KUBE-SEP-R6ZMYJ3DNNU76P45 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-R6ZMYJ3DNNU76P45 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.5.12.113:6443
    -A KUBE-SEP-SDMS26WNQN2B6OVJ -s 172.17.0.2/32 -m comment --comment "default/postgresql-k8s-service:" -j KUBE-MARK-MASQ
    -A KUBE-SEP-SDMS26WNQN2B6OVJ -p tcp -m comment --comment "default/postgresql-k8s-service:" -m tcp -j DNAT --to-destination 172.17.0.2:5432
    -A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
    -A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
    -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
    -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
    -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
    -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.102.108.132/32 -p tcp -m comment --comment "default/postgresql-k8s-service: cluster IP" -m tcp --dport 5432 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.102.108.132/32 -p tcp -m comment --comment "default/postgresql-k8s-service: cluster IP" -m tcp --dport 5432 -j KUBE-SVC-D57225OKWQOKDCSS
    -A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.101.173.241/32 -p tcp -m comment --comment "default/artifactory: cluster IP" -m tcp --dport 5432 -j KUBE-MARK-MASQ
    -A KUBE-SERVICES -d 10.101.173.241/32 -p tcp -m comment --comment "default/artifactory: cluster IP" -m tcp --dport 5432 -j KUBE-SVC-6BVLUYEF2BUG3NBU
    -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
    -A KUBE-SVC-D57225OKWQOKDCSS -m comment --comment "default/postgresql-k8s-service:" -j KUBE-SEP-SDMS26WNQN2B6OVJ
    -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-IT2ZTR26TO4XFPTO
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-R6ZMYJ3DNNU76P45 --mask 255.255.255.255 --rsource -j KUBE-SEP-R6ZMYJ3DNNU76P45
    -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-R6ZMYJ3DNNU76P45
    -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-YIL6JZP7A3QYXJU2
    root@osl-p10y-db:~#
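
The KUBE-SEP rule above DNATs service traffic to 172.17.0.2:5432; a quick way to cross-check which endpoint the Service is actually targeting, and which IP the Postgres pod was given, is something like (assuming the default namespace):

    # The endpoint listed here should match the pod IP shown below
    kubectl get endpoints postgresql-k8s-service
    kubectl get pods -o wide -l app=postgresql-k8s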

Here are the iptables NAT rules from the app server (Artifactory):

    iptables -t nat -vnL | grep -i postgres
    5   300 KUBE-MARK-MASQ  all  --  *      *       172.17.0.2           0.0.0.0/0            /* default/postgresql-k8s-service: */
    5   300 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/postgresql-k8s-service: */ tcp to:172.17.0.2:5432
    5   300 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.105.106.161        /* default/postgresql-k8s-service: cluster IP */ tcp dpt:5432
    5   300 KUBE-SVC-D57225OKWQOKDCSS  tcp  --  *      *       0.0.0.0/0            10.105.106.161       /* default/postgresql-k8s-service: cluster IP */ tcp dpt:5432
    5   300 KUBE-SEP-SDMS26WNQN2B6OVJ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/postgresql-k8s-service: */

Please advise: what am I doing wrong? The sample YAML files are below.

Artifactory.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: artifactory-k8s-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: artifactory-pro-k8s
        group: artifactory-k8s
    spec:
      nodeSelector:
         name: artfapp2
      containers:
      - name: artifactory-pro-k8s
        image: docker.bintray.io/jfrog/artifactory-pro:5.9.1
        env:
        - name: DB_TYPE
          valueFrom:
            configMapKeyRef:
              name: k8s-artifactory-db-config
              key: DB_TYPE
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: k8s-artifactory-db-secret
              key: POSTGRES_USER
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: k8s-artifactory-db-secret
              key: POSTGRES_PASSWORD
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: k8s-artifactory-db-config
              key: DB_HOST
        # Make sure to keep the memory java args aligned with the resources definitions
        - name: EXTRA_JAVA_OPTIONS
          valueFrom:
            configMapKeyRef:
              name: k8s-artifactory-config
              key:  JAVA_OPTS
        ports:
        - containerPort: 8081
        volumeMounts:
        - mountPath: "/var/opt/jfrog/artifactory"
          name: artifactory-pro-volume
        # Make sure to keep the resources set with values matching EXTRA_JAVA_OPTIONS above
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1"
        readinessProbe:
          httpGet:
            path: '/artifactory/webapp/#/login'
            port: 8081
          initialDelaySeconds: 60
          periodSeconds: 10
          failureThreshold: 10
        livenessProbe:
          httpGet:
            path: '/artifactory/webapp/#/login'
            port: 8081
          initialDelaySeconds: 180
          periodSeconds: 10
        securityContext:
          allowPrivilegeEscalation: false
      volumes:
      - name: artifactory-pro-volume
        hostPath:
          # directory location on host
          path: /srv/data0/artifactory
          # this field is optional
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: artifactory
  labels:
    app: artifactory
    group: artifactory-k8s
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: artifactory-pro-k8s

Postgresql.yml

apiVersion: extensions/v1beta1  # assumed, matching the Artifactory Deployment above
kind: Deployment
metadata:
  name: postgresql-k8s-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql-k8s
        group: artifactory-k8s
    spec:
      nodeSelector:
        name: artfdb
      initContainers:
      - name: "remove-lost-found"
        image: "busybox:1.26.2"
        imagePullPolicy: "IfNotPresent"
        command:
        - 'sh'
        - '-c'
        - 'rm -rf /var/lib/postgresql/data/lost+found'
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data"
          name: postgresql-volume
      containers:
      - name: postgresql-k8s
        image: sauce-registry.eng.nutanix.com:5000/nutanix-postgres:latest
        env:
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: k8s-artifactory-db-config
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: k8s-artifactory-db-secret
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: k8s-artifactory-db-secret
              key: POSTGRES_PASSWORD
        ports:
        - containerPort: 5432
        resources:
          requests:
            memory: "500Mi"
            cpu: "100m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data"
          name: postgresql-volume
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - exec pg_isready -U postgres
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - exec pg_isready -U postgres
          initialDelaySeconds: 30
          timeoutSeconds: 3
          periodSeconds: 5
      volumes:
      - name: postgresql-volume
        hostPath:
          path: /srv/data0/artf_db
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql-k8s-service
  labels:
    app: postgresql-k8s-service
    group: artifactory-k8s
spec:
  ports:
  - port: 5432
    protocol: TCP
  selector:
    app: postgresql-k8s

1 Answer:

Answer 0 (score: 0)

Pinging a Service will never give you a reply; pods, by contrast, can be pinged.

Looking at your files:

I noticed that you define a Service twice (on the same port) but with different selectors. That could be confusing the Service.
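
A quick way to see each Service's ports and selectors side by side, and which endpoints each one actually picked up, is for example:

    kubectl get svc -o wide
    kubectl describe svc postgresql-k8s-service artifactory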
