Istio: allow all outbound traffic

Time: 2018-10-20 21:23:40

Tags: kubernetes istio

To explain the problem better, here are the details. My service consists of the following resources in a dedicated namespace (no ServiceEntry is used):

  1. Deployment (1 deployment)
  2. ConfigMap (1 config map)
  3. Service
  4. VirtualService
  5. Gateway

Istio is enabled on the namespace, and when I create/run the deployment it creates 2 containers as expected. Now, as the question title says, I want to allow all outgoing traffic for the deployment, because my service needs to connect to 2 service-discovery servers:

  1. Vault, running on port 8200
  2. Spring Config Server, running over plain HTTP

It also needs to download dependencies and communicate with other services that are not part of the VPC/k8s.

With the deployment file below, no outgoing connection gets through. The only thing that works is a plain HTTPS request on port 443: curl https://google.com succeeds, but curl http://google.com gets no response. The logs also show that no connection to Vault is ever established.

I have tried almost every combination in the deployment, but none of them seems to work. Am I missing something or doing something wrong? Any help would be much appreciated :)
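For context, the standard Istio way to open egress to one specific external host (rather than allowing everything) is a ServiceEntry, which this setup deliberately does not use. A minimal sketch for the Vault endpoint, where the hostname is a placeholder you would substitute with your real Vault address, would be:

```yaml
# Sketch only — vault.example.com is a placeholder, not the real address.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-vault
  namespace: temp-namespace
spec:
  hosts:
  - vault.example.com      # placeholder external Vault host
  ports:
  - number: 8200
    name: http-vault
    protocol: HTTP
  resolution: DNS
```

One such entry would be needed per external host (Vault, Spring Config Server, dependency repositories), which is why blanket egress via the traffic annotations is attractive here.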

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-application-service
  name: my-application-service-deployment
  namespace: temp-namespace
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: my-application-service-env-variables
        image: image.from.dockerhub:latest
        name: my-application-service-pod
        ports:
        - containerPort: 8080
          name: myappsvc
        resources:
          limits:
            cpu: 700m
            memory: 1.8Gi
          requests:
            cpu: 500m
            memory: 1.7Gi
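Worth noting: the traffic annotations are more commonly used the other way around — includeOutboundIPRanges set to the in-cluster CIDRs, so that only cluster-internal traffic is redirected through Envoy and everything else bypasses the proxy. A sketch of that variant (the CIDR is an assumption; use your cluster's actual service/pod range):

```yaml
# Pod-template fragment, sketch only. 10.0.0.0/16 is a placeholder —
# substitute your cluster's service/pod CIDR. Only traffic to these
# ranges is intercepted by the sidecar; all other egress goes direct.
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeOutboundIPRanges: "10.0.0.0/16"
```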

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-application-service-ingress
  namespace: temp-namespace
spec:
  hosts:
  - my-application.mydomain.com
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        host: my-application-service
        port:
          number: 80


kind: Service
apiVersion: v1
metadata:
  name: my-application-service
  namespace: temp-namespace
spec:
  selector:
    app: my-application-service-deployment
  ports:
  - port: 80
    targetPort: myappsvc
    protocol: TCP


apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: temp-namespace
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.mydomain.com"

Namespace with Istio enabled:

Name:         temp-namespace
Labels:       istio-injection=enabled
Annotations:  <none>
Status:       Active

No resource quota.

No resource limits. 

Describing the pod shows that Istio and the sidecar are working:

Name:           my-application-service-deployment-fb897c6d6-9ztnx
Namespace:      temp-namespace
Node:           ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93
Start Time:     Sun, 21 Oct 2018 14:40:26 +0500
Labels:         app=my-application-service-deployment
                pod-template-hash=964537282
Annotations:    sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status:         Running
IP:             100.115.0.4
Controlled By:  ReplicaSet/my-application-service-deployment-fb897c6d6
Init Containers:
  istio-init:
    Container ID:  docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47
    Image:         docker.io/istio/proxy_init:1.0.2
    Image ID:      docker-pullable://istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
    Port:          <none>
    Host Port:     <none>
    Args:
      -p
      15001
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      8080,
      -d

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 21 Oct 2018 14:40:26 +0500
      Finished:     Sun, 21 Oct 2018 14:40:26 +0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  my-application-service-pod:
    Container ID:   docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39
    Image:          image.from.dockerhub:latest
    Image ID:       docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env@sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 21 Oct 2018 14:40:28 +0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  1932735283200m
    Requests:
      cpu:     500m
      memory:  1825361100800m
    Environment Variables from:
      my-application-service-env-variables  ConfigMap  Optional: false
    Environment:
      vault.token:  <set to the key 'vault_token' in secret 'vault.token'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro)
  istio-proxy:
    Container ID:  docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225
    Image:         docker.io/istio/proxyv2:1.0.2
    Image ID:      docker-pullable://istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
    Port:          <none>
    Host Port:     <none>
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      my-application-service-deployment
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15007
      --discoveryRefreshDelay
      1s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --statsdUdpAddress
      istio-statsd-prom-bridge.istio-system:9125
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      NONE
    State:          Running
      Started:      Sun, 21 Oct 2018 14:40:28 +0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      POD_NAME:                      my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
      POD_NAMESPACE:                 temp-namespace (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      ISTIO_META_POD_NAME:           my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-rc8kc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rc8kc
    Optional:    false
  istio-envoy:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.default
    Optional:    true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                  Message
  ----    ------                 ----  ----                                                  -------
  Normal  Started                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Started container
  Normal  SuccessfulMountVolume  3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  MountVolume.SetUp succeeded for volume "istio-certs"
  Normal  SuccessfulMountVolume  3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  MountVolume.SetUp succeeded for volume "default-token-rc8kc"
  Normal  SuccessfulMountVolume  3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  MountVolume.SetUp succeeded for volume "istio-envoy"
  Normal  Pulled                 3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Container image "docker.io/istio/proxy_init:1.0.2" already present on machine
  Normal  Created                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Created container
  Normal  Scheduled              3m    default-scheduler                                     Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-42-231-93.eu-west-1.compute.internal
  Normal  Pulled                 3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Container image "image.from.dockerhub:latest" already present on machine
  Normal  Created                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Created container
  Normal  Started                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Started container
  Normal  Pulled                 3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Container image "docker.io/istio/proxyv2:1.0.2" already present on machine
  Normal  Created                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Created container
  Normal  Started                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Started container

1 answer:

Answer 0 (score: 2)

The problem was that I was adding the annotation to the Deployment's metadata instead of to the Pod template. Got help from here:

https://github.com/istio/istio/issues/9304
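The sidecar injector reads the traffic annotations from the Pod template metadata, not from the Deployment's own metadata, so the fix is to move the annotation down one level. A sketch of the corrected placement (abbreviated to the relevant fields):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-application-service-deployment
  namespace: temp-namespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
      annotations:
        # Moved here from the Deployment metadata so that
        # istio-init actually picks it up at injection time.
        traffic.sidecar.istio.io/excludeOutboundIPRanges: "0.0.0.0/0"
    spec:
      containers:
      - name: my-application-service-pod
        image: image.from.dockerhub:latest
```

With the annotation in the Pod template, the istio-init `-x` argument is populated with the excluded range and outbound traffic bypasses the proxy.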