Kafka NodePort service is unreachable from outside the cluster

Asked: 2019-01-24 14:56:12

Tags: apache-spark kubernetes apache-kafka kubernetes-helm

I have been trying to deploy Kafka using Helm charts, so I defined a NodePort service for the Kafka pods. I checked a console Kafka producer and consumer with the same host and port and they work properly. However, when I create a Spark application as the data consumer, with Kafka as the producer, it is not able to connect to the Kafka service. I used the minikube ip (rather than the node ip) as the host, together with the service's NodePort. Still, in the Spark logs I can see that the NodePort service resolves the endpoints and that the brokers are discovered with their pod addresses and ports:

INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Discovered group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null)
INFO ConsumerCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Revoking previously assigned partitions []
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] (Re-)joining group
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2147483645 (/172.17.0.20:9092) could not be established. Broker may not be available.
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2 (/172.17.0.20:9092) could not be established. Broker may not be available.
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 0 (/172.17.0.12:9092) could not be established. Broker may not be available.

How can I change this behavior?

The NodePort service is defined as follows:

kind: Service
apiVersion: v1
metadata:
  name: kafka-service
spec:
  selector:
    app: cp-kafka
    release: my-confluent-oss
  ports:
    - protocol: TCP
      targetPort: 9092  # port the Kafka broker container listens on
      port: 32400       # service port inside the cluster
      nodePort: 32400   # port exposed on every node (reachable via minikube ip)
  type: NodePort

Spark consumer configuration:

def kafkaParams() = Map[String, Object](
  "bootstrap.servers" -> "192.168.99.100:32400",
  "schema.registry.url" -> "http://192.168.99.100:8081",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[KafkaAvroDeserializer],
  "group.id" -> "avro_data",
  "auto.offset.reset" -> "earliest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
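
For context, here is a minimal sketch of how these params would typically feed a direct stream, assuming the spark-streaming-kafka-0-10 integration, an existing StreamingContext named ssc, and a hypothetical topic name avro-topic (the topic is not named anywhere in this question):

import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// KafkaAvroDeserializer yields plain Objects, hence [String, Object]
val stream = KafkaUtils.createDirectStream[String, Object](
  ssc,
  PreferConsistent,
  Subscribe[String, Object](Array("avro-topic"), kafkaParams())
)
// print consumed values; the connection warnings above surface at this point
stream.foreachRDD(rdd => rdd.map(_.value()).foreach(println))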

Kafka producer configuration:

  props.put("bootstrap.servers", "192.168.99.100:32400")
  props.put("client.id", "avro_data")
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
  props.put("schema.registry.url", "http://192.168.99.100:32500")

1 Answer:

Answer 0 (score: 2):

I ran into a similar problem when I tried to access a Kafka broker (cp-helm-chart) running on minikube from outside the cluster.

Here is how I solved it. Make the changes below before installing with helm, then install the chart from your local copy of the repository.

  1. Edit the file https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/values.yaml
  2. Search for nodeport: and change its enabled field to true:
    nodeport:
      enabled: true
  3. Uncomment these two lines by removing the #:
    "advertised.listeners": |-
      EXTERNAL://${HOST_IP}:$((31090 + ${KAFKA_BROKER_ID}))
  4. Replace ${HOST_IP} with your minikube ip (run minikube ip in a terminal to retrieve your k8s host ip, e.g. 196.169.99.100).
  5. Replace ${KAFKA_BROKER_ID} with the broker id (if you only run one broker, it will just be 0 by default).
  6. In the end it will look like this:
    "advertised.listeners": |-
      EXTERNAL://196.169.99.100:31090
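
Putting steps 2-6 together, the relevant fragment of values.yaml ends up roughly like this (a sketch; surrounding keys and defaults may differ between cp-helm-charts versions):

nodeport:
  enabled: true

configurationOverrides:
  "advertised.listeners": |-
    EXTERNAL://196.169.99.100:31090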

Now you can access the Kafka broker running inside the k8s cluster from outside by pointing bootstrap.servers to 196.169.99.100:31090.
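
Applied back to the configs in the question, this just means swapping the bootstrap address in both the consumer and producer settings, e.g.:

props.put("bootstrap.servers", "196.169.99.100:31090")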