How do I debug why Fluentd is not sending data to Elasticsearch?

Asked: 2018-07-02 09:36:11

Tags: elasticsearch fluentd

There are zero error messages when the Fluentd Docker container starts, which makes this hard to debug.

curl http://elasticsearch:9200/_cat/indices lists the existing indices, but no fluentd index shows up among them.

Logs from the fluentd container:

docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 [info]: Added rename key rule: rename_rule1 {:key_regexp=>/^log$/, :new_key=>"message"}
2018-06-29 13:56:41 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-06-29 13:56:41 +0000 [info]: adding source type="forward"
2018-06-29 13:56:41 +0000 [info]: adding source type="monitor_agent"
2018-06-29 13:56:41 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type forward
  </source>
  <source>
    @type monitor_agent
    bind 0.0.0.0
    port 24220
  </source>
  <filter **>
    type record_transformer
    <record>
      node /
      role app
      environment dev
      tenant xxx
      tag ${tag}
    </record>
  </filter>
  <match docker.*>
    type rename_key
    rename_rule1 ^log$ message
    append_tag message
  </match>
  <match **>
    type elasticsearch
    host elasticsearch
    port 9200
    index_name fluentd
    type_name fluentd
    include_tag_key true
    logstash_format true
  </match>
</ROOT>
2018-06-29 13:56:41 +0000 [info]: listening fluent socket on 0.0.0.0:24224
...
2018-06-29 14:16:38 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=49
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=50
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=51
... many repeats
2018-07-01 06:21:52 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 08:39:07 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 06:21:52 +0000 [warn]: suppressed same stacktrace
2018-07-01 08:39:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 13:02:17 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 08:39:07 +0000 [warn]: suppressed same stacktrace
2018-07-01 13:02:17 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 21:04:48 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 13:02:17 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [warn]: failed to flush the buffer. error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 21:04:48 +0000 [warn]: retry count exceededs limit.
  2018-07-01 21:04:48 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [error]: throwing away old logs.

I was able to successfully insert data into a test index in Elasticsearch via curl. How can I troubleshoot where fluentd is failing?
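For reference, the kind of direct insert that works looks roughly like this (a sketch only; the index name curl-test and the document type/id are made up for illustration):

curl -X PUT 'http://elasticsearch:9200/curl-test/test/1' \
  -H 'Content-Type: application/json' \
  -d '{"message": "inserted via curl"}'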

2 answers:

Answer 0 (score: 2):

I can't comment yet, so I'm adding a few observations here.

The documentation says to use @type elasticsearch. Also, if elasticsearch and fluentd both run as Docker containers, make sure they are on a network where they can reach each other (you can try an IP address first).
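As a sketch of that networking check (the container names elasticsearch and fluentd and the network name logging are assumptions), attach both containers to a user-defined network so the hostname elasticsearch resolves from inside the fluentd container:

docker network create logging
docker network connect logging elasticsearch
docker network connect logging fluentd
# verify reachability from the fluentd container (if ping is available in the image)
docker exec fluentd ping -c 1 elasticsearch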

Also, what does your Dockerfile look like, so we can pass a verbosity flag to the fluentd command?
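For example, a minimal sketch of turning up verbosity in the Dockerfile (the base image tag is an assumption; the config path matches the one in your logs):

FROM fluent/fluentd:v0.12
COPY fluent.conf /fluentd/etc/fluent.conf
# -v is verbose, -vv enables debug/trace output, which should show why flushes fail
CMD ["fluentd", "-c", "/fluentd/etc/fluent.conf", "-vv"]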

Answer 1 (score: 1):

I have used this configuration successfully for fluentd + elasticsearch:

<source>
  @type      forward
  @label     @mainstream
  bind       0.0.0.0
  port       24224
</source>

<label @mainstream>
  <match **>
    @type copy

    <store>
      @type               elasticsearch
      host                elasticsearch
      port                9200
      logstash_format     true
      logstash_prefix     fluentd
      logstash_dateformat %Y%m%d
      include_tag_key     true
      type_name           access_log
      tag_key             @log_name
      <buffer>
        flush_mode            interval
        flush_interval        1s
        retry_type            exponential_backoff
        flush_thread_count    2
        retry_forever         true
        retry_max_interval    30
        chunk_limit_size      2M
        queue_limit_length    8
        overflow_action       block
      </buffer>
    </store>

  </match>
</label>
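Note that the <buffer> section above uses buffer syntax introduced in Fluentd v0.14, so it will not parse on the fluentd 0.12.19 shown in your logs. A rough way to run it (the image tag and mount path are assumptions, and the image must have fluent-plugin-elasticsearch installed):

docker run -d --name fluentd -p 24224:24224 \
  -v $(pwd)/fluent.conf:/fluentd/etc/fluent.conf \
  fluent/fluentd:v1.2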

For debugging, you can use tcpdump:

sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn
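Also, since your config already loads monitor_agent on port 24220, its JSON endpoint is another way to see what is stuck; the retry_count and buffer_queue_length fields per plugin point at the failing output:

curl http://localhost:24220/api/plugins.json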