Fluent Bit not pushing data to Amazon ES service

Time: 2020-10-12 12:19:51

Tags: elasticsearch kubernetes-helm amazon-eks fluent-bit

I installed Fluent Bit using Helm (Fluent Bit version 1.13.11). The Fluent Bit pod is running fine, yet it still cannot send data to Amazon ES. The errors and YAML files are below.

Please share any URL that would help me install this easily.

Errors: two kinds of errors appear:

1st:
```
[2020/10/12 12:05:06] [error] [out_es] could not pack/validate JSON response
{"took":0,"errors":true,"items":[{"index":{"_index":"log-test-2020.10.12","_type":"flb_type","_id":null,"status":400,"error":{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"}}},{"index":{"_index":"log-test-2020.10.12","_type":"flb_type","_id":null,"status":400,"error":{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"}}},{"index":{"_index":"log-test-2020.10.12","_type":"flb_type","_id":null,"status":400,"error":{"type":"validat
```

2nd:
```
[2020/10/12 12:05:06] [ warn] [engine] failed to flush chunk '1-1602504304.544264456.flb', retry in 6 seconds: task_id=23, input=tail.0 > output=es.0
[2020/10/12 12:05:06] [ warn] [engine] failed to flush chunk '1-1602504304.79518090.flb', retry in 10 seconds: task_id=21, input=tail.0 > output=es.0
[2020/10/12 12:05:07] [ warn] [engine] failed to flush chunk '1-1602504295.264072662.flb', retry in 81 seconds: task_id=8, input=tail.0 > out
```



Fluent Bit config file:
```
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    Parser            docker
    DB                /var/log/flb_kube.db
    Mem_Buf_Limit     30MB
    Skip_Long_Lines   On
    Refresh_Interval  10

[OUTPUT]
    Name            es
    Match           *
    Host            ${FLUENT_ELASTICSEARCH_HOST}
    Port            ${FLUENT_ELASTICSEARCH_PORT}
    Logstash_Format On
    Logstash_Prefix log-test
    Time_Key        @timestamp
    tls             On
    Retry_Limit     False
```
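For context, the `${FLUENT_ELASTICSEARCH_HOST}` and `${FLUENT_ELASTICSEARCH_PORT}` placeholders are resolved from the container's environment; a sketch of how they might be set in the DaemonSet pod spec (the endpoint shown is a placeholder, not from the question):

```
env:
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "vpc-my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "443"  # HTTPS port, consistent with tls On above
```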

customParsers: |

```
[PARSER]
    Name   apache
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   apache2
    Format regex
    Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name   apache_error
    Format regex
    Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

[PARSER]
    Name   json
    Format json
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
    Name        docker
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep   On
```
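For reference, the `docker` parser above expects Docker's json-file log format, so a raw line from `/var/log/containers/*.log` would look roughly like this (contents illustrative):

```
{"log":"GET /healthz 200\n","stream":"stdout","time":"2020-10-12T12:05:06.123456789Z"}
```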

2 Answers:

Answer 0 (score: 0)

Change the OUTPUT `Retry_Limit` to 10 or lower and balance it against the INPUT `Buffer_Max_Size`; this should help with the buffer filling up with items waiting to be retried, as sketched below.
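A minimal sketch of those changes against the config in the question (the 32k value for `Buffer_Max_Size` is an assumption to illustrate the balancing, not a recommendation):

```
[INPUT]
    Name              tail
    Buffer_Max_Size   32k     # assumed value; balance against Mem_Buf_Limit
    Mem_Buf_Limit     30MB

[OUTPUT]
    Name              es
    Retry_Limit       10      # bounded retries instead of False (retry forever)
```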

Answer 1 (score: 0)

You have to increase the shard limit from Kibana, because the error log states explicitly that the maximum number of open shards has been reached:

validation_exception","reason":"Validation Failed: 1: this action would add [10] total shards, but this cluster currently has [991]/[1000] maximum shards open;"}
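Before raising the limit, it may be worth confirming how close the cluster actually is; one quick check from the same Dev Tools console (a standard Elasticsearch API, not part of the original answer):

```
GET _cluster/health?filter_path=status,active_shards
```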

Use the command below in the Kibana Dev Tools UI to increase the shard limit:

```
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": <new_limit>
  }
}
```
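The original command omits the value. As an illustration only, assuming a new limit of 2000 shards per node and a placeholder endpoint (Amazon ES may additionally require authentication or IAM request signing), the same call from a shell would look like:

```
# 2000 is an assumed example value, not a recommendation
curl -XPUT "https://<your-es-endpoint>/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent":{"cluster.max_shards_per_node":2000}}'
```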
