Deleting data or documents from Elasticsearch using Logstash

Time: 2019-12-27 11:06:45

Tags: elasticsearch logstash document logstash-jdbc

I am trying to delete Elasticsearch data or documents using a Logstash configuration, but the delete does not seem to work. I am using Logstash version 5.6.8. Below is the Logstash configuration file:

```
input {
  jdbc {
    # db connection settings omitted
    # ...
    statement => "select * from table"
  }
}
output {
  elasticsearch {
    action => "delete"
    hosts => "localhost"
    index => "myindex"
    document_type => "doctype"
    document_id => "%{id}"
  }
  stdout { codec => json_lines }
}
```

However, the configuration above deletes the IDs that still exist in my database table, not the IDs that no longer exist. When I sync from the database to Elasticsearch with Logstash, I want rows that have been deleted from the database to be synced (removed) as well, so that the two stay consistent.
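One direction I have been considering (untested, and it assumes I change the table to mark rows with a hypothetical soft-delete flag column `deleted` instead of physically removing them) is a separate delete-only pipeline along these lines:

```
input {
  jdbc {
    # db connection settings omitted
    # hypothetical query returning only the rows flagged as deleted
    statement => "select id from table where deleted = 1"
  }
}
output {
  elasticsearch {
    # remove the matching documents; document_id must resolve for delete to work
    action => "delete"
    hosts => "localhost"
    index => "myindex"
    document_type => "doctype"
    document_id => "%{id}"
  }
}
```

This still would not help with rows that are already hard-deleted from the table, since the query can no longer see them.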

I also tried the following configuration, but ran into some errors:

```
input {
  jdbc {
    # db connection settings omitted
    # ...
    statement => "select * from table"
  }
}
output {
  elasticsearch {
    action => "delete"
    hosts => "localhost"
    index => "myindex"
    document_type => "doctype"
  }
  stdout { codec => json_lines }
}
```

Errors in the Logstash console:

```
"current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-12-27T16:30:16,087][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>9, "stalling_thread_info"=>{"other"=>[{"thread_id"=>22, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-12-27T16:30:18,623][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"http://localhost:9200/_bulk"}
[2019-12-27T16:30:21,086][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>9, "stalling_thread_info"=>{"other"=>[{"thread_id"=>22, "name"=>"[main]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
```

Can someone tell me how to delete documents and keep the database data in sync, or how to handle deleted records in Elasticsearch?
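For reference, the other variant I have sketched (also untested, assuming the same hypothetical `deleted` flag column and that the elasticsearch output accepts a sprintf-style `action` value) routes each row to either an index or a delete action through event metadata:

```
input {
  jdbc {
    # db connection settings omitted
    # hypothetical query returning live rows plus rows flagged as deleted
    statement => "select * from table"
  }
}
filter {
  # pick the bulk action per event from the assumed 0/1 'deleted' flag
  if [deleted] == 1 {
    mutate { add_field => { "[@metadata][action]" => "delete" } }
  } else {
    mutate { add_field => { "[@metadata][action]" => "index" } }
  }
  # the flag itself should not be stored in Elasticsearch
  mutate { remove_field => ["deleted"] }
}
output {
  elasticsearch {
    action => "%{[@metadata][action]}"
    hosts => "localhost"
    index => "myindex"
    document_type => "doctype"
    document_id => "%{id}"
  }
}
```

I have not been able to verify whether this works on Logstash 5.6.8, so any correction is welcome.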

0 Answers:

No answers yet.