Working around NEST's missing support for the pattern replace char filter

Time: 2014-03-25 20:45:41

Tags: nest

NEST does not appear to support the pattern replace char filter described here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-pattern-replace-charfilter.html

I have opened an issue at https://github.com/elasticsearch/elasticsearch-net/issues/543.

Most of my indexing works, so I would like to keep using NEST. Is there a way to work around this with some manual JSON injection during index configuration? I'm new to NEST, so I'm not sure whether that is feasible.

Specifically, I want to use a pattern replace char filter to strip unit numbers from street addresses (i.e. #205 - 1260 Broadway becomes 1260 Broadway) before they are run through my custom analyzer. Because a custom analyzer is involved, I believe I need a char filter to accomplish this.
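To make the intent concrete: the char filter should behave like a simple regex replacement applied to the raw address text before tokenization. Here is a minimal plain-C# sketch of that transformation (the exact pattern, #\d+\s-\s, is only my guess at what matches these unit-number prefixes):

    using System;
    using System.Text.RegularExpressions;

    class UnitNumberExample
    {
        static void Main()
        {
            // The pattern replace char filter should do the equivalent of this
            // regex replacement on the address text before it is tokenized.
            var address = "#205 - 1260 Broadway";
            var stripped = Regex.Replace(address, @"#\d+\s-\s", "");
            Console.WriteLine(stripped); // prints "1260 Broadway"
        }
    }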

My current configuration looks like this:

  elasticClient.CreateIndex("geocoding", c => c
            .Analysis(ad => ad
                .Analyzers(ab => ab
                    .Add("address-index", new CustomAnalyzer()
                    {
                        Tokenizer = "whitespace",
                        Filter = new List<string>() { "lowercase", "synonym" }
                    })
                    .Add("address-search", new CustomAnalyzer()
                    {
                        Tokenizer = "whitespace",
                        Filter = new List<string>() { "lowercase" },
                        CharFilter = new List<string>() { "drop-unit" }
                    })
                )
                .CharFilters(cfb => cfb
                    .Add("drop-unit", new CharFilter()) //missing char filter here
                )
                .TokenFilters(tfb => tfb
                    .Add("synonym", new SynonymTokenFilter()
                    {
                        Expand = true,
                        SynonymsPath = "analysis/synonym.txt"
                    })
                )
             )
  );

Update

As of May 2014, NEST supports the pattern replace char filter: https://github.com/elasticsearch/elasticsearch-net/pull/637
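With that change the char filter can be registered fluently instead of through Settings.Add. A minimal sketch, assuming the PatternReplaceCharFilter class added in that pull request (the full working examples are in the answers below):

    elasticClient.CreateIndex("geocoding", c => c
        .Analysis(ad => ad
            .CharFilters(cfb => cfb
                // Strips unit-number prefixes such as "#205 - " before tokenization.
                .Add("drop-unit", new PatternReplaceCharFilter
                {
                    Pattern = @"#\d+\s-\s",
                    Replacement = ""
                })
            )
            .Analyzers(ab => ab
                .Add("address-search", new CustomAnalyzer()
                {
                    CharFilter = new List<string>() { "drop-unit" },
                    Tokenizer = "whitespace",
                    Filter = new List<string>() { "lowercase" }
                })
            )
        )
    );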

3 Answers:

Answer 0 (score: 1)

Instead of using the fluent settings during index creation, you can use the Settings.Add approach to populate the FluentDictionary in a more manual way, while keeping full control over the settings that get passed in. An example of this is shown in the Create Index section of the NEST documentation. I use this approach for a very similar reason.

Your configuration would look something like the following:

 elasticClient.CreateIndex("geocoding", c => c
       .Settings(s => s
           .Add("analysis.analyzer.address-index.type", "custom")
           .Add("analysis.analyzer.address-index.tokenizer", "whitespace")
           .Add("analysis.analyzer.address-index.filter.0", "lowercase")
           .Add("analysis.analyzer.address-index.filter.1", "synonym")
           .Add("analysis.analyzer.address-search.type", "custom")
           .Add("analysis.analyzer.address-search.tokenizer", "whitespace")
           .Add("analysis.analyzer.address-search.filter.0", "lowercase")
           .Add("analysis.analyzer.address-search.char_filter.0", "drop-unit")
           .Add("analysis.char_filter.drop-unit.type", "mapping")
           .Add("analysis.char_filter.drop-unit.mappings.0", "<mapping1>")
           .Add("analysis.char_filter.drop-unit.mappings.1", "<mapping2>")
           ...
       )
  );

You will need to replace <mapping1> and <mapping2> above with the actual char_filter mappings you want to use. Note that I haven't used a char_filter before, so the setting values may be a bit off, but this should point you in the right direction.
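For illustration only, mapping-type char filter entries use Elasticsearch's "original => replacement" syntax, so the placeholder lines could be filled in along these lines (the sample mappings below are made up and not specific to the address use case; for regex-style stripping, the pattern_replace type is the better fit):

    elasticClient.CreateIndex("geocoding", c => c
        .Settings(s => s
            .Add("analysis.char_filter.drop-unit.type", "mapping")
            // Hypothetical mappings in "original => replacement" form.
            .Add("analysis.char_filter.drop-unit.mappings.0", "ph => f")
            .Add("analysis.char_filter.drop-unit.mappings.1", "qu => k")
        )
    );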

Answer 1 (score: 0)

To follow up on Paige's very helpful answer, it looks like you can combine the fluent and manual Settings.Add approaches. The following worked for me:

     elasticClient.CreateIndex("geocoding", c => c
            .Settings(s => s
                .Add("analysis.char_filter.drop_unit.type", "pattern_replace")
                .Add("analysis.char_filter.drop_unit.pattern", @"#\d+\s-\s")
                .Add("analysis.char_filter.drop_unit.replacement", "")
            )
            .Analysis(ad => ad
                .Analyzers(ab => ab
                    .Add("address_index", new CustomAnalyzer()
                    {
                        Tokenizer = "whitespace",
                        Filter = new List<string>() { "lowercase", "synonym" }
                    })
                    .Add("address_search", new CustomAnalyzer()
                    {
                        CharFilter = new List<string> { "drop_unit" },
                        Tokenizer = "whitespace",
                        Filter = new List<string>() { "lowercase" }
                    })
                )
                .TokenFilters(tfb => tfb
                    .Add("synonym", new SynonymTokenFilter()
                    {
                        Expand = true,
                        SynonymsPath = "analysis/synonym.txt"
                    })
                )
             )
     );

Answer 2 (score: 0)

       EsClient.CreateIndex("universal_de", c => c
         .NumberOfReplicas(1)
         .NumberOfShards(5)
         .Settings(s => s //just as an example
             .Add("merge.policy.merge_factor", "10")
             .Add("search.slowlog.threshold.fetch.warn", "1s")
             .Add("analysis.char_filter.drop_chars.type", "pattern_replace")
             .Add("analysis.char_filter.drop_chars.pattern", @"[^0-9]")
             .Add("analysis.char_filter.drop_chars.replacement", "")
             .Add("analysis.char_filter.drop_specChars.type", "pattern_replace")
             .Add("analysis.char_filter.drop_specChars.pattern", @"[^0-9a-zA-Z]")
             .Add("analysis.char_filter.drop_specChars.replacement", "")
         )
         .Analysis(descriptor => descriptor
            .Analyzers(bases => bases
                .Add("folded_word", new CustomAnalyzer() 
                {
                    Filter = new List<string> { "lowercase", "asciifolding", "trim" },
                    Tokenizer = "standard"
                }
                )
                .Add("trimmed_number", new CustomAnalyzer()
                {
                    CharFilter = new List<string> { "drop_chars" },
                    Tokenizer = "standard",
                    Filter = new List<string>() { "lowercase" }
                })
                .Add("trimmed_specChars", new CustomAnalyzer()
                {
                    CharFilter = new List<string> { "drop_specChars" },
                    Tokenizer = "standard",
                    Filter = new List<string>() { "lowercase" }
                })
            )
         )
            .AddMapping<Business>(m => m
                //.MapFromAttributes()
                .Properties(props => props
                    .MultiField(mf => mf
                        .Name(t => t.DirectoryName)
                        .Fields(fs => fs
                            .String(s => s.Name(t => t.DirectoryName).Analyzer("standard"))
                            .String(s => s.Name(t => t.DirectoryName.Suffix("folded")).Analyzer("folded_word"))
                            )
                    )
                    .MultiField(mf => mf
                        .Name(t => t.Phone)
                        .Fields(fs => fs
                            .String(s => s.Name(t => t.Phone).Analyzer("trimmed_number"))
                            )
                    )
                )
            )
       );

That is how you create the index and add the mapping. For searching, I have something like this:

  var result = _Instance.Search<Business>(q => q
      .TrackScores(true)
      .Query(qq =>
      {
          QueryContainer termQuery = null;
          if (!string.IsNullOrWhiteSpace(input.searchTerm))
          {
              var toLowSearchTerm = input.searchTerm.ToLower();
              termQuery |= qq.QueryString(qs => qs
                  .OnFieldsWithBoost(f => f
                      .Add("directoryName.folded", 5.0)
                  )
                  .Query(toLowSearchTerm));
              termQuery |= qq.Fuzzy(fz => fz.OnField("directoryName.folded").Value(toLowSearchTerm).MaxExpansions(2));
              termQuery |= qq.Term("phone", Regex.Replace(toLowSearchTerm, @"[^0-9]", ""));
          }

          return termQuery;
      })
      .Skip(input.skip)
      .Take(input.take)
  );
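Note that the Regex.Replace on the phone term mirrors what the drop_chars char filter does at index time, so both sides end up comparing pure digit strings. A quick standalone illustration (the sample phone number is made up):

    using System;
    using System.Text.RegularExpressions;

    class PhoneNormalizationExample
    {
        static void Main()
        {
            // Index time: the "trimmed_number" analyzer's drop_chars char filter
            // applies the pattern [^0-9] with an empty replacement.
            // Query time: the search code above does the same normalization by hand
            // before issuing the term query against the "phone" field.
            var userInput = "+49 (0)30 / 123-4567"; // made-up example input
            var normalized = Regex.Replace(userInput.ToLower(), @"[^0-9]", "");
            Console.WriteLine(normalized); // prints 490301234567
        }
    }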

New: I managed to use the pattern replace char filter in a nicer way:

    .Analysis(descriptor => descriptor
        .Analyzers(bases => bases
            .Add("folded_word", new CustomAnalyzer()
            {
                Filter = new List<string> { "lowercase", "asciifolding", "trim" },
                Tokenizer = "standard"
            }
            )
            .Add("trimmed_number", new CustomAnalyzer()
            {
                CharFilter = new List<string> { "drop_chars" },
                Tokenizer = "standard",
                Filter = new List<string>() { "lowercase" }
            })
            .Add("trimmed_specChars", new CustomAnalyzer()
            {
                CharFilter = new List<string> { "drop_specChars" },
                Tokenizer = "standard",
                Filter = new List<string>() { "lowercase" }
            })
            .Add("autocomplete", new CustomAnalyzer()
            {
                Tokenizer = new WhitespaceTokenizer().Type,
                Filter = new List<string>() { "lowercase", "asciifolding", "trim", "engram" }
            }
            )
     )
     .TokenFilters(i => i
                 .Add("engram", new EdgeNGramTokenFilter
                     {
                         MinGram = 3,
                         MaxGram = 15
                     }
                 )
    )
    .CharFilters(cf => cf
                 .Add("drop_chars", new PatternReplaceCharFilter
                     {
                         Pattern = @"[^0-9]",
                         Replacement = ""
                     }
                 )
                 .Add("drop_specChars", new PatternReplaceCharFilter
                 {
                     Pattern = @"[^0-9a-zA-Z]",
                     Replacement = ""
                 }
                 )
    )
    )
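The autocomplete analyzer defined above still has to be attached to a field before it does anything. A sketch of how that could look, following the same multi-field pattern as the mapping earlier in this answer (the "autocomplete" sub-field is just an example and not part of the original mapping):

    .AddMapping<Business>(m => m
        .Properties(props => props
            .MultiField(mf => mf
                .Name(t => t.DirectoryName)
                .Fields(fs => fs
                    .String(s => s.Name(t => t.DirectoryName).Analyzer("standard"))
                    .String(s => s.Name(t => t.DirectoryName.Suffix("folded")).Analyzer("folded_word"))
                    // Hypothetical sub-field analyzed with the edge-ngram "autocomplete" analyzer.
                    .String(s => s.Name(t => t.DirectoryName.Suffix("autocomplete")).Analyzer("autocomplete"))
                )
            )
        )
    )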