How to choose a coreference resolution system in Stanford CoreNLP

Date: 2017-01-27 08:46:47

Tags: java stanford-nlp stanford-parser

I am trying out the coreference resolution systems from CoreNLP. Starting from "Running coreference resolution on raw text", I understand how to set the usual properties for the dcoref system.
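
For reference, a minimal sketch of that standard dcoref setup as I understand it from the documentation (the annotator list is the documented one; the variable names are just mine):

    // Deterministic coreference (dcoref) setup, as described in the documentation.
    Properties dcorefProps = new Properties();
    dcorefProps.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP dcorefPipeline = new StanfordCoreNLP(dcorefProps);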

I would like to choose between the coreference systems [deterministic, statistical, neural] depending on each module's latency. The command-line usage is clear to me, but how do I set this option through the API?

Currently, I am running the default code:

// Imports below assume CoreNLP 3.7 or later.
import java.util.Properties;

import edu.stanford.nlp.coref.CorefCoreAnnotations;
import edu.stanford.nlp.coref.data.CorefChain;
import edu.stanford.nlp.coref.data.Mention;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class CorefExample {
  public static void main(String[] args) throws Exception {
    Annotation document = new Annotation("Barack Obama was born in Hawaii.  He is the president. Obama was elected in 2008.");
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,mention,coref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    pipeline.annotate(document);
    System.out.println("---");
    System.out.println("coref chains");
    // Print each coreference chain found in the document.
    for (CorefChain cc : document.get(CorefCoreAnnotations.CorefChainAnnotation.class).values()) {
      System.out.println("\t" + cc);
    }
    // Print the coref mentions detected in each sentence.
    for (CoreMap sentence : document.get(CoreAnnotations.SentencesAnnotation.class)) {
      System.out.println("---");
      System.out.println("mentions");
      for (Mention m : sentence.get(CorefCoreAnnotations.CorefMentionsAnnotation.class)) {
        System.out.println("\t" + m);
      }
    }
  }
}

1 Answer:

Answer 0 (score: 1):

Well, after digging into the CorefProperties class, I found the properties that need to be changed:

 props.setProperty("coref.language", "en");
 props.setProperty("coref.algorithm", "statistical");//"statistical" : "neural"

What is more surprising, though, is the time it takes to process the example text above: the statistical method needs about 45 seconds and the neural one about 30 seconds (Intel i5 @ 2.00 GHz, 8 GB RAM). Am I missing something here?
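
For what it's worth, the numbers above may include model loading. A sketch of how one might time pipeline construction and annotation separately (my own measurement scaffolding, reusing props and document from the code above):

    // Hypothetical timing scaffold: separate model loading from the actual coref run.
    long t0 = System.currentTimeMillis();
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);   // loads all models
    long t1 = System.currentTimeMillis();
    pipeline.annotate(document);                              // actual annotation, including coref
    long t2 = System.currentTimeMillis();
    System.out.println("load: " + (t1 - t0) + " ms, annotate: " + (t2 - t1) + " ms");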