How to use the Stanford Dependency Parser in Python

Asked: 2014-07-11 19:26:46

Tags: python parsing stanford-nlp

How do I use the Stanford Dependency parser in NLTK? I have tried the code below, but it does not produce any tree structure. Please guide me; I am new to Python and NLTK.

-----------------------------------------------------------------------------------------

    import os

    sentence = "this is a foo bar i want to parse."

    # Write the sentence to a temp file and run the Stanford lexparser shell script on it.
    os.popen("echo '"+sentence+"' > ~/stanfordtemp.txt")
    parser_out = os.popen("~/stanford-parser-full-2014-06-16/lexparser.sh ~/stanfordtemp.txt").readlines()

    # Keep only the lines of the bracketed parse (they start with "("),
    # discarding the parser's status messages.
    bracketed_parse = " ".join([i.strip() for i in parser_out if len(i.strip()) > 0 and i.strip()[0] == "("])
    print bracketed_parse

-----------------------------------------------------------------------------------------
Output:
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [1.3 sec].
Parsing file: /home/stanfordtemp.txt
Parsing [sent. 1 len. 10]: this is a foo bar i want to parse .
Parsed file: /home/stanfordtemp.txt [1 sentences].
Parsed 10 words in 1 sentences (18.05 wds/sec; 1.81 sents/sec).

1 Answer:

Answer 0 (score: 0)

Have you tried one of the Python wrappers listed on the Stanford NLP page? Komatsu and Castner appear to support v3.3 (the current release is 3.4).
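
If you would rather stay inside NLTK, recent NLTK releases ship an nltk.parse.stanford.StanfordParser wrapper around the same JARs. The following is a minimal sketch, assuming NLTK 3.x and the stanford-parser-full-2014-06-16 download; the JAR filenames and paths are placeholders for your local installation, and note that this wrapper yields phrase-structure (constituency) trees, so typed dependencies would still need one of the wrappers above or a direct call to the Java tools.

    # A minimal sketch, assuming NLTK 3.x's nltk.parse.stanford wrapper and the
    # stanford-parser-full-2014-06-16 download; paths and JAR names are placeholders.
    import os
    from nltk.parse.stanford import StanfordParser

    parser_dir = os.path.expanduser("~/stanford-parser-full-2014-06-16")
    parser = StanfordParser(
        path_to_jar=os.path.join(parser_dir, "stanford-parser.jar"),
        path_to_models_jar=os.path.join(parser_dir, "stanford-parser-3.4-models.jar"),
        model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")

    # raw_parse returns an iterator of nltk.tree.Tree objects (phrase-structure trees).
    for tree in parser.raw_parse("this is a foo bar i want to parse."):
        print(tree)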