How do I build a query for the StanfordNLP Server in Python?

Asked: 2016-05-24 14:39:51

Tags: rest stanford-nlp

I am trying to implement a filter that sends documents to a StanfordNLP server, but I am not sure what the transmitted data should look like. Currently I do this:

    values = {
        paragraph: "true",
        "tokenize.whitespace": "true",
        "annotators": "tokenize,ssplit,pos",
        "outputFormat": "json"
    }

    data     = urllib.urlencode(values)
    req      = urllib2.Request(self.url, data)
    response = urllib2.urlopen(req)
    result   = response.read()

The input text:

u'I own a dog but bought this fun and helpful book for two of my favorite people that own a cat for a gift.  What a perrrrrfect gift item it is and both my friends just loved it.  Hoping to have It\\'s a Dog\\'s LIfe soon.Linda Hannawalt'

The server's debug output (`data`) looks like this:

[/127.0.0.1:36926] API call w/annotators tokenize,ssplit,pos,depparse,lemma,ner,mention,coref,natlog,openie
outputFormat=json&This+remote%2C+for+whatever+reason%2C+was+chosen+by+Time+Warner+to+replace+their+previous+silver+remote%2C+the+Time+Warner+Synergy+V+RC-U62CP-1.12S.++The+actual+function+of+this+CLIKR-5+is+OK%2C+but+the+ergonomic+design+sets+back+remotes+by+20+years.++The+buttons+are+all+the+same%2C+there%27s+no+separation+of+the+number+buttons%2C+the+volume+and+channel+buttons+are+the+same+shape+as+the+other+buttons+on+the+remote%2C+and+it+all+adds+up+to+a+crappy+user+experience.++Why+would+TWC+accept+this+as+a+replacement%3F++This+remote+is+virtually+impossible+to+pick+up+and+use+without+staring+at+it+to+make+sure+where+your+fingers+are.++Heck%2C+you+have+to+feel+around+just+to+figure+out+if+you%27ve+grabbed+it+by+the+top+or+bottom%2C+since+there%27s+no+articulation+in+the+body+of+the+thing+to+tell+you+which+end+is+up.++Horrible%2C+just+horrible+design.++I%27m+skipping+this+and+paying+double+for+a+refurbished+Synergy+V.=true&annotators=tokenize%2Cssplit%2Cpos&tokenize.whitespace=true

But this is the output I receive:

<type 'list'>: [{'lemma': u'outputformat', 'originalText': u'outputFormat'}, {'lemma': u'json', 'originalText': u'json'}, {'lemma': u'annotator', 'originalText': u'annotators'}, {'lemma': u'%', 'originalText': u'%'}, {'lemma': u'%', 'originalText': u'%'}, {'lemma': u'2cpos', 'originalText': u'2Cpos'}, {'lemma': u'dog', 'originalText': u'dog'}, {'lemma': u'fun', 'originalText': u'fun'}, {'lemma': u'book', 'originalText': u'book'}, {'lemma': u'people', 'originalText': u'people'}, {'lemma': u'cat', 'originalText': u'cat'}, {'lemma': u'gift', 'originalText': u'gift'}, {'lemma': u'perrrrrfect', 'originalText': u'perrrrrfect'}, {'lemma': u'gift', 'originalText': u'gift'}, {'lemma': u'item', 'originalText': u'item'}, {'lemma': u'friend', 'originalText': u'friends'}, {'lemma': u'%', 'originalText': u'%'}, {'lemma': u'dog', 'originalText': u'Dog'}, {'lemma': u'%', 'originalText': u'%'}, {'lemma': u'life', 'originalText': u'LIfe'}, {'lemma': u'soon.linda', 'originalText': u'soon.Linda'}, {'lemma': u'hannawalt', 'originalText': u'Hannawalt'}, {'lemma': u'tokenize.whitespace', 'originalText': u'tokenize.whitespace'}]

So the actual text is in there, but entries like {'lemma': u'outputformat', 'originalText': u'outputFormat'} are clearly wrong. What should the correct request string look like?

The code for my filter:

def filter(self, paragraph):

    values = {
        paragraph: "true",
        "tokenize.whitespace": "true",
        "annotators": "tokenize,ssplit,pos",
        "outputFormat": "json"
    }

    data = urllib.urlencode(values)

    req = urllib2.Request(self.url, data)
    response = urllib2.urlopen(req)
    result = response.read()

    result = json.loads(result)

    filtered_tokens = list()

    for sentence in result["sentences"]:

        for token in sentence["tokens"]:

            pos = token["pos"]

            if pos in self.whitelist:
                filtered_tokens.append({
                    "originalText": token["originalText"],
                    "lemma": token["lemma"]
                })

    if self.debug is True:
        print "Filtered Tokens:   "
        print filtered_tokens

    return filtered_tokens
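For reference, the filtering loop above can be exercised on its own against a mock response shaped like CoreNLP's JSON output (`sentences` → `tokens`, each token carrying `pos`, `lemma`, and `originalText`). This is a standalone sketch with no server involved; the POS whitelist here is an assumed example, not taken from the question:

```python
# Standalone sketch of the filtering step, run against a mock response
# shaped like CoreNLP's JSON output. The whitelist is an assumed example.
whitelist = {"NN", "NNS"}

mock_result = {
    "sentences": [
        {"tokens": [
            {"originalText": "I",   "lemma": "I",   "pos": "PRP"},
            {"originalText": "own", "lemma": "own", "pos": "VBP"},
            {"originalText": "a",   "lemma": "a",   "pos": "DT"},
            {"originalText": "dog", "lemma": "dog", "pos": "NN"},
        ]},
    ]
}

filtered_tokens = []
for sentence in mock_result["sentences"]:
    for token in sentence["tokens"]:
        if token["pos"] in whitelist:
            filtered_tokens.append({
                "originalText": token["originalText"],
                "lemma": token["lemma"],
            })

print(filtered_tokens)  # [{'originalText': 'dog', 'lemma': 'dog'}]
```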

2 Answers:

Answer 0 (score: 2)

I suspect the problem is that you are submitting the properties in the body of the POST request. The properties should be passed as URL parameters; the body of the POST should be the document to annotate.
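A minimal sketch of this, assuming a server at `http://localhost:9000` and using Python 3's `urllib` (the question uses Python 2's `urllib2`): the annotation properties are serialized as a JSON object into the `properties` URL parameter, and the POST body carries only the raw document text:

```python
import json
import urllib.parse
import urllib.request

def build_corenlp_request(server_url, text):
    """Properties go into the URL query string; the POST body is the text."""
    properties = {
        "annotators": "tokenize,ssplit,pos",
        "tokenize.whitespace": "true",
        "outputFormat": "json",
    }
    query = urllib.parse.urlencode({"properties": json.dumps(properties)})
    url = "%s/?%s" % (server_url.rstrip("/"), query)
    return urllib.request.Request(url, data=text.encode("utf-8"))

req = build_corenlp_request("http://localhost:9000", "I own a dog .")
# With a CoreNLP server running at that address, you would then do:
# result = json.loads(urllib.request.urlopen(req).read())
```

The server address and port are assumptions for illustration; the key point is that the text is no longer URL-encoded into the parameter string, so it can no longer be mistaken for a property.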

Answer 1 (score: 1)

Many people like this Python wrapper: https://github.com/smilli/py-corenlp. We will eventually try to release our own code to help people use Stanford CoreNLP from Python.
