Why does the CoreNLP NER tagger join separated numbers together?

Date: 2018-09-10 02:19:35

Tags: python nlp nltk stanford-nlp pycorenlp

Here is the code snippet:

In [390]: t
Out[390]: ['my', 'phone', 'number', 'is', '1111', '1111', '1111']

In [391]: ner_tagger.tag(t)
Out[391]: 
[('my', 'O'),
 ('phone', 'O'),
 ('number', 'O'),
 ('is', 'O'),
 ('1111\xa01111\xa01111', 'NUMBER')]

What I expected was:

Out[391]: 
[('my', 'O'),
 ('phone', 'O'),
 ('number', 'O'),
 ('is', 'O'),
 ('1111', 'NUMBER'),
 ('1111', 'NUMBER'),
 ('1111', 'NUMBER')]

As you can see, the artificial phone number is joined by \xa0, which is treated as a non-breaking space. Can I configure CoreNLP to keep these numbers separate without changing the other default rules?

ner_tagger is defined as:

ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
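
(This assumes a CoreNLP server is already running on port 9000. A typical way to start one, with the CoreNLP distribution unpacked in the current directory, is:

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000

)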

1 Answer:

Answer 0 (score: 1)

TL;DR

NLTK is joining the list of tokens into a single string before passing it to the CoreNLP server. CoreNLP then re-tokenizes the input and concatenates the number-like tokens with \xa0 (a non-breaking space).
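
In other words, the token boundaries are lost before the request is even made; the server only ever sees one plain string. A minimal illustration of what happens (the comments restate the observation above):

tokens = ['my', 'phone', 'number', 'is', '1111', '1111', '1111']
# tag_sents() joins the tokens with plain spaces, so this single string
# is what actually reaches the CoreNLP server:
text = ' '.join(tokens)   # 'my phone number is 1111 1111 1111'
# CoreNLP then re-tokenizes the string itself, and its number handling
# merges the space-separated digit groups into one token joined by \xa0.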


In Long

Let's walk through the code. If we look at the tag() function in CoreNLPParser, we see that it calls tag_sents(), which converts each input list of strings into a single string before calling raw_tag_sents(); that allows CoreNLP to re-tokenize the input. See https://github.com/nltk/nltk/blob/develop/nltk/parse/corenlp.py#L348

def tag_sents(self, sentences):
    """
    Tag multiple sentences.
    Takes multiple sentences as a list where each sentence is a list of
    tokens.

    :param sentences: Input sentences to tag
    :type sentences: list(list(str))
    :rtype: list(list(tuple(str, str))
    """
    # Converting list(list(str)) -> list(str)
    sentences = (' '.join(words) for words in sentences)
    return [sentences[0] for sentences in self.raw_tag_sents(sentences)]

def tag(self, sentence):
    """
    Tag a list of tokens.
    :rtype: list(tuple(str, str))
    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
    >>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
    >>> parser.tag(tokens)
    [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
    ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'O')]
    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='pos')
    >>> tokens = "What is the airspeed of an unladen swallow ?".split()
    >>> parser.tag(tokens)
    [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'),
    ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'),
    ('unladen', 'JJ'), ('swallow', 'VB'), ('?', '.')]
    """
    return self.tag_sents([sentence])[0]

When raw_tag_sents() is called, it passes the input to the server using api_call():

def raw_tag_sents(self, sentences):
    """
    Tag multiple sentences.
    Takes multiple sentences as a list where each sentence is a string.

    :param sentences: Input sentences to tag
    :type sentences: list(str)
    :rtype: list(list(list(tuple(str, str)))
    """
    default_properties = {'ssplit.isOneSentence': 'true',
                          'annotators': 'tokenize,ssplit,' }

    # Supports only 'pos' or 'ner' tags.
    assert self.tagtype in ['pos', 'ner']
    default_properties['annotators'] += self.tagtype
    for sentence in sentences:
        tagged_data = self.api_call(sentence, properties=default_properties)
        yield [[(token['word'], token[self.tagtype]) for token in tagged_sentence['tokens']]
                for tagged_sentence in tagged_data['sentences']]

So the question is: how do we resolve this and get the tokens back exactly as we passed them in?

If we look at the tokenizer options in CoreNLP, we find the tokenize.whitespace option, which makes the tokenizer split on whitespace only.
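
Before patching NLTK, we can check what this option does by calling the CoreNLP server directly. Below is a minimal sketch using the requests library; passing the options as a JSON-encoded properties URL parameter is the documented way to configure a single request, though the exact NER labels you get back will depend on the models your server has loaded:

import json

import requests

text = 'my phone number is 1111 1111 1111'
properties = {
    'annotators': 'tokenize,ssplit,ner',
    'tokenize.whitespace': 'true',  # split on whitespace only
    'ssplit.isOneSentence': 'true',
    'outputFormat': 'json',
}
# The CoreNLP server reads per-request options from the `properties` URL parameter.
response = requests.post(
    'http://localhost:9000',
    params={'properties': json.dumps(properties)},
    data=text.encode('utf-8'),
)
for token in response.json()['sentences'][0]['tokens']:
    print(token['word'], token['ner'])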

If we make some changes so that additional properties can be passed in before api_call() is invoked, we can force the tokens to reach the CoreNLP server joined only by whitespace, e.g. with these changes to the code:

def tag_sents(self, sentences, properties=None):
    """
    Tag multiple sentences.

    Takes multiple sentences as a list where each sentence is a list of
    tokens.

    :param sentences: Input sentences to tag
    :type sentences: list(list(str))
    :rtype: list(list(tuple(str, str))
    """
    # Converting list(list(str)) -> list(str)
    sentences = (' '.join(words) for words in sentences)
    if properties is None:
        properties = {'tokenize.whitespace':'true'}
    return [sentences[0] for sentences in self.raw_tag_sents(sentences, properties)]

def tag(self, sentence, properties=None):
    """
    Tag a list of tokens.

    :rtype: list(tuple(str, str))

    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
    >>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
    >>> parser.tag(tokens)
    [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
    ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'O')]

    >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='pos')
    >>> tokens = "What is the airspeed of an unladen swallow ?".split()
    >>> parser.tag(tokens)
    [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'),
    ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'),
    ('unladen', 'JJ'), ('swallow', 'VB'), ('?', '.')]
    """
    return self.tag_sents([sentence], properties)[0]

def raw_tag_sents(self, sentences, properties=None):
    """
    Tag multiple sentences.

    Takes multiple sentences as a list where each sentence is a string.

    :param sentences: Input sentences to tag
    :type sentences: list(str)
    :rtype: list(list(list(tuple(str, str)))
    """
    default_properties = {'ssplit.isOneSentence': 'true',
                          'annotators': 'tokenize,ssplit,' }

    default_properties.update(properties or {})

    # Supports only 'pos' or 'ner' tags.
    assert self.tagtype in ['pos', 'ner']
    default_properties['annotators'] += self.tagtype
    for sentence in sentences:
        tagged_data = self.api_call(sentence, properties=default_properties)
        yield [[(token['word'], token[self.tagtype]) for token in tagged_sentence['tokens']]
                for tagged_sentence in tagged_data['sentences']]

After changing the code above:

>>> from nltk.parse.corenlp import CoreNLPParser
>>> ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
>>> sent = ['my', 'phone', 'number', 'is', '1111', '1111', '1111']
>>> ner_tagger.tag(sent)
[('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111', 'DATE'), ('1111', 'DATE'), ('1111', 'DATE')]
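
Note that with whitespace tokenization the tokens now stay separate, which is what was asked for, although the NER tag comes back as DATE rather than NUMBER here; the label itself depends on how CoreNLP's numeric annotators interpret the isolated digit groups.

If you would rather not edit the installed NLTK source, the same effect can be had by subclassing. This is only a sketch, assuming api_call() accepts a properties keyword as in the version shown above; the class name is hypothetical:

from nltk.parse.corenlp import CoreNLPParser

class WhitespaceCoreNLPParser(CoreNLPParser):
    """Hypothetical subclass that forces whitespace tokenization on every request."""

    def api_call(self, data, properties=None, *args, **kwargs):
        props = dict(properties or {})
        # Keep the tokens exactly as they were sent in.
        props['tokenize.whitespace'] = 'true'
        return super().api_call(data, properties=props, *args, **kwargs)

ner_tagger = WhitespaceCoreNLPParser(url='http://localhost:9000', tagtype='ner')
print(ner_tagger.tag(['my', 'phone', 'number', 'is', '1111', '1111', '1111']))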