Custom sentence segmentation in spaCy

Posted: 2018-09-22 16:03:01

Tags: python nlp spacy

I want spaCy to use the sentence segmentation boundaries I provide instead of its own processing.

For example:

get_sentences("Bob meets Alice. @SentBoundary@ They play together.")
# => ["Bob meets Alice.", "They play together."]  # two sents

get_sentences("Bob meets Alice. They play together.")
# => ["Bob meets Alice. They play together."]  # ONE sent

get_sentences("Bob meets Alice, @SentBoundary@ they play together.")
# => ["Bob meets Alice,", "they play together."] # two sents

This is what I have so far (borrowing things from the documentation here):

import spacy
nlp = spacy.load('en_core_web_sm')

def mark_sentence_boundaries(doc):
    for i, token in enumerate(doc):
        if token.text == '@SentBoundary@':
            doc[i+1].sent_start = True
    return doc

nlp.add_pipe(mark_sentence_boundaries, before='parser')

def get_sentences(text):
    doc = nlp(text)
    return list(doc.sents)

But I'm getting results like the following:

# Ex1
get_sentences("Bob meets Alice. @SentBoundary@ They play together.")
#=> ["Bob meets Alice.", "@SentBoundary@", "They play together."]

# Ex2
get_sentences("Bob meets Alice. They play together.")
#=> ["Bob meets Alice.", "They play together."]

# Ex3
get_sentences("Bob meets Alice, @SentBoundary@ they play together.")
#=> ["Bob meets Alice, @SentBoundary@", "they play together."]

Here are the main problems I'm facing:

  1. How to get rid of the @SentBoundary@ token once a sentence break is found.
  2. How to stop spaCy from splitting when no @SentBoundary@ is present.

2 answers:

Answer 0: (score: 2)

The following code works:

import spacy
from spacy.pipeline import SentenceSegmenter

nlp = spacy.load('en_core_web_sm')

def split_on_breaks(doc):
    start = 0
    seen_break = False
    for word in doc:
        if seen_break:
            # The previous token was the marker: yield everything before it
            # and start the next sentence at the current token.
            yield doc[start:word.i - 1]
            start = word.i
            seen_break = False
        elif word.text == '@SentBoundary@':
            seen_break = True
    # Yield whatever is left after the last marker.
    if start < len(doc):
        yield doc[start:len(doc)]

sbd = SentenceSegmenter(nlp.vocab, strategy=split_on_breaks)
nlp.add_pipe(sbd, first=True)

def get_sentences(text):
    doc = nlp(text)
    return list(doc.sents)  # convert to strings if required

# Ex1
get_sentences("Bob meets Alice. @SentBoundary@ They play together.")
# => ["Bob meets Alice.", "They play together."]  # two sentences

# Ex2
get_sentences("Bob meets Alice. They play together.")
# => ["Bob meets Alice. They play together."]  # ONE sentence

# Ex3
get_sentences("Bob meets Alice, @SentBoundary@ they play together.")
# => ["Bob meets Alice,", "they play together."] # two sentences

The right thing was to look at SentenceSegmenter rather than setting boundaries manually (examples here). This GitHub issue was also helpful.
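Note that doc.sents yields Span objects rather than plain strings. A minimal sketch of the conversion mentioned in the comment above (the helper name get_sentence_texts is illustrative, not part of the original answer):

def get_sentence_texts(text):
    doc = nlp(text)
    # Span.text returns the underlying sentence as a plain string
    return [sent.text for sent in doc.sents]

get_sentence_texts("Bob meets Alice. @SentBoundary@ They play together.")
# => ['Bob meets Alice.', 'They play together.']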

Answer 1: (score: 0)
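This answer keeps the parser-based pipeline but rebuilds the Doc without the @SentBoundary@ tokens, copying the remaining token attributes across with to_array/from_array, and then marks the sentence starts on the rebuilt Doc so the parser respects them: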

import spacy
from spacy.attrs import LOWER, POS, ENT_TYPE, IS_ALPHA
from spacy.tokens import Doc
import numpy
nlp = spacy.load('en_core_web_sm')

def mark_sentence_boundaries(doc):
    # Collect the positions of the boundary markers.
    indexes = []
    for token in doc:
        if token.text == '@SentBoundary@':
            indexes.append(token.i)

    # Rebuild the Doc without the marker tokens, copying the remaining
    # token attributes across.
    np_array = doc.to_array([LOWER, POS, ENT_TYPE, IS_ALPHA])
    np_array = numpy.delete(np_array, indexes, axis=0)
    doc2 = Doc(doc.vocab, words=[t.text for i, t in enumerate(doc) if i not in indexes])
    doc2.from_array([LOWER, POS, ENT_TYPE, IS_ALPHA], np_array)

    # Mark sentence starts on the rebuilt Doc: setting sent_start on the
    # original doc would be lost, because doc2 is a fresh object. Each
    # removed marker shifts the following token's index down by one.
    for removed, i in enumerate(indexes):
        new_index = i - removed  # where the token after the marker landed
        if new_index < len(doc2):
            doc2[new_index].sent_start = True
    return doc2

nlp.add_pipe(mark_sentence_boundaries, before='parser')

def get_sentences(text):
    doc = nlp(text)
    return list(doc.sents)

print(get_sentences("Bob meets Alice. @SentBoundary@ They play together."))
# => ["Bob meets Alice.", "They play together."]  # two sents

print(get_sentences("Bob meets Alice. They play together."))
# => ["Bob meets Alice. They play together."]  # ONE sent

print(get_sentences("Bob meets Alice, @SentBoundary@ they play together."))
# => ["Bob meets Alice,", "they play together."] # two sents