Retrieving sentence strings from an NLTK corpus

Time: 2015-05-11 14:37:28

Tags: python regex nlp token nltk

Here is my dataset:

from nltk.corpus import gutenberg
emma = gutenberg.sents('austen-emma.txt')

It gives me the sentences as lists of tokens:

[[u'she', u'was', u'happy'], [u'It', u'was', u'her', u'own', u'good']]

But this is what I want:

['she was happy','It was her own good']

2 answers:

Answer 0 (score: 2):

Corpora accessed through the nltk.corpus API generally return a document stream, i.e. a list of sentences, where each sentence is a list of tokens.

>>> from nltk.corpus import gutenberg
>>> emma = gutenberg.sents('austen-emma.txt')
>>> emma[0]
[u'[', u'Emma', u'by', u'Jane', u'Austen', u'1816', u']']
>>> emma[1]
[u'VOLUME', u'I']
>>> emma[2]
[u'CHAPTER', u'I']
>>> emma[3]
[u'Emma', u'Woodhouse', u',', u'handsome', u',', u'clever', u',', u'and', u'rich', u',', u'with', u'a', u'comfortable', u'home', u'and', u'happy', u'disposition', u',', u'seemed', u'to', u'unite', u'some', u'of', u'the', u'best', u'blessings', u'of', u'existence', u';', u'and', u'had', u'lived', u'nearly', u'twenty', u'-', u'one', u'years', u'in', u'the', u'world', u'with', u'very', u'little', u'to', u'distress', u'or', u'vex', u'her', u'.']

For the nltk.corpus.gutenberg corpus, this loads a PlaintextCorpusReader; see https://github.com/nltk/nltk/blob/develop/nltk/corpus/__init__.py#L114 and https://github.com/nltk/nltk/blob/develop/nltk/corpus/reader/plaintext.py

So it is reading a directory of text files, one of which is 'austen-emma.txt', and it uses the default sent_tokenize and word_tokenize functions to process the corpus. In the code, these are instantiated as tokenizers/punkt/english.pickle and WordPunctTokenizer(); see https://github.com/nltk/nltk/blob/develop/nltk/corpus/reader/plaintext.py#L40
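For illustration, here is a minimal sketch of building a PlaintextCorpusReader over your own text files with those same defaults spelled out explicitly (the directory path and file pattern are hypothetical):

import nltk.data
from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from nltk.tokenize import WordPunctTokenizer

# '/path/to/texts' is a hypothetical directory of plain-text files.
# These keyword arguments simply make the reader's defaults visible:
# Punkt for sentence splitting, WordPunctTokenizer for word splitting.
reader = PlaintextCorpusReader(
    '/path/to/texts', r'.*\.txt',
    word_tokenizer=WordPunctTokenizer(),
    sent_tokenizer=nltk.data.LazyLoader('tokenizers/punkt/english.pickle'))

sents = reader.sents()  # same list-of-token-lists structure as emma above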

So, to get the desired list of sentence strings, use:

>>> from nltk.corpus import gutenberg
>>> emma = gutenberg.sents('austen-emma.txt')
>>> sents_list = [" ".join(sent) for sent in emma]
>>> sents_list[0]
u'[ Emma by Jane Austen 1816 ]'
>>> sents_list[1]
u'VOLUME I'
>>> sents_list[:1]
[u'[ Emma by Jane Austen 1816 ]']
>>> sents_list[:2]
[u'[ Emma by Jane Austen 1816 ]', u'VOLUME I']
>>> sents_list[:3]
[u'[ Emma by Jane Austen 1816 ]', u'VOLUME I', u'CHAPTER I']

Answer 1 (score: 1):

As alvas and AShelly have said, what you are seeing is the correct behavior. However, their approach of joining the words of each sentence has two drawbacks:

  • You end up with whitespace around punctuation (e.g. "Emma Woodhouse , handsome , clever , and rich , with a comfortable [...]"); a rough regex cleanup is sketched after this list.
  • You make PlaintextCorpusReader perform word tokenization only to undo it afterwards, which is avoidable computational overhead.
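If you nevertheless stay with the join-based approach, a rough regex cleanup can remove most of that spacing. This is only a heuristic sketch (quotes, brackets and contractions are not handled), and it does nothing about the second drawback:

import re

def detokenize(sent):
    # Remove the space that " ".join() leaves before common punctuation.
    return re.sub(r'\s+([,.;:!?])', r'\1', sent)

>>> detokenize(u'Emma Woodhouse , handsome , clever , and rich .')
u'Emma Woodhouse, handsome, clever, and rich.'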

Given the implementation of PlaintextCorpusReader, it is easy to derive a function that does exactly what PlaintextCorpusReader.sents() does, but without the word tokenization:

def sentences_from_corpus(corpus, fileids=None):
    # concat merges the per-file corpus views into a single list-like view.
    from nltk.corpus.reader.util import concat

    def read_sent_block(stream):
        sents = []
        for para in corpus._para_block_reader(stream):
            # Sentence-tokenize each paragraph, but skip word tokenization;
            # newlines inside a sentence are replaced by spaces.
            sents.extend([s.replace('\n', ' ')
                          for s in corpus._sent_tokenizer.tokenize(para)])
        return sents

    return concat([corpus.CorpusView(path, read_sent_block, encoding=enc)
                   for (path, enc, fileid)
                   in corpus.abspaths(fileids, True, True)])

Contrary to what I said above, this function does add one extra step: since we no longer perform word tokenization, we have to replace newlines with whitespace ourselves.
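A minimal usage sketch, assuming the function above has been defined in the current session:

>>> from nltk.corpus import gutenberg
>>> sentences_from_corpus(gutenberg)[:5]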

gutenberg语料库传递给此函数会导致:

['[Emma by Jane Austen 1816]',
 'VOLUME I',
 'CHAPTER I',
 'Emma Woodhouse, handsome, clever, and rich, with a comfortable home and happy disposition, seemed to unite some of the best blessings of existence; and had lived nearly twenty-one years in the world with very little to distress or vex her.',
 "She was the youngest of the two daughters of a most affectionate, indulgent father; and had, in consequence of her sister's marriage, been mistress of his house from a very early period.",
 ...]