NLTK words corpus doesn't contain "okay"?

Asked: 2017-06-09 04:24:17

Tags: python dictionary nltk corpus

The NLTK words corpus doesn't contain the words "okay", "ok", "OK"?

>>> from nltk.corpus import words
>>> words.words().__contains__("check")
True

>>> words.words().__contains__("okay")
False

>>> len(words.words())
236736

Any idea why?

1 Answer:

Answer 0 (score: 10):

TL;DR

from nltk.corpus import words
from nltk.corpus import wordnet

# wordnet.words() returns an iterator of lemma names in recent NLTK
# versions, so materialize it as a list before concatenating.
manywords = words.words() + list(wordnet.words())
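
As a quick check on the combined list (a sketch; it assumes the words and wordnet corpora have already been downloaded):

# One-time setup if the corpora are missing:
# import nltk; nltk.download('words'); nltk.download('wordnet')
print("check" in manywords)  # True: already in the words corpus
print("okay" in manywords)   # True: contributed by WordNet's lemma names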

In Long

From the docs, nltk.corpus.words is the list of words from http://en.wikipedia.org/wiki/Words_(Unix).

In Unix, you can do:

ls /usr/share/dict/

And reading the README:

$ cd /usr/share/dict/
/usr/share/dict$ cat README
#   @(#)README  8.1 (Berkeley) 6/5/93
# $FreeBSD$

WEB ---- (introduction provided by jaw@riacs) -------------------------

Welcome to web2 (Webster's Second International) all 234,936 words worth.
The 1934 copyright has lapsed, according to the supplier.  The
supplemental 'web2a' list contains hyphenated terms as well as assorted
noun and adverbial phrases.  The wordlist makes a dandy 'grep' victim.

     -- James A. Woods    {ihnp4,hplabs}!ames!jaw    (or jaw@riacs)

Country names are stored in the file /usr/share/misc/iso3166.


FreeBSD Maintenance Notes ---------------------------------------------

Note that FreeBSD is not maintaining a historical document, we're
maintaining a list of current [American] English spellings.

A few words have been removed because their spellings have depreciated.
This list of words includes:
    corelation (and its derivatives)    "correlation" is the preferred spelling
    freen               typographical error in original file
    freend              archaic spelling no longer in use;
                    masks common typo in modern text

--

A list of technical terms has been added in the file 'freebsd'.  This
word list contains FreeBSD/Unix lexicon that is used by the system
documentation.  It makes a great ispell(1) personal dictionary to
supplement the standard English language dictionary.

Since it is a fixed list of roughly 234,936 words, there are bound to be words that are not in it.
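
You can see that NLTK's list mirrors the Unix file with a sketch like this (it assumes a BSD or macOS system where /usr/share/dict/web2 exists; the path may differ elsewhere):

# Assumes /usr/share/dict/web2 exists (BSD/macOS).
with open("/usr/share/dict/web2") as f:
    web2 = {line.strip() for line in f}

print("check" in web2)  # expected True
print("okay" in web2)   # expected False, matching the NLTK words corpus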

If you need to extend the word list, you can add the words from WordNet to it using nltk.corpus.wordnet.words().
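
Since membership tests on a plain Python list scan linearly, a set is the more practical structure for repeated lookups; a sketch:

from nltk.corpus import words
from nltk.corpus import wordnet

# A set gives O(1) average-case membership tests instead of an O(n) list scan.
vocab = set(words.words()) | set(wordnet.words())
print("okay" in vocab)  # True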

Most probably, all you really need is a large enough text corpus, e.g. a Wikipedia dump, which you can then tokenize to extract all the unique words.
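
A minimal sketch of that approach (the sample string here is a stand-in for real corpus text, and the punkt tokenizer model is assumed to be installed):

import nltk

# nltk.download('punkt')  # one-time setup for the tokenizer, if needed

# Stand-in for real corpus text, e.g. the plain text of a Wikipedia dump.
text = "Okay, this sentence is okay. Check the unique words."
tokens = nltk.word_tokenize(text.lower())
unique_words = {t for t in tokens if t.isalpha()}
print(unique_words)
# {'okay', 'this', 'sentence', 'is', 'check', 'the', 'unique', 'words'}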
