Finding the tf-idf score of a specific word in a document using sklearn

Time: 2015-06-22 09:13:23

Tags: python scikit-learn tf-idf

I have some code that runs a basic TF-IDF vectorizer over a collection of documents and returns a D x F sparse matrix, where D is the number of documents and F is the number of terms. No problem there.

But how do I find the TF-IDF score of a specific term in a specific document? i.e., is there some kind of dictionary between terms (in their text representation) and their positions in the resulting sparse matrix?

3 Answers:

Answer 0 (score: 5):

Yes. See .vocabulary_ on the fitted/transformed TF-IDF vectorizer.

In [1]: from sklearn.datasets import fetch_20newsgroups

In [2]: data = fetch_20newsgroups(categories=['rec.autos'])

In [3]: from sklearn.feature_extraction.text import TfidfVectorizer

In [4]: cv = TfidfVectorizer()

In [5]: X = cv.fit_transform(data.data)

In [6]: cv.vocabulary_

It is a dictionary of the form:

{word : column index in array}
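
For example, a minimal sketch of looking up the score of one term in one document, reusing cv and X from the session above (the term 'car' and document index 0 are just illustrative):

term = 'car'   # illustrative term; any word in the vocabulary works
doc_index = 0  # illustrative document index

# vocabulary_ maps the term to its column; the matrix row is the document
col = cv.vocabulary_.get(term)
if col is not None:
    print(X[doc_index, col])  # tf-idf score of `term` in that document
else:
    print('term not in vocabulary')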

Answer 1 (score: 1):

Here is another solution, using CountVectorizer and TfidfTransformer to find the tf-idf score for each word:

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
# our corpus
data = ['I like dog', 'I love cat', 'I interested in cat']

cv = CountVectorizer()

# convert text data into term-frequency matrix
data = cv.fit_transform(data)

tfidf_transformer = TfidfTransformer()

# convert term-frequency matrix into tf-idf
tfidf_matrix = tfidf_transformer.fit_transform(data)

# create a dictionary mapping each word to its idf score
word2tfidf = dict(zip(cv.get_feature_names(), tfidf_transformer.idf_))

for word, score in word2tfidf.items():
    print(word, score)

Output

(u'love', 1.6931471805599454)
(u'like', 1.6931471805599454)
(u'i', 1.0)
(u'dog', 1.6931471805599454)
(u'cat', 1.2876820724517808)
(u'interested', 1.6931471805599454)
(u'in', 1.6931471805599454)
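
Note that idf_ holds the corpus-wide inverse document frequency of each word, not a per-document score. A small sketch, under the same variable names, of reading the per-document tf-idf scores from tfidf_matrix instead:

# rows of tfidf_matrix are documents; columns follow cv.get_feature_names()
feature_names = cv.get_feature_names()
for doc_id, row in enumerate(tfidf_matrix.toarray()):
    for col, score in enumerate(row):
        if score > 0:
            print(doc_id, feature_names[col], score)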

Answer 2 (score: 0):

@kinkajou, no, TF and IDF are not the same thing, but they belong to the same algorithm, TF-IDF, i.e. term frequency–inverse document frequency.
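
As a rough check, the idf values printed in the previous answer can be reproduced by hand with the smoothed formula TfidfTransformer uses by default, idf(t) = ln((1 + n_docs) / (1 + df(t))) + 1; the tf-idf score of a word in a document is then its term frequency multiplied by this idf (with each row L2-normalized by default):

import math

n_docs = 3  # the three toy documents from the previous answer
print(math.log((1.0 + n_docs) / (1 + 1)) + 1)  # 'dog': in 1 doc  -> ~1.6931
print(math.log((1.0 + n_docs) / (1 + 2)) + 1)  # 'cat': in 2 docs -> ~1.2877
print(math.log((1.0 + n_docs) / (1 + 3)) + 1)  # 'i':   in 3 docs -> 1.0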
