Inter Document Similarity: Cosine Distance

Posted: 2014-02-02 13:57:34

Tags: python-2.7 numpy nlp nltk scikit-learn

Updated question:

Following "perimosocordiae"'s solution, I computed the cosine similarity between two documents. I then tried to use that same solution to find the similarity between 2 files, but I again get an error in test(), namely:

Traceback (most recent call last):
  File "3.py", line 103, in <module>
    main()
  File "3.py", line 99, in main
    test(tf_idf_matrix,count,nltkutil.cosine_distance)
  File "3.py", line 46, in test
    doc2 = np.asarray(tdMatrix[j-1].todense()).reshape(-1)
  File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 281, in __getitem__
    return self[key,:]                                #[i] or [1:2]
  File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 233, in __getitem__
    return self._get_row_slice(row, col)      #[i,1:2]
  File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 320, in _get_row_slice
    raise IndexError('index (%d) out of range' % i )
IndexError: index (4) out of range

I am using one file as the training set and the other file as the test set; my goal is to have the test() function output the cosine similarity between the 2 files using tf-idf.
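For reference, what I am ultimately after is something like this minimal sketch (it uses sklearn's TfidfVectorizer and cosine_similarity instead of the nltk cosine_distance that my actual code below relies on):

# Minimal sketch of the goal, not the actual code below: compare two
# preprocessed files with tf-idf and cosine similarity in scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docA = open("xcorpusA.txt").read()   # preprocessed train file
docB = open("xcorpusB.txt").read()   # preprocessed test file

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([docA, docB])  # 2 x n_terms sparse matrix

# cosine_similarity accepts the sparse rows directly
print cosine_similarity(tfidf[0], tfidf[1])     # [[similarity]]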

My code is as follows:

#! /usr/bin/python -tt
from __future__ import division
from operator import itemgetter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import nltk.cluster.util as nltkutil
import numpy as np
import re

def preprocess(fnin, fnout):
    fin = open(fnin, 'rb')
    print fin
    fout = open(fnout, 'wb')
    buf = []

    for line in fin:

        line = line.strip()
        if line.find("-- Document Separator --") > -1:
            if len(buf) > 0:
                body = re.sub("\s+", " ", " ".join(buf))
                fout.write("%s\n" % (body))
            rest = map(lambda x: x.strip(), line.split(": "))
            buf = []
        else:
            buf.append(line)

    fin.close()
    fout.close()

def test(tdMatrix,count,fsim):

    sims=[] 

    sims = np.zeros((len(tdMatrix.todense()), count))
    l=len(tdMatrix.todense())

    for i in range(0, l):
        for j in range(0, count):
            doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1)
            doc2 = np.asarray(tdMatrix[j].todense()).reshape(-1)
            sims[i, j] = fsim(doc1, doc2)
        print sims


def main():

    file_set=["corpusA.txt","corpusB.txt"]
    train=[]
    test1=[]

    for file1 in file_set:
        s="x"+file1
        preprocess(file1,s)

    count_vectorizer = CountVectorizer()
    m=open("xcorpusA.txt",'r')
    for i in m:
        train.append(i.strip())
    #print doc
    #print train
    count_vectorizer.fit_transform(train)
    #print "Vocabulary:", count_vectorizer.vocabulary

    # Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}

    m1=open("xcorpusB.txt",'r')
    for i in m1:
        test1.append(i.strip())

    freq_term_matrix = count_vectorizer.transform(test1)
    #print freq_term_matrix.todense()

    tfidf = TfidfTransformer(norm="l2")
    tfidf.fit(freq_term_matrix)

    #print "IDF:", tfidf.idf_

    tf_idf_matrix = tfidf.transform(freq_term_matrix)
    print (tf_idf_matrix.toarray())

    count=0
    s=""
    for i in tf_idf_matrix.toarray():
        for j in i:
            count+=1    
        break

    #print count
    #print type(tf_idf_matrix)
    print "Results with Cosine Distance Similarity Measure"
    test(tf_idf_matrix,count,nltkutil.cosine_distance)


if __name__ == "__main__":
    main()

I am looking for suggestions from the mentors here.

1 Answer:

Answer 0 (score: 1):

Your error is in the following expression:

tdMatrix[tdMatrix[i], :]

Your tdMatrix is a 2x2 array of floats, so indexing it with itself will fail. Perhaps you meant:

doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1)
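If the new IndexError comes from j running up to count (the number of terms) while tdMatrix only has a few rows, one rough sketch of a test() that never indexes past the rows would be something like the following; trainMatrix and testMatrix are placeholders for the two tf-idf matrices you would pass in:

def test(trainMatrix, testMatrix, fsim):
    # assumes: import numpy as np (already in your script)
    # Compare every row (document) of the test matrix against every row
    # of the train matrix; row counts, not term counts, bound the loops.
    sims = np.zeros((testMatrix.shape[0], trainMatrix.shape[0]))
    for i in range(testMatrix.shape[0]):
        doc1 = np.asarray(testMatrix[i].todense()).reshape(-1)
        for j in range(trainMatrix.shape[0]):
            doc2 = np.asarray(trainMatrix[j].todense()).reshape(-1)
            sims[i, j] = fsim(doc1, doc2)
    print sims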