How do I classify text pairs with scikit-learn?

Time: 2017-03-16 20:48:32

Tags: python machine-learning scikit-learn tf-idf text-classification

I have read many blog posts on this topic, but I have not found a clear solution yet. My situation is the following:

  1. I have a set of text pairs, each labeled 1 or -1.
  2. For each pair (t1, t2) I want to build the feature vector as a concatenation: f(t1, t2) = tfidf(t1) concat tfidf(t2).
  3. Any suggestions on how to do this? I have the following code, but it raises an error:

        count_vect = TfidfVectorizer(analyzer=u'char', ngram_range=ngram_range)
        X0_train_counts = count_vect.fit_transform([x[0] for x in training_documents])
        X1_train_counts = count_vect.fit_transform([x[1] for x in training_documents])
        combined_features = FeatureUnion([("x0", X0_train_counts), ("x1", X1_train_counts)])
        clf = LinearSVC().fit(combined_features, training_target)
        average_training_accuracy += clf.score(combined_features, training_target)
    

    This is the error I get:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    scoreEdgesUsingClassifier(None, pos, neg, 1,ngram_range=(2,5), max_size=1000000, test_size=100000)
    
     scoreEdgesUsingClassifier(unc, pos, neg, number_of_iterations, ngram_range, max_size, test_size)
     X0_train_counts = count_vect.fit_transform([x[0] for x in training_documents])
     X1_train_counts = count_vect.fit_transform([x[1] for x in training_documents])
     combined_features = FeatureUnion([("x0", X0_train_counts), ("x1", X1_train_counts)])
     print "Done transforming, now training classifier"
    
    lib/python2.7/site-packages/sklearn/pipeline.pyc in __init__(self, transformer_list, n_jobs, transformer_weights)
    616         self.n_jobs = n_jobs
    617         self.transformer_weights = transformer_weights
    --> 618         self._validate_transformers()
    619 
    620     def get_params(self, deep=True):
    
    lib/python2.7/site-packages/sklearn/pipeline.pyc in _validate_transformers(self)
    660                 raise TypeError("All estimators should implement fit and "
    661                                 "transform. '%s' (type %s) doesn't" %
    --> 662                                 (t, type(t)))
    663 
    664     def _iter(self):
    
    TypeError: All estimators should implement fit and transform. '  (0, 49025) 0.0575144797079
    
     (254741, 38401)    0.184394443164
     (254741, 201747)   0.186080393768
     (254741, 179231)   0.195062580945
     (254741, 156925)   0.211367771299
     (254741, 90026)    0.202458920022' (type <class 'scipy.sparse.csr.csr_matrix'>) doesn't
    

    Update

    Here is the solution:

        from scipy.sparse import hstack

        count_vect = TfidfVectorizer(analyzer=u'char', ngram_range=ngram_range)
        # Fit one vectorizer on both halves so t1 and t2 share a single vocabulary.
        training_docs_combined = [x[0] for x in training_documents] + [x[1] for x in training_documents]
        X_train_counts = count_vect.fit_transform(training_docs_combined)
        # Rows 0..n-1 are the t1 vectors, rows n..2n-1 the t2 vectors; stack them side by side.
        half = len(training_docs_combined) // 2
        concat_features = hstack((X_train_counts[:half], X_train_counts[half:]))

        clf = LinearSVC().fit(concat_features, training_target)
        average_training_accuracy += clf.score(concat_features, training_target)
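
    For held-out pairs, the fitted vectorizer should only be reused with transform (not fit_transform). A rough sketch, assuming a test_documents / test_target split in the same pair format (both names are illustrative, not from the code above):

        # Hypothetical held-out pairs in the same (t1, t2) format as training_documents.
        test_docs_combined = [x[0] for x in test_documents] + [x[1] for x in test_documents]
        X_test_counts = count_vect.transform(test_docs_combined)  # reuse the fitted vocabulary
        half = len(test_docs_combined) // 2
        test_features = hstack((X_test_counts[:half], X_test_counts[half:]))
        average_test_accuracy = clf.score(test_features, test_target)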
    

1 Answer:

Answer 0 (score: 1)

FeatureUnion from scikit-learn takes estimators as input, not data arrays.

You can either concatenate the resulting X0_train_counts and X1_train_counts matrices yourself with scipy.sparse.hstack, or create two independent instances of TfidfVectorizer, wrap them in a FeatureUnion, and then call its fit_transform method.
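
A minimal sketch of the second option, assuming the pairs arrive as a list of (t1, t2) tuples; the selector helper and variable names below are illustrative rather than part of the original post:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import FunctionTransformer
    from sklearn.svm import LinearSVC

    # Hypothetical selector: pull out one side of each (t1, t2) pair.
    def select_item(index):
        return FunctionTransformer(lambda pairs: [pair[index] for pair in pairs],
                                   validate=False)

    # One vectorizer per side; FeatureUnion concatenates their outputs column-wise.
    pair_features = FeatureUnion([
        ("t1", Pipeline([("pick", select_item(0)),
                         ("tfidf", TfidfVectorizer(analyzer=u'char', ngram_range=(2, 5)))])),
        ("t2", Pipeline([("pick", select_item(1)),
                         ("tfidf", TfidfVectorizer(analyzer=u'char', ngram_range=(2, 5)))])),
    ])

    model = Pipeline([("features", pair_features), ("clf", LinearSVC())])
    model.fit(training_documents, training_target)  # training_documents: list of (t1, t2) pairs

Because the vectorizers live inside the pipeline, the fitted model can then be scored or used for prediction on new pairs directly, without re-fitting or re-stacking anything by hand.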
