Low accuracy for TF-IDF with SVM using TfidfVectorizer and scikit-learn

Asked: 2016-06-10 10:06:00

Tags: scikit-learn svm feature-extraction tf-idf text-classification

I am trying to classify documents as deceptive or truthful using TF-IDF and an SVM. I know this has been done before, but I am not sure I am implementing it correctly. I have a set of texts and I am building the TF-IDF matrix like this:

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(min_df=1, binary=0, use_idf=1, smooth_idf=0, sublinear_tf=1)
tf_idf_model = vectorizer.fit_transform(corpus)  # sparse matrix of shape (n_documents, n_terms)
features = tf_idf_model.toarray()                # densified for the code below
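
As a side note, with min_df=1 the vocabulary (and therefore the number of columns) often grows into the thousands; a quick way to check the shape and sparsity of the result, using the names from the snippet above, is:

print(tf_idf_model.shape)   # (number of documents, vocabulary size)
print(tf_idf_model.nnz)     # number of non-zero entries in the sparse matrix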

For classification:

import random
import time

import numpy as np
from sklearn import svm

# Shuffle features and labels in the same order by reusing one seed.
seed = random.random()
random.seed(seed)
random.shuffle(features)
random.seed(seed)
random.shuffle(labels)

# Split into k folds for manual cross-validation.
features_folds = np.array_split(features, folds)
labels_folds = np.array_split(labels, folds)

for C_power in C_powers:
    scores = []
    start_time = time.time()
    svc = svm.SVC(C=2**C_power, kernel='linear')

    for k in range(folds):
        # Hold out fold k for testing, train on the remaining folds.
        features_train = list(features_folds)
        features_test = features_train.pop(k)
        features_train = np.concatenate(features_train)
        labels_train = list(labels_folds)
        labels_test = labels_train.pop(k)
        labels_train = np.concatenate(labels_train)
        scores.append(svc.fit(features_train, labels_train).score(features_test, labels_test))

    print(scores)
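
For comparison, the same shuffled k-fold evaluation can be written with scikit-learn's built-in cross-validation helpers, which avoids shuffling two arrays in parallel by hand. This is only a minimal sketch, assuming scikit-learn 0.18+ and the features, labels, folds, and C_powers defined above:

from sklearn import svm
from sklearn.model_selection import KFold, cross_val_score

# KFold handles the shuffling and splitting internally.
cv = KFold(n_splits=folds, shuffle=True, random_state=0)

for C_power in C_powers:
    svc = svm.SVC(C=2**C_power, kernel='linear')
    scores = cross_val_score(svc, features, labels, cv=cv)
    print(scores)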

But I am getting an accuracy of around 50%. My corpus consists of 1600 texts.

1 Answer:

Answer 0 (score: 0)

I think you may want to reduce the dimensionality of the TF-IDF matrix before feeding it to the SVM, since SVMs do not handle very large, sparse matrices well. I would suggest using TruncatedSVD to reduce the dimensionality of the TF-IDF matrix:

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

vectorizer = TfidfVectorizer(min_df=1, binary=0, use_idf=1, smooth_idf=0, sublinear_tf=1)
svd = TruncatedSVD(n_components=20)

# Chain TF-IDF and SVD so both are fitted on the same corpus.
pipeline = Pipeline([
    ('tfidf', vectorizer),
    ('svd', svd)])

features = pipeline.fit_transform(corpus)

Of course, you will need to tune n_components to find the best number of components to keep.
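
One way to do that is to put the SVM into the same pipeline and grid-search over n_components and C together. The following is only a sketch, assuming scikit-learn 0.18+ and the corpus and labels from the question; the parameter values are placeholders to adjust:

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# TF-IDF -> TruncatedSVD -> linear SVM, tuned end to end with cross-validation.
pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(min_df=1, sublinear_tf=True)),
    ('svd', TruncatedSVD()),
    ('svc', SVC(kernel='linear'))])

param_grid = {
    'svd__n_components': [20, 50, 100, 200],   # placeholder candidates
    'svc__C': [2**p for p in range(-3, 4)]}    # placeholder candidates

search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(corpus, labels)
print(search.best_params_, search.best_score_)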
