Get the topic name for each document

Posted: 2019-02-17 07:56:33

Tags: python scikit-learn topic-modeling

I am trying to do topic modeling on my documents using the example from this link: https://www.w3cschool.cn/doc_scikit_learn/scikit_learn-auto_examples-applications-topics_extraction_with_nmf_lda.html

My question: how do I know which documents correspond to which topic?

Here is what I have done so far:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

n_features = 1000
n_topics = 8
n_top_words = 20

# Read one document per line
with open('dataset.txt', 'r') as data_file:
    mydata = [line.strip() for line in data_file]

def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print("Topic #%d:" % topic_idx)
        print(" ".join([feature_names[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))
    print()

tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
                                token_pattern=r'\b\w{2,}\w+\b',
                                max_features=n_features,
                                stop_words='english')
tf = tf_vectorizer.fit_transform(mydata)

# Use the n_topics constant defined above (the original passed a literal 3);
# in newer scikit-learn releases this parameter is named n_components
lda = LatentDirichletAllocation(n_components=n_topics, max_iter=5,
                                learning_method='online',
                                learning_offset=50.,
                                random_state=0)

lda.fit(tf)

print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()  # get_feature_names_out() on scikit-learn >= 1.0

print_top_words(lda, tf_feature_names, n_top_words)



# And to find the top topic related to each document
doc_topic = lda.transform(tf)
for n in range(doc_topic.shape[0]):
    topic_most_pr = doc_topic[n].argmax()
    print("doc: {} topic: {}\n".format(n, topic_most_pr))

The expected output is:

Doc | Assigned Topic | Words_in_assigned_topic
1   | 2              | science, humanbody, bones

0 Answers:

No answers yet.