Unique word tags from K-Means

Date: 2020-04-07 17:13:23

Tags: python-3.x k-means tagging

I want to get a list of unique tags from K-Means clustering. I have the following code:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, MiniBatchKMeans
import pandas as pd

def cluster_tagging(variable_a_taggear):
    # `result` (a DataFrame) and `model_setting` (a string) come from the enclosing scope
    document = result[variable_a_taggear]
    vectorizer = TfidfVectorizer(ngram_range=(1, 5))
    X = vectorizer.fit_transform(document)

    true_k = 180
    puntos2 = true_k

    if model_setting == 'MiniBatchKMeans':
        model = MiniBatchKMeans(n_clusters=true_k, init='k-means++', max_iter=1000, n_init=1)
    elif model_setting == 'KMeans':
        model = KMeans(n_clusters=true_k, init='k-means++', max_iter=10000000, n_init=1)

    model.fit(X)

    # term indices sorted by descending weight within each centroid
    order_centroids = model.cluster_centers_.argsort()[:, ::-1]
    terms = vectorizer.get_feature_names()

    cluster_ = []
    key_ = []

    cluster_col = 'Cluster_%s' % variable_a_taggear
    keywords_col = 'Keywords_%s' % variable_a_taggear

    for i in range(puntos2):
        print('Cluster %s:' % i)
        cluster_.append(i)
        key_1 = []
        key_.append(key_1)

        # top 8 n-grams of this cluster's centroid
        for ind in order_centroids[i, :8]:
            print('%s' % terms[ind])
            key_1.append(terms[ind])

    print('first key_', key_)
    info = {cluster_col: cluster_, keywords_col: key_}
    word_cloud = pd.DataFrame(info)

    # assign each document to a cluster and attach the cluster's keywords
    predicted = model.predict(vectorizer.transform(document))
    lst2 = result['Ticket ID']
    predictions = pd.DataFrame(list(zip(predicted, lst2)), columns=[cluster_col, 'Ticket ID'])

    resultado = pd.merge(predictions, word_cloud, on=cluster_col, how='inner')
    print(resultado.head())
    return resultado

As you can see from the n-grams, I get repeated words as parts of different n-grams. For example, for one cluster I have the following tags: ['fecha iniciar', 'iniciar', 'modificar fecha iniciar cc', 'proceder modificar fecha iniciar', 'proceder modificar fecha iniciar cc', 'fecha iniciar cc', 'iniciar cc', 'fecha']. How can I get a list of unique words per cluster?
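The output I am after is just the distinct words of each cluster, i.e. for this example something like ['fecha', 'iniciar', 'modificar', 'cc', 'proceder'], in any order.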

Thanks

1 answer:

Answer 0 (score: 0)

Question: how do I get a list of unique words per cluster?

You can use nltk to split a sentence into words, and numpy.unique to get the unique values of an array.

import numpy as np
from nltk.tokenize import word_tokenize

# the tags of one cluster, as given in the question
cluster_tags = ['fecha iniciar', 'iniciar', 'modificar fecha iniciar cc',
                'proceder modificar fecha iniciar', 'proceder modificar fecha iniciar cc',
                'fecha iniciar cc', 'iniciar cc', 'fecha']
one_string = ' '.join(cluster_tags)
np.unique(word_tokenize(one_string))
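For the cluster from the question, this should give something like the following (np.unique also sorts the result):

array(['cc', 'fecha', 'iniciar', 'modificar', 'proceder'], dtype='<U9')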

If you are sure that the words are always separated by a blank space ' ', you can simply split on it instead...

np.unique(' '.join(cluster_tags).split())
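To apply this inside the loop of your cluster_tagging function, a minimal sketch (reusing the puntos2, order_centroids, terms, cluster_ and key_ variables from your code) could look like this:

for i in range(puntos2):
    cluster_.append(i)
    # top-8 n-grams of this cluster, reduced to their unique words
    ngrams = [terms[ind] for ind in order_centroids[i, :8]]
    key_.append(list(np.unique(' '.join(ngrams).split())))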

Bonus tip: if you need it, you can also count the frequency of each word.

# See answer by Max Malysh: https://stackoverflow.com/questions/952914/how-to-make-a-flat-list-out-of-list-of-lists
from collections import Counter
from pandas.core.common import flatten

tokenized = [word_tokenize(text) for text in cluster_tags]
Counter(flatten(tokenized))
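For the cluster above this yields something like:

Counter({'iniciar': 7, 'fecha': 6, 'cc': 4, 'modificar': 3, 'proceder': 2})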