How to automatically obtain the optimal number of clusters from hierarchical cluster analysis in Python?

Asked: 2018-06-05 08:10:23

Tags: python cluster-analysis hierarchical-clustering

I want to automatically obtain the optimal number of clusters (K) from hierarchical cluster analysis, and then apply that K to K-means clustering in Python.

After reading many articles, I know some methods determine K by plotting a graph, but is there any way to output that number automatically in Python?

1 Answer:

Answer 0 (score: 0)

Hierarchical clustering methods determine the optimal number of clusters from the dendrogram. Plot the dendrogram with code similar to the following:

# General imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Special imports
from scipy.cluster.hierarchy import dendrogram, linkage

# Load data, fill in appropriately
X = []

# How to cluster the data, single is minimal distance between clusters
linked = linkage(X, 'single')

# Labels for the dendrogram leaves, one per sample
labelList = list(range(len(X)))

# Plot dendrogram
plt.figure(figsize=(10, 7))
dendrogram(linked,
           orientation='top',
           labels=labelList,
           distance_sort='descending',
           show_leaf_counts=True)
plt.show()

In the dendrogram, find the largest vertical distance between nodes and draw a horizontal line through its middle. The number of vertical lines it crosses is the optimal number of clusters (when the affinity is computed with the method set in linkage).

See an example here: https://stackabuse.com/hierarchical-clustering-with-python-and-scikit-learn/

I would also like to know how to read the dendrogram automatically and extract that number.
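
One way to extract that number programmatically is from the linkage matrix itself: the merge distances are stored in its third column, and the largest gap between successive merge distances marks the natural place to cut the tree. A minimal sketch, assuming the `linked` matrix computed in the snippet above:

import numpy as np
from scipy.cluster.hierarchy import fcluster

# Merge distances, sorted in ascending order (column 2 of the linkage matrix)
distances = linked[:, 2]

# Index of the largest gap between successive merge distances
gap_index = np.argmax(np.diff(distances))

# Cutting the tree inside that gap leaves this many clusters
n_samples = linked.shape[0] + 1
k = n_samples - gap_index - 1
print(f"Estimated number of clusters: {k}")

# Label every sample by cutting at the midpoint of the gap
threshold = (distances[gap_index] + distances[gap_index + 1]) / 2
labels = fcluster(linked, t=threshold, criterion='distance')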

Edit added: There is a way to do this with the SK Learn (scikit-learn) package. See the following example:

#==========================================================================
# Hierarchical Clustering - Automatic determination of number of clusters
#==========================================================================

# General imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from os import path

# Special imports
from scipy.cluster.hierarchy import dendrogram, linkage
import scipy.cluster.hierarchy as shc
from sklearn.cluster import AgglomerativeClustering

# %matplotlib inline

print("============================================================")
print("       Hierarchical Clustering demo - num of clusters       ")
print("============================================================")
print(" ")


folder = path.dirname(path.realpath(__file__)) # set current folder

# Load data
customer_data = pd.read_csv( path.join(folder, "hierarchical-clustering-with-python-and-scikit-learn-shopping-data.csv"))
# print(customer_data.shape)
print("In this data there should be 5 clusters...")

# Retain only the last two columns
data = customer_data.iloc[:, 3:5].values

# # Plot dendrogram using SciPy
# plt.figure(figsize=(10, 7))
# plt.title("Customer Dendograms")
# dend = shc.dendrogram(shc.linkage(data, method='ward'))

# plt.show()


# Initialize the hierarchical clustering method; for the algorithm to determine
# the number of clusters itself, set n_clusters=None and compute_full_tree=True.
# The best distance threshold for this dataset is distance_threshold=200.
cluster = AgglomerativeClustering(n_clusters=None, affinity='euclidean', linkage='ward', compute_full_tree=True, distance_threshold=200)

# Cluster the data
cluster.fit_predict(data)

print(f"Number of clusters = {1+np.amax(cluster.labels_)}")

# Display the clustering: the cluster label assigned to every data point
print("Classifying the points into clusters:")
print(cluster.labels_)

# Display the clustering graphically in a plot
plt.scatter(data[:, 0], data[:, 1], c=cluster.labels_, cmap='rainbow')
plt.title(f"SK Learn estimated number of clusters = {1+np.amax(cluster.labels_)}")
plt.show()

print(" ")

[Figure: the clustering results]

The data was taken from here: https://stackabuse.s3.amazonaws.com/files/hierarchical-clustering-with-python-and-scikit-learn-shopping-data.csv
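
To close the loop with the original question, the K estimated by the hierarchical step can then be passed to K-means. A minimal sketch, assuming the `data` array and the fitted `cluster` object from the example above:

import numpy as np
from sklearn.cluster import KMeans

# K as estimated by the hierarchical clustering above
k = int(1 + np.amax(cluster.labels_))

# Run K-means with that K and retrieve the labels
kmeans = KMeans(n_clusters=k, random_state=0)
kmeans_labels = kmeans.fit_predict(data)
print(f"K-means with K={k} produced labels:", kmeans_labels)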
