Making nested 'for' loops more Pythonic

Asked: 2017-03-28 02:18:50

Tags: python loops nested vectorization

I'm new to Python, and I'd like to know how I can improve efficiency by avoiding explicit nested 'for' loops and using Python's implicit looping instead. I'm working with image data, and in this case I'm trying to speed up my k-means algorithm. Here's an example of what I'm trying to do:

# shape of image will be something like 140, 150, 3
num_sets, rows_per_set, num_columns = image_values.shape

for set in range(0, num_sets):
    for row in range(0, rows_per_set):
        pos = np.argmin(calc_euclidean(rgb_[set][row], means_list))
        buckets[pos].append(image_values[set][row])

What I have today works fine, but I'd like to make it more efficient.

Feedback and suggestions are much appreciated.

1 Answer:

Answer 0: (score: 1)

Here is a vectorized solution. I'm almost certain I've got the dimensions mixed up (3 isn't the number of columns, is it?), but the principle should be recognizable:

For the demonstration, I only collect the (flat) set-and-row indices into the buckets.
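The distance table below relies on the algebraic identity |a - b|^2 = |a|^2 + |b|^2 - 2 a.b, which lets NumPy's optimized dot product do the heavy lifting. A minimal sanity check of that identity on small made-up arrays (shapes are illustrative, not the question's):

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.random((5, 3))   # 5 points in RGB space
b = rng.random((4, 3))   # 4 cluster means

# brute force: broadcast pairwise differences, then sum squares
brute = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)

# identity: |a-b|^2 = |a|^2 + |b|^2 - 2*a.b
fast = np.add.outer((a * a).sum(-1), (b * b).sum(-1)) - 2 * np.dot(a, b.T)

assert np.allclose(brute, fast)
```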

import numpy as np

k = 6
rgb_=np.random.randint(0, 9, (140, 150, 3))
means_list = np.random.randint(0, 9, (k, 3))

# compute distance table; use some algebra to leverage highly optimised
# dot product
squared_dists = np.add.outer((rgb_*rgb_).sum(axis=-1),
                             (means_list*means_list).sum(axis=-1)) \
    - 2*np.dot(rgb_, means_list.T)
# find best cluster
best = np.argmin(squared_dists, axis=-1)

# find group sizes
counts = np.bincount(best.ravel())
# translate to block boundaries
bnds = np.cumsum(counts[:-1])
# group indices by best cluster; argpartition should be
# a bit cheaper than argsort
chunks = np.argpartition(best.ravel(), bnds)
# split into buckets
buckets = np.split(chunks, bnds)

# check

num_sets, rows_per_set, num_columns = rgb_.shape

def calc_euclidean(a, b):
    return ((a-b)**2).sum(axis=-1)

for set in range(0, num_sets):
    for row in range(0, rows_per_set):
        pos = np.argmin(calc_euclidean(rgb_[set][row], means_list))
        assert pos == best[set, row]
        assert rows_per_set*set+row in buckets[pos]
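As an aside, if memory isn't a concern, the cluster assignment alone can also be computed with plain broadcasting, skipping the dot-product algebra entirely. A minimal sketch, assuming the same shapes as above (note it materializes a (140, 150, k, 3) intermediate array):

```python
import numpy as np

rng = np.random.default_rng(0)
rgb_ = rng.integers(0, 9, (140, 150, 3))
means_list = rng.integers(0, 9, (6, 3))

# broadcast (140,150,1,3) against (6,3) -> (140,150,6,3),
# then sum squared differences over the colour axis
dists = ((rgb_[:, :, None, :] - means_list) ** 2).sum(axis=-1)
best = dists.argmin(axis=-1)   # shape (140, 150)
```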