Pandas groupby with overlapping lists of indices

Date: 2019-04-23 06:47:22

Tags: python pandas data-science

I have a dataframe like this:

   data
0   1.5
1   1.3
2   1.3
3   1.8
4   1.3
5   1.8
6   1.5

and a list of lists like this:

indices = [[0, 3, 4], [0, 3], [2, 6, 4], [1, 3, 4, 5]]

I would like to use the list of lists to compute the sum of each group of rows in the dataframe, i.e.

group1 = df[0] + df[3] + df[4]
group2 = df[0] + df[3]
group3 = df[2] + df[6] + df[4]
group4 = df[1] + df[3] + df[4] + df[5]

So I am looking for something like df.groupby(indices).sum().

I know I could do this iteratively with a for loop, applying sum to each df.iloc[sublist], but I am looking for a faster way.
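For reference, the loop-based approach described above might look like this (a minimal sketch that reconstructs the sample data from the question):

```python
import pandas as pd

# Reconstruct the sample dataframe and the list of index lists
df = pd.DataFrame({'data': [1.5, 1.3, 1.3, 1.8, 1.3, 1.8, 1.5]})
indices = [[0, 3, 4], [0, 3], [2, 6, 4], [1, 3, 4, 5]]

# Naive loop: select each sublist of rows and sum it, one group at a time
sums = []
for sublist in indices:
    sums.append(df['data'].iloc[sublist].sum())

print([round(s, 1) for s in sums])  # [4.6, 3.3, 4.1, 6.2]
```

This is correct but pays the cost of a pandas indexing call per group, which is what the answer below tries to avoid.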

1 Answer:

Answer 0 (score: 1)

Use a list comprehension:

a = [df.loc[x, 'data'].sum() for x in indices]
print (a)
[4.6, 3.3, 4.1, 6.2]

arr = df['data'].values
a = [arr[x].sum() for x in indices]
print (a)
[4.6, 3.3, 4.1, 6.2]
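A fully vectorized variant (my addition, not part of the original answer) could use `np.add.reduceat`, which sums contiguous runs of the concatenated index array; the run offsets are derived from the sublist lengths:

```python
import numpy as np
import pandas as pd

# Sample data reconstructed from the question
df = pd.DataFrame({'data': [1.5, 1.3, 1.3, 1.8, 1.3, 1.8, 1.5]})
indices = [[0, 3, 4], [0, 3], [2, 6, 4], [1, 3, 4, 5]]

arr = df['data'].to_numpy()

# Gather all indexed values into one flat array...
flat = arr[np.concatenate(indices)]
# ...and compute where each group's run starts: [0, 3, 5, 8]
offsets = np.cumsum([0] + [len(x) for x in indices])[:-1]
# reduceat sums flat[0:3], flat[3:5], flat[5:8], flat[8:] in one call
sums = np.add.reduceat(flat, offsets)

print(np.round(sums, 1))  # [4.6 3.3 4.1 6.2]
```

This avoids both the Python-level loop over groups and the temporary dataframe used by the groupby solution below.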

A groupby + sum solution is also possible, but I am not sure it performs any better:

df1 = pd.DataFrame({
    'd' : df['data'].values[np.concatenate(indices)], 
    'g' : np.arange(len(indices)).repeat([len(x) for x in indices])
})

print (df1)
      d  g
0   1.5  0
1   1.8  0
2   1.3  0
3   1.5  1
4   1.8  1
5   1.3  2
6   1.5  2
7   1.3  2
8   1.3  3
9   1.8  3
10  1.3  3
11  1.8  3

print(df1.groupby('g')['d'].sum())
g
0    4.6
1    3.3
2    4.1
3    6.2
Name: d, dtype: float64

Performance tested on the small sample data; results on real data should differ:

In [150]: %timeit [df.loc[x, 'data'].sum() for x in indices]
4.84 ms ± 80.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [151]: %%timeit
     ...: arr = df['data'].values
     ...: [arr[x].sum() for x in indices]
     ...: 
20.9 µs ± 99.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [152]: %timeit pd.DataFrame({'d' : df['data'].values[np.concatenate(indices)],'g' : np.arange(len(indices)).repeat([len(x) for x in indices])}).groupby('g')['d'].sum()
1.46 ms ± 234 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

On the real data:

In [37]: %timeit [df.iloc[x, 0].sum() for x in indices]
158 ms ± 485 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [38]: arr = df['data'].values
    ...: %timeit \
    ...: [arr[x].sum() for x in indices]
5.99 ms ± 18 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In[49]: %timeit pd.DataFrame({'d' : df['last'].values[np.concatenate(sample_indices['train'])],'g' : np.arange(len(sample_indices['train'])).repeat([len(x) for x in sample_indices['train']])}).groupby('g')['d'].sum()
   ...: 
5.97 ms ± 45.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Interesting.. the two solutions at the bottom are both fast.
