How to concatenate text in a pandas groupby

Posted: 2016-06-09 15:53:31

Tags: python pandas

I have a pandas DataFrame with a text column. Now I want to group this DataFrame and concatenate the text column within each group. Here is some code that generates a sample DataFrame:

import numpy as np
import pandas as pd

import string
import random

def text_generator(size=6, chars=string.ascii_lowercase):
    # Return a random lowercase string of the given length
    return ''.join(random.choice(chars) for _ in range(size))

# 200 items x 1000 clusters, each with 1 to 4 lines of random text
items, clusters, texts = [], [], []
for item in range(200):
    for cluster in range(1000):
        for line in range(random.randint(1, 4)):
            items.append(item)
            clusters.append(cluster)
            texts.append(text_generator())
df = pd.DataFrame({'item_id': items, 'cluster_id': clusters, 'text': texts})
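
For a sense of scale, a quick back-of-the-envelope check: 200 items times 1000 clusters with one to four lines each gives roughly 500,000 rows of six-character strings, i.e. a few megabytes of raw text:

# Rough size check; the in-memory figure is larger than the raw text
# because every Python string object carries interpreter overhead
print(len(df))                                       # roughly 500,000 rows
print(df.memory_usage(deep=True).sum() / 1e6, 'MB')  # in-memory footprint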

Now I group by the columns 'item_id' and 'cluster_id' and build a new DataFrame for the aggregated results:

grouped = df.groupby(['item_id', 'cluster_id'])
df_cluster = grouped.size().to_frame('cluster_size')
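
For orientation, grouped.size() returns a Series keyed by the (item_id, cluster_id) MultiIndex, so df_cluster looks roughly like this (the counts vary between 1 and 4, per the generator above):

print(df_cluster.head())
#                     cluster_size
# item_id cluster_id
# 0       0                      2
#         1                      4
#         2                      1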

Maybe I am missing something, but the obvious solution seems to be to aggregate the texts like this:

df_cluster['texts'] = grouped.text.agg(lambda x: ' '.join(x))
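
(As an aside: the lambda is only a thin wrapper, and since ' '.join is itself a callable it can be passed to agg directly; the result is identical:)

# Same aggregation without the lambda wrapper
df_cluster['texts'] = grouped.text.agg(' '.join)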

But this takes about 10 seconds. For a few megabytes of data? Strange. So I tried a plain-Python solution:

# Collect the texts for each (item_id, cluster_id) pair in a plain dict
text_lookup = {}
for item_id, cluster_id, text in zip(df.item_id.values, df.cluster_id.values, df.text.values):
    text_lookup.setdefault((item_id, cluster_id), []).append(text)

# Rebuild a DataFrame from the dict, joining the texts per group
item_ids, cluster_ids, all_texts = [], [], []
for (item_id, cluster_id), texts in text_lookup.items():
    item_ids.append(item_id)
    cluster_ids.append(cluster_id)
    all_texts.append(' '.join([t for t in texts if t is not np.nan]))
df_tags = pd.DataFrame({'item_id': item_ids, 'cluster_id': cluster_ids, 'texts': all_texts}).set_index(['item_id', 'cluster_id'])
df_cluster = df_cluster.merge(df_tags, left_index=True, right_index=True)

This should be much slower, since all these for loops run in plain Python, but it only takes 3 seconds. I am probably doing something wrong, but right now I cannot see what :).
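
A minimal timing sketch to reproduce the comparison (assuming the df built above is still in scope; absolute numbers will vary by machine and pandas version):

import time

# Time the pandas aggregation
start = time.perf_counter()
agg_result = df.groupby(['item_id', 'cluster_id'])['text'].agg(lambda x: ' '.join(x))
print('groupby/agg: %.2fs' % (time.perf_counter() - start))

# Time the plain-Python dict approach
start = time.perf_counter()
lookup = {}
for item_id, cluster_id, text in zip(df.item_id.values, df.cluster_id.values, df.text.values):
    lookup.setdefault((item_id, cluster_id), []).append(text)
joined = {key: ' '.join(texts) for key, texts in lookup.items()}
print('plain dict:  %.2fs' % (time.perf_counter() - start))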

1 Answer:

Answer 0 (score: 0)

"""Demonstrate access to Keras batch tensors in a tf.keras custom training step.""" import numpy as np from tensorflow import keras from tensorflow.keras import backend as K from tensorflow.python.keras.engine import data_adapter in_shape = (2,) out_shape = (1,) batch_size = 3 n_samples = 7 class SequentialWithPrint(keras.Sequential): def train_step(self, original_data): # Basically copied one-to-one from https://git.io/JvDTv data = data_adapter.expand_1d(original_data) x, y_true, w = data_adapter.unpack_x_y_sample_weight(data) y_pred = self(x, training=True) # this is pretty much like on_train_batch_begin K.print_tensor(w, "Sample weight (w) =") K.print_tensor(x, "Batch input (x) =") K.print_tensor(y_true, "Batch output (y_true) =") K.print_tensor(y_pred, "Prediction (y_pred) =") result = super().train_step(original_data) # add anything here for on_train_batch_end-like behavior return result # Model model = SequentialWithPrint([keras.layers.Dense(out_shape[0], input_shape=in_shape)]) model.compile(loss="mse", optimizer="adam") # Example data X = np.random.rand(n_samples, *in_shape) Y = np.random.rand(n_samples, *out_shape) model.fit(X, Y, batch_size=batch_size) print("X: ", X) print("Y: ", Y) 被证明比循环快10%。
