Workers crash during simple aggregation

Asked: 2018-12-28 07:20:53

Tags: dask dask-distributed

I am trying to aggregate various columns of a 450-million-row dataset. When I use Dask's built-in aggregations (e.g. 'min', 'max', 'std', 'mean'), they crash a worker partway through.

The file I am working with can be found here: https://www.kaggle.com/c/PLAsTiCC-2018/data (look for test_set.csv).

I have a Google Kubernetes cluster made up of 3 machines with 8 cores each and 22 GB of RAM in total.
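For context, a quick way to confirm how much memory and how many cores the scheduler actually sees per worker. This is only a sketch; it reuses the same `Client` connection as the code further down, and the exact keys in the returned dict (e.g. 'ncores' vs 'nthreads') depend on the distributed version:

from dask.distributed import Client

client = Client('google kubernetes cluster address')

# scheduler_info() returns a dict describing every connected worker,
# including its advertised core count and memory limit.
for addr, worker in client.scheduler_info()['workers'].items():
    print(addr, worker.get('ncores', worker.get('nthreads')), worker.get('memory_limit'))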

I haven't tried much else, since these are just the built-in aggregation functions.

It also isn't using that much RAM: usage holds steady at around 6 GB, and I haven't seen any errors that would indicate an out-of-memory problem.

Below is my basic code and the error logs from the evicted workers:

from dask.distributed import Client, progress
import dask.dataframe as dd
from timeit import default_timer as timer

client = Client('google kubernetes cluster address')

# Read the CSV from GCS in ~10 MB blocks
test_df = dd.read_csv('gs://filepath/test_set.csv', blocksize=10000000)

def process_flux(df):
    # Derive two flux-based features and append them as new columns
    flux_ratio_sq = df.flux / df.flux_err
    flux_by_flux_ratio_sq = (df.flux * flux_ratio_sq)
    df_flux = dd.concat([df, flux_ratio_sq, flux_by_flux_ratio_sq], axis=1)
    df_flux.columns = ['object_id', 'mjd', 'passband', 'flux', 'flux_err', 'detected',
                       'flux_ratio_sq', 'flux_by_flux_ratio_sq']
    return df_flux

aggs = {
    'flux': ['min', 'max', 'mean', 'std'],
    'detected': ['mean'],
    'flux_ratio_sq': ['sum'],
    'flux_by_flux_ratio_sq': ['sum'],
    'mjd': ['max', 'min'],
}

def featurize(df):
    start_df = process_flux(df)
    agg_df = start_df.groupby(['object_id']).agg(aggs)
    return agg_df

overall_start = timer()
final_df = featurize(test_df).compute()
overall_end = timer()
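As a side note, `progress` is imported above but never used. A sketch of how the same computation could be kept on the cluster and watched instead of pulled back with a single blocking `.compute()` (the `persist` + `progress` pattern is standard Dask; everything else mirrors the code above):

# Keep the aggregated result distributed on the workers and watch it build;
# progress() renders a progress bar for the underlying tasks.
agg_df = featurize(test_df)
agg_df = client.persist(agg_df)
progress(agg_df)

# The grouped result is much smaller, so pulling it back afterwards is cheap.
final_df = agg_df.compute()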

Error log:

 distributed.core - INFO - Event loop was unresponsive in Worker for 74.42s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
 distributed.core - INFO - Event loop was unresponsive in Worker for 3.30s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
 distributed.core - INFO - Event loop was unresponsive in Worker for 3.75s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.

A few of these occur, and then:

 distributed.core - INFO - Event loop was unresponsive in Worker for 65.16s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
 distributed.worker - ERROR - Worker stream died during communication: tcp://hidden address
 Traceback (most recent call last):
   File "/opt/conda/lib/python3.6/site-packages/distributed/comm/tcp.py", line 180, in read
     n_frames = yield stream.read_bytes(8)
   File "/opt/conda/lib/python3.6/site-packages/tornado/iostream.py", line 441, in read_bytes
     self._try_inline_read()
   File "/opt/conda/lib/python3.6/site-packages/tornado/iostream.py", line 911, in _try_inline_read
     self._check_closed()
   File "/opt/conda/lib/python3.6/site-packages/tornado/iostream.py", line 1112, in _check_closed
     raise StreamClosedError(real_error=self.error)
 tornado.iostream.StreamClosedError: Stream is closed

     response = yield comm.read(deserializers=deserializers)
   File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 1133, in run
     value = future.result()
   File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 326, in wrapper
     yielded = next(result)
   File "/opt/conda/lib/python3.6/site-packages/distributed/comm/tcp.py", line 201, in read
     convert_stream_closed_error(self, e)
   File "/opt/conda/lib/python3.6/site-packages/distributed/comm/tcp.py", line 127, in convert_stream_closed_error
     raise CommClosedError("in %s: %s: %s" % (obj, exc.__class__.__name__, exc))
 distributed.comm.core.CommClosedError: in <closed TCP>: TimeoutError: [Errno 110] Connection timed out

It runs reasonably quickly; I would just like consistent performance without my workers crashing.
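For reference, the last traceback ends in a TCP connection timeout, so below is a sketch of how the comm timeouts could be raised through Dask's configuration. The specific values are guesses on my part, and the keys assume the standard `distributed.comm.timeouts` section of the config:

import dask

# Raise the connection timeouts so busy workers are not written off as dead
# while they churn through long-running tasks.
dask.config.set({
    'distributed.comm.timeouts.connect': '60s',
    'distributed.comm.timeouts.tcp': '120s',
})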

Thanks!

0 Answers