How do I merge dataframes with dask without running out of memory?

Date: 2019-02-08 14:44:49

Tags: python dask

Merging multiple dask dataframes crashes my machine.

Hi,

I am trying to merge a large number of csv files with dask. Each csv file contains a list of timestamps at which a variable changed value, together with that value; e.g. for variable1 we have:

timestamp; value
2016-01-01T00:00:00; 3
2016-01-03T00:00:00; 4

and for variable2 we have:

timestamp; value
2016-01-02T00:00:00; 8 
2016-01-04T00:00:00; 9

The timestamps in each csv can differ (since they are tied to the moments at which that variable changed value). As the end result, I want an hdf file in which every variable has a value at every timestamp that occurs, forward filled. So, something like this:

timestamp; var1; var2
2016-01-01T00:00:00; 3 ; nan
2016-01-02T00:00:00; 3 ; 8
2016-01-03T00:00:00; 4 ; 8
2016-01-04T00:00:00; 4 ; 9
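
For clarity, this is what the target looks like in plain pandas on the tiny example above (a minimal sketch only; var1 and var2 are just the labels from the table):

import pandas as pd

# the two small example variables from above, indexed by timestamp
var1 = pd.DataFrame({'var1': [3, 4]},
                    index=pd.to_datetime(['2016-01-01', '2016-01-03']))
var2 = pd.DataFrame({'var2': [8, 9]},
                    index=pd.to_datetime(['2016-01-02', '2016-01-04']))

# outer-align on the union of timestamps, then forward fill
result = pd.concat([var1, var2], axis=1).sort_index().ffill()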

Below I provide the meta code used to implement this parsing and merging.

# imports
import os
from pathlib import Path
from functools import partial

import pandas as pd
import dask.dataframe as dd
import dask.bag as db
from dask import delayed
from dask.diagnostics import ProgressBar

# define how to parse the dates 
def parse_dates(df):
    return pd.to_datetime(df['timestamp'], format='%Y-%m-%dT%H:%M:%S', errors='coerce')

# parse csv files to dask dataframe
def parse_csv2filtered_ddf(fn_file, source_dir): 
    fn = source_dir.joinpath(fn_file)
    ddf = dd.read_csv(fn, sep=';', usecols=['timestamp', 'value'], 
                      blocksize=10000000, dtype={'value': 'object'})
    meta = ('timestamp', 'datetime64[ns]')
    ddf['timestamp'] = ddf.map_partitions(parse_dates, meta=meta)
    v = fn_file.split('.csv')[0]
    ddf = ddf.dropna() \
        .rename(columns={'value': v}) \
        .set_index('timestamp')
    return ddf

# define how to merge
def merge_ddf(x, y):
    ddf = x.merge(y, how='outer', left_index=True, right_index=True, npartitions=4)
    return ddf

# set source directory 
source_dir = Path('/path_to_list_of_csv_files/')

# get list of files to parse
lcsv = os.listdir(source_dir)

# make a partial function that fixes source_dir
parse_csv2filtered_ddf_partial = partial(parse_csv2filtered_ddf, source_dir=source_dir)

# make bag of dataframes
b = db.from_sequence(lcsv).map(parse_csv2filtered_ddf_partial)

# merge all dataframes and reduce to 1 dataframe 
df = b.fold(binop=merge_ddf)

# forward fill the NaNs and drop the remaining
#
# please note that I am choosing here npartitions equal to 48 as 
#  experiments with smaller sets of data allow me to estimate 
#  the output size of the df, which should be around 48 GB; hence 
#  choosing 48 should lead to partitions of about 1 GB each, I guess. 
df = delayed(df).repartition(npartitions=48). \
    fillna(method='ffill'). \
    dropna()

# write output to hdf file
df = df.to_hdf(output_fn, '/data')

# start computation
with ProgressBar():
    df.compute(scheduler='threads')

Unfortunately, the script never runs to a successful end. In particular, when monitoring memory usage I can watch the memory fill up completely, after which either the machine or the program crashes.

I have tried to use only a single thread combined with multiple processes; e.g.

import dask
dask.config.set(scheduler='single-threaded')

combined with

with ProgressBar():
    df.compute(scheduler='processes', num_workers=3)

also without any success.

Any pointers in the right direction are warmly welcomed.

EDIT

Below I provide a more concise script that generates similar data and should reproduce the MemoryError.

import numpy as np
import pandas as pd 
from dask import delayed
from dask import dataframe as dd
from dask import array as da
from dask import bag as db
from dask.diagnostics import ProgressBar
from datetime import datetime
from datetime import timedelta
from functools import partial

def make_ddf(col, values, timestamps):
    n = int(col) % 2
    idx_timestamps = timestamps[n::2]
    df = pd.DataFrame.from_dict({str(col): values, 'timestamp': idx_timestamps})
    ddf = dd.from_pandas(df, chunksize=100000000)
    ddf = ddf.dropna() \
        .set_index('timestamp')
    return ddf

def merge_ddf(x, y):
    ddf = x.merge(y, how='outer', left_index=True, right_index=True, npartitions=4)
    return ddf

N_DF_TO_MERGE = 55  # number of dataframes to merge 
N_PARTITIONS_REPARTITION = 55  

values = np.random.randn(5000000, 1).flatten()   
timestamps = [datetime.now() + timedelta(seconds=i*1) for i in range(10000000)]  
columns = list(range(N_DF_TO_MERGE))

# fix values and times
make_ddf_partial = partial(make_ddf, values=values, timestamps=timestamps)

# make bag
b = db.from_sequence(columns).map(make_ddf_partial)

# merge all dataframes and reduce to one 
df = b.fold(binop=merge_ddf)

# forward fill the NaNs and drop the remaining
df = delayed(df).repartition(npartitions=N_PARTITIONS_REPARTITION). \
    fillna(method='ffill'). \
    dropna()

# write output to hdf file
df = df.to_hdf('magweg.hdf', '/data')

with ProgressBar():
    df.compute(scheduler='threads')

This results in the following error:

  

Traceback (most recent call last):
  File "mcve.py", line 63, in <module>
    main()
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\click\core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\click\core.py", line 717, in main
    rv = self.invoke(ctx)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\click\core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\click\core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "mcve.py", line 59, in main
    df.compute(scheduler='threads')
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\base.py", line 156, in compute
    (result,) = compute(self, traverse=False, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\base.py", line 398, in compute
    results = schedule(dsk, keys, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\threaded.py", line 76, in get
    pack_exception=pack_exception, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\local.py", line 459, in get_async
    raise_exception(exc, tb)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\compatibility.py", line 112, in reraise
    raise exc
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\local.py", line 23, in execute_task
    result = _execute_task(task, data)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\core.py", line 119, in _execute_task
    return func(*args2)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\utils.py", line 697, in __call__
    return getattr(obj, self.method)(*args, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\dataframe\core.py", line 1154, in to_hdf
    return to_hdf(self, path_or_buf, key, mode, append, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\dataframe\io\hdf.py", line 227, in to_hdf
    scheduler=scheduler, **dask_kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\base.py", line 166, in compute_as_if_collection
    return schedule(dsk2, keys, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\threaded.py", line 76, in get
    pack_exception=pack_exception, **kwargs)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\local.py", line 459, in get_async
    raise_exception(exc, tb)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\compatibility.py", line 112, in reraise
    raise exc
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\local.py", line 23, in execute_task
    result = _execute_task(task, data)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\core.py", line 119, in _execute_task
    return func(*args2)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\dask\dataframe\methods.py", line 103, in boundary_slice
    result = getattr(df, kind)[start:stop]
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\indexing.py", line 1500, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\indexing.py", line 1867, in _getitem_axis
    return self._get_slice_axis(key, axis=axis)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\indexing.py", line 1536, in _get_slice_axis
    return self._slice(indexer, axis=axis, kind='iloc')
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\indexing.py", line 151, in _slice
    return self.obj._slice(obj, axis=axis, kind=kind)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\generic.py", line 3152, in _slice
    result = self._constructor(self._data.get_slice(slobj, axis=axis))
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\internals\managers.py", line 700, in get_slice
    bm._consolidate_inplace()
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\internals\managers.py", line 929, in _consolidate_inplace
    self.blocks = tuple(_consolidate(self.blocks))
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\internals\managers.py", line 1899, in _consolidate
    _can_consolidate=_can_consolidate)
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\pandas\core\internals\blocks.py", line 3146, in _merge_blocks
    new_values = np.vstack([b.values for b in blocks])
  File "C:\Users\tomasvanoyen\Miniconda3\envs\stora\lib\site-packages\numpy\core\shape_base.py", line 234, in vstack
    return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
MemoryError

1 answer:

Answer 0 (score: 0)

Two things seem odd.

  1. You are calling dask dataframe code from within dask.bag code.
  2. You seem to be calling merge when perhaps you just want concat? (A sketch of this is given below.)
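
A minimal sketch of what those two suggestions could look like combined, reusing parse_csv2filtered_ddf, lcsv, source_dir and output_fn from the question (an untested outline, not a verified fix): build the per-variable dataframes in a plain Python loop instead of a bag, and let dd.concat align them on the timestamp index instead of folding pairwise merges. Note that dd.concat with axis=1 needs known divisions, which the set_index('timestamp') call provides.

import dask.dataframe as dd
from dask.diagnostics import ProgressBar

# one dask dataframe per variable, built in a plain Python loop (no dask.bag)
ddfs = [parse_csv2filtered_ddf(fn_file, source_dir) for fn_file in lcsv]

# align all variables on the union of their timestamp indexes in one step
df = dd.concat(ddfs, axis=1, join='outer')

# forward fill, drop the leading NaNs, and write to hdf
df = df.fillna(method='ffill').dropna()
with ProgressBar():
    df.to_hdf(output_fn, '/data')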