Why does a numpy array read from a file consume so much memory?

Date: 2014-10-26 04:39:13

Tags: python arrays file-io numpy

The file contains 2,000,000 lines; each line has 208 comma-separated columns, like this:

0.0863314058048,0.0208767447842,0.03358010485,0.0,1.0,0.0,0.314285714286,0.336293217457,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0

The program reads this file into a numpy array. I expected it to consume roughly (2,000,000 * 208 * 8 B) = 3.2 GB of memory. However, while reading the file the program actually consumes around 20 GB.
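A quick back-of-the-envelope check of that expectation, assuming every value ends up as a float64 (8 bytes per element):

```python
# Rough check of the expected memory footprint, assuming float64 (8 bytes/value).
nrows, ncols = 2_000_000, 208
expected_bytes = nrows * ncols * 8
print(f"{expected_bytes / 1e9:.2f} GB")     # ~3.33 GB
print(f"{expected_bytes / 2**30:.2f} GiB")  # ~3.10 GiB
```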

What I don't understand is why my program consumes so much more memory than expected.

2 Answers:

Answer 0 (score: 2)

I am using Numpy 1.9.0, and the memory inefficiency of np.loadtxt() and np.genfromtxt() seems to be directly related to the fact that they store the data in temporary Python lists (a rough estimate of that overhead is sketched after the list below):

  • see here for np.loadtxt()
  • and here for np.genfromtxt()
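A rough, hypothetical estimate of why those temporary lists are so expensive (exact numbers depend on the Python build): each parsed value lives as a full Python float object plus a pointer slot in the list that holds it, instead of 8 bytes in a float64 array.

```python
import sys

nrows, ncols = 2_000_000, 208

# On 64-bit CPython a float object is ~24 bytes, plus an 8-byte pointer in the list.
per_value = sys.getsizeof(0.0) + 8           # ~32 bytes per parsed value
temp_lists = nrows * ncols * per_value       # ~13.3 GB for the temporaries alone
final_array = nrows * ncols * 8              # ~3.3 GB for the float64 array

print(f"temporary lists: ~{temp_lists / 1e9:.1f} GB")
print(f"final array:     ~{final_array / 1e9:.1f} GB")
print(f"total:           ~{(temp_lists + final_array) / 1e9:.1f} GB")
```

That total is in the same ballpark as the ~20 GB observed in the question.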

By knowing the dtype of your array beforehand, you can write a file reader that consumes an amount of memory very close to the theoretical amount (3.2 GB for this case), because it stores the data directly with the corresponding dtype:

```python
import numpy as np

def read_large_txt(path, delimiter=None, dtype=None):
    with open(path) as f:
        # First pass: count the rows.
        nrows = sum(1 for line in f)
        f.seek(0)
        # Peek at the first line to get the number of columns.
        ncols = len(next(f).split(delimiter))
        # Pre-allocate the full array with the final dtype.
        out = np.empty((nrows, ncols), dtype=dtype)
        f.seek(0)
        # Second pass: parse each line straight into the pre-allocated array.
        for i, line in enumerate(f):
            out[i] = line.split(delimiter)
        return out
```

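Usage for the file described in the question might look like this (the file name is just an illustration; the data uses a comma delimiter):

```python
# Hypothetical file name; dtype matches the ~3.3 GB estimate above.
data = read_large_txt("data.csv", delimiter=",", dtype=np.float64)
print(data.shape, data.dtype)  # (2000000, 208) float64
```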

Answer 1 (score: 0)

I think you should try pandas for handling big data (text files). pandas is like Excel in Python, and it uses numpy internally to represent the data.
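A minimal sketch of that approach, assuming the file has no header row and every column should be parsed as float64 (the file name is a placeholder):

```python
import numpy as np
import pandas as pd

# Read the whole file with an explicit dtype so no object columns are created.
df = pd.read_csv("data.csv", header=None, dtype=np.float64)

# Get the underlying numpy array: shape (2000000, 208), dtype float64.
arr = df.to_numpy()
print(arr.shape, arr.dtype)
```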

HDF5 is another option: save the large data set into an HDF5 binary file.
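One possible way to do that with pandas (file and key names are placeholders; this path requires the optional pytables dependency):

```python
import pandas as pd

# Convert the text file once, then reload much faster from the binary file.
df = pd.read_csv("data.csv", header=None, dtype="float64")
df.to_hdf("data.h5", key="data", mode="w")

# Later: load it back without re-parsing 2,000,000 text lines.
df = pd.read_hdf("data.h5", key="data")
```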

This question can help you understand how to work with large files: "Large data" work flows using pandas
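If the file does not fit in memory at all, the workflow from that question boils down to processing it in chunks; a minimal sketch (chunk size chosen arbitrarily):

```python
import pandas as pd

# Process the file 100,000 rows at a time instead of loading it all at once.
reader = pd.read_csv("data.csv", header=None, dtype="float64", chunksize=100_000)
for chunk in reader:
    # Replace this with the real per-chunk computation.
    print(chunk.shape)
```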
