Exporting a 2 GB+ SELECT to CSV with Python (out of memory)

Time: 2016-10-05 15:38:46

Tags: python csv export-to-csv netezza

I am trying to export a large file from Netezza (using Netezza ODBC + pyodbc). The solution below throws a MemoryError, and if I loop without the "list" it is very slow. Can you think of an in-between solution that won't kill my server/Python process but runs faster?

cursorNZ.execute(sql)
archi = open("c:\test.csv", "w")
lista = list(cursorNZ.fetchall())
for fila  in lista:
    registro = ''
    for campo in fila:
        campo = str(campo)
        registro = registro+str(campo)+";"
    registro = registro[:-1]
    registro = registro.replace('None','NULL')
    registro = registro.replace("'NULL'","NULL")
    archi.write(registro+"\n")
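As an aside, the inner loop that builds registro by repeated concatenation can be expressed with str.join, rendering None directly instead of patching it with two replace calls afterwards (a sketch; the format_row helper is illustrative, not from the original):

```python
def format_row(fila):
    # Join the fields with ';', writing Python None as a literal NULL
    # (the original code does this with string replaces after the fact).
    return ";".join("NULL" if campo is None else str(campo) for campo in fila)

print(format_row((1, None, "x")))  # → 1;NULL;x
```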

---- EDIT ----

Thanks, I am trying this, where "sql" is the query and cursorNZ is:

connMy = pyodbc.connect(DRIVER=.....)
cursorNZ = connNZ.cursor()

chunk = 10 ** 5  # tweak this
chunks = pandas.read_sql(sql, cursorNZ, chunksize=chunk)
with open('C:/test.csv', 'a') as output:
    for n, df in enumerate(chunks):
        write_header = n == 0
        df.to_csv(output, sep=';', header=write_header, na_rep='NULL')

I get this: AttributeError: 'pyodbc.Cursor' object has no attribute 'cursor'. Any ideas?

1 answer:

Answer 0: (score: 4)

Do not use cursorNZ.fetchall().

Instead, iterate over the cursor directly:

with open("c:/test.csv", "w") as archi:  # note the fixed '/'
    cursorNZ.execute(sql)
    for fila in cursorNZ:
        registro = ''
        for campo in fila:
            campo = str(campo)
            registro = registro+str(campo)+";"
        registro = registro[:-1]
        registro = registro.replace('None','NULL')
        registro = registro.replace("'NULL'","NULL")
        archi.write(registro+"\n")
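If you want to bound memory explicitly rather than rely on the driver's row buffering, the same loop can fetch in batches with cursor.fetchmany() and write through the csv module. A self-contained sketch (sqlite3 stands in for the pyodbc/Netezza connection here, since both expose the same DB-API cursor calls):

```python
import csv
import sqlite3

# Stand-in for the pyodbc connection; replace with pyodbc.connect(...) in practice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, None)])

cur = conn.cursor()
cur.execute("SELECT a, b FROM t")

with open("test.csv", "w", newline="") as archi:
    writer = csv.writer(archi, delimiter=";")
    while True:
        rows = cur.fetchmany(10_000)  # at most 10k rows in memory at a time
        if not rows:
            break
        for fila in rows:
            # Render None as a literal NULL, as the loop above does
            writer.writerow("NULL" if campo is None else campo for campo in fila)
```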

Personally, I would just use pandas (note that read_sql wants the connection, not a cursor — that is what caused your AttributeError):

import pyodbc
import pandas

cnn = pyodbc.connect(DRIVER=.....)
chunksize = 10 ** 5  # tweak this
chunks = pandas.read_sql(sql, cnn, chunksize=chunksize)

with open('C:/test.csv', 'a') as output:
    for n, df in enumerate(chunks):
        write_header = n == 0
        df.to_csv(output, sep=';', header=write_header, na_rep='NULL')
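One detail worth checking: to_csv also writes the DataFrame index by default, which adds an extra leading column to the file; pass index=False if you only want the query's columns. A self-contained sketch of the same chunked pattern (sqlite3 again stands in for the pyodbc connection):

```python
import sqlite3
import pandas

# Stand-in for the pyodbc connection; replace with pyodbc.connect(...) in practice.
cnn = sqlite3.connect(":memory:")
cnn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
cnn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, None)])
cnn.commit()

# Small chunksize only to exercise the loop; use something like 10**5 for real data.
chunks = pandas.read_sql("SELECT a, b FROM t", cnn, chunksize=1)
with open("out.csv", "a") as output:
    for n, df in enumerate(chunks):
        # Header on the first chunk only; index=False drops the extra column.
        df.to_csv(output, sep=";", header=(n == 0), na_rep="NULL", index=False)
```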