How can I insert rows into a MySQL table faster using Python?

Time: 2019-03-04 02:34:35

Tags: python mysql

I am trying to find a faster way to insert data into a table that will eventually hold over 100 million rows. I have had my code running for almost 24 hours, and the table currently has only 9 million rows entered, with the run still in progress.

My code currently reads 300 csv files at a time, stores the data in a list, filters out duplicate rows, then uses a for loop to place the entries from the list as tuples and update the table one tuple at a time. This method takes far too long; is there a way for me to bulk insert all the rows? I have tried looking online, but the methods I have been reading about do not seem to help in my situation.

Many thanks,

David

import glob
import os
import csv
import mysql.connector

# MYSQL logon
mydb = mysql.connector.connect(
    host="localhost",
    user="David",
    passwd="Sword",
    database="twitch"
)
mycursor = mydb.cursor()

# list for stream data file names
streamData=[]

# This function obtains the file name list from a folder; this is to open files
# in other functions
def getFileNames():
    global streamData
    global topGames

    # the folders to be scanned
    #os.chdir("D://UG_Project_Data")
    os.chdir("E://UG_Project_Data")
    # obtains stream data file names
    for file in glob.glob("*streamD*"):
        streamData.append(file)
    return

# List to store stream data from csv files
sData = []
# Function to read all streamData csv files and store data in a list
def streamsToList():
    global streamData
    global sData

    # Same as gamesToList
    index = len(streamData)
    num = 0
    theFile = streamData[0]
    for x in range(index):
        if (num == 301):
            filterStreams(sData)
            num = 0
            sData.clear()
        try:
            theFile = streamData[x]
            timestamp = theFile[0:15]
            dateTime = timestamp[4:8]+"-"+timestamp[2:4]+"-"+timestamp[0:2]+"T"+timestamp[9:11]+":"+timestamp[11:13]+":"+timestamp[13:15]+"Z"
            with open (theFile, encoding="utf-8-sig") as f:
                reader = csv.reader(f)
                next(reader) # skip header
                for row in reader:
                    if (row != []):
                        col1 = row[0]
                        col2 = row[1]
                        col3 = row[2]
                        col4 = row[3]
                        col5 = row[4]
                        col6 = row[5]
                        col7 = row[6]
                        col8 = row[7]
                        col9 = row[8]
                        col10 = row[9]
                        col11 = row[10]
                        col12 = row[11]
                        col13 = dateTime
                        temp = col1, col2, col3, col4, col5, col6, col7, col8, col9, col10, col11, col12, col13
                        sData.append(temp)
        except:
            print("Problem file:")
            print(theFile)
        print(num)
        num +=1
    return

def filterStreams(self):
    sData = self
    dataSet = set(tuple(x) for x in sData)
    sData = [ list (x) for x in dataSet ]
    return createStreamDB(sData)

# Function to create a table of stream data
def createStreamDB(self):
    global mydb
    global mycursor
    sData = self
    tupleList = ()
    for x in sData:
        tupleList = tuple(x)
        sql = "INSERT INTO streams (id, user_id, user_name, game_id, community_ids, type, title, viewer_count, started_at, language, thumbnail_url, tag_ids, time_stamp) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
        val = tupleList
        try:
            mycursor.execute(sql, val)
            mydb.commit()
        except:
            test = 1
    return

if __name__== '__main__':
    getFileNames()
    streamsToList()
    filterStreams(sData)

2 Answers:

Answer 0 (score: 2):

Do you want the database to be left in a corrupted state if some rows succeed but some fail? If not, try committing outside the loop. Like this:

for x in sData:
    tupleList = tuple(x)
    sql = "INSERT INTO streams (id, user_id, user_name, game_id, community_ids, type, title, viewer_count, started_at, language, thumbnail_url, tag_ids, time_stamp) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
    val = tupleList
    try:
        mycursor.execute(sql, val)
    except:
        # do some thing
        pass
try:
    mydb.commit()
except:
    test = 1

Failing that, try loading the csv files into MySQL directly:

LOAD DATA INFILE "/home/your_data.csv"
INTO TABLE CSVImport
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
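
If the server permits it, the same statement can also be issued from Python. A minimal sketch, assuming mysql-connector-python with client-side LOCAL loading enabled; the file path is a placeholder, and the csv column layout is assumed to match the streams table:

import mysql.connector

# allow_local_infile enables LOAD DATA LOCAL INFILE on the client side;
# the connection details are the placeholders from the question.
mydb = mysql.connector.connect(
    host="localhost",
    user="David",
    passwd="Sword",
    database="twitch",
    allow_local_infile=True
)
mycursor = mydb.cursor()
mycursor.execute("""
    LOAD DATA LOCAL INFILE 'E:/UG_Project_Data/your_data.csv'
    INTO TABLE streams
    COLUMNS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")
mydb.commit()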

To make this clearer: if you insist on using Python (since you need to do some processing on the data first), I have benchmarked two ways of inserting the data.

The bad way

In [18]: def inside_loop(): 
    ...:     start = time.time() 
    ...:     for i in range(10000): 
    ...:         mycursor = mydb.cursor() 
    ...:         sql = "insert into t1(name, age)values(%s, %s)" 
    ...:         try: 
    ...:             mycursor.execute(sql, ("frank", 27)) 
    ...:             mydb.commit() 
    ...:         except: 
    ...:             print("Failure..") 
    ...:     print("cost :{}".format(time.time() - start)) 
    ...: 

Time cost:

In [19]: inside_loop()                                                                                                                                                                                                                        
cost :5.92155909538269 

The good way

In [9]: def outside_loop(): 
   ...:     start = time.time() 
   ...:     for i in range(10000): 
   ...:         mycursor = mydb.cursor() 
   ...:         sql = "insert into t1(name, age)values(%s, %s)" 
   ...:         try: 
   ...:             mycursor.execute(sql, ["frank", 27]) 
   ...:         except: 
   ...:             print("do something ..") 
   ...:              
   ...:     try: 
   ...:         mydb.commit() 
   ...:     except: 
   ...:         print("Failure..") 
   ...:     print("cost :{}".format(time.time() - start))

Time cost:

In [10]: outside_loop()                                                                                                                                                                                                                       
cost :0.9959311485290527

There may be even better ways, perhaps the best (i.e., use pandas to process the data, and try to redesign your table...); a sketch of the pandas route follows.
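
For example, a rough sketch of the pandas route: drop_duplicates replaces the manual set-based filtering, and to_sql appends in chunks rather than one row per statement. The SQLAlchemy connection string is an assumption mirroring the question's credentials, the csv column names are assumed to line up with the streams table, and the derived time_stamp column from the question is omitted for brevity:

import glob
import pandas as pd
from sqlalchemy import create_engine

# Connection string mirrors the placeholder credentials in the question.
engine = create_engine("mysql+mysqlconnector://David:Sword@localhost/twitch")

for path in glob.glob("E:/UG_Project_Data/*streamD*"):
    df = pd.read_csv(path)
    df = df.drop_duplicates()
    # Appends in 10,000-row chunks instead of one row per statement.
    df.to_sql("streams", engine, if_exists="append", index=False, chunksize=10000)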

Answer 1 (score: 1):

You might like my presentation Load Data Fast!, in which I compared different methods of inserting bulk data and ran benchmarks to see which method was fastest.

Inserting one row at a time, committing a transaction for each row, is about the worst way to do it.

Using LOAD DATA INFILE is the fastest, although you need to make some configuration changes to a default MySQL instance for it to work. Read the MySQL documentation on the options secure_file_priv and local_infile.
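
Reusing the mydb/mycursor objects from the question, a quick way to see what the server currently allows (these are standard MySQL system variables):

mycursor.execute("SHOW VARIABLES LIKE 'secure_file_priv'")
print(mycursor.fetchone())  # directory LOAD DATA INFILE may read from, if any
mycursor.execute("SHOW VARIABLES LIKE 'local_infile'")
print(mycursor.fetchone())  # whether LOAD DATA LOCAL INFILE is permitted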

Even without LOAD DATA INFILE, you can do much better: you can insert multiple rows per INSERT statement, and you can execute multiple INSERT statements per transaction.
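
With mysql-connector-python, cursor.executemany() does the multi-row rewrite for you: it batches a parameterized INSERT over a sequence of tuples into multi-row statements. A minimal sketch, reusing mydb, mycursor, and the sData list of 13-tuples built in the question:

sql = ("INSERT INTO streams (id, user_id, user_name, game_id, community_ids, "
       "type, title, viewer_count, started_at, language, thumbnail_url, "
       "tag_ids, time_stamp) "
       "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)")
# One call sends all rows; the connector rewrites this into multi-row INSERTs.
mycursor.executemany(sql, sData)
mydb.commit()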

I would not try to INSERT the whole 100 million rows in a single transaction, though. My habit is to commit about once every 10,000 rows.
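
Combining that habit with executemany(), a sketch of batched commits; BATCH is illustrative, and iter_rows is a hypothetical generator yielding (name, age) tuples for the toy t1 table benchmarked above:

BATCH = 10000
sql = "insert into t1(name, age) values(%s, %s)"

batch = []
for row in iter_rows():          # hypothetical source of (name, age) tuples
    batch.append(row)
    if len(batch) == BATCH:
        mycursor.executemany(sql, batch)
        mydb.commit()            # one transaction per 10,000 rows
        batch.clear()

if batch:                        # flush the final partial batch
    mycursor.executemany(sql, batch)
    mydb.commit()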