How do I stream a CSV file into BigQuery?

Date: 2016-08-22 17:17:25

Tags: python streaming google-bigquery

The examples I have found so far stream JSON into BQ, e.g. https://cloud.google.com/bigquery/streaming-data-into-bigquery

How can I stream a CSV, or any other file type, into BQ? Below is the code block I use for streaming; the "problem" seems to be that in insert_all_data the 'row' is defined as JSON. Thanks.

import uuid

# [START stream_row_to_bigquery]
def stream_row_to_bigquery(bigquery, project_id, dataset_id, table_name, row,
                           num_retries=5):
    # 'row' is a dict mapping column names to values; it is sent as the
    # 'json' payload of the insertAll request.
    insert_all_data = {
        'rows': [{
            'json': row,
            # Generate a unique id for each row so retries don't accidentally
            # duplicate insert
            'insertId': str(uuid.uuid4()),
        }]
    }
    return bigquery.tabledata().insertAll(
        projectId=project_id,
        datasetId=dataset_id,
        tableId=table_name,
        body=insert_all_data).execute(num_retries=num_retries)
# [END stream_row_to_bigquery]
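Put differently, 'row' here is just a Python dict, so the missing piece would be turning each CSV line into such a dict. A minimal sketch with csv.DictReader (assuming the CSV header row matches the table's column names; the stream_csv_to_bigquery wrapper is made up for illustration, and note that DictReader yields every value as a string):

import csv

def stream_csv_to_bigquery(bigquery, project_id, dataset_id, table_name,
                           csv_path, num_retries=5):
    # Each row from DictReader is a {column_name: value} dict, which is
    # exactly the shape stream_row_to_bigquery expects for 'row'.
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            stream_row_to_bigquery(bigquery, project_id, dataset_id,
                                   table_name, row, num_retries)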

1 Answer:

Answer 0 (score: 2):

This is how I wrote it; it is very easy using the bigquery-python library.

from bigquery import get_client  # from the bigquery-python package

def insert_data(datasetname, table_name, DataObject):
    # project_id, service_account and key (the private key file path) are
    # assumed to be defined elsewhere in the module.
    client = get_client(project_id, service_account=service_account,
                        private_key_file=key, readonly=False,
                        swallow_results=False)

    insertObject = DataObject
    try:
        result = client.push_rows(datasetname, table_name, insertObject)
    except Exception as err:
        print(err)
        raise
    return result

Here insertObject is a list of dictionaries, where each dictionary holds one row.

For example: [{'field1': 'value1', 'field2': 'value2'}, {'field1': 'value3', 'field2': 'value4'}]
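With placeholder dataset and table names, the call then looks like this:

rows = [{'field1': 'value1', 'field2': 'value2'},
        {'field1': 'value3', 'field2': 'value4'}]
# 'my_dataset' and 'my_table' are placeholder names for illustration
insert_data('my_dataset', 'my_table', rows)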

The CSV can be read as follows:

import pandas as pd

# Assumed to be defined elsewhere:
#   C            - the list of date columns to parse
#   schema       - the BigQuery table schema, a list of {'name': ...} dicts
#   _sorted_list - per-column data ordered like schema, each item holding
#                  its values under 'col_data'
fileCsv = pd.read_csv(file_path + '/' + filename, parse_dates=C,
                      infer_datetime_format=True)
data = []
for row_x in range(len(fileCsv.index)):
    i = 0
    row = {}
    for col_y in schema:
        row[col_y['name']] = _sorted_list[i]['col_data'][row_x]
        i += 1
    data.append(row)
insert_data(datasetname, table_name, data)

The data list can then be passed to insert_data.

This will do it, but there are still limitations that I have raised here.
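As a side note, if the CSV header row already uses the BigQuery column names, the same list of row dicts can be built more directly (just a sketch, assuming that matching holds):

import pandas as pd

# Sketch: assumes the CSV header row matches the BigQuery column names.
fileCsv = pd.read_csv(file_path + '/' + filename)
data = fileCsv.to_dict(orient='records')  # list of {column_name: value} dicts
insert_data(datasetname, table_name, data)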