Accessing views from the couchdb module in Python

Date: 2017-08-05 19:20:39

Tags: python python-3.x couchdb cloudant


While working with views in the couchdb-python module, I noticed that Python loads the entire result set into memory before it starts processing. So even if you specify the limit and skip parameters, it first loads the whole result set into memory, then applies limit and skip to it, and only then returns the results.

For example, here is my code:

import requests
import json
import couchdb
import time

couch = couchdb.Server(url) # url is the CouchDB server URL, defined elsewhere

def dumpidtofile(dbname,view):
    db=couch[dbname]
    func_total_time=0
    count=db.info()['doc_count'] # Get a count of total number of documents
    batch=count // 10000 # Divide the total count in batches of 10000 and save the quotient 
    f=open(dbname, 'w')
    if batch == 0 :
        print ("Number of documents less that 10000. continuing !!")
        start_time = time.monotonic()
        for item in db.view(view):
            # print (item.key)
            f.write(item.key)
            f.write('\n')
        elapsed_time = time.monotonic() - start_time
        func_total_time=elapsed_time
        print ("Loop finished. Time spent in this loop was {0}".format(elapsed_time))
        print ("Total Function Time :", func_total_time)
    else:
        print ("Number of documents greater that 10000. Breaking into batches !!")
        batch=batch + 1 # This is the number of times that we would have to iterate to retrieve all documents
        for i in range(batch):
            start_time = time.monotonic()
            for item in db.view(view,limit=10000,skip=i*10000):
                # print (item.key)
                f.write(item.key)
                f.write('\n')
            elapsed_time = time.monotonic() - start_time
            func_total_time = func_total_time + elapsed_time
            print ("Loop {0} finished. Time spent in this loop was {1}".format(i,elapsed_time))
        print ("Total Function Time :", func_total_time)
    f.close()

prog_start_time = time.monotonic()
dumpidtofile("mydb","myindex/myview")
prog_end_time = time.monotonic() - prog_start_time
print ("Total Program Time :", prog_end_time)

Here is my sample output.

[screenshot: sample program output, with the long pause highlighted]

The program waited for around 90 seconds at the point highlighted in the image before continuing, which is when I suspect the view is being loaded in full before it even starts processing these loops. That may be acceptable for small databases, but it does not look great for large ones (some of the databases I work with are around 15-20 GB).
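To illustrate where I think the time goes, here is a minimal timing sketch (the URL and database/view names are placeholders; it assumes, as far as I can tell, that couchdb-python builds the view object lazily and only issues the HTTP request once you start iterating):

import time
import couchdb

couch = couchdb.Server("http://localhost:5984")  # placeholder URL
db = couch["mydb"]

t0 = time.monotonic()
rows = db.view("myindex/myview", limit=10000)    # lazy object, no request yet (as I understand it)
t1 = time.monotonic()
first_row = next(iter(rows))                     # the full response appears to be fetched and parsed here
t2 = time.monotonic()

print("constructing the view object:", t1 - t0)
print("time to first row          :", t2 - t1)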

So I guess my questions are:

  1. Is there a better way to iterate over the documents, especially in large databases, so that only a slice of the documents is loaded at a time? (See the iterview sketch below.)

  2. How can I figure out where most of the time in this program is being spent, and how do I optimize it? (See the profiling sketch below.)

  3. Apologies for the length of the question. I know it was a long read. :)
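For question 1, one thing that looks relevant is couchdb-python's Database.iterview(), which is documented to fetch rows in batches and yield them one at a time. A minimal sketch reusing the names from my script above (I have not benchmarked this against the batching code):

import couchdb

couch = couchdb.Server("http://localhost:5984")  # placeholder URL
db = couch["mydb"]

# iterview fetches rows in batches of the given size instead of
# materialising the entire result set before iteration starts.
with open("mydb", "w") as f:
    for row in db.iterview("myindex/myview", batch=10000):
        f.write(row.key + "\n")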
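For question 2, a standard way to see where the time goes is to run the dump under cProfile; a minimal sketch, assuming dumpidtofile is defined as above:

import cProfile
import pstats

# Profile one dump run and print the 10 most expensive calls by cumulative time.
cProfile.run('dumpidtofile("mydb", "myindex/myview")', "dump.prof")
pstats.Stats("dump.prof").sort_stats("cumulative").print_stats(10)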

    Thanks -A

1 answer:

Answer 0: (score: 1)

You can try out the python-cloudant library. It can fetch results in batches.

Example:

from cloudant import couchdb_admin_party
from cloudant.result import Result

db_name = 'animaldb'
ddoc_id = 'views101'
view_id = 'diet'

with couchdb_admin_party(url='http://localhost:5984') as client:
    db = client.get(db_name, remote=True)
    view = db.get_design_document(ddoc_id).get_view(view_id)

    with open('/tmp/results.txt', 'w') as f:
        for result in Result(view, page_size=1000):
            f.write(result.get('key') + '\n')
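A small note on the connection: couchdb_admin_party assumes the server has no authentication configured. If your server requires credentials, the library also provides a couchdb context manager that takes a username and password; a sketch with placeholder credentials and URL:

from cloudant import couchdb
from cloudant.result import Result

# Same pattern as above, but authenticating instead of using admin-party mode.
with couchdb('admin', 'password', url='http://localhost:5984') as client:
    db = client.get('animaldb', remote=True)
    view = db.get_design_document('views101').get_view('diet')

    with open('/tmp/results.txt', 'w') as f:
        for result in Result(view, page_size=1000):
            f.write(result.get('key') + '\n')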