Downloading files concurrently in Python

Posted: 2014-12-31 18:43:10

Tags: python pdf concurrency

This code downloads metadata from a repository, writes the metadata to a file, downloads the PDF, converts the PDF to text, and then deletes the original PDF:

for record in records:
    record_data = []  # metadata values are collected in record_data
    for name, metadata in record.metadata.items():
        for i, value in enumerate(metadata):
            if value:
                record_data.append(value)
    fulltext = ''
    file_path = ''
    file_path_metadata = ''
    unique_id = str(uuid.uuid4())
    for data in record_data:
        if 'Fulltext' in data:
            # the link to the pdf
            fulltext = data.replace('Fulltext ', '')
            # path where the pdf will be stored
            file_path = '/' + os.path.basename(data).replace('.pdf', '') + unique_id + '.pdf'
            # path where the metadata will be stored
            file_path_metadata = '/' + os.path.basename(data).replace('.pdf', '') + unique_id + '_metadata.txt'
            print fulltext, file_path

    # Write metadata to file
    if fulltext:
        try:
            write_metadata = open(path_to_institute + file_path_metadata, 'w')
            for i, data in enumerate(record_data):
                write_metadata.write('MD_' + str(i) + ': ' + data.encode('utf8') + '\n')
            write_metadata.close()
        except Exception as e:
            # exceptions here are usually due to a missing path to the file
            print 'Exception when writing metadata: {}'.format(e)
            print fulltext, path_to_institute, file_path_metadata

        # Download pdf
        download_pdf(fulltext, path_to_institute + file_path)

        # Create text file and delete pdf
        pdf2text(path_to_institute + file_path)

Doing some measurements showed that the download_pdf and pdf2text methods take quite a long time.
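For reference, a minimal way to take such measurements (a sketch, not part of the original post) is to wrap each call with time.time():

import time

start = time.time()
download_pdf(fulltext, path_to_institute + file_path)
print('download_pdf took {:.2f}s'.format(time.time() - start))

start = time.time()
pdf2text(path_to_institute + file_path)
print('pdf2text took {:.2f}s'.format(time.time() - start))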

Here are those methods:

from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfinterp import PDFResourceManager
from pdfminer.pdfinterp import PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from cStringIO import StringIO
import os
import urllib2  # needed by download_pdf below


def remove_file(path):
    try:
        os.remove(path)
    except OSError as e:
        print ("Error: %s - %s." % (e.filename, e.strerror))


def pdf2text(path):
    string_handling = StringIO()
    # PDFs are binary files, so open in 'rb' mode
    parser = PDFParser(open(path, 'rb'))

    try:
        document = PDFDocument(parser)
    except Exception as e:
        print '{} is not a readable document. Exception {}'.format(path, e)
        return

    if document.is_extractable:
        resource_manager = PDFResourceManager()
        device = TextConverter(resource_manager,
                               string_handling,
                               codec='ascii',
                               laparams=LAParams())
        interpreter = PDFPageInterpreter(resource_manager, device)
        for page in PDFPage.create_pages(document):
            interpreter.process_page(page)

        # write the extracted text to a .txt file next to the pdf
        save_file = open(path.replace('.pdf', '.txt'), 'w')
        save_file.write(string_handling.getvalue())
        save_file.close()

        # delete the pdf
        remove_file(path)

    else:
        print path, 'Warning: could not extract text from pdf file.'
        return

def download_pdf(url, path):
    try:
        f = urllib2.urlopen(url)
    except Exception as e:
        print e
        f = None

    if f:
        data = f.read()
        # the with statement closes the file automatically
        with open(path, "wb") as code:
            code.write(data)

So I figured I should run these methods in parallel. I tried it, but with no luck:

    import multiprocessing as mp

    pool = mp.Pool(processes=len(process_data))
    for i in process_data:
        print i
        pool.apply(download_pdf, args=(i[0], i[1]))

    pool = mp.Pool(processes=len(process_data))
    for i in process_data:
        print i[1]
        pool.apply(pdf2text, args=(i[1],))

It still takes very long, and the prints look as if the processes are running one at a time...
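(The likely culprit here: Pool.apply is a blocking call that waits for each task to finish before the next one is submitted, so nothing actually runs in parallel. A minimal non-blocking sketch using apply_async, assuming process_data is a list of (url, path) pairs as above:)

import multiprocessing as mp

pool = mp.Pool(processes=4)
# apply_async returns immediately; the pool runs the tasks concurrently
results = [pool.apply_async(download_pdf, args=(url, path))
           for url, path in process_data]
pool.close()
pool.join()  # wait for all downloads to finish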

2 Answers:

Answer 0 (score: 0)

here is an article on how to run this kind of work in parallel.

It uses multiprocessing.dummy to run the tasks in different threads.

Here is a small example:

from urllib2 import urlopen
from multiprocessing.dummy import Pool

urls = [url_a,
        url_b,
        url_c
       ]

pool = Pool()
res = pool.map(urlopen, urls)

pool.close()
pool.join()
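Applied to the question's download_pdf, a short sketch (assuming process_data is a list of (url, path) pairs as in the question; multiprocessing.dummy.Pool has the same interface as multiprocessing.Pool but uses threads, which suits I/O-bound downloads):

from multiprocessing.dummy import Pool

def fetch(args):
    # Pool.map passes a single argument, so unpack the pair here
    url, path = args
    download_pdf(url, path)

pool = Pool(8)  # 8 worker threads
pool.map(fetch, process_data)
pool.close()
pool.join()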

For Python >= 3.2 I recommend concurrent.futures, which is in the standard library.

Example:

import functools
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# submit every download to a pool of 50 worker threads
with ThreadPoolExecutor(50) as executor:
    future_list = [executor.submit(functools.partial(load_url, url, 30))
                   for url in URLS]

The example is adapted from: here
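To collect the results as they finish, the futures can be drained with concurrent.futures.as_completed (a small follow-up sketch, not part of the original answer):

from concurrent.futures import as_completed

for future in as_completed(future_list):
    try:
        data = future.result()
        print(len(data))
    except Exception as e:
        print(e)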

Answer 1 (score: 0)

I finally found a way to run the code in parallel, and it is unbelievably faster.

    import multiprocessing as mp

    # start one process per download
    jobs = []
    for i in process_data:
        p = mp.Process(target=download_pdf, args=(i[0], i[1]))
        jobs.append(p)
        p.start()

    # once download i has finished, start converting its pdf
    for i, data in enumerate(process_data):
        print data
        p = mp.Process(target=pdf2text, args=(data[1],))
        jobs[i].join()
        p.start()
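(A caveat worth adding: this spawns one process per record with no upper bound, and the pdf2text processes are never joined. A bounded sketch with multiprocessing.Pool, assuming the same process_data list of (url, path) pairs:)

import multiprocessing as mp

def process_record(args):
    # download the pdf, then convert it to text in the same worker
    url, path = args
    download_pdf(url, path)
    pdf2text(path)

if __name__ == '__main__':
    pool = mp.Pool(processes=mp.cpu_count())
    pool.map(process_record, process_data)
    pool.close()
    pool.join()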