High memory usage with Python multithreading

Date: 2016-02-27 00:27:19

Tags: python multithreading memory screen-scraping

I'm scraping web pages using multithreading and random proxies. My home PC handles this fine, although it needs a lot of threads (in the current code I have set it to 100); RAM usage seems to reach around 2.5 GB. But when I run it on my CentOS VPS, I get a generic "Killed" message and the program terminates. With 100 threads I get the Killed error very, very quickly. I reduced it to a more reasonable 8 and still got the same error, just after a longer time. Based on some research, I assume the "Killed" error is related to memory usage. Without multithreading, the error does not occur.

So, what can I do to optimize my code so that it still runs quickly but doesn't use so much memory? Or is my best bet simply to reduce the number of threads further? And can I monitor my memory usage from within Python while the program runs?

Edit: I just realized my VPS has 256 MB of RAM versus 24 GB on my desktop, which I hadn't considered when I originally wrote the code.
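For the monitoring part of the question, a minimal sketch using only the standard-library resource module might look like the following (assumes Linux, where ru_maxrss is reported in kilobytes; this is an illustration, not part of the original script):

# A minimal sketch for checking peak memory from inside the running script,
# using only the standard library (Linux reports ru_maxrss in kilobytes).
import resource

def peak_memory_mb():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

print 'Peak memory usage: {:.1f} MB'.format(peak_memory_mb())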

import random
import sys

import requests
from bs4 import BeautifulSoup
from multiprocessing.pool import ThreadPool

#Request soup of url, using random proxy / user agent - try different combinations until valid results are returned
def getsoup(url):
    attempts = 0
    while True:
        try:
            proxy = random.choice(working_proxies)
            headers = {'user-agent': random.choice(user_agents)}  
            proxy_dict = {'http': 'http://' + proxy}
            r = requests.get(url, headers=headers, proxies=proxy_dict, timeout=5)
            soup = BeautifulSoup(r.text, "html5lib") #"html.parser"
            pagination = soup.find("div", class_="pagination").text  #Looks for pagination text to verify proper page load
            totalpages = int(pagination.split(' of ', 1)[1].split('\n', 1)[0])
            currentpage = int(pagination.split('Page ', 1)[1].split(' of', 1)[0])
            if totalpages < 5000: #One particular proxy wasn't returning pagelimit=60 or offset requests properly ..            
                break
        except Exception as e:
            # print 'Error! Proxy: {}, Error msg: {}'.format(proxy,e)
            attempts = attempts + 1        
            if attempts > 30:
                print 'Too many attempts .. something is wrong!'
                sys.exit()
    return (soup, totalpages, currentpage)

#Return soup of page of ads, connecting via random proxy/user agent
def scrape_url(url):
    soup, totalpages, currentpage = getsoup(url)               
    #Extract ads from page soup

    ###[A bunch of code to extract individual ads from the page..]

    # print 'Success! Scraped page #{} of {} pages.'.format(currentpage, totalpages)
    sys.stdout.flush()
    return ads     

def scrapeall():     
    global currentpage, totalpages, offset
    url = "url"

    _, totalpages, _ = getsoup(url + "0")
    url_list = [url + str(60*i) for i in range(totalpages)]

    # Make the pool of workers
    pool = ThreadPool(100)    
    # Open the urls in their own threads and return the results
    results = pool.map(scrape_url, url_list)
    # Close the pool and wait for the work to finish
    pool.close()
    pool.join()

    flatten_results = [item for sublist in results for item in sublist] #Flattens the list of lists returned by multithreading
    return flatten_results

adscrape = scrapeall() 

1 Answer:

Answer 0 (score: 2)

BeautifulSoup is a pure-Python library, and on a mid-sized site it will use a lot of memory. If it's an option, try replacing it with lxml, which is faster and written in C. It may still run out of memory if your pages are large, though.
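For example, the pagination lookup in getsoup() could be done with lxml.html roughly like this (a sketch that assumes the same markup the original soup.find("div", class_="pagination") targets):

# A rough sketch of the same pagination extraction using lxml instead of BeautifulSoup.
import lxml.html

def parse_pagination(html):
    tree = lxml.html.fromstring(html)
    # text of the first <div class="pagination">, e.g. "Page 3 of 120\n..."
    text = tree.xpath('//div[@class="pagination"]')[0].text_content()
    totalpages = int(text.split(' of ', 1)[1].split('\n', 1)[0])
    currentpage = int(text.split('Page ', 1)[1].split(' of', 1)[0])
    return totalpages, currentpage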

As already suggested in the comments, you can store the responses in a queue.Queue. A better version is to save the responses to disk, put the filenames in the queue, and parse them in a separate process; for that you can use the multiprocessing library. If the parsing runs out of memory and gets killed, the fetching just continues. This pattern is known as fork and die and is a common workaround for Python using too much memory.
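A rough sketch of that two-stage layout is below: threads fetch raw HTML to disk, and a small process pool does the memory-heavy parsing. Here working_proxies and user_agents are assumed to be the lists from the question's script, and extract_ads() is a hypothetical stand-in for the elided ad-extraction code.

# Sketch: fetch in threads, parse in separate processes ("fork and die").
import os
import random
import tempfile

import requests
from multiprocessing import Pool
from multiprocessing.pool import ThreadPool

def fetch_to_disk(url):
    # download as before, but only write the raw HTML to a temp file
    proxy = random.choice(working_proxies)
    r = requests.get(url,
                     headers={'user-agent': random.choice(user_agents)},
                     proxies={'http': 'http://' + proxy},
                     timeout=5)
    fd, path = tempfile.mkstemp(suffix='.html')
    with os.fdopen(fd, 'wb') as f:
        f.write(r.content)
    return path

def parse_file(path):
    # heavy parsing runs in a worker *process*; if the kernel kills it for
    # using too much memory, the fetching stage is unaffected
    with open(path, 'rb') as f:
        ads = extract_ads(f.read())   # hypothetical stand-in for the ad-extraction code
    os.remove(path)
    return ads

def scrapeall_two_stage(url_list):
    fetch_pool = ThreadPool(8)
    paths = fetch_pool.map(fetch_to_disk, url_list)   # I/O-bound: threads are fine
    fetch_pool.close()
    fetch_pool.join()
    parse_pool = Pool(2)
    pages = parse_pool.map(parse_file, paths)         # memory-heavy: separate processes
    parse_pool.close()
    parse_pool.join()
    return [ad for page in pages for ad in page]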

You will then also need a way to see which responses failed to parse.
