Accessing a Python class variable

Date: 2018-01-08 18:50:12

Tags: python class variables

I know this is a duplicate, but I haven't had that "aha" moment where I understand how to access a class variable. In this code, I am scraping a website from a list of several thousand pages. The jobs are submitted via concurrent.futures.

I want to be able to return the value of "results". I set self.results inside def __init__(self, url_list, threads), but when I try print(example.results) I can't seem to pull that variable out.

If self.results is being assigned a value, but example.results doesn't retrieve it from if __name__ == '__main__':, how do you access that value? I know I'm doing something wrong, but I don't know what it is.

from concurrent.futures import ThreadPoolExecutor
from proxy_def import *
import requests
from bs4 import BeautifulSoup
from parsers import *

site = 0


class ConcurrentListCrawler(object):

    def __init__(self, url_list, threads):

        self.urls = url_list
        self.results = {}
        self.max_threads = threads

    def __make_request(self, url):
        try:
            r = requests.get(url=url, timeout=20)
            r.raise_for_status()
            print(countit(), r.url)
        except requests.exceptions.Timeout:
            r = requests.get(url=url, timeout=60)
        except requests.exceptions.ConnectionError:
            r = requests.get(url=url, timeout=60)
        except requests.exceptions.RequestException as e:
            raise e
        return r.url, r.text

    def __parse_results(self, url, html):

        try:
            print(url)
            trip_data = restaurant_parse(url)

        except Exception as e:
            raise e

        if trip_data:
            print('here we go')
            self.results = trip_data
            #print(self.results)
        return self.results


    def wrapper(self, url):
        url, html = self.__make_request(url)
        self.__parse_results(url, html)

    def run_script(self):
        with ThreadPoolExecutor(max_workers=min(len(self.urls), self.max_threads)) as Executor:
            jobs = [Executor.submit(self.wrapper, u) for u in self.urls]


if __name__ == '__main__':
    listo = loadit()
    print(listo)
    print(len(listo))
    example = ConcurrentListCrawler(listo, 10)
    example.run_script()
    print(example.results)

Any pointers would be greatly appreciated.

2 Answers:

Answer 0 (score: 0)

I believe one of your methods is not returning the results. Make the following change:

def wrapper(self, url):
    url, html = self.__make_request(url)
    return self.__parse_results(url, html)
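
For that returned value to be visible outside the pool, run_script would also need to consume the futures it creates. One possible sketch (an addition, not part of the original code) collects each job's result as it finishes:

def run_script(self):
    with ThreadPoolExecutor(max_workers=min(len(self.urls), self.max_threads)) as executor:
        jobs = [executor.submit(self.wrapper, u) for u in self.urls]
        # .result() blocks until the job is done and re-raises any worker exception
        return [job.result() for job in jobs]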

After this, I would suggest using self.results as the dictionary it was declared to be: in the __parse_results(..) method, add trip_data to self.results as shown below, rather than assigning to it.

def __parse_results(self, url, html):

    try:
        print(url)
        trip_data = restaurant_parse(url)

    except Exception as e:
        raise e

    if trip_data:
        print('here we go')
        self.results[url] = trip_data
        #print(self.results)
    return self.results

When you add entries to self.results this way, the older values are kept, and you avoid replacing them the way reassignment did.
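
With both changes in place, a minimal sketch of reading the aggregated dictionary from the caller (assuming loadit() still supplies the URL list) might look like this:

if __name__ == '__main__':
    listo = loadit()
    example = ConcurrentListCrawler(listo, 10)
    example.run_script()  # returns only after every submitted job has finished
    # each key is a URL, each value the trip_data parsed from that page
    for url, trip_data in example.results.items():
        print(url, trip_data)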

Answer 1 (score: 0)

The problem was that I was submitting all of the jobs at once through the list. I couldn't pull the variable out of the class with print(example.results) because that part of the code isn't reached until all of the jobs are complete. I was able to solve it by getting rid of the class (even though the title of this post suggests the class was the problem).

from concurrent.futures import ThreadPoolExecutor
import concurrent.futures
from proxy_def import *
import requests
from bs4 import BeautifulSoup
from parsers import *

site = 0

def load_url(url):

    try:
        print(countit(), url)
        trip_data = restaurant_parse(url)
        return trip_data

    except Exception as e:
        raise e


if __name__ == '__main__':
    URLs = loadit()
    #print(URLs)
    #print(len(URLs))
    with ThreadPoolExecutor(max_workers=10) as executor:
        # start the load operations and mark each future with its URL
        future_to_url = {executor.submit(load_url, url): url for url in URLs}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
                print('this is data', data)
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))

Here, I can grab data to pull out each dictionary.
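
If you still want everything gathered into a single object, a small variation on the loop above (a sketch, assuming restaurant_parse returns one dictionary per URL) can accumulate each future's value as it completes:

results = {}
with ThreadPoolExecutor(max_workers=10) as executor:
    future_to_url = {executor.submit(load_url, url): url for url in URLs}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            # store the parsed dictionary under the URL that produced it
            results[url] = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
print('collected', len(results), 'results')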

Thanks everyone for the help.