Cookie issue in Scrapy - IP blocked

Time: 2017-03-20 23:44:58

Tags: python python-2.7 cookies amazon-ec2 scrapy

I'm new to Scrapy (this is my first time using it) and I'm running into some problems. The overall idea is to scrape data from a website and store it in an SQLite database using Python.

I got blocked while scraping data with Scrapy on Python 2.7. I've found that just changing the IP doesn't help, so I'd like to know how to discard the cookies Scrapy is using so that a completely fresh run can be made from another machine (IP address).

The scraper works fine, but I made too many requests (my fault) and now the site is blocking me. I tried scrapy-polipo-tor and, after that, defpoxy, without success. So I decided to set up an AWS EC2 instance and run from it, since it would have a different IP address. I even used a Windows instance (I'm running macOS here). The problem is that I keep getting redirected to the page where the site says I'm blocked, even with a different IP.
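(For reference, the proxy route I was attempting relies on Scrapy's built-in HttpProxyMiddleware, which honors a proxy set per request via meta['proxy']. A minimal sketch, where the proxy address is just a placeholder for a local polipo/tor endpoint rather than my actual setup:)

from scrapy import Request, Spider

class ProxiedSpider(Spider):
    name = "proxied"
    start_urls = ['http://pubsonline.informs.org/']

    def start_requests(self):
        for url in self.start_urls:
            # HttpProxyMiddleware (enabled by default) reads meta['proxy']
            yield Request(url, meta={'proxy': 'http://127.0.0.1:8123'})

    def parse(self, response):
        self.logger.info('Fetched %s via proxy', response.url)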

My settings.py looks like this:

BOT_NAME = 'ScrapMSOMJournal'

SPIDER_MODULES = ['ScrapMSOMJournal.spiders']
NEWSPIDER_MODULE = 'ScrapMSOMJournal.spiders'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

CONCURRENT_REQUESTS_PER_DOMAIN = 1
RETRY_TIMES = 0
COOKIES_ENABLED = False

My spider.py is as follows (not the actual file name):

# -*- coding: utf-8 -*-
from scrapy import Request, Spider


class PaperInfoSpider(Spider):
    name = "infos"

    with open("/Users/pedroveronezi/BIA656_PaperProbability/links_ids.txt", "rt") as f:
        start_urls = [url.strip() for url in f.readlines()]
    # start_urls = [
    #     'http://pubsonline.informs.org/doi/abs/10.1287/msom.2014.0498',
    # ]

    custom_settings = {
        'ROBOTSTXT_OBEY': False,
        'ITEM_PIPELINES': {
            'ScrapMSOMJournal.pipelines.SQLiteStorePipelineInfos': 300,
        },
        'DOWNLOAD_DELAY': 40000.0,
        'COOKIES_ENABLED': False,
    }

    # response.css('.tocArticleDoi a::text')[0].extract()
    def parse(self, response):
        authors = []
        for paper in response.css('.contribDegrees'):
            authors.append(paper.css('.header::text').extract_first())
        affiliations = []
        for paper in response.css('.contribAff::text'):
            affiliations.append(paper.extract())
        dates_received = []
        dates_accepted = []
        dates_published = []
        for paper in response.css('.dates'):
            temp = paper.css('div::text').extract_first()
            temp = str(temp)
            start_rec = temp.find('Received:')
            received_date = temp[start_rec:].split('\n')[0]
            start_acc = temp.find('Accepted:')
            accepted_date = temp[start_acc:].split('\n')[0]
            start_pub = temp.find('Published Online:')
            published_date = temp[start_pub:].split('\n')[0]
            if start_rec != -1:
                dates_received.append(str(received_date))
            elif start_acc != -1:
                dates_accepted.append(str(accepted_date))
            elif start_pub != -1:
                dates_published.append(str(published_date))

        keywords = []
        for paper in response.css('.abstractKeywords , .abstractKeywords .title'):
            keywords.append(paper.css('a::text').extract())
        title = str(response.css('.chaptertitle::text').extract_first())
        title = title.split()
        title = ' '.join(title)

        try:
            abstract = str(response.css('.abstractInFull p::text').extract_first())
        except UnicodeEncodeError:
            abstract = ''

        keywords_string = []
        for key in keywords:
            keywords_string.append('|'.join(key))

        link_complete = str(response.css('.publicationContentDoi a::text').extract_first())
        temp = link_complete.split('/')
        link_id = str(temp[len(temp)-1])

        keywords_string = '|'.join(keywords_string)
        date_received = ''
        for date in dates_received:
            date_received = str(date).split(':')[1]
        date_accepted = ''
        for date in dates_accepted:
            date_accepted = str(date).split(':')[1]
        date_published = ''
        for date in dates_published:
            date_published = str(date).split(':')[1]

        dict_rtn = {'authors': '|'.join(authors),
                    'affiliations': '|'.join(affiliations),
                    'keywords': keywords_string,
                    'date_received': date_received,
                    'date_accepted': date_accepted,
                    'date_published': date_published,
                    'title': title,
                    'abstract': abstract,
                    'link_id': link_id,
                    }
        return dict_rtn
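The SQLiteStorePipelineInfos pipeline referenced in custom_settings isn't shown above; it is essentially a standard item pipeline that writes each returned dict into an SQLite table. A rough sketch of that kind of pipeline (the database file, table and column names here are illustrative, not the real ones):

# pipelines.py - illustrative sketch only, not the actual pipeline code
import sqlite3

class SQLiteStorePipelineInfos(object):

    def open_spider(self, spider):
        # database file name is an assumption
        self.conn = sqlite3.connect('papers.db')
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS infos ('
            'link_id TEXT, title TEXT, authors TEXT, affiliations TEXT, '
            'keywords TEXT, date_received TEXT, date_accepted TEXT, '
            'date_published TEXT, abstract TEXT)')

    def process_item(self, item, spider):
        self.conn.execute(
            'INSERT INTO infos VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)',
            (item['link_id'], item['title'], item['authors'],
             item['affiliations'], item['keywords'], item['date_received'],
             item['date_accepted'], item['date_published'], item['abstract']))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()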

On every machine I've used, I get the same output:

2017-03-20 19:38:55 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: ScrapMSOMJournal)
2017-03-20 19:38:55 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'ScrapMSOMJournal.spiders', 'CONCURRENT_REQUESTS_PER_DOMAIN': 1, 'SPIDER_MODULES': ['ScrapMSOMJournal.spiders'], 'RETRY_TIMES': 0, 'BOT_NAME': 'ScrapMSOMJournal', 'COOKIES_ENABLED': False}
2017-03-20 19:38:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-03-20 19:38:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-03-20 19:38:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-03-20 19:38:55 [scrapy.middleware] INFO: Enabled item pipelines:
['ScrapMSOMJournal.pipelines.SQLiteStorePipelineInfos']
2017-03-20 19:38:55 [scrapy.core.engine] INFO: Spider opened
2017-03-20 19:38:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-03-20 19:38:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-03-20 19:38:56 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://pubsonline.informs.org/doi/abs/10.1287/msom.2015.0518?cookieSet=1> from <GET http://pubsonline.informs.org/doi/abs/10.1287/msom.2015.0518>

1 Answer:

Answer 0 (score: 0):

Try changing the user_agent; see the docs for how to do this in Scrapy. By default, Scrapy uses something like "scrapy v1.3" as its user_agent string, which is very easy to detect.

Also check out Mozilla's official page on user agent strings here.

In short, try setting this in settings.py:
USER_AGENT = "Mozilla/5.0 (Windows NT x.y; Win64; x64; rv:10.0) Gecko/20100101 Firefox/10.0"
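The same user agent override can also go into the spider's custom_settings, alongside the COOKIES_ENABLED override already used above; a minimal sketch of that variant:

from scrapy import Spider

class PaperInfoSpider(Spider):
    name = "infos"

    custom_settings = {
        # present a browser user agent instead of Scrapy's default one
        'USER_AGENT': ('Mozilla/5.0 (Windows NT x.y; Win64; x64; rv:10.0) '
                       'Gecko/20100101 Firefox/10.0'),
        'COOKIES_ENABLED': False,
    }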