Scrapy can't crawl any page: "TCP connection timed out: 110: Connection timed out."

Date: 2017-12-17 13:46:52

Tags: python python-3.x web-scraping scrapy

I'm new to programming.

I cannot scrape content from certain domains belonging to the same website.

For example, I can scrape it.example.com, es.example.com and pt.example.com, but when I try the same thing on fr.example.com or us.example.com, I get:

2017-12-17 14:20:27 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6025
2017-12-17 14:21:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:38 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://fr.example.com/robots.txt> (failed 1 times): TCP connection timed out: 110: Connection timed out.
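
(Not from the original post: one quick way to tell whether this is a network-level block rather than a Scrapy problem is to open a plain TCP connection to the affected host from the same machine. The sketch below reuses the placeholder hostnames from the question.)

import socket

# Compare a working host against the failing one; a timeout here means the
# connection is being dropped before Scrapy is even involved.
for host in ("it.example.com", "fr.example.com"):
    try:
        with socket.create_connection((host, 443), timeout=10):
            print(host, "-> TCP connection OK")
    except OSError as exc:
        print(host, "-> failed:", exc)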

Here is the spider, some.py:

import scrapy
import itertools


class SomeSpider(scrapy.Spider):
    name = 'some'
    # allowed_domains should list bare domains, not full URLs
    allowed_domains = ['fr.example.com']

    def start_requests(self):
        categories = ['thing1', 'thing2', 'thing3']
        base = "https://fr.example.com/things?t={category}&p={index}"

        # One request per (category, page) combination, pages 1 to 10
        for category, index in itertools.product(categories, range(1, 11)):
            yield scrapy.Request(base.format(category=category, index=index))

    def parse(self, response):
        response.selector.remove_namespaces()
        info1 = response.css("span.info1").extract()
        info2 = response.css("span.info2").extract()

        # Pair up the two lists and yield one item per pair
        for item in zip(info1, info2):
            scraped_info = {
                'info1': item[0],
                'info2': item[1],
            }

            yield scraped_info
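
(Side note, not in the original post: the spider can also be launched from a short script with a lower download timeout, so that failed connections surface after seconds instead of minutes. This sketch assumes the class above is importable from some.py.)

from scrapy.crawler import CrawlerProcess

from some import SomeSpider  # the spider defined above, assuming it lives in some.py

# Equivalent to `scrapy crawl some`, but with a shorter download timeout
process = CrawlerProcess(settings={"DOWNLOAD_TIMEOUT": 30})
process.crawl(SomeSpider)
process.start()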

What I have tried:

  1. Running the spider from a different IP (same problem with the same domains).

  2. Adding an IP pool (did not work).

  3. Found somewhere on Stack Overflow: in settings.py, set

    USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'

  4. ROBOTSTXT_OBEY = False (a combined settings.py sketch for points 3 and 4 follows this list).

  5. Any ideas are welcome!
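
For reference, points 3 and 4 could be combined in settings.py roughly as below; the DOWNLOAD_TIMEOUT, RETRY_TIMES and DOWNLOAD_DELAY values are my own additions, not something the question reports trying:

# settings.py (sketch)

USER_AGENT = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/55.0.2883.95 Safari/537.36')

ROBOTSTXT_OBEY = False     # skip the robots.txt request that is timing out

DOWNLOAD_TIMEOUT = 30      # default is 180 s; fail faster while debugging
RETRY_TIMES = 2            # how many times to retry a failed request
DOWNLOAD_DELAY = 1         # small delay in case the host rate-limits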

1 Answer:

Answer 0 (score: 0)

Try accessing the page with the requests package instead of Scrapy, to see whether it works at all.

import requests

# requests needs the scheme; without "https://" this raises MissingSchema
url = 'https://fr.example.com'

response = requests.get(url)
print(response.text)
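
If that plain call also hangs, adding an explicit timeout and the browser User-Agent string from point 3 may help narrow things down (both parameters are my additions, not part of the original answer):

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/55.0.2883.95 Safari/537.36'
}

try:
    # timeout makes the failure visible after 30 s instead of hanging
    response = requests.get('https://fr.example.com', headers=headers, timeout=30)
    print(response.status_code)
except requests.exceptions.RequestException as exc:
    print('Request failed:', exc)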