I tried to crawl a local HTML file stored on my desktop with the code below, but I hit the following error before the crawl even starts: "No such file or directory: '/robots.txt'".
[Scrapy command]
$ scrapy crawl test -o test01.csv
[Scrapy spider]
import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = []
    start_urls = ['file:///Users/Name/Desktop/test/test.html']
[Error]
2018-11-16 01:57:52 [scrapy.core.engine] INFO: Spider opened
2018-11-16 01:57:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-16 01:57:52 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2018-11-16 01:57:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET file:///robots.txt> (failed 1 times): [Errno 2] No such file or directory: '/robots.txt'
2018-11-16 01:57:56 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET file:///robots.txt> (failed 2 times): [Errno 2] No such file or directory: '/robots.txt'
Answer 0 (score: 1)
When I run Scrapy against local files, I never specify allowed_domains.
Try removing that line and see whether the crawl works; a sketch of the spider with it removed follows below.
In your error, Scrapy is testing the "empty" domain you gave it.
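For reference, here is a minimal sketch of the spider with that line removed. The parse callback is a hypothetical addition just to make the example complete, and the ROBOTSTXT_OBEY line uses Scrapy's documented setting for skipping the robots.txt request that appears in the error log:

import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    # allowed_domains removed entirely, as suggested above
    start_urls = ['file:///Users/Name/Desktop/test/test.html']

    # ROBOTSTXT_OBEY is a standard Scrapy setting; False stops the
    # <GET file:///robots.txt> request shown in the error log.
    custom_settings = {'ROBOTSTXT_OBEY': False}

    def parse(self, response):
        # Hypothetical callback: extract the page <title> and yield it
        # as an item so the -o test01.csv export has something to write.
        yield {'title': response.css('title::text').get()}

Run it with the same command as before (scrapy crawl test -o test01.csv).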