Scrapy spider crawled 0 pages

Posted: 2021-06-15 22:18:51

Tags: python scrapy

I'm learning Scrapy. I worked through the tutorial at https://realpython.com/blog/python/web-scraping-with-scrapy-and-mongodb/ and everything went fine. Then I started a new, simple project to extract some data, following along with a YouTube video and doing the same things, and this is the output I get.

--- Here is the code ---

import scrapy

class PostsSpider(scrapy.Spider):
    name = "posts"
    star_url = [
        "https://www.zyte.com/blog/"
    ]
    def parse(self, response):
        for post in response.css("div.oxy-post"):
            yield {
                "title": post.css(".oxy-post-title::text").get(),
                "author": post.css(".oxy-post-meta-author::text").get(),
                "date": post.css(".oxy-post-image-date-overlay::text").get()

            }
        next_page = response.css("a.next_page-numbers::attr(href)").get()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

--- Here is the log ---

(venv) C:\Users\taka\PycharmProjects\pythonProject\SCRAPY\postscrape>scrapy crawl posts -o posts.json
2021-06-16 07:17:20 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: postscrape)
2021-06-16 07:17:20 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 21.2.0, Python 3
.9.5 (tags/v3.9.5:0a7dcbd, May  3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1k  25 Mar 2021), cryptography 3.4.7, Platfo
rm Windows-10-10.0.19043-SP0
2021-06-16 07:17:20 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-06-16 07:17:20 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'postscrape',
 'NEWSPIDER_MODULE': 'postscrape.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['postscrape.spiders']}
2021-06-16 07:17:20 [scrapy.extensions.telnet] INFO: Telnet Password: 3d3de3400c214e60
2021-06-16 07:17:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2021-06-16 07:17:21 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-06-16 07:17:21 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-06-16 07:17:21 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-06-16 07:17:21 [scrapy.core.engine] INFO: Spider opened
2021-06-16 07:17:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-06-16 07:17:21 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-06-16 07:17:21 [scrapy.core.engine] INFO: Closing spider (finished)
2021-06-16 07:17:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.002995,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 6, 15, 22, 17, 21, 195804),
 'log_count/INFO': 10,
 'start_time': datetime.datetime(2021, 6, 15, 22, 17, 21, 192809)}
2021-06-16 07:17:21 [scrapy.core.engine] INFO: Spider closed (finished)

I tried using XPath instead of CSS selectors, but the result was exactly the same.

Thanks! Any help is appreciated!

2 Answers:

Answer 0 (score: 0)

This looks like just a typo. Try changing "star_url" to "start_urls".
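To see why the typo produces "Crawled 0 pages" with no error at all, here is a minimal sketch of the mechanism (a simplified stand-in for Scrapy's real Spider base class, not the actual library code): Scrapy's default start_requests() reads self.start_urls, so a misnamed star_url attribute is simply ignored and the spider has nothing to crawl.

```python
class Spider:
    # Scrapy's base Spider defines start_urls as an empty list;
    # the default start_requests() iterates over it.
    start_urls = []

    def start_requests(self):
        # Simplified: the real method yields scrapy.Request objects.
        return [url for url in self.start_urls]


class TypoSpider(Spider):
    # Typo: the base class never looks at this attribute,
    # so the inherited empty start_urls is used instead.
    star_url = ["https://www.zyte.com/blog/"]


class FixedSpider(Spider):
    start_urls = ["https://www.zyte.com/blog/"]


print(len(TypoSpider().start_requests()))   # 0 -> "Crawled 0 pages", spider closes immediately
print(len(FixedSpider().start_requests()))  # 1 -> one request scheduled
```

Because the attribute is just silently unused, the crawl finishes instantly with finish_reason 'finished', exactly as in the log above.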

Answer 1 (score: 0)

You should also remove the trailing slash (/) from start_urls, otherwise it will throw an exception.

Use:

start_urls = ["https://www.zyte.com/blog"]

instead of

star_url = ["https://www.zyte.com/blog/"]