How to write scraped data into a CSV file in Scrapy?

Date: 2017-01-06 16:51:41

Tags: python csv web-scraping scrapy web-crawler

I am trying to scrape a website by extracting its sub-links and their titles, and then saving the extracted titles and their associated links into a CSV file. I run the following code; the CSV file gets created, but it stays empty. Any help?

My spider.py file looks like this:

from scrapy import cmdline
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class HyperLinksSpider(CrawlSpider):
    name = "linksSpy"
    allowed_domains = ["some_website"]
    start_urls = ["some_website"]
    rules = (Rule(LinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        items = []
        for link in LinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item = ExtractlinksItem()
            for sel in response.xpath('//tr/td/a'):
                item['title'] = sel.xpath('/text()').extract()
                item['link'] = sel.xpath('/@href').extract()
            items.append(item)
            return items

cmdline.execute("scrapy crawl linksSpy".split())

My pipelines.py is:

import csv

class ExtractlinksPipeline(object):

    def __init__(self):
        self.csvwriter = csv.writer(open('Links.csv', 'wb'))

    def process_item(self, item, spider):
        self.csvwriter.writerow((item['title'][0]), item['link'][0])
        return item
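(As an aside, the `writerow` call above has a misplaced parenthesis: it passes `item['title'][0]` alone as the row and `item['link'][0]` as a second positional argument. A minimal standalone sketch of the corrected write, with plain dicts standing in for the scraped items and the `Links.csv` filename taken from the question:)

```python
import csv

# Plain dicts stand in for the scraped ExtractlinksItem objects.
items = [
    {"title": ["Example page"], "link": ["http://example.com/page"]},
]

# In Python 3, open CSV files in text mode with newline='' (not 'wb').
with open("Links.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for item in items:
        # Pass title and link together as ONE row (a single sequence).
        writer.writerow([item["title"][0], item["link"][0]])

# Show what was written.
with open("Links.csv", newline="") as f:
    print(f.read().strip())
```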

My items.py is:

import scrapy

class ExtractlinksItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    link = scrapy.Field()

I have also changed my settings.py:

ITEM_PIPELINES = {'extractLinks.pipelines.ExtractlinksPipeline': 1}

1 Answer:

Answer 0 (score: 0):

To export all scraped data, Scrapy has built-in functionality called Feed Exports. In short, all you need are two settings in your settings.py file: FEED_FORMAT, the format in which the feed should be saved (csv in your case), and FEED_URI, the location where the feed should be saved, e.g. ~/my_feed.csv.
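For example, in settings.py (the feed filename here is arbitrary):

```python
# settings.py
FEED_FORMAT = 'csv'       # export scraped items as CSV
FEED_URI = 'my_feed.csv'  # path where the feed file is written
```

With these settings, Scrapy serializes every item the spider yields into the feed file, so no custom CSV-writing pipeline is needed.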

My related answer covers it in greater detail with a use case:
https://stackoverflow.com/a/41473241/3737009