Scrapy not collecting emails properly

Asked: 2015-07-09 12:56:02

Tags: python web-scraping web-crawler scrapy

I am using Scrapy to collect some data, and everything works fine except the email extraction part. For some reason, the email column in the .csv file is blank, or only a few emails are extracted. I have tried tweaking download_delay and CLOSESPIDER_ITEMCOUNT, but it hasn't helped. Any help is greatly appreciated.
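For reference, the two settings mentioned above are usually configured like this (a minimal sketch; the values here are illustrative, not taken from the original post):

```python
# Throttling settings, typically placed in settings.py or in a spider's
# custom_settings attribute. Values below are example choices.
custom_settings = {
    "DOWNLOAD_DELAY": 2,          # seconds to wait between requests to the same site
    "CLOSESPIDER_ITEMCOUNT": 50,  # stop the spider after scraping this many items
}
```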

import re
import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["hanford.craigslist.org"]
    start_urls = [
        "http://hanford.craigslist.org/search/cto?min_auto_year=1980&min_price=3000"
    ]

    BASE_URL = 'http://hanford.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/sdo/cto/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0])
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item
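The `parse_attr` step above hinges on pulling the posting id out of the listing URL with a regex. A quick standalone check of that extraction, using a hypothetical craigslist-style URL (the id is made up for illustration):

```python
import re

# Hypothetical posting URL of the form the spider expects: .../<id>.html
sample_url = "http://hanford.craigslist.org/cto/5098838725.html"

# \w+ matches the run of word characters directly before ".html";
# the slash before the id is not a word character, so it delimits the match.
match = re.search(r"(\w+)\.html", sample_url)
item_id = match.group(1) if match else None
print(item_id)  # -> 5098838725
```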

1 Answer:

Answer 0 (score: 1):

First of all, a quote from the Terms of Use as a warning:

    USE. You agree not to use or provide software (other than general purpose web browsers and email clients, or software expressly licensed by us) or services that interact or interoperate with CL, e.g. for downloading, uploading, posting, flagging, emailing, search, or mobile use. Robots, spiders, scripts, scrapers, crawlers, etc. are prohibited, as are misleading, unsolicited, illegal, and/or spam postings/email. You agree not to collect users' personal and/or contact information ("PI").

A couple of things need to be fixed here:

  • The contact information is located under reply/hnf/cto/, not reply/sdo/cto/
  • Specify the User-Agent and X-Requested-With headers

The complete code that works for me:

import re
from urlparse import urljoin  # Python 2; on Python 3 use: from urllib.parse import urljoin

import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["hanford.craigslist.org"]
    start_urls = [
        "http://hanford.craigslist.org/search/cto?min_auto_year=1980&min_price=3000"
    ]

    BASE_URL = 'http://hanford.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = urljoin(self.BASE_URL, link)
            yield scrapy.Request(absolute_url,
                                 callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = urljoin(self.BASE_URL, "reply/hnf/cto/" + item_id)

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0])
            return scrapy.Request(url,
                                  meta={'item': item},
                                  callback=self.parse_contact,
                                  headers={"X-Requested-With": "XMLHttpRequest",
                                           "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36"})

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item
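Note that the corrected code builds absolute URLs with `urljoin` instead of plain string concatenation. Craigslist listing hrefs are root-relative (e.g. a path starting with "/"), so naive concatenation with the base URL produces a malformed double slash, while `urljoin` resolves the path correctly. A quick illustration (Python 3 import shown; the link value is a made-up example):

```python
from urllib.parse import urljoin  # `from urlparse import urljoin` on Python 2

BASE_URL = 'http://hanford.craigslist.org/'
link = "/cto/5098838725.html"  # hypothetical root-relative href from a listing page

# Naive concatenation keeps both slashes:
print(BASE_URL + link)          # -> http://hanford.craigslist.org//cto/5098838725.html

# urljoin resolves the root-relative path against the base:
print(urljoin(BASE_URL, link))  # -> http://hanford.craigslist.org/cto/5098838725.html
```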