Scrapy: Scraping a very specific set of URLs

Date: 2015-03-28 01:40:25

Tags: python scrapy

I'm trying to scrape Yahoo stock pages for a school project, but I can't figure out how to crawl a set of pages whose links follow one very specific pattern. The goal is to iterate over each stock by varying one part of the URL, like this:

Starting URL = ["https://ca.finance.yahoo.com/q/hp?s=BMO.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"]

The next URLs would look like this:

#Canadian Imperial (note the "CM"):
"https://ca.finance.yahoo.com/q/hp?s=CM.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"

#Blackberry (note the "BB"):
"https://ca.finance.yahoo.com/q/hp?s=BB.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"

And so on...

In other words, the only thing that changes is the ticker symbol between "hp?s=" and ".TO&a".

I'm wondering whether this is even possible: the trailing portion of the URL has to stay the same for every page I need to visit, and unfortunately Yahoo's pages don't link out to the other stocks.

It would be preferable if I could do this using Scrapy's Rule and SgmlLinkExtractor.

Any help is greatly appreciated!

Thanks!

Current Scrapy code:

from scrapy.spider import Spider
from scrapy.selector import Selector
from dirbot.items import Website
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor


class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["ca.finance.yahoo.com"]
    start_urls = [
        "https://ca.finance.yahoo.com/q/hp?s=BMO.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"
    ]

    rules = [
        Rule(LinkExtractor(allow=r"/q/hp\?s=\w+\.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"), follow=True)
        ]

    def parse(self, response):

        item = Website()
        item['name'] = response.xpath('//div[@class="title"]/h2/text()').extract()

        print item['name']

4 answers:

Answer 0 (score: 1):

Here is an example of what I was talking about in the comment I left.

import urllib

company_symbol = ["ACGL", "AFSI", "AGII", "AGNC", "ANAT", "ARCP", "ASBC", "ASPS", "BANF", "BBCN", "BGCP", "BNCL", "BOKF", "BPOP", "BRKL", "CACC", "CATY", "CBOE", "CBSH", "CFFN", "CHFC", "CINF", "CME ", "COLB", "CVBF", "ERIE", "ESGR", "ETFC", "EWBC", "EZPW", "FCFS", "FCNC", "FFBC", "FFIN", "FITB", "FMBI", "FMER", "FNFG", "FNGN", "FSRV", "FULT", "GBCI", "GLPI", "GLRE", "HBAN", "HBHC", "HLSS", "HOMB", "IBKC", "IBKR", "IBOC", "IPCC", "ISBC", "KRNY", "LPLA", "MBFI", "MHLD", "MKTX", "MTGE", "NAVG", "NBTB", "NDAQ", "NFBK", "NPBC", "NTRS", "NWBI", "ORIT", "OZRK", "PACW", "PBCT", "PCH ", "PNFP", "PRAA", "PVTB", "ROIC", "SAFT", "SBNY", "SBRA", "SCBT", "SEIC", "SIGI", "SIVB", "SLM ", "STFC", "SUSQ", "TCBI", "TFSL", "TRMK", "TROW", "UBSI", "UMBF", "UMPQ", "VRTS", "WABC", "WAFD", "WETF", "WRLD", "WTFC", "Z", "ZION"]

for company in company_symbol:
    # build the quote URL for each ticker; strip() guards the padded
    # entries such as "CME " in the list above
    url = 'http://finance.google.com/finance/info?client=ig&q={0}:{1}'.format(company.strip(), 'NASDAQ')
    nasdaq = urllib.urlopen(url)
    text = nasdaq.read()
    # append every response to a single output file
    with open('nasdaq.txt', 'a') as output:
        output.write(text)

This code serves as an example of one way to vary a URL and perform an action for each resulting page.
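
For what it's worth, that endpoint (long since retired) returned a JSON array prefixed with //. A rough parsing sketch, with the assumption that the 'l' field carried the last trade price:

import json

def parse_quote_response(raw):
    # strip the "//" prefix Google prepended before decoding the JSON
    return json.loads(raw.lstrip().lstrip('/'))

# usage sketch:
# data = parse_quote_response(text)
# print data[0]['l']  # assumed field name for the last trade price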

Answer 1 (score: 1):

Define a rule to follow the links that match the pattern:

rules = [
    Rule(LinkExtractor(allow=r"/q/hp\?s=\w+\.\w+&a=\d+&b=\d+&c=\d+&d=\d+&e=\d+&f=\d+&g=m"), follow=True)
]

That said, I'm not sure you really need to check all of the URL parameters here. A simplified version:

rules = [
    Rule(LinkExtractor(allow=r"/q/hp\?s=\w+\.\w+"), follow=True)
]

And don't forget the imports:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
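
Putting the pieces together, a minimal sketch of a complete spider (the spider name is made up; the Website item is the one from the question). Two details matter: the rules attribute is only honoured by CrawlSpider, not the plain Spider the question subclasses, and the callback must not be named parse, since CrawlSpider uses that method internally:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from dirbot.items import Website


class StockSpider(CrawlSpider):
    name = "stocks"
    allowed_domains = ["ca.finance.yahoo.com"]
    start_urls = [
        "https://ca.finance.yahoo.com/q/hp?s=BMO.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"
    ]

    rules = [
        # follow every historical-prices link and hand it to the callback
        Rule(LinkExtractor(allow=r"/q/hp\?s=\w+\.\w+"),
             callback="parse_stock", follow=True),
    ]

    def parse_stock(self, response):
        item = Website()
        item['name'] = response.xpath('//div[@class="title"]/h2/text()').extract()
        yield item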

Answer 2 (score: 1):

If you only need to scrape a predefined set of quotes over a given time period, the logic is as follows:

  1. Prepare a list of the quotes you are interested in, e.g. ['ABC', 'XYZ', 'LOL', ...].
  2. Use a basic scrapy.Spider.
  3. Define a start_requests() method and yield a sequence of requests from it.
  4. Example implementation:

    # -*- coding: utf-8 -*-
    import scrapy
    from scrapy.http import Request
    
    
    class QuotesSpider(scrapy.Spider):
    
        name = "quotes"
        allowed_domains = ["ca.finance.yahoo.com"]
        quotes = ["BMO", "CM", "BB"]
        url_template = ("https://ca.finance.yahoo.com/q/hp?s=%s.TO"
                        "&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m")
    
        def start_requests(self):
            for quote in self.quotes:
                url = self.url_template % quote
                yield Request(url)
    
        def parse(self, response):
            pass  # process each historical-prices page here
    

But if you need quote data for every TSX ticker, then I'd suggest scraping the symbols from the available listings first and feeding them into the example above; a rough sketch of that two-step approach follows. Crawling the whole of ca.finance.yahoo.com is obviously a bad idea.
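
A rough sketch of that two-step approach; the listings URL and the XPath are hypothetical placeholders, not a real page's layout:

import scrapy
from scrapy.http import Request


class TsxSpider(scrapy.Spider):
    name = "tsx"
    # placeholder URL for a page that lists TSX ticker symbols
    start_urls = ["http://example.com/tsx-listings"]
    url_template = ("https://ca.finance.yahoo.com/q/hp?s=%s.TO"
                    "&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m")

    def parse(self, response):
        # placeholder XPath; adjust it to the real listing page's markup
        for symbol in response.xpath('//td[@class="symbol"]/text()').extract():
            yield Request(self.url_template % symbol.strip(),
                          callback=self.parse_quote)

    def parse_quote(self, response):
        pass  # process one stock's historical-prices page here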

Answer 3 (score: 0):

If you have a list of the stocks whose Yahoo pages you want to load, you can build the list of Yahoo URLs like this:

url_template = "https://ca.finance.yahoo.com/q/hp?s={}.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"

stocks = ['CM', 'BB']
urls = [url_template.format(stock) for stock in stocks]

I haven't used Scrapy, so I'm not sure whether this is exactly what you need.
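
If it helps, a list built this way can be dropped straight into a basic Scrapy spider as its start_urls. A minimal sketch (hypothetical spider name, reusing the Website item and XPath from the question):

import scrapy
from dirbot.items import Website  # the item class from the question

url_template = "https://ca.finance.yahoo.com/q/hp?s={}.TO&a=02&b=2&c=2005&d=02&e=2&f=2015&g=m"
stocks = ['CM', 'BB']


class QuoteListSpider(scrapy.Spider):
    name = "quote_list"
    start_urls = [url_template.format(stock) for stock in stocks]

    def parse(self, response):
        # same extraction as the question's spider
        item = Website()
        item['name'] = response.xpath('//div[@class="title"]/h2/text()').extract()
        yield item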
