I did some searching, like this one, but couldn't find what I need. I'm stuck: I can't get the results page to work or load... or anything else, for that matter. I'm looking for some insight on this.
I was able to get the spider past the disclaimer page (I think; I'm not even 100% sure how to check whether that succeeded). But on the search page I can't figure out what to do. My attempt is below. This is also my first post on Stack Overflow, so apologies if I mess up the code formatting.
from scrapy.spider import Spider
from scrapy.http import FormRequest
from time import sleep


class ccSpider(Spider):
    name = "courtsSpider"
    allowed_domains = ["courts.state.md.us"]
    start_urls = ["http://casesearch.courts.state.md.us"]

    def parse(self, response):
        self.log('\n\n[Parse is Starting...]')
        print response.url
        if "I have read" in response.body:
            print "Disclaimer Page Accessed\n\n"
        else:
            print "Disclaimer Page not Accessed\n\n"
            return
        sleep(1)
        # Submit the disclaimer form to get through to the search page.
        return FormRequest.from_response(response,
                                         formname='main',
                                         formdata={'disclaimer': 'Y'},
                                         callback=self.parseSearchPage)

    def parseSearchPage(self, response):
        self.log('\n\n[Accessing Search Criteria Page...]')
        print response.url
        if "Default is person" in response.body:
            print "Search Page Accessed\n\n"
        else:
            print "Search Page not Accessed\n\n"
            return
        sleep(1)
        # Submit the search form with a name to look up.
        return FormRequest.from_response(response,
                                         formname='inquiryForm',
                                         formdata={'lastName': 'SMITH',
                                                   'firstName': 'JOHN',
                                                   # 'company': 'N',
                                                   # 'middleName': '',
                                                   # 'exactMatch': 'N',
                                                   # 'site': '00',
                                                   # 'courtSystem': 'B',
                                                   # 'filingStart': '',
                                                   # 'filingEnd': '',
                                                   # 'filingData': '',
                                                   # 'caseId': ''
                                                   },
                                         callback=self.parseResultsPages)

    def parseResultsPages(self, response):
        self.log('\n\n[Accessing Search Results Page...]')
        print response.url
        if "items found" in response.body:
            print "Results Page Accessed\n\n"
        else:
            print "Results Page not Accessed\n\n"
            print "Title of Page: " + response.xpath('//title/text()').extract()[0].strip()
            return
        # The print below should be giving me the search-results page title... I think.
        print response.xpath('//title/text()').extract()[0].strip()
Answer 1 (score: 0)
You probably need to maintain the session cookie across requests. Scrapy sends requests with cookies attached. See this related answer: Scrapy - how to manage cookies/sessions