Webcrawler is skipping URLs

Date: 2016-04-02 04:53:02

Tags: ruby nokogiri rest-client

I'm writing a program that scans sites for SQL injection vulnerabilities. I happen to know that several of the sites it finds are vulnerable and return SQL syntax errors, but when I run the program it skips right over them: it never reports finding them and never saves them to the output file. The program is being used for testing purposes only, and all of the site owners are aware of the vulnerability.

Source:

require 'mechanize'
require 'nokogiri'
require 'rest-client'
require 'timeout'

def get_urls
  info("Searching for possible SQL vulnerable sites.")
  @agent = Mechanize.new
  page = @agent.get('http://www.google.com/')
  google_form = page.form('f')
  google_form.q = "#{SEARCH}"
  url = @agent.submit(google_form, google_form.buttons.first)
  url.links.each do |link|
    # Google wraps each result in a redirect link like /url?q=<target>&sa=...
    if link.href.to_s =~ /url.q/
      str = link.href.to_s
      str_list = str.split(%r{=|&})
      urls = str_list[1]
      # Skip Google's cached copies of pages.
      next if str_list[1].split('/')[2] == "webcache.googleusercontent.com"
      urls_to_log = urls.gsub("%3F", '?').gsub("%3D", '=')
      success("Site found: #{urls_to_log}")
      # Append a trailing quote so a later request can provoke a SQL syntax error.
      File.open("#{PATH}/temp/SQL_sites_to_check.txt", "a+") {|s| s.puts("#{urls_to_log}'")}
    end
  end
  info("Possible vulnerable sites dumped into #{PATH}/temp/SQL_sites_to_check.txt")
end

def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |parse|
    begin
      Timeout::timeout(5) do
        parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}"))
      end
    rescue Timeout::Error, RestClient::ResourceNotFound,
           RestClient::SSLCertificateNotVerified, Errno::ECONNABORTED,
           Mechanize::ResponseCodeError, RestClient::InternalServerError => e
      if e
        warn("URL: #{parse.chomp} failed with error: [#{e}] dumped to non_exploitable.txt")
        File.open("#{PATH}/lib/non_exploitable.txt", "a+"){|s| s.puts(parse)}
      else
        success("SQL syntax error discovered in URL: #{parse.chomp} dumped to SQL_VULN.txt")
        File.open("#{PATH}/lib/SQL_VULN.txt", "a+"){|vuln| vuln.puts(parse)}
      end
    end
  end
end

Example output:

[22:49:29 INFO]Checking if sites are vulnerable.
[22:49:53 WARNING]URL: http://www.police.bd/content.php?id=275' failed with error: [execution expired] dumped to non_exploitable.txt

The file containing the URLs:

http://www.bible.com/subcat.php?id=2'
http://www.cidko.com/pro_con.php?id=3'
http://www.slavsandtat.com/about.php?id=25'
http://www.police.bd/content.php?id=275'
http://www.icdcprage.org/index.php?id=10'
http://huawei.com/en/plugin.php?id=hwdownload'
https://huawei.com/en/plugin.php?id=unlock'
https://facebook.com/profile.php?id'
http://www.footballclub.com.au/index.php?id=43'
http://www.mesrs.qc.ca/index.php?id=1525'

As you can see, the program skips the first three URLs and goes straight to the fourth one. Why?

What am I doing wrong here?

1 Answer:

Answer 0 (score: 1):

I'm not sure the rescue block is where it should be. You don't do anything with what you fetch in parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}")); for the first three URLs the request probably just works, so no exception is raised and nothing is printed. Add some output after that line to confirm they are being fetched. Note also that inside the rescue block e always refers to the exception that was raised, so it is always truthy and the else branch that logs successes can never run.
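As an illustration, here is a minimal sketch of that restructure. It assumes the asker's PATH constant and the info/success/warn logging helpers, and the scan of the response body for the string "SQL syntax" is only a guess at the intended success test:

require 'nokogiri'
require 'rest-client'
require 'timeout'

def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |line|
    url = line.chomp
    begin
      # Fetch inside the timeout and keep the response so it can be inspected.
      body = Timeout.timeout(5) { RestClient.get(url) }
      page = Nokogiri::HTML(body.to_s)
      # Success path: this runs only when the request worked, so fetched
      # URLs are no longer silently skipped.
      if page.text =~ /SQL syntax/i
        success("SQL syntax error discovered in URL: #{url} dumped to SQL_VULN.txt")
        File.open("#{PATH}/lib/SQL_VULN.txt", "a+") { |vuln| vuln.puts(url) }
      else
        info("Fetched #{url}, no SQL syntax error found.")
      end
    rescue Timeout::Error, RestClient::ResourceNotFound,
           RestClient::SSLCertificateNotVerified, Errno::ECONNABORTED,
           RestClient::InternalServerError => e
      # Failure path: only actual request errors end up here.
      warn("URL: #{url} failed with error: [#{e}] dumped to non_exploitable.txt")
      File.open("#{PATH}/lib/non_exploitable.txt", "a+") { |s| s.puts(url) }
    end
  end
end

With the success check moved out of the rescue block, every URL produces some output, which makes it easy to tell whether a site was fetched cleanly, matched the test, or failed with an error.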
