Download a page's HTML source over HTTP in Python

Date: 2010-10-16 16:33:49

Tags: python http

Hi there, I'd like to know whether it's possible to connect to an HTTP host (google.com, for example) and download the source of the web page?

Thanks in advance.

5 answers:

Answer 0 (score: 13)

Use urllib2 to download the page.

Google will block this request because it blocks all bots. Add a user agent to the request:

import urllib2
user_agent = 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_4; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.63 Safari/534.3'
headers = { 'User-Agent' : user_agent }
req = urllib2.Request('http://www.google.com', None, headers)
response = urllib2.urlopen(req)
page = response.read()
response.close() # it's always safe to close an open connection
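In Python 3, urllib2 was split into urllib.request, but the same idea carries over. A hedged sketch of the equivalent fetch; a throwaway local server is included so the example runs without network access, and the User-Agent string is just an example value:

```python
# Python 3 sketch of the urllib2 example above; the local server and
# User-Agent value are illustrative stand-ins.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
req = Request(url, headers={"User-Agent": "Mozilla/5.0 (example)"})
with urlopen(req) as response:   # context manager closes the connection
    page = response.read()       # bytes, not str, in Python 3

server.shutdown()
print(page)
```

Against a real site you would pass the site's URL instead of the local one; everything else stays the same.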

You can also use pycurl:

import pycurl

class ContentCallback:
    def __init__(self):
        self.contents = ''

    def content_callback(self, buf):
        # pycurl calls this once per chunk of the response body
        self.contents = self.contents + buf

t = ContentCallback()
curlObj = pycurl.Curl()
curlObj.setopt(curlObj.URL, 'http://www.google.com')
curlObj.setopt(curlObj.WRITEFUNCTION, t.content_callback)
curlObj.perform()
curlObj.close()
print t.contents

Answer 1 (score: 7)

You can use the urllib2 module:

import urllib2
url = "http://somewhere.com"
page = urllib2.urlopen(url)
data = page.read()
print data

See the documentation for more examples.
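In Python 3, the same fetch lives in urllib.request, and failures surface as urllib.error.URLError. A hedged sketch with basic error handling; `fetch` is a hypothetical helper, and the URL deliberately points at a closed local port to demonstrate the error path without touching the network:

```python
# Python 3 sketch: urllib2 became urllib.request / urllib.error.
from urllib.error import URLError
from urllib.request import urlopen

def fetch(url):
    try:
        with urlopen(url, timeout=10) as page:
            return page.read()
    except URLError:
        return None  # connection refused, DNS failure, HTTP error, ...

data = fetch("http://127.0.0.1:1/")  # nothing listens on port 1 -> None
print(data)
```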

Answer 2 (score: 2)

The docs for httplib (low level) and urllib (high level) can help get you started. Pick whichever suits you better.
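For the low-level route, here is a sketch using http.client (the Python 3 name for httplib). It fetches from a small local test server so it runs without network access; the server and its response body are illustrative only:

```python
# Low-level http.client sketch against a throwaway local server.
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"low level"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")          # issue the request ourselves
resp = conn.getresponse()
status, body = resp.status, resp.read()
conn.close()
server.shutdown()
print(status, body)
```

Note how much bookkeeping (request line, response object, connection lifetime) the higher-level urllib API handles for you.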

Answer 3 (score: 1)

Using the requests package:

import requests

url = 'https://www.google.com/'

# html is a bytes object containing the raw HTML source
html = requests.get(url).content

Or using urllib:

from urllib.request import urlopen

url = 'https://www.google.com/'

# html is a bytes object containing the raw HTML source
html = urlopen(url).read()
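Both snippets give you the body as bytes. To work with it as text, decode it, ideally using the charset the server declares in its Content-Type header. A minimal sketch with a hard-coded sample body, assuming UTF-8:

```python
# Decoding a bytes HTML body to str; the sample bytes stand in for
# what requests.get(url).content or urlopen(url).read() would return.
html = b"<html><body>caf\xc3\xa9</body></html>"
text = html.decode("utf-8")  # check the response charset in practice
print(text)
```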

Answer 4 (score: 0)

Here is another way to solve this, using mechanize. I found that it gets past a website's bot-checking system. I commented out set_all_readonly because for some reason it wasn't recognized in mechanize.

import mechanize
url = 'http://www.example.com'

br = mechanize.Browser()
#br.set_all_readonly(False)    # allow everything to be written to
br.set_handle_robots(False)   # ignore robots
br.set_handle_refresh(False)  # can sometimes hang without this
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]  # or simply [('User-agent', 'Firefox')]
response = br.open(url)
print response.read()      # the text of the page
response1 = br.response()  # get the response again
print response1.read()     # can apply lxml.html.fromstring()
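If all you need from mechanize is the custom User-agent header, the standard library can do that too. A rough stdlib analogue of br.addheaders using urllib.request (which, unlike mechanize, never consults robots.txt, so there is no set_handle_robots equivalent to worry about); the User-Agent value is just an example:

```python
# Sketch: setting a default User-agent on a urllib.request opener,
# analogous to mechanize's br.addheaders.
from urllib.request import build_opener

opener = build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0 (example)')]
# opener.open(url) would now send this User-agent with every request
print(opener.addheaders)
```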