How do I make urllib2 requests through Tor in Python?

Asked: 2009-07-08 06:22:08

Tags: python tor

I am trying to crawl websites with a scraper written in Python. I want to integrate Tor with Python, that is, I want to crawl the site anonymously through Tor.

I tried doing the following, but it does not seem to work: I checked my IP and it is still the same as before I started using Tor. I verified this from Python.

import urllib2
proxy_handler = urllib2.ProxyHandler({"tcp":"http://127.0.0.1:9050"})
opener = urllib2.build_opener(proxy_handler)
urllib2.install_opener(opener)
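
For reference, the IP check mentioned above can be done with something like this (a minimal sketch, assuming an external IP-echo service such as http://icanhazip.com/):

import urllib2

# Whatever this prints should change once traffic actually goes through Tor.
print urllib2.urlopen('http://icanhazip.com/').read()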

11 Answers:

Answer 0 (score: 21)

You are trying to connect to the SOCKS port - Tor rejects any non-SOCKS traffic there. You can connect through a middleman - Privoxy - using port 8118.

Example:

proxy_support = urllib2.ProxyHandler({"http" : "127.0.0.1:8118"})
opener = urllib2.build_opener(proxy_support) 
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
print opener.open('http://www.google.com').read()

Also note the dictionary passed to ProxyHandler: there is no http:// prefix, just ip:port.

Answer 1 (score: 9)

pip install PySocks

Then:

import socket
import socks
import urllib2

ipcheck_url = 'http://checkip.amazonaws.com/'

# Actual IP.
print(urllib2.urlopen(ipcheck_url).read())

# Tor IP.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 9050)
socket.socket = socks.socksocket
print(urllib2.urlopen(ipcheck_url).read())

Using only the urllib2.ProxyHandler from https://stackoverflow.com/a/2015649/895245 fails with:

Tor is not an HTTP Proxy
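
For reference, the failing HTTP-proxy-only approach looks roughly like this (a sketch using the same IP-check URL as above):

import urllib2

opener = urllib2.build_opener(
    urllib2.ProxyHandler({'http': '127.0.0.1:9050'}))
# Tor's SOCKS port answers the plain HTTP request with the 501 error
# quoted above, so this open() raises urllib2.HTTPError.
opener.open('http://checkip.amazonaws.com/').read()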

Mentioned at: How can I use a SOCKS 4/5 proxy with urllib2?

Tested on Ubuntu 15.10, Tor 0.2.6.10, Python 2.7.10.

Answer 2 (score: 2)

Using Privoxy as an HTTP proxy in front of Tor works for me - here is a crawler template:

import urllib2
import httplib

from BeautifulSoup import BeautifulSoup
from time import sleep

class Scraper(object):
    def __init__(self, options=None, args=None):
        # fall back to the local Privoxy instance if no proxy is given
        if options is None or options.proxy is None:
            proxy = "http://localhost:8118/"
        else:
            proxy = options.proxy
        self._open = self._get_opener(proxy)

    def _get_opener(self, proxy):
        proxy_handler = urllib2.ProxyHandler({'http': proxy})
        opener = urllib2.build_opener(proxy_handler)
        return opener.open

    def get_soup(self, url):
        soup = None
        while soup is None:
            try:
                request = urllib2.Request(url)
                request.add_header('User-Agent', 'foo bar useragent')
                soup = BeautifulSoup(self._open(request))
            except (httplib.IncompleteRead, httplib.BadStatusLine,
                    urllib2.HTTPError, ValueError, urllib2.URLError), err:
                sleep(1)
        return soup

class PageType(Scraper):
    _URL_TEMPL = "http://foobar.com/baz/%s"

    def items_from_page(self, url):
        nextpage = None
        soup = self.get_soup(url)

        items = []
        for item in soup.findAll("foo"):
            items.append(item["bar"])
            nextpage = item["href"]

        return nextpage, items

    def get_items(self):
        nextpage, items = self.items_from_page(self._URL_TEMPL % "start.html")
        while nextpage is not None:
            nextpage, newitems = self.items_from_page(self._URL_TEMPL % nextpage)
            items.extend(newitems)
        return items

pt = PageType()
print pt.get_items()

Answer 3 (score: 2)

Here is code to download a file in Python using the Tor proxy: (updated URL)

import urllib2

url = "http://www.disneypicture.net/data/media/17/Donald_Duck2.gif"

proxy = urllib2.ProxyHandler({'http': '127.0.0.1:8118'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break

    file_size_dl += len(buffer)
    f.write(buffer)
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8)*(len(status)+1)
    print status,

f.close()

Answer 4 (score: 2)

The following code works 100% in Python 3.4.

(You need to keep the Tor Browser open while using this code.)

This script connects to Tor through SOCKS5, fetches the IP from checkip.dyn.com, changes identity and re-sends the request to get a new IP (it loops 10 times).

You need to install the appropriate libraries to get this working. (Enjoy, and don't abuse it.)

import socks
import socket
import time
from stem.control import Controller
from stem import Signal
import requests
from bs4 import BeautifulSoup
err = 0
counter = 0
url = "checkip.dyn.com"
with Controller.from_port(port = 9151) as controller:
    try:
        controller.authenticate()
        socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9150)
        socket.socket = socks.socksocket
        while counter < 10:
            r = requests.get("http://checkip.dyn.com")
            soup = BeautifulSoup(r.content, "html.parser")
            print(soup.find("body").text)
            counter = counter + 1
            #wait till next identity will be available
            controller.signal(Signal.NEWNYM)
            time.sleep(controller.get_newnym_wait())
    except requests.HTTPError:
        print("Could not reach URL")
        err = err + 1
print("Used " + str(counter) + " IPs and got " + str(err) + " errors")

Answer 5 (score: 2)

The following solution works for me in Python 3. Adapted from CiroSantilli's answer:

With urllib (name of urllib2 in Python 3):

import socks
import socket
from urllib.request import urlopen

url = 'http://icanhazip.com/'

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 9150)
socket.socket = socks.socksocket

response = urlopen(url)
print(response.read())

With requests:

import socks
import socket
import requests

url = 'http://icanhazip.com/'

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 9150)
socket.socket = socks.socksocket

response = requests.get(url)
print(response.text)

With Selenium + PhantomJS:

from selenium import webdriver

url = 'http://icanhazip.com/'

service_args = [ '--proxy=localhost:9150', '--proxy-type=socks5', ]
phantomjs_path = '/your/path/to/phantomjs'

driver = webdriver.PhantomJS(
    executable_path=phantomjs_path, 
    service_args=service_args)

driver.get(url)
print(driver.page_source)
driver.close()

Note: If you are planning to use Tor often, consider making a donation to support their awesome work!

Answer 6 (score: 1)

Perhaps you are having some network connectivity problems? The script above worked for me (I substituted a different URL - I used http://stackoverflow.com/ - and got the expected page):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd" >
 <html> <head>

<title>Stack Overflow</title>        
<link rel="stylesheet" href="/content/all.css?v=3856">

(etc.)

Answer 7 (score: 0)

Tor is a SOCKS proxy. Connecting to it directly with the example you cite fails with "urlopen error Tunnel connection failed: 501 Tor is not an HTTP Proxy". As others have mentioned, you can get around this with Privoxy.

Alternatively, you can also use PycURL or SocksiPy. For examples of using both, see:

https://stem.torproject.org/tutorials/to_russia_with_love.html
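
For instance, a minimal PycURL sketch that talks to Tor's SOCKS port directly could look like this (an illustrative sketch assuming the default SOCKS port 9050 and an IP-echo URL, not the tutorial's exact code):

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://icanhazip.com/')
c.setopt(pycurl.PROXY, '127.0.0.1')
c.setopt(pycurl.PROXYPORT, 9050)
# SOCKS5_HOSTNAME also resolves DNS through Tor, avoiding DNS leaks.
c.setopt(pycurl.PROXYTYPE, pycurl.PROXYTYPE_SOCKS5_HOSTNAME)
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()
c.close()
print(buf.getvalue())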

Answer 8 (score: 0)

You can use torify.

Run your program with:

torify python your_script.py

Answer 9 (score: 0)

I thought I would share a solution that worked for me (Python 3, Windows 10):

Step 1: Enable the Tor ControlPort at 9151.

The Tor service runs with its SOCKS port on the default 9150 and its ControlPort on 9151. You should be able to see the local addresses 127.0.0.1:9150 and 127.0.0.1:9151 when you run netstat -an.

[go to windows terminal]
cd ...\Tor Browser\Browser\TorBrowser\Tor
tor --service remove
tor --service install -options ControlPort 9151
netstat -an 

Step 2: The Python script is as follows.

# library to launch and kill Tor process
import os
import subprocess

# library for Tor connection
import socket
import socks
import http.client
import time
import requests
from stem import Signal
from stem.control import Controller

# library for scraping
import csv
import urllib
from bs4 import BeautifulSoup

def launchTor():
    # start Tor (wait 30 sec for Tor to load)
    sproc = subprocess.Popen(r'.../Tor Browser/Browser/firefox.exe')
    time.sleep(30)
    return sproc

def killTor(sproc):
    sproc.kill()

def connectTor():
    socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9150, True)
    socket.socket = socks.socksocket
    print("Connected to Tor")

def set_new_ip():
    # disable socks server and enabling again
    socks.setdefaultproxy()
    """Change IP using TOR"""
    with Controller.from_port(port=9151) as controller:
        controller.authenticate()
        socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9150, True)
        socket.socket = socks.socksocket
        controller.signal(Signal.NEWNYM)

def checkIP():
    conn = http.client.HTTPConnection("icanhazip.com")
    conn.request("GET", "/")
    time.sleep(3)
    response = conn.getresponse()
    print('current ip address :', response.read())

# Launch Tor and connect to Tor network
sproc = launchTor()
connectTor()

# list of url to scrape
url_list = [list of all the urls you want to scrape]

for url in url_list:
    # set new ip and check ip before scraping for each new url
    set_new_ip()
    # allow some time for IP address to refresh
    time.sleep(5)
    checkIP()

    '''
    [insert your scraping code here: bs4, urllib, your usual thingy]
    '''

# remember to kill process 
killTor(sproc)

The script above will renew the IP address for every URL you want to scrape. Just make sure the sleep time is long enough for the IP to change. Last tested yesterday. Hope this helps!

Answer 10 (score: 0)

To expand on the comment above about using torify and the Tor Browser (with no need for Privoxy):

pip install PySocks
pip install pyTorify

(Install the Tor Browser and start it)

Command line usage:

python -mtorify -p 127.0.0.1:9150 your_script.py

Or built into a script:

import torify
torify.set_tor_proxy("127.0.0.1", 9150)
torify.disable_tor_check()
torify.use_tor_proxy()

# use urllib as normal
import urllib.request
req = urllib.request.Request("http://....")
req.add_header("Referer", "http://...") # etc
res = urllib.request.urlopen(req)
html = res.read().decode("utf-8")

Note: the Tor Browser uses port 9150, not 9050.