Logging into a website with Python 3 and BeautifulSoup

Date: 2016-02-20 10:07:22

Tags: python login web-scraping beautifulsoup

I need some help with a small project for learning Python web scraping.

from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import http.cookiejar

base_url = "https://login.yahoo.com/config/login?.src=flickrsignin&.pc=8190&.scrumb=0&.pd=c%3DH6T9XcS72e4mRnW3NpTAiU8ZkA--&.intl=in&.lang=en&mg=1&.done=https%3A%2F%2Flogin.yahoo.com%2Fconfig%2Fvalidate%3F.src%3Dflickrsignin%26.pc%3D8190%26.scrumb%3D0%26.pd%3Dc%253DJvVF95K62e6PzdPu7MBv2V8-%26.intl%3Din%26.done%3Dhttps%253A%252F%252Fwww.flickr.com%252Fsignin%252Fyahoo%252F%253Fredir%253Dhttps%25253A%25252F%25252Fwww.flickr.com%25252F"
login_action = "/config/login?.src=flickrsignin&.pc=8190&.scrumb=0&.pd=c%3DH6T9XcS72e4mRnW3NpTAiU8ZkA--&.intl=in&.lang=en&mg=1&.done=https%3A%2F%2Flogin.yahoo.com%2Fconfig%2Fvalidate%3F.src%3Dflickrsignin%26.pc%3D8190%26.scrumb%3D0%26.pd%3Dc%253DJvVF95K62e6PzdPu7MBv2V8-%26.intl%3Din%26.done%3Dhttps%253A%252F%252Fwww.flickr.com%252Fsignin%252Fyahoo%252F%253Fredir%253Dhttps%25253A%25252F%25252Fwww.flickr.com%25252F"

# Build an opener that keeps cookies across requests and sends a browser-like User-Agent.
cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
opener.addheaders = [('User-agent',
    ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) '
     'AppleWebKit/535.1 (KHTML, like Gecko) '
     'Chrome/13.0.782.13 Safari/535.1'))
]


# Encode the form fields; passing data to opener.open() turns the request into a POST.
login_data = urllib.parse.urlencode({
    'login-username' : 'username',
    'login-passwd' : 'password',
    'remember_me' : True
})
login_data = login_data.encode('ascii')
login_url = base_url + login_action
response = opener.open(login_url, login_data)
print(response.read())

I have tried logging in, but the output I get back is just the login page HTML. Can someone help me log in to this site?
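
A minimal debugging sketch, reusing the opener and base_url above and assuming the login form is the first form on the page: fetch the page and print each input's name, so the POSTed fields can be checked against what the form actually expects.

# Sketch only: list the login form's input names and any hidden token fields.
page = opener.open(base_url).read()
soup = BeautifulSoup(page, 'html.parser')
form = soup.find('form')                 # assumes the login form is the first form
if form is not None:
    for inp in form.find_all('input'):
        print(inp.get('name'), inp.get('type'), inp.get('value'))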

2 Answers:

Answer 0 (score: 2)

Try reading up a bit more on beautifulsoup. User[email] is just the username input name and User[password] is the password input name. Note that the following code can only log in to sites that are not protected by a crsf_token (i.e., unsafe ones):

import requests
from requests.packages.urllib3 import add_stderr_logger
import urllib
from bs4 import BeautifulSoup
from urllib.error import HTTPError
from urllib.request import urlopen
import re, random, datetime

random.seed(datetime.datetime.now())
add_stderr_logger()

session = requests.Session()
# url should be the site's login URL
per_session = session.post(url, data={'User[email]': 'your_email',
                                      'User[password]': 'your_password'})

# you can now associate requests with beautifulsoup
try:
    # it is assumed that by now you are logged in, so we can use .get and fetch any page of your choice
    bsObj = BeautifulSoup(session.get(url).content, 'lxml')
except HTTPError as e:
    print(e)
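
Where the target site does have csrf protection, a hedged variation of the same idea is to fetch the login page first, pull the hidden token out with BeautifulSoup, and send it together with the credentials. The URL and the csrf_token field name below are assumptions; only the User[email]/User[password] names come from the answer above.

import requests
from bs4 import BeautifulSoup

LOGIN_URL = 'https://example.com/login'   # placeholder: the site's real login URL

session = requests.Session()
# Fetch the login page first so the session picks up cookies and the hidden token.
login_page = BeautifulSoup(session.get(LOGIN_URL).content, 'html.parser')
token_input = login_page.find('input', {'name': 'csrf_token'})   # assumed field name

payload = {'User[email]': 'your_email', 'User[password]': 'your_password'}
if token_input is not None:
    payload['csrf_token'] = token_input.get('value', '')

response = session.post(LOGIN_URL, data=payload)
# On success the session's cookies carry the login, so later session.get()
# calls return pages as the logged-in user.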

Answer 1 (score: 1)

You are not storing the session token you receive when you log in. Rather than doing this manually, you can use mechanize to handle the login session.

Here is a good article explaining how to do it.
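
A rough sketch of what the mechanize approach looks like; the login URL and field names here are placeholders, and the login form is assumed to be the first form on the page.

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                      # many login pages disallow robots
br.addheaders = [('User-agent', 'Mozilla/5.0')]

br.open('https://example.com/login')             # placeholder login URL
br.select_form(nr=0)                             # assume the login form is the first form
br['username'] = 'your_username'                 # names must match the form's input names
br['password'] = 'your_password'
response = br.submit()                           # cookies / the session token stay with the Browser

print(response.geturl())                         # should now point at the post-login page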
