Python script to check a website for a tag

时间:2010-04-09 19:42:21

标签: python html linux scripting crontab

I'm trying to figure out how to write a website-monitoring script (eventually a cron job) that opens a given URL and checks whether a tag exists; if the tag is missing, or doesn't contain the expected data, the script should write some data to a log file or send an email.

The tag might be something like this, or something similar.

Does anyone have any ideas?

3 Answers:

Answer 0 (score: 5)

Your best bet is to check out BeautifulSoup. Something like this:

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("http://yoursite.com")
soup = BeautifulSoup(page)

# See the docs on how to search through the soup. I'm not sure what
# you're looking for so my example stops here :)

After that, emailing it or logging it is pretty standard fare.
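For example, here is a rough sketch of the search step, assuming (purely hypothetically) that the tag you care about is a <div> with id="status" whose text should contain "OK":

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("http://yoursite.com")
soup = BeautifulSoup(page)

# Hypothetical target: a <div id="status"> that should contain "OK"
tag = soup.find('div', attrs={'id': 'status'})
if tag is None or 'OK' not in str(tag):
    print 'tag missing or does not contain the expected data'
else:
    print 'tag found with expected data'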

Answer 1 (score: 2)

Here is some sample code (untested) that logs and sends mail:

#!/usr/bin/env python
import logging
import urllib2
import smtplib

# Log config
logging.basicConfig(filename='/tmp/yourscript.log', level=logging.INFO)

def check_content(data):
    # Your BeautifulSoup logic here
    return content_found

def send_mail(message_body):
    server = 'localhost'
    recipients = ['you@yourdomain.com']
    sender = 'script@yourdomain.com'
    message = 'From: %s\nSubject: script result\n\n%s' % (sender, message_body)
    session = smtplib.SMTP(server)
    session.sendmail(sender, recipients, message)
    session.quit()

# Open requested url
url = "http://yoursite.com/tags/yourTag"
data = urllib2.urlopen(url)

if check_content(data):
    # Report to log
    logging.info('Content found')
else:
    # Send mail
    send_mail('Content not found')

I would code the check_content() function using BeautifulSoup.
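As a minimal sketch of what check_content() might look like, assuming (as an example only) that the tag in question is a <div> with id="status":

from BeautifulSoup import BeautifulSoup

def check_content(data):
    # data is the file-like object returned by urllib2.urlopen()
    soup = BeautifulSoup(data)
    # Hypothetical tag: <div id="status"> -- adjust to whatever you actually need
    tag = soup.find('div', attrs={'id': 'status'})
    return tag is not None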

Answer 2 (score: 1)

The following (untested) code uses urllib2 to grab the page and re to search it.

import urllib2, re

pageString = urllib2.urlopen('**insert url here**').read()
m = re.search(r'**insert regex for the tag you want to find here**', pageString)
if m is None:
    pass  # take action for NOT found here
else:
    pass  # take action for found here

The following (untested) code uses pycurl and StringIO to grab the page and re to search it.

import pycurl, re, StringIO

b = StringIO.StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, '**insert url here**')
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.perform()
c.close()
m = re.search(r'**insert regex for the tag you want to find here**', b.getvalue())
if m is None:
    pass  # take action for NOT found here
else:
    pass  # take action for found here
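Purely as an illustration of the regex placeholder above, a pattern for a hypothetical <div id="status"> tag could look like the sketch below (regexes are brittle against real-world HTML, so BeautifulSoup is usually the safer route):

import re, urllib2

# Hypothetical example: look for <div id="status">...</div> and capture its contents
pageString = urllib2.urlopen('http://yoursite.com').read()
m = re.search(r'<div\s+id="status"[^>]*>(.*?)</div>', pageString, re.DOTALL)
if m is None:
    print 'tag not found'
else:
    print 'tag contents:', m.group(1)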