Bulk/batch DNS lookups in Python?

Asked: 2015-12-19 22:54:42

Tags: python dns

I have a script that fetches DNS (CNAME, MX, NS) data in the following way:

import dns.exception
from dns import resolver
...

def resolve_dns(url):
    response_dict = {}
    print("\nResolving DNS for %s" % url)

    # catch only DNS errors rather than a bare except,
    # so unrelated exceptions are not silently swallowed
    try:
        response_dict['CNAME'] = list(resolver.query(url, 'CNAME'))
    except dns.exception.DNSException:
        pass

    try:
        response_dict['MX'] = list(resolver.query(url, 'MX'))
    except dns.exception.DNSException:
        pass

    try:
        response_dict['NS'] = list(resolver.query(url, 'NS'))
    except dns.exception.DNSException:
        pass

    return response_dict

This function is called sequentially for one URL after another. If possible, I would like to speed this up by fetching the data for several URLs concurrently.

Is there a way to make the script above process a batch of URLs (perhaps returning a list of dict objects, one dict per URL)?

1 Answer:

Answer 0 (score: 5)

You can push the work into a thread pool. Your resolve_dns makes 3 requests one after another, so I wrote a slightly more generic worker that performs a single query, and used itertools.product to generate all (host, qname) combinations. In the thread pool I set chunksize to 1 to reduce batching inside the pool; batching would increase total execution time if some queries take a long time.
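To illustrate the fan-out, itertools.product yields one (host, qname) tuple per query, so for example two hosts and three record types would produce six units of work:

```python
import itertools

# every (host, qname) combination becomes one unit of work for the pool
pairs = list(itertools.product(['example.com'], ('CNAME', 'MX', 'NS')))
print(pairs)
# [('example.com', 'CNAME'), ('example.com', 'MX'), ('example.com', 'NS')]
```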

import dns.exception
from dns import resolver
import itertools
import collections
import multiprocessing.pool

def worker(arg):
    """Query DNS for (hostname, qname) and return (qname, [rdata, ...])."""
    # unpack outside the try block so qname is always bound in the except clause
    url, qname = arg
    try:
        rdatalist = list(resolver.query(url, qname))
        return qname, rdatalist
    except dns.exception.DNSException:
        return qname, []

def resolve_dns(url_list):
    """Given a list of hosts, return a dict that maps each qname to the
    rdata records returned across all hosts.
    """
    response_dict = collections.defaultdict(list)
    # create a pool for the queries, but cap the number of threads
    pool = multiprocessing.pool.ThreadPool(processes=min(len(url_list) * 3, 60))
    # run all combinations of hosts and qnames
    for qname, rdatalist in pool.imap(
            worker,
            itertools.product(url_list, ('CNAME', 'MX', 'NS')),
            chunksize=1):
        response_dict[qname].extend(rdatalist)
    pool.close()
    pool.join()
    return response_dict

url_list = ['example.com', 'stackoverflow.com']
result = resolve_dns(url_list)
for qname, rdatalist in result.items():
    print(qname)
    for rdata in rdatalist:
        print('   ', rdata)
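The same fan-out/aggregate pattern can also be sketched with the stdlib concurrent.futures module instead of multiprocessing.pool. The hypothetical `fake_query` stub below stands in for `resolver.query` so the sketch runs without network access; swap in the real resolver call to use it:

```python
import itertools
import collections
from concurrent.futures import ThreadPoolExecutor

def fake_query(host, qname):
    """Hypothetical stand-in for dns.resolver.query; returns dummy records."""
    return ['%s-record-for-%s' % (qname, host)]

def worker(arg):
    host, qname = arg
    try:
        return qname, list(fake_query(host, qname))
    except Exception:
        return qname, []

def resolve_dns(url_list, max_threads=60):
    response_dict = collections.defaultdict(list)
    # one job per (host, qname) combination, as in the answer above
    jobs = list(itertools.product(url_list, ('CNAME', 'MX', 'NS')))
    with ThreadPoolExecutor(max_workers=max(1, min(len(jobs), max_threads))) as pool:
        for qname, rdatalist in pool.map(worker, jobs):
            response_dict[qname].extend(rdatalist)
    return dict(response_dict)
```

Calling `resolve_dns(['example.com'])` with the stub returns a dict keyed by 'CNAME', 'MX', and 'NS'; the `with` block also takes care of shutting the pool down.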