Why does this daemon thread block?

Time: 2016-03-11 08:47:05

Tags: python-2.7 subprocess python-multithreading

Why does the following code block at cc.start()? crawler.py contains code similar to http://doc.scrapy.org/en/latest/topics/practices.html#run-from-script:
import scrapy
import threading
from subprocess import Popen, PIPE

def worker():
    crawler = Popen('python crawler.py', stdout=PIPE, stderr=PIPE, shell=True)
    while True:
        line = crawler.stderr.readline()
        print(line.strip())

cc = threading.Thread(target=worker())
cc.setDaemon(True)
cc.start()
print "Here" # This is not printed
# Do more stuff

crawler.py contains the following code:

from scrapy.crawler import CrawlerProcess
import scrapy

class MySpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        # minimal stub so the callback exists; the question omits its body
        yield {'title': response.css('h1 a::text').extract_first()}

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(MySpider)
process.start()  # the script will block here until the crawling is finished

1 Answer:

Answer 0 (score: 1):

threading.Thread takes a callable (e.g. a function name) as its target argument. Written as below, the function is actually called while the thread instance is being created, so the main thread enters worker's infinite readline loop and never gets past this line:

cc = threading.Thread(target=worker()) 
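The same mistake is easier to see with a function that returns. This minimal sketch (the function f is illustrative, not from the question) shows where each call actually runs:

import threading

def f():
    print("f runs in: %s" % threading.current_thread().name)

# target=f() calls f immediately, in the main thread; its return value
# (None) is what Thread receives, so the started thread does nothing
t1 = threading.Thread(target=f())   # prints "f runs in: MainThread"
t1.start()

# target=f hands over the function object; the call happens in the new thread
t2 = threading.Thread(target=f)
t2.start()                          # prints the worker thread's name, e.g. Thread-2
t2.join()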

What you need to do is pass the function itself, so that the thread is what calls it:
cc = threading.Thread(target=worker)
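With that change cc.start() is reached and "Here" prints immediately. A minimal sketch of the corrected script, assuming the same crawler.py; the EOF check is an addition not in the original, without which the loop would spin on empty strings forever once the subprocess exits:

import threading
from subprocess import Popen, PIPE

def worker():
    crawler = Popen('python crawler.py', stdout=PIPE, stderr=PIPE, shell=True)
    while True:
        line = crawler.stderr.readline()
        if not line:  # readline() returns '' at EOF, i.e. the crawler has exited
            break
        print(line.strip())

cc = threading.Thread(target=worker)  # pass the function object, do not call it
cc.setDaemon(True)
cc.start()
print "Here"  # now printed right away
# Do more stuff

Because cc is a daemon thread, the interpreter will not wait for it: if the remaining work in the main program finishes quickly, the process exits and the crawler's output is cut off. Add cc.join() at the end if the full output matters.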