Python multiprocessing IOError: [Errno 232] The pipe is being closed

Date: 2016-06-06 20:25:57

Tags: python multiprocessing deadlock

The following code works through all 750 connections and prints the results queue, but after that it deadlocks. If I assign results to a plain multiprocessing.Queue() instead, the program deadlocks immediately.

import itertools
import multiprocessing

def function(jobs, results):
    while True:
        job = jobs.get()
        if job is None:  # sentinel: no more work
            break
        # do stuff with job
        results.put(stuff)

if __name__ == '__main__':
    devices = {}
    with open('file.txt', 'r') as f:
        projectFile = f.readlines()

        jobs = multiprocessing.Queue()
        results = multiprocessing.Manager().Queue()

        # one process per line, capped at 750
        pool = [multiprocessing.Process(target=function, args=(jobs, results))
                for ip in itertools.islice(projectFile, 0, 750)]

        for p in pool:
            p.start()

        for n in projectFile:
            jobs.put(n.strip())

        for p in pool:
            jobs.put(None)  # one sentinel per worker

        count = 0
        for p in pool:
            p.join()
            count += 1
            print count

        print results

Does anyone see anything that could be causing the deadlock? I'm not sure how to proceed, since everything seems to check out when I look it over. Any help would be appreciated!
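Not part of the original question, but for anyone reproducing this: a reduced, runnable version of the same jobs-queue/sentinel pattern (4 workers and a trivial doubling job standing in for the real work; all names here are illustrative) joins cleanly at small scale, which suggests the sheer number of processes, rather than the queue pattern itself, is worth investigating:

```python
import multiprocessing

def worker(jobs, results):
    # Drain the jobs queue until the None sentinel arrives, then exit.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.put(job * 2)  # placeholder for the real work

def run(n_workers=4, n_jobs=20):
    jobs = multiprocessing.Queue()
    results = multiprocessing.Manager().Queue()
    pool = [multiprocessing.Process(target=worker, args=(jobs, results))
            for _ in range(n_workers)]
    for p in pool:
        p.start()
    for i in range(n_jobs):
        jobs.put(i)
    for _ in pool:          # one sentinel per worker
        jobs.put(None)
    for p in pool:
        p.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)

if __name__ == '__main__':
    print(run())
```

Using a Manager().Queue() for results matters here: joining a child that still has items buffered in a plain multiprocessing.Queue's feeder thread is a classic source of exactly this kind of hang.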

1 Answer:

Answer 0 (score: 1)

I think the problem is caused by creating that many processes. It isn't necessarily a deadlock; rather, it takes a long time to instantiate so many processes. I tested with threads and it clearly performed better. See the code:

import multiprocessing
import itertools
import threading

def function(jobs, results):
    while True:
        job = jobs.get()
        if job is None:  # sentinel: no more work
            break
        # do stuff with job
        results.put(stuff)

if __name__ == '__main__':
    devices = {}
    with open('file.txt', 'r') as f:
        projectFile = f.readlines()

        jobs = multiprocessing.Queue()
        results = multiprocessing.Manager().Queue()

        # threads are much cheaper to start than processes
        pool = [threading.Thread(target=function, args=(jobs, results))
                for ip in itertools.islice(projectFile, 0, 750)]

        for i, p in enumerate(pool):
            print "Started Thread Number", i  # log to verify
            p.start()

        for n in projectFile:
            jobs.put(n.strip())

        for p in pool:
            jobs.put(None)  # one sentinel per thread

        count = 0
        for p in pool:
            p.join()  # this join blocks forever if a thread dies before reaching its sentinel
            count += 1
            print count

        print results

I don't know whether this code will solve your problem, but it may at least run faster.
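Not from the original thread: if processes are really needed, another option is to cap the number of workers with multiprocessing.Pool instead of starting one process per line. This is a minimal sketch; work(), the eight-worker count, and the generated lines are illustrative stand-ins for the real per-connection job and file.txt:

```python
import multiprocessing

def work(line):
    # Stand-in for the real per-connection work; here we just strip the line.
    return line.strip()

def run(lines, n_workers=8):
    # A small, fixed pool avoids the cost of instantiating 750 processes.
    pool = multiprocessing.Pool(processes=n_workers)
    try:
        return pool.map(work, lines)  # blocks until every line is processed
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    lines = ['host%d \n' % i for i in range(750)]  # stand-in for file.txt
    print(len(run(lines)))
```

The Pool variant also removes the manual sentinel bookkeeping, since map() handles work distribution and shutdown itself.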