Queue deadlock when multiprocessing spawns multiple threads inside one process

Asked: 2018-05-25 04:45:15

Tags: python-2.7 deadlock python-multiprocessing python-multithreading

I created two processes: one spawns multiple threads that write data to a Queue, and the other reads data from that Queue. It deadlocks most of the time and only occasionally runs to completion; adding a sleep in the run method of the write module (the commented-out lines in the code) reduces the chance of deadlock. My code is below:

Environment: Python 2.7

main.py

    from multiprocessing import Process,Queue
    from write import write
    from read import read

    if __name__ == "__main__":
        record_queue = Queue()
        table_queue = Queue()

        pw = Process(target=write,args=[record_queue, table_queue])
        pr = Process(target=read,args=[record_queue, table_queue])

        pw.start()
        pr.start()
        pw.join()
        pr.join()

write.py

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def write(record_queue, table_queue):
        thread_num = 3
        pool = ThreadPoolExecutor(thread_num)
        futures = [pool.submit(run, record_queue, table_queue) for _ in range(thread_num)]
        results = [r.result() for r in as_completed(futures)]


    def run(record_queue, table_queue):
        while True:
            if table_queue.empty():
                break
            table = table_queue.get()
            # adding the sleep below reduces the chance of deadlock
            #import time
            #import random
            #time.sleep(random.randint(1, 3))
            process_with_table(record_queue, table_queue, table)

    def process_with_table(record_queue, table_queue, table):
        # shortened for the example
        for item in [x for x in range(1000)]:
            record_queue.put(item)

read.py

    from concurrent.futures import ThreadPoolExecutor, as_completed
    import threading
    import Queue

    def read(record_queue, table_queue):
        count = 0
        while True:
            item = record_queue.get()
            count += 1
            print ("item: ", item)
            if count == 4:
                break

I googled this and found similar questions on SO, but I can't see how they relate to my code, so can anyone help with my code? Thanks...

1 answer:

Answer 0 (score: 0)

I seem to have found a solution: change the run method in the write module to:

    import multiprocessing.queues
    import time

    def run(record_queue, table_queue):
        while True:
            try:
                if table_queue.empty():
                    break
                table = table_queue.get(timeout=3)
                process_with_table(record_queue, table_queue, table)
            except multiprocessing.queues.Empty:
                time.sleep(0.1)

and I never see a deadlock or blocking on the get method again.
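
For comparison, here is a minimal sketch (my own addition, not part of the original answer, assuming Python 2.7) that removes the window between empty() and get() entirely. Queues created by multiprocessing raise the Empty class defined in the standard Queue module, and get_nowait() either returns an item immediately or raises that exception, so a worker thread can never pass an empty() check and then block forever on a get() that another thread has already satisfied:

    # Sketch only, assuming Python 2.7: drop the empty() pre-check and rely on
    # a non-blocking get, so no thread can hang between "not empty" and "get".
    from Queue import Empty   # multiprocessing queues raise this same class

    def run(record_queue, table_queue):
        while True:
            try:
                table = table_queue.get_nowait()   # returns an item or raises Empty
            except Empty:
                break                              # no more tables for this worker
            process_with_table(record_queue, table_queue, table)

The answer's version keeps the empty() pre-check but caps any blocking get() at three seconds, while this sketch drops the check altogether; either way, no thread can wait on get() without bound.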