Threads stuck in join

Posted: 2017-01-05 08:03:29

Tags: python multithreading python-3.x concurrency

I'm running a thread pool that fails at random. Sometimes it works, and sometimes it gets stuck at the pool.join part of this code. I've been at this for days and can't find any difference between the runs that work and the runs that hang. Please help...

Here is the code...

def run_thread_pool(functions_list):

    # Make the Pool of workers
    pool = ThreadPool()  # left blank to default to machine number of cores

    pool.map(run_function, functions_list)

    # close the pool and wait for the work to finish
    pool.close()
    pool.join()
    return

Likewise, this code also randomly gets stuck at q.join():

def run_queue_block(methods_list, max_num_of_workers=20):
    from views.console_output_handler import add_to_console_queue

    '''
    Runs methods on threads.  Stores method returns in a list.  Then outputs that list
    after all methods in the list have been completed.

    :param methods_list: example ((method name, args), (method_2, args), (method_3, args)
    :param max_num_of_workers: The number of threads to use in the block.
    :return: The full list of returns from each method.
    '''

    method_returns = []

    log = StandardLogger(logger_name='run_queue_block')

    # lock to serialize console output
    lock = threading.Lock()

    def _output(item):
        # Make sure the whole print completes or threads can mix up output in one line.
        with lock:
            if item:
                add_to_console_queue(item)
            msg = threading.current_thread().name, item
            log.log_debug(msg)

        return

    # The worker thread pulls an item from the queue and processes it
    def _worker():
        log = StandardLogger(logger_name='_worker')

        while True:
            try:
                method, args = q.get()  # Extract and unpack callable and arguments

            except:
                # we've hit a nonetype object.
                break

            if method is None:
                break

            item = method(*args)  # Call callable with provided args and store result
            method_returns.append(item)
            _output(item)

            q.task_done()

    num_of_jobs = len(methods_list)

    if num_of_jobs < max_num_of_workers:
        max_num_of_workers = num_of_jobs

    # Create the queue and thread pool.
    q = Queue()

    threads = []
    # starts worker threads.
    for i in range(max_num_of_workers):
        t = threading.Thread(target=_worker)
        t.daemon = True  # thread dies when main thread (only non-daemon thread) exits.
        t.start()
        threads.append(t)

    for method in methods_list:
        q.put(method)

    # block until all tasks are done
    q.join()

    # stop workers
    for i in range(max_num_of_workers):
        q.put(None)
    for t in threads:
        t.join()

    return method_returns

I never know when it will work. It works most of the time, but most of the time isn't good enough. What could cause an error like this?

2 answers:

Answer 0 (score: 1)

You have to call shutdown on the concurrent.futures.ThreadPoolExecutor object, and then return the result of pool.map:

def run_thread_pool(functions_list):

    # Make the Pool of workers
    pool = ThreadPool()  # left blank to default to machine number of cores

    result = pool.map(run_function, functions_list)

    # close the pool and wait for the work to finish
    pool.shutdown()
    return result
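The answer names concurrent.futures.ThreadPoolExecutor, while the snippet above keeps the question's ThreadPool name (multiprocessing's thread pool exposes close()/join() rather than shutdown()). As a minimal sketch of the same idea written directly against concurrent.futures, assuming run_function and functions_list are the same objects as in the question:

from concurrent.futures import ThreadPoolExecutor

def run_thread_pool(functions_list):
    # No max_workers argument: the executor picks a default based on the CPU count.
    with ThreadPoolExecutor() as pool:
        # map() returns an iterator; collect it into a list before returning.
        results = list(pool.map(run_function, functions_list))
    # Leaving the with-block calls pool.shutdown(wait=True) automatically.
    return results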

I simplified the code without the Queue and the daemon Thread objects. Check whether it does what you require:

def run_queue_block(methods_list):
    from views.console_output_handler import add_to_console_queue

    '''
    Runs methods on threads.  Stores method returns in a list.  Then outputs that list
    after all methods in the list have been completed.

    :param methods_list: example ((method name, args), (method_2, args), (method_3, args)
    :return: The full list of returns from each method.
    '''

    method_returns = []

    log = StandardLogger(logger_name='run_queue_block')

    # lock to serialize console output
    lock = threading.Lock()

    def _output(item):
        # Make sure the whole print completes or threads can mix up output in one line.
        with lock:
            if item:
                add_to_console_queue(item)
            msg = threading.current_thread().name, item
            log.log_debug(msg)

        return

    # Each worker thread runs one method and records its return value.
    def _worker(method, *args, **kwargs):
        log = StandardLogger(logger_name='_worker')

        item = method(*args, **kwargs)  # Call callable with provided args and store result
        with lock:
            method_returns.append(item)
        _output(item)

    threads = []
    # start one worker thread per method
    for method, args in methods_list:
        t = threading.Thread(target=_worker, args=(method, *args))
        t.start()
        threads.append(t)

    # wait for all workers to finish
    for t in threads:
        t.join()

    return method_returns
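A hypothetical call, only to show the expected shape of methods_list (fetch_page and parse_html are placeholder names, not functions from the question):

results = run_queue_block([
    (fetch_page, ('http://example.com',)),
    (parse_html, ('<html></html>',)),
])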

Answer 1 (score: 0)

To allow the queue in your second example to join, you need to make sure every task is removed from the queue.

So, in your _worker function, mark a task as done even when it cannot be processed; otherwise the queue is never emptied and your program hangs:

def _worker():
    log = StandardLogger(logger_name='_worker')

    while True:
        try:
            method, args = q.get()  # Extract and unpack callable and arguments

        except:
            # we've hit a nonetype object.
            q.task_done()
            break

        if method is None:
            q.task_done()
            break

        item = method(*args)  # Call callable with provided args and store result
        method_returns.append(item)
        _output(item)

        q.task_done()
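The same reasoning applies when method(*args) itself raises: that worker dies without calling q.task_done(), and q.join() hangs again. A minimal sketch of one way to guard against this with try/finally, going beyond the original answer but reusing the question's q, method_returns, _output and StandardLogger names, is:

def _worker():
    log = StandardLogger(logger_name='_worker')

    while True:
        task = q.get()  # blocks until an item is available
        if task is None:
            q.task_done()
            break

        try:
            method, args = task
            item = method(*args)  # Call callable with provided args and store result
            method_returns.append(item)
            _output(item)
        except Exception as error:
            # Log the failure and keep the worker alive for the next task.
            log.log_debug((threading.current_thread().name, error))
        finally:
            q.task_done()  # always mark the task as done, even on failure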