I can't get logging to a single file to work with multiprocessing.Pool.apply_async.
I'm trying to adapt this example from the Logging Cookbook, but it only works with multiprocessing.Process. Passing the logging queue into apply_async
seems to have no effect.
I want to use a pool so that I can easily manage the number of simultaneous processes.
The following adapted example using multiprocessing.Process works fine for me, except that I don't get log messages from the main process, and I don't think it will behave well when I have 100 large jobs.
import logging
import logging.handlers
import numpy as np
import time
import multiprocessing
import pandas as pd
log_file = 'PATH_TO_FILE/log_file.log'
def listener_configurer():
    root = logging.getLogger()
    h = logging.FileHandler(log_file)
    f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
    h.setFormatter(f)
    root.addHandler(h)

# This is the listener process top-level loop: wait for logging events
# (LogRecords) on the queue and handle them, quit when you get a None for a
# LogRecord.
def listener_process(queue, configurer):
    configurer()
    while True:
        try:
            record = queue.get()
            if record is None:  # We send this as a sentinel to tell the listener to quit.
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)  # No level or filter logic applied - just do it!
        except Exception:
            import sys, traceback
            print('Whoops! Problem:', file=sys.stderr)
            traceback.print_exc(file=sys.stderr)

def worker_configurer(queue):
    h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
    root = logging.getLogger()
    root.addHandler(h)
    # send all messages, for demo; no other level or filter logic applied.
    root.setLevel(logging.DEBUG)
# This is the worker process top-level function: it logs a start message,
# sleeps for the given time, then logs a success message before terminating.
def worker_function(sleep_time, name, queue, configurer):
    configurer(queue)
    start_message = 'Worker {} started and will now sleep for {}s'.format(name, sleep_time)
    logging.info(start_message)
    time.sleep(sleep_time)
    success_message = 'Worker {} has finished sleeping for {}s'.format(name, sleep_time)
    logging.info(success_message)

def main_with_process():
    start_time = time.time()
    single_thread_time = 0.
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    workers = []
    for i in range(10):
        name = str(i)
        sleep_time = np.random.randint(10) / 2
        single_thread_time += sleep_time
        worker = multiprocessing.Process(target=worker_function,
                                         args=(sleep_time, name, queue, worker_configurer))
        workers.append(worker)
        worker.start()
    for w in workers:
        w.join()
    queue.put_nowait(None)
    listener.join()
    end_time = time.time()
    final_message = "Script execution time was {}s, but single-thread time was {}s".format(
        (end_time - start_time),
        single_thread_time
    )
    print(final_message)

if __name__ == "__main__":
    main_with_process()
But I can't get the following adaptation to work:
def main_with_pool():
    start_time = time.time()
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    pool = multiprocessing.Pool(processes=3)
    job_list = [np.random.randint(10) / 2 for i in range(10)]
    single_thread_time = np.sum(job_list)
    for i, sleep_time in enumerate(job_list):
        name = str(i)
        pool.apply_async(worker_function,
                         args=(sleep_time, name, queue, worker_configurer))
    queue.put_nowait(None)
    listener.join()
    end_time = time.time()
    print("Script execution time was {}s, but single-thread time was {}s".format(
        (end_time - start_time),
        single_thread_time
    ))

if __name__ == "__main__":
    main_with_pool()
I've tried many slight variations using multiprocessing.Manager, multiprocessing.Queue, multiprocessing.get_logger, and apply_async.get(), but nothing has worked yet.
I'd have thought there would be an off-the-shelf solution for this. Should I try Celery instead?
Thanks.
Answer 0 (score: 1)
Consider using two queues. The first queue is where you put the data for the workers. After a job completes, each worker pushes its result onto the second queue. Then consume this second queue to write the log to the file.
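A minimal sketch of that idea (the names below are illustrative, not from the question): the workers drain a task queue and push their log messages onto a second queue, which the main process then writes out to a single file.

import multiprocessing

def worker(task_queue, log_queue):
    # Pull tasks off the first queue; push log messages onto the second.
    for task in iter(task_queue.get, None):
        log_queue.put('processed task {}'.format(task))

if __name__ == '__main__':
    task_queue = multiprocessing.Queue()
    log_queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker,
                                       args=(task_queue, log_queue))
               for _ in range(3)]
    for w in workers:
        w.start()
    for task in range(10):
        task_queue.put(task)
    for _ in workers:
        task_queue.put(None)  # one sentinel per worker
    for w in workers:
        w.join()
    # All producers have exited, so drain the log queue into one file.
    with open('log_file.log', 'a') as fh:
        while not log_queue.empty():
            fh.write(log_queue.get() + '\n')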
Answer 1 (score: 1)
There are really two separate problems here, which are intertwined:

1. You cannot pass a multiprocessing.Queue() object as an argument to a Pool-based function (you can pass it to workers you start directly, but not to anything "further in", as it were).
2. You must wait for all the asynchronous workers to complete before you send the None sentinel to the listener process.

To fix the first, replace:
queue = multiprocessing.Queue(-1)
with:
queue = multiprocessing.Manager().Queue(-1)
since a manager-managed Queue() instance can be passed through.
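To illustrate the difference, here is a standalone sketch (not the question's code; the names are hypothetical): the manager-backed proxy pickles fine on its way into the pool, whereas a plain multiprocessing.Queue() in its place would make the call fail.

import multiprocessing

def child(q):
    q.put('hello from the pool')

if __name__ == '__main__':
    # Keep a reference to the manager so its server process stays alive.
    manager = multiprocessing.Manager()
    q = manager.Queue(-1)  # a picklable proxy to a real queue
    with multiprocessing.Pool(processes=1) as pool:
        # With a plain multiprocessing.Queue() this would raise a RuntimeError,
        # because such queues can only be shared between processes by inheritance.
        pool.apply_async(child, args=(q,)).get()
    print(q.get())  # -> 'hello from the pool'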
To fix the second, either collect each result from every asynchronous call, or close the pool and wait for it, e.g.:
pool.close()
pool.join()
queue.put_nowait(None)
or the more involved:
getters = []
for i, sleep_time in enumerate(job_list):
    name = str(i)
    getters.append(
        pool.apply_async(worker_function,
                         args=(sleep_time, name, queue, worker_configurer))
    )
while len(getters):
    getters.pop().get()
# optionally, close and join pool here (generally a good idea anyway)
queue.put_nowait(None)
(You should also consider replacing put_nowait with the waiting version put, rather than relying on unbounded-length queues.)
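Putting both fixes together, the question's main_with_pool could look like the following sketch (it reuses worker_function, worker_configurer, listener_process, and listener_configurer from the question unchanged):

def main_with_pool():
    start_time = time.time()
    # Fix 1: a manager-managed queue survives Pool's argument pickling.
    manager = multiprocessing.Manager()
    queue = manager.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    pool = multiprocessing.Pool(processes=3)
    job_list = [np.random.randint(10) / 2 for i in range(10)]
    single_thread_time = np.sum(job_list)
    for i, sleep_time in enumerate(job_list):
        name = str(i)
        pool.apply_async(worker_function,
                         args=(sleep_time, name, queue, worker_configurer))
    # Fix 2: let every worker finish before sending the sentinel.
    pool.close()
    pool.join()
    queue.put(None)  # a blocking put, in case the queue were ever bounded
    listener.join()
    end_time = time.time()
    print("Script execution time was {}s, but single-thread time was {}s".format(
        (end_time - start_time),
        single_thread_time
    ))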