How do I run multiple shell jobs in separate threads, and wait for all of them to finish, while they execute simultaneously?

Asked: 2019-12-03 00:35:32

Tags: python multithreading parallel-processing subprocess popen

I'm working on a script that needs to process a lot of data. I realized that the parallel component of the script doesn't actually help for instances with a huge number of individual data points, so I'm going to create temporary files and run them in parallel instead. I'm running this under qsub, so I'll allocate a set number of threads via -pe threaded $N_JOBS (4 in this example).

My end goal is to launch each process on one of the threads I've allocated, and then wait for all of the jobs to finish before continuing.

However, I've only ever run shell jobs with process = subprocess.Popen followed by process.communicate(). In the past I've had some trouble using process.wait() because of zombie processes.

How can I modify my run function so that it starts a job without waiting for it to finish, starts the next one, and then, once all of the jobs are running, waits for all of them to complete?

Let me know if this isn't clear and I can explain it better. In the example below (which may be a terrible example?), I want to use 4 separate threads (I'm not sure how to set this up, because I've only ever used joblib.Parallel for simple parallelization), where each thread runs the command echo '$THREAD' && sleep 1. The whole thing should then take about 1 second instead of 4 seconds.

I found this post: Python threading multiple bash subprocesses?, but I'm not sure how to adapt it to my situation with my run script.

import sys, subprocess, time 

# Number of jobs
N_JOBS=4

# Run command
def run(
    cmd,
    popen_kws=dict(),
    ):

    # Run
    f_stdout = subprocess.PIPE
    f_stderr = subprocess.PIPE

    # Execute the process
    process_ = subprocess.Popen(cmd, shell=True, stdout=f_stdout, stderr=f_stderr, **popen_kws) 
    # Wait until process is complete and return stdout/stderr
    stdout_, stderr_ = process_.communicate() # Use .communicate() instead of .wait() to avoid a zombie (defunct) process that hangs; timeout removed b/c it's not available in Python 2

    # Return code
    returncode_ = process_.returncode

    return {"process":process_, "stdout":stdout_, "stderr":stderr_, "returncode":returncode_}

# Commands
cmds = list(map(lambda x:"echo '{}' && sleep 1".format(x), range(1, N_JOBS+1)))
# ["echo '1'", "echo '2'", "echo '3'", "echo '4'"]

# Start time 
start_time = time.time()
results = dict()
for thread, cmd in enumerate(cmds, start=1):
    # Run command but don't wait for it to finish (Currently, it's waiting to finish)
    results[thread] = run(cmd)

# Now wait until they are all finished
print("These jobs took {} seconds\n".format(time.time() - start_time))
print("Here's the results:", *results.items(), sep="\n")
print("\nContinue with script. .. ...")

# These jobs took 4.067937850952148 seconds

# Here's the results:
# (1, {'process': <subprocess.Popen object at 0x1320766d8>, 'stdout': b'1\n', 'stderr': b'', 'returncode': 0})
# (2, {'process': <subprocess.Popen object at 0x1320547b8>, 'stdout': b'2\n', 'stderr': b'', 'returncode': 0})
# (3, {'process': <subprocess.Popen object at 0x132076ba8>, 'stdout': b'3\n', 'stderr': b'', 'returncode': 0})
# (4, {'process': <subprocess.Popen object at 0x132076780>, 'stdout': b'4\n', 'stderr': b'', 'returncode': 0})

# Continue with script. .. ...

I tried to follow the multiprocessing documentation (https://docs.python.org/3/library/multiprocessing.html), but adapting it to my situation has been really confusing:

import multiprocessing

# Run command
def run(
    cmd,
    errors_ok=False,
    popen_kws=dict(),
    ):

    # Run
    f_stdout = subprocess.PIPE
    f_stderr = subprocess.PIPE

    # Execute the process
    process_ = subprocess.Popen(cmd, shell=True, stdout=f_stdout, stderr=f_stderr, **popen_kws) 

    return process_

# Commands
cmds = list(map(lambda x:"echo '{}' && sleep 0.5".format(x), range(1, N_JOBS+1)))
# ["echo '1'", "echo '2'", "echo '3'", "echo '4'"]

# Start time 
start_time = time.time()
results = dict()
for thread, cmd in enumerate(cmds, start=1):
    # Run command but don't wait for it to finish (Currently, it's waiting to finish)
    p = multiprocessing.Process(target=run, args=(cmd,))
    p.start()
    p.join()  # NB: join() right after start() waits for this job to finish before the next one launches, so the jobs still run one at a time
    results[thread] = p

1 Answer:

Answer 0 (score: 0):

You're almost there. The simplest way to do multiprocessing is to use a multiprocessing.Pool object, as described in the introduction of the multiprocessing documentation, and then map() or starmap() your set of functions onto it. The big difference between map() and starmap() is that map() assumes your function takes a single argument (so you can pass it a simple iterable), whereas starmap() expects nested iterables of arguments.
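To make the difference concrete, here is a minimal sketch (an addition to this answer; square() and power() are made-up helpers):

from multiprocessing import Pool

def square(x):            # takes one argument per call -> use map()
    return x * x

def power(base, exp):     # takes several arguments per call -> use starmap()
    return base ** exp

if __name__ == "__main__":
    with Pool(2) as p:
        print(p.map(square, [1, 2, 3]))            # -> [1, 4, 9]
        print(p.starmap(power, [(2, 3), (3, 2)]))  # -> [8, 9]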

In your example, this will work (the run() function here is just a stub, and note that I changed its signature to a command plus a list of arguments, since passing a single string to a system call is generally a bad idea):

from multiprocessing import Pool

N_JOBS = 4

def run(cmd, *args):
    # Stub: just report what would be run instead of executing anything
    return cmd + str(args)

cmds = [
    ('echo', 'hello', 1, 3, 4),
    ('ls', '-l', '-r'),
    ('sleep', 3),
    ('pwd', '-P'),
    ('whoami',),
]

results = []
with Pool(N_JOBS) as p:
    results = p.starmap(run, cmds)  # each tuple in cmds is unpacked into run()'s arguments

for r in results:
    print(r)

You don't need to have as many jobs as commands; the subprocesses in the Pool will be reused as needed to run the functions.
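
To connect this back to the question, here is a minimal sketch (an addition, not part of the original answer) that fills in the stubbed run() so the commands are actually executed. It assumes Python 3; subprocess.run() waits for and reaps each child, so the zombie-process problem mentioned in the question doesn't come up. Because each command is a single argument list, plain map() is enough:

import subprocess, time
from multiprocessing import Pool

N_JOBS = 4

def run(cmd):
    # subprocess.run() blocks inside this worker until the child exits
    completed = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return {"stdout": completed.stdout, "stderr": completed.stderr,
            "returncode": completed.returncode}

# Rebuild the question's `echo '$THREAD' && sleep 1` jobs; `&&` is shell
# syntax, hence the explicit `sh -c` wrapper around each command string
cmds = [["sh", "-c", "echo '{}' && sleep 1".format(i)] for i in range(1, N_JOBS + 1)]

if __name__ == "__main__":
    start_time = time.time()
    with Pool(N_JOBS) as pool:
        results = pool.map(run, cmds)  # returns once every job has finished
    print("These jobs took {} seconds".format(time.time() - start_time))  # ~1s, not ~4s
    for r in results:
        print(r)

And because the question's run() already builds a Popen, the same overlap is available without a Pool at all: start every Popen first, and only afterwards communicate() with each one (again just a sketch, reusing the question's shell-string commands):

import subprocess, time

N_JOBS = 4
cmds = ["echo '{}' && sleep 1".format(i) for i in range(1, N_JOBS + 1)]

start_time = time.time()
# Phase 1: launch every job without waiting on any of them
processes = [subprocess.Popen(cmd, shell=True,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
             for cmd in cmds]
# Phase 2: wait for all of them; communicate() also reaps each child
results = [p.communicate() for p in processes]
print("These jobs took {} seconds".format(time.time() - start_time))  # ~1 second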