Python multiprocessing with the "spawn" start method does not work

Date: 2019-07-24 21:18:47

Tags: python multiprocessing

I wrote a Python class to create plots (pyplot) in parallel. It works fine on Linux, where the default start method is fork, but I ran into problems when trying it on Windows (reproducible on Linux with the spawn start method; see the code below). I always end up with this error:

Traceback (most recent call last):
  File "test.py", line 50, in <module>
    test()
  File "test.py", line 7, in test
    asyncPlotter.saveLinePlotVec3("test")
  File "test.py", line 41, in saveLinePlotVec3
    args=(test, ))
  File "test.py", line 34, in process
    p.start()
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle weakref objects

C:\Python\MonteCarloTools>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 99, in spawn_main
    new_handle = reduction.steal_handle(parent_pid, pipe_handle)
  File "C:\Users\adrian\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 82, in steal_handle
    _winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] The parameter is incorrect

I am hoping there is a way to make this code work on Windows. Here is a link to the start methods available on Linux and Windows: https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

import multiprocessing as mp
import time


def test():

    manager = mp.Manager()
    asyncPlotter = AsyncPlotter(manager.Value('i', 0))

    asyncPlotter.saveLinePlotVec3("test")
    asyncPlotter.saveLinePlotVec3("test")

    asyncPlotter.join()


class AsyncPlotter():

    def __init__(self, nc, processes=mp.cpu_count()):

        self.nc = nc
        self.pids = []
        self.processes = processes


    def linePlotVec3(self, nc, processes, test):

        self.waitOnPool(nc, processes)

        print(test)

        nc.value -= 1


    def waitOnPool(self, nc, processes):

        # Busy-wait until fewer than `processes` jobs are running,
        # then claim a slot on the shared counter
        while nc.value >= processes:
            time.sleep(0.1)
        nc.value += 1


    def process(self, target, args):

        # Use the 'spawn' context explicitly so Linux reproduces
        # the Windows behaviour
        ctx = mp.get_context('spawn')
        p = ctx.Process(target=target, args=args)
        p.start()
        self.pids.append(p)


    def saveLinePlotVec3(self, test):

        self.process(target=self.linePlotVec3,
                       args=(self.nc, self.processes, test))


    def join(self):
        for p in self.pids:
            p.join()


if __name__=='__main__':
    test()

2 Answers:

Answer 0 (Score: 2)

With the spawn start method, the Process object itself is pickled so it can be sent to the child process. In your code, the target=target argument is a bound method of AsyncPlotter. For that to be pickled, the whole asyncPlotter instance has to be pickled as well, and that includes self.manager, which apparently does not want to be pickled.
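
To see the mechanism in isolation: pickling a bound method also pickles the instance it is bound to, so a single unpicklable attribute on that instance breaks the whole thing. A minimal sketch (the Holder class and its lock attribute are made up for illustration; the lock stands in for an unpicklable attribute like a manager):

import pickle
import threading

class Holder:
    def __init__(self):
        # stand-in for an unpicklable attribute
        self.lock = threading.Lock()

    def work(self):
        pass

h = Holder()
# Pickling the bound method drags the whole instance along with it:
pickle.dumps(h.work)  # TypeError: can't pickle _thread.lock objects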

In short, keep the Manager outside of AsyncPlotter. This works on my macOS system:

def test():
    manager = mp.Manager()
    asyncPlotter = AsyncPlotter(manager.Value('i', 0))
    ...

Also, as mentioned in your comments, an asyncPlotter does not work when it is reused. I don't know the details, but it looks like this has to do with how the Value object is shared between processes. The test function would need to look like this:

def test():
    manager = mp.Manager()
    nc = manager.Value('i', 0)

    asyncPlotter1 = AsyncPlotter(nc)
    asyncPlotter1.saveLinePlotVec3("test 1")
    asyncPlotter2 = AsyncPlotter(nc)
    asyncPlotter2.saveLinePlotVec3("test 2")

    asyncPlotter1.join()
    asyncPlotter2.join()

All in all, you might want to restructure your code and use a process pool. It already takes care of what AsyncPlotter does with cpu_count and the parallel execution:

from multiprocessing import Pool, set_start_method
from random import random
import time

def linePlotVec3(test):
    time.sleep(random())
    print("test", test)

if __name__ == "__main__":
    set_start_method("spawn")
    with Pool() as pool:
        pool.map(linePlotVec3, range(20))
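
Note that Pool() defaults to os.cpu_count() worker processes, so it already provides the throttling that waitOnPool implemented by hand with the shared counter; pass Pool(processes=n) to choose a different limit.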

Or you could use a ProcessPoolExecutor to do pretty much the same thing. This example submits the tasks one at a time instead of mapping over a list:

from concurrent.futures import ProcessPoolExecutor
import multiprocessing as mp
import time
from random import random

def work(i):
    r = random()
    print("work", i, r)
    time.sleep(r)

def main():
    ctx = mp.get_context("spawn")
    with ProcessPoolExecutor(mp_context=ctx) as pool:
        for i in range(20):
            pool.submit(work, i)

if __name__ == "__main__":
    main()

Answer 1 (Score: 1)

For portability, every object passed as an argument to a function that will run in another process must be picklable.
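
A quick way to catch this up front is to try pickling each argument before handing it to a Process. A small sketch (check_picklable is a hypothetical helper written for this answer, not a standard API):

import pickle

def check_picklable(*args):
    # Try to pickle each argument the same way spawn would
    for a in args:
        try:
            pickle.dumps(a)
        except Exception as e:
            print(f"not picklable: {a!r} ({e})")

check_picklable(42, "text", lambda x: x)  # the lambda is rejected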