Killing child processes created in a Python class __init__

Asked: 2014-11-09 13:56:49

Tags: python subprocess fork kill

(New to Python and OO here, so apologies in advance if I'm doing something silly.)

I'm trying to define a Python 3 class such that when an instance is created, two child processes are also created. These children do some work in the background (sending and listening for UDP packets). The children also need to communicate with each other and with the instance (updating instance attributes based on what is received over UDP, etc.).

I'm creating my child processes with os.fork because I don't understand how to pass multiple file descriptors to child processes using the subprocess module, and that may be part of my problem.

The problem I'm running into is how to kill the child processes when the instance is destroyed. My understanding is that I shouldn't use destructors in Python, because Python is supposed to clean things up and garbage-collect objects automatically. In any case, the following code leaves the children running after it exits.

What is the correct approach here?

import os
from time import sleep

class A:
    def __init__(self):
        sfp, pts = os.pipe() # senderFromParent, parentToSender
        pfs, stp = os.pipe() # parentFromSender, senderToParent
        pfl, ltp = os.pipe() # parentFromListener, listenerToParent
        sfl, lts = os.pipe() # senderFromListener, listenerToSender
        pid = os.fork()
        if pid:
            # parent
            os.close(sfp)
            os.close(stp)
            os.close(lts)
            os.close(ltp)
            os.close(sfl)
            self.pts = os.fdopen(pts, 'w') # allow creator of A inst to
            self.pfs = os.fdopen(pfs, 'r') # send and receive messages
            self.pfl = os.fdopen(pfl, 'r') # to/from sender and
        else:                              # listener processes
            # sender or listener
            os.close(pts)
            os.close(pfs)
            os.close(pfl)
            pid = os.fork()
            if pid:
                # sender
                os.close(ltp)
                os.close(lts)
                sender(self, sfp, stp, sfl)
            else:
                # listener
                os.close(stp)
                os.close(sfp)
                os.close(sfl)
                listener(self, ltp, lts)

def sender(a, sfp, stp, sfl):
    sfp = os.fdopen(sfp, 'r') # receive messages from parent
    stp = os.fdopen(stp, 'w') # send messages to parent
    sfl = os.fdopen(sfl, 'r') # receive messages from listener
    while True:
        # send UDP packets based on messages from parent and process
        # responses from listener (some responses passed back to parent)
        print("Sender alive")
        sleep(1)

def listener(a, ltp, lts):
    ltp = os.fdopen(ltp, 'w') # send messages to parent
    lts = os.fdopen(lts, 'w') # send messages to sender
    while True:
        # listen for and process incoming UDP packets, sending some
        # to sender and some to parent
        print("Listener alive")
        sleep(1)

a = A()

Running the above produces:

Sender alive
Listener alive
Sender alive
Listener alive
...

3 Answers:

Answer 0 (score: 0)

As suggested here, you can create the child processes with the multiprocessing module and set the daemon=True flag on them.

Example:

from multiprocessing import Process

def f(name):
    # placeholder worker; the real background work would go here
    print('hello', name)

p = Process(target=f, args=('bob',))
p.daemon = True
p.start()
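The point of daemon=True is that daemon children are terminated automatically when the main process exits, which is exactly the cleanup the question is asking about. A fuller runnable sketch of that behaviour (worker and launch_daemon_worker are illustrative names, not part of the original answer):

```python
import time
from multiprocessing import Process

def worker():
    # stand-in for the sender/listener loop from the question
    while True:
        print("worker alive")
        time.sleep(1)

def launch_daemon_worker():
    p = Process(target=worker)
    p.daemon = True  # must be set before start(); child will not outlive the parent
    p.start()
    return p

if __name__ == "__main__":
    p = launch_daemon_worker()
    time.sleep(2.5)  # parent does its own work; worker prints in the background
    # no explicit kill needed: daemon children are terminated
    # automatically when the main process exits
```

Note that daemon children are terminated abruptly (they get no chance to clean up), so this suits fire-and-forget background loops like the ones in the question.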

Answer 1 (score: 0)

Actually, you should use a destructor. Python objects have a __del__ method, which is called just before the object is garbage-collected.

In your case, you should define

def __del__(self):
    ...

inside class A to send the appropriate kill signals to your child processes. And of course, don't forget to store the child PIDs in the parent process.
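A minimal sketch of that idea, assuming SIGTERM is an acceptable way to stop the children (ChildOwner and its loop body are illustrative, not the asker's class):

```python
import os
import signal
import time

class ChildOwner:
    """Fork a child in __init__, remember its PID, and terminate it in __del__."""

    def __init__(self):
        pid = os.fork()
        if pid == 0:
            # child: stand-in for the sender/listener loop
            while True:
                time.sleep(1)
        self.child_pid = pid  # parent: remember the PID for cleanup

    def __del__(self):
        # Send SIGTERM, then reap the child so it does not linger as a zombie.
        try:
            os.kill(self.child_pid, signal.SIGTERM)
            os.waitpid(self.child_pid, 0)
        except (ProcessLookupError, ChildProcessError):
            pass  # child already exited
```

Bear in mind that __del__ only runs when the object is actually collected, and may not run at all during interpreter shutdown, so an explicit cleanup method (or an atexit hook) is more dependable than relying on garbage collection.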

Answer 2 (score: 0)

There is no point trying to reinvent the wheel. subprocess can do everything you need, but multiprocessing deals in plain Python processes and is simpler here, so we'll use that.

You can use multiprocessing.Pipe to create a connection over which a pair of processes can send messages back and forth. You can make the pipe duplex, so both ends can send and receive if that's what you need. You can use multiprocessing.Manager to create a shared Namespace between processes (sharing state between the listener, sender, and parent). A word of warning when using multiprocessing's managed list, dict, and Namespace objects: any mutable object assigned to them will not have changes made to that object seen by other processes until it is reassigned to the managed object.

e.g.

namespace.attr = {}
# change below not cascaded to other processes
namespace.attr["key"] = "value"
# force change to other processes
namespace.attr = namespace.attr

If you need multiple processes writing to the same attribute, you will need synchronization to prevent one process's modification from wiping out changes made concurrently by another.
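For example, a Manager lock can guard the read-modify-write on a shared attribute (bump and run_counter_demo are names invented for this sketch):

```python
from multiprocessing import Manager, Process

def bump(namespace, lock, n):
    # read-modify-write on a managed attribute is NOT atomic; without
    # the lock, two processes can overwrite each other's updates
    for _ in range(n):
        with lock:
            namespace.counter += 1

def run_counter_demo(workers=4, increments=100):
    manager = Manager()
    ns = manager.Namespace()
    ns.counter = 0
    lock = manager.Lock()
    procs = [Process(target=bump, args=(ns, lock, increments))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return ns.counter

if __name__ == "__main__":
    print(run_counter_demo())  # 4 workers * 100 increments = 400
```

Without the `with lock:` block, the final count would usually come out below workers * increments, because concurrent increments race on the get/set round trip to the manager.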

Example code:

from multiprocessing import Process, Pipe, Manager

class Reader:

    def __init__(self, writer_conn, namespace):
        self.writer_conn = writer_conn
        self.namespace = namespace

    def read(self):
        self.namespace.msgs_recv = 0
        with self.writer_conn:
            try:
                while True:
                    obj = self.writer_conn.recv()
                    self.namespace.msgs_recv += 1
                    print("Reader got:", repr(obj))
            except EOFError:
                print("Reader has no more data to receive")

class Writer:

    def __init__(self, reader_conn, namespace):
        self.reader_conn = reader_conn
        self.namespace = namespace

    def write(self, msgs):
        self.namespace.msgs_sent = 0
        with self.reader_conn:
            for msg in msgs:
                self.reader_conn.send(msg)
                self.namespace.msgs_sent += 1

def create_child_processes(reader, writer, msgs):
    p_write = Process(target=Writer.write, args=(writer, msgs))
    p_write.start()

    # This is very important, otherwise reader will hang after writer has
    # finished. The ordering also matters: this statement must come after
    # p_write.start() but before p_read.start(). Look up how file
    # descriptors are inherited by child processes on Unix, and how any
    # open fd on the write side of a pipe keeps the read end from seeing EOF.
    writer.reader_conn.close()

    p_read = Process(target=Reader.read, args=(reader,))
    p_read.start()

    return p_read, p_write

def run_mp_pipe():

    manager = Manager()
    namespace = manager.Namespace()
    read_conn, write_conn = Pipe()

    reader = Reader(read_conn, namespace)
    writer = Writer(write_conn, namespace)

    p_read, p_write = create_child_processes(reader, writer, 
        msgs=["hello", "world", {"key", "value"}])

    print("starting")

    p_write.join()
    p_read.join()

    print("done")
    print(namespace)
    assert namespace.msgs_sent == namespace.msgs_recv

if __name__ == "__main__":
    run_mp_pipe()

Output:

starting
Reader got: 'hello'
Reader got: 'world'
Reader got: {'key', 'value'}
Reader has no more data to receive
done
Namespace(msgs_recv=3, msgs_sent=3)