Using TensorFlow's Datasets API causes the process to hang in the session destructor

Date: 2018-07-27 10:59:55

Tags: python tensorflow

Summary:

We are using TensorFlow's Dataset API. More specifically, we create a dataset from a generator function using tf.data.Dataset.from_generator.
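
For context, here is a minimal sketch of the pattern we use (the generator, shapes, and names are illustrative, not our actual pipeline):

import numpy as np
import tensorflow as tf

# Illustrative generator; the real one yields records from an external source.
def sample_generator():
    for i in range(10):
        yield np.array([i], dtype=np.float32)

dataset = tf.data.Dataset.from_generator(
    sample_generator,
    output_types=tf.float32,
    output_shapes=tf.TensorShape([1]))

# A one-shot iterator, matching the OneShotIteratorOp visible in the stack trace below.
next_element = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    print(sess.run(next_element))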

When Python garbage-collects our tf.Session object, its destructor calls into TensorFlow to delete the session (tf_session.TF_DeleteSession). That call hangs because it is trying to execute a tf.py_func, which cannot acquire Python's global interpreter lock (GIL). The function it is trying to execute appears to be our dataset's "finalize" function.

More details:

When our tf.Session object is garbage-collected in Python, its destructor (the __del__ method) hangs indefinitely. The problem appears to be this call in BaseSession:

tf_session.TF_DeleteSession(self._session)

Running lldb shows the following stack trace:

* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x0000000101855e7e libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x000000010188d662 libsystem_pthread.dylib`_pthread_cond_wait + 732
    frame #2: 0x00000001019b6cb0 libc++.1.dylib`std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 18
    frame #3: 0x000000011279a63b libtensorflow_framework.so`nsync::nsync_mu_semaphore_p_with_deadline(nsync::nsync_semaphore_s_*, timespec) + 283
    frame #4: 0x0000000112796eb7 libtensorflow_framework.so`nsync::nsync_cv_wait_with_deadline_generic(nsync::nsync_cv_s_*, void*, void (*)(void*), void (*)(void*), timespec, nsync::nsync_note_s_*) + 423
    frame #5: 0x0000000112797621 libtensorflow_framework.so`nsync::nsync_cv_wait(nsync::nsync_cv_s_*, nsync::nsync_mu_s_*) + 49
    frame #6: 0x00000001090810e3 _pywrap_tensorflow_internal.so`tensorflow::Notification::WaitForNotification() + 67
    frame #7: 0x0000000109d4d809 _pywrap_tensorflow_internal.so`tensorflow::CapturedFunction::RunInstantiated(std::__1::vector<tensorflow::Tensor, std::__1::allocator<tensorflow::Tensor> > const&, std::__1::vector<tensorflow::Tensor, std::__1::allocator<tensorflow::Tensor> >*) + 649
    frame #8: 0x0000000109cffa21 _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::GeneratorDatasetOp::Dataset::Iterator::~Iterator() + 97
    frame #9: 0x0000000109cffb8e _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::GeneratorDatasetOp::Dataset::Iterator::~Iterator() + 14
    frame #10: 0x0000000109cfd669 _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::FlatMapDatasetOp::Dataset::Iterator::~Iterator() + 105
    frame #11: 0x0000000109cfd6de _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::FlatMapDatasetOp::Dataset::Iterator::~Iterator() + 14
    frame #12: 0x00000001019e98fd libc++.1.dylib`std::__1::__shared_weak_count::__release_shared() + 43
    frame #13: 0x0000000109d0a579 _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::IteratorResource::~IteratorResource() + 169
    frame #14: 0x0000000109d0a5fe _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::IteratorResource::~IteratorResource() + 14
    frame #15: 0x000000011226db4d libtensorflow_framework.so`tensorflow::ResourceMgr::DoDelete(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 301
    frame #16: 0x000000011226dd50 libtensorflow_framework.so`tensorflow::ResourceMgr::DoDelete(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::type_index, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 192
    frame #17: 0x0000000109d0c558 _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::OneShotIteratorOp::~OneShotIteratorOp() + 104
    frame #18: 0x0000000109d0c71e _pywrap_tensorflow_internal.so`tensorflow::(anonymous namespace)::OneShotIteratorOp::~OneShotIteratorOp() + 14
    frame #19: 0x00000001122670ff libtensorflow_framework.so`tensorflow::OpSegment::Item::~Item() + 63
    frame #20: 0x0000000112267ffd libtensorflow_framework.so`tensorflow::OpSegment::RemoveHold(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 205
    frame #21: 0x000000010b880b42 _pywrap_tensorflow_internal.so`tensorflow::DirectSession::~DirectSession() + 546
    frame #22: 0x000000010b88108e _pywrap_tensorflow_internal.so`tensorflow::DirectSession::~DirectSession() + 14
    frame #23: 0x000000010935dfd3 _pywrap_tensorflow_internal.so`TF_DeleteSession + 931
    frame #24: 0x0000000109006e5a _pywrap_tensorflow_internal.so`_wrap_TF_DeleteSession(_object*, _object*) + 122
    frame #25: 0x00000001007bb688 Python`_PyCFunction_FastCallDict + 568
    frame #26: 0x00000001008443e4 Python`call_function + 612
    frame #27: 0x0000000100849d84 Python`_PyEval_EvalFrameDefault + 21892
    frame #28: 0x00000001008447cc Python`_PyFunction_FastCallDict + 828
    frame #29: 0x000000010075f984 Python`_PyObject_FastCallDict + 356
    frame #30: 0x000000010075faa0 Python`_PyObject_Call_Prepend + 208
    frame #31: 0x000000010075f8d4 Python`_PyObject_FastCallDict + 180
    frame #32: 0x00000001007d6579 Python`slot_tp_finalize + 121
    frame #33: 0x000000010089b18a Python`collect + 1418
    frame #34: 0x000000010089b8c3 Python`_PyGC_CollectIfEnabled + 99
    frame #35: 0x000000010087af57 Python`Py_FinalizeEx + 119
    frame #36: 0x000000010087b0e0 Python`Py_Exit + 16
    frame #37: 0x000000010087ef4c Python`handle_system_exit + 252
    frame #38: 0x000000010087f1a5 Python`PyErr_PrintEx + 437
    frame #39: 0x0000000100880a1d Python`PyRun_SimpleStringFlags + 125
    frame #40: 0x00000001008992a4 Python`Py_Main + 1812
    frame #41: 0x0000000100000dfe Python
    frame #42: 0x0000000100000c34 Python

It looks like the session's destructor is waiting for an op to finish. The culprit appears to be a PyFuncOp that never gets past this line:

py_threadstate = PyGILState_Ensure();

So the op appears to be trying to acquire the GIL but cannot. My hypothesis is that this py_func is the dataset's "finalize" function (from _GeneratorDataset).
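
To illustrate why that matters (a conceptual sketch only, not the actual _GeneratorDataset internals): anything wrapped with tf.py_func is an ordinary Python callable, so the op's kernel must hold the GIL before invoking it, which is exactly the PyGILState_Ensure call shown above.

import numpy as np
import tensorflow as tf

# Hypothetical finalize-style callback; the real one is created internally
# by Dataset.from_generator to clean up per-iterator generator state.
def finalize_state(token):
    return np.int64(0)

token = tf.constant(0, dtype=tf.int64)
# Running this op requires the kernel to acquire the GIL so that
# finalize_state can execute inside the Python interpreter.
finalize_op = tf.py_func(finalize_state, [token], tf.int64)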

My understanding is that the GIL should be released when Python calls tf_session.TF_DeleteSession(self._session), so the PyFuncOp should then be able to acquire it. Indeed, when I wrote an isolated test to try to reproduce this bug, I did not see the problem and the GIL was acquired successfully.
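
A sketch of the kind of isolated test I mean (illustrative, not the exact script). The session is deliberately left to the garbage collector at interpreter shutdown, which is when the hang occurs in our real system, yet here the deletion completes cleanly:

import numpy as np
import tensorflow as tf

def gen():
    while True:
        yield np.zeros([4], dtype=np.float32)

dataset = tf.data.Dataset.from_generator(
    gen, output_types=tf.float32, output_shapes=tf.TensorShape([4]))
next_element = dataset.make_one_shot_iterator().get_next()

sess = tf.Session()
sess.run(next_element)
# No sess.close() and no context manager: rely on interpreter shutdown to
# garbage-collect the session, i.e. Py_FinalizeEx -> GC -> BaseSession.__del__
# -> TF_DeleteSession, matching the stack trace above.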

I would file a bug against TensorFlow for this, but I have not managed to reproduce it in a small test case (it only happens in our "real" system, although it is fully reproducible there).

Can someone help me understand what is going on here?

1 answer:

Answer 0 (score: 0):

This was confirmed as a bug in TensorFlow: https://github.com/tensorflow/tensorflow/issues/21277

It has now been fixed, but I am not sure which release the fix will ship in (probably 1.11.0).