C++ REST SDK Casablanca SIGTRAP

Date: 2016-07-24 12:37:03

Tags: c++ debugging casablanca

I am using the C++ REST SDK ("Casablanca") to receive feeds from WebSocket servers. At the moment I have three different connections to three different servers running simultaneously, using the websocket_callback_client class.
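
For context, one of those feed connections looks roughly like the following (a minimal sketch; the URL, the handler body, and the overall structure are illustrative assumptions, not code from the question):

#include <cpprest/ws_client.h>

using namespace web::websockets::client;

int main()
{
    websocket_callback_client client;

    // The SDK invokes this callback on its internal threadpool
    // whenever a message arrives from the server.
    client.set_message_handler([](const websocket_incoming_message& msg) {
        msg.extract_string().then([](std::string body) {
            // process one feed message
        });
    });

    client.connect(U("wss://example.com/feed")).wait();  // assumed URL
    // ... keep the process alive while the feeds run ...
}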

The program runs for an indeterminate amount of time and then suddenly receives a SIGTRAP, Trace/breakpoint trap. This is the output from GDB:

#0  0x00007ffff5abec37 in __GI_raise (sig=5) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x000000000047bb8e in pplx::details::_ExceptionHolder::~_ExceptionHolder() ()
#2  0x000000000044be29 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
#3  0x000000000047fa39 in pplx::details::_Task_impl<unsigned char>::~_Task_impl() ()
#4  0x000000000044be29 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
#5  0x00007ffff6feb09f in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x7fffc8021420, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/shared_ptr_base.h:546
#6  0x00007ffff6fffa38 in std::__shared_ptr<pplx::details::_Task_impl<unsigned char>, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x7fffc8021418, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/shared_ptr_base.h:781
#7  0x00007ffff6fffa52 in std::shared_ptr<pplx::details::_Task_impl<unsigned char> >::~shared_ptr (this=0x7fffc8021418, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/shared_ptr.h:93
#8  0x00007ffff710f766 in pplx::details::_PPLTaskHandle<unsigned char, pplx::task<unsigned char>::_InitialTaskHandle<void, void web::websockets::client::details::wspp_callback_client::shutdown_wspp_impl<websocketpp::config::asio_tls_client>(std::weak_ptr<void> const&, bool)::{lambda()#1}, pplx::details::_TypeSelectorNoAsync>, pplx::details::_TaskProcHandle>::~_PPLTaskHandle() (this=0x7fffc8021410, __in_chrg=<optimized out>)
    at /home/cpprestsdk/Release/include/pplx/pplxtasks.h:1631
#9  0x00007ffff716e6f2 in pplx::task<unsigned char>::_InitialTaskHandle<void, void web::websockets::client::details::wspp_callback_client::shutdown_wspp_impl<websocketpp::config::asio_tls_client>(std::weak_ptr<void> const&, bool)::{lambda()#1}, pplx::details::_TypeSelectorNoAsync>::~_InitialTaskHandle() (this=0x7fffc8021410, __in_chrg=<optimized out>) at /home/cpprestsdk/Release/include/pplx/pplxtasks.h:3710
#10 0x00007ffff716e722 in pplx::task<unsigned char>::_InitialTaskHandle<void, void web::websockets::client::details::wspp_callback_client::shutdown_wspp_impl<websocketpp::config::asio_tls_client>(std::weak_ptr<void> const&, bool)::{lambda()#1}, pplx::details::_TypeSelectorNoAsync>::~_InitialTaskHandle() (this=0x7fffc8021410, __in_chrg=<optimized out>) at /home/cpprestsdk/Release/include/pplx/pplxtasks.h:3710
#11 0x00007ffff71f9cdd in boost::_bi::list1<boost::_bi::value<void*> >::operator()<void (*)(void*), boost::_bi::list0> (this=0x7fffdc7d7d28, f=@0x7fffdc7d7d20: 0x479180 <pplx::details::_TaskProcHandle::_RunChoreBridge(void*)>, a=...)
    at /usr/local/include/boost/bind/bind.hpp:259
#12 0x00007ffff71f9c8f in boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > >::operator() (this=0x7fffdc7d7d20) at /usr/local/include/boost/bind/bind.hpp:1222
#13 0x00007ffff71f9c54 in boost::asio::asio_handler_invoke<boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > > > (function=...) at /usr/local/include/boost/asio/handler_invoke_hook.hpp:69
#14 0x00007ffff71f9bea in boost_asio_handler_invoke_helpers::invoke<boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > >, boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > > > (function=..., context=...) at /usr/local/include/boost/asio/detail/handler_invoke_helpers.hpp:37
#15 0x00007ffff71f9b2e in boost::asio::detail::completion_handler<boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > > >::do_complete (owner=0x7488d0, base=0x7fffc801ecd0)
    at /usr/local/include/boost/asio/detail/completion_handler.hpp:68
#16 0x00000000004c34c1 in boost::asio::detail::task_io_service::run(boost::system::error_code&) ()
#17 0x00007ffff709fb27 in boost::asio::io_service::run (this=0x7ffff759ab78 <crossplat::threadpool::shared_instance()::s_shared+24>) at /usr/local/include/boost/asio/impl/io_service.ipp:59
#18 0x00007ffff7185a81 in crossplat::threadpool::thread_start (arg=0x7ffff759ab60 <crossplat::threadpool::shared_instance()::s_shared>) at /home/cpprestsdk/Release/include/pplx/threadpool.h:133
#19 0x00007ffff566e184 in start_thread (arg=0x7fffdc7d8700) at pthread_create.c:312
#20 0x00007ffff5b8237d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Frame #18 points at /pplx/threadpool.h:133. Here is the source code around those lines:

  123     static void* thread_start(void *arg)
  124     {
  125 #if (defined(ANDROID) || defined(__ANDROID__))
  126         // Calling get_jvm_env() here forces the thread to be attached.
  127         get_jvm_env();
  128         pthread_cleanup_push(detach_from_java, nullptr);
  129 #endif
  130         threadpool* _this = reinterpret_cast<threadpool*>(arg);
  131         try
  132         {
  133             _this->m_service.run();
  134         }
  135         catch (const _cancel_thread&)
  136         {
  137             // thread was cancelled
  138         }
  139         catch (...)
  140         {
  141             // Something bad happened
  142 #if (defined(ANDROID) || defined(__ANDROID__))
  143             // Reach into the depths of the 'droid!
  144             // NOTE: Uses internals of the bionic library
  145             // Written against android ndk r9d, 7/26/2014
  146             __pthread_cleanup_pop(&__cleanup, true);
  147             throw;
  148 #endif
  149         }
  150 #if (defined(ANDROID) || defined(__ANDROID__))
  151         pthread_cleanup_pop(true);
  152 #endif
  153         return arg;
  154     }

To clarify, m_service is a boost::asio::io_service. As I read it, line 133 throws an exception, which is caught at line 139 and then re-thrown. At that point I would have to catch it myself, because if I don't, and a pplx object is destroyed while holding the uncaught exception, it raises a SIGTRAP.
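
That matches how pplx reports unobserved exceptions: an exception thrown inside a task is parked in an _ExceptionHolder, and if nothing ever observes it via get() or wait(), the holder's destructor traps the process, which is the raise(sig=5) in frame #0. A minimal sketch that should reproduce the mechanism in isolation (an assumption based on the backtrace, not code from the question):

#include <pplx/pplxtasks.h>

#include <chrono>
#include <stdexcept>
#include <thread>

int main()
{
    {
        auto t = pplx::create_task([] {
            throw std::runtime_error("boom");  // never observed
        });
        // Give the threadpool time to run the lambda; nothing ever
        // calls t.get() or t.wait(), so the exception stays unobserved.
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    // Dropping the last reference destroys the _ExceptionHolder while it
    // still holds an unobserved exception, which raises the SIGTRAP.
}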

That is as far as my research has taken me. The problem is that I do not know what is going on. I have wrapped every place where the websocket_callback_client sends or receives data in try {} catch (...) {}, but it keeps happening.

Maybe someone who has worked with this library before can help me.

2 Answers:

Answer 0 (score: 1)

In my experience, this happens because of a separate issue.
When the close handler of a websocket_callback_client is called, most people try to delete the websocket_callback_client. Internally, that calls the close function.
When that happens, the websocket_callback_client waits for the close to complete. If another thread notices that the connection is dead and tries to clean up as well, the same object ends up being deleted from two different places, which causes major problems.
Howto reconnect to a server which does not answer to close() gives a fairly thorough review of what happens when cpprestsdk calls close.

Hope this helps :)

Edit: It turns out (the answer I gave in the linked question covers this) that if you try to close or delete the websocket_callback_client from inside the close handler, it will invoke the close handler itself, which deadlocks the thread.
The solution that worked best for me was to set a flag in the close handler and do the cleanup in the main thread, or at least in a separate thread, as sketched below.
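
A sketch of that flag-based approach (the names g_closed and attach_handlers, and the reconnect URL, are illustrative assumptions):

#include <atomic>
#include <memory>
#include <cpprest/ws_client.h>

using namespace web::websockets::client;

std::atomic<bool> g_closed{false};

void attach_handlers(const std::shared_ptr<websocket_callback_client>& client)
{
    client->set_close_handler([](websocket_close_status,
                                 const utility::string_t& /*reason*/,
                                 const std::error_code& /*error*/) {
        // Do not close() or destroy the client from in here -- that
        // re-enters this handler and deadlocks. Only raise a flag.
        g_closed = true;
    });
}

void poll_for_cleanup(std::shared_ptr<websocket_callback_client>& client)
{
    // Called periodically from the main (or any non-callback) thread.
    if (g_closed.exchange(false)) {
        client.reset();  // safe here: we are outside the close handler
        client = std::make_shared<websocket_callback_client>();
        attach_handlers(client);
        client->connect(U("wss://example.com/feed")).wait();
    }
}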

Answer 1 (score: 1)

Coming back to this: I found a solution, which I posted on the cpprestsdk GitHub (https://github.com/Microsoft/cpprestsdk/issues/427).

The SDK does not do a good job of surfacing exceptions, and in that issue I have pointed out that they need to improve the documentation and provide a clean public interface for this (you will notice that the solution has a bit of a code smell to it).

What needs to be done is to re-throw the user exception.

This is in the context of making an http_client request call, but it should apply to any usage of pplx.

client->request(request).then([=] (web::http::http_response response) mutable {
    // Your code here
}).then([=] (pplx::task<void> previous_task) mutable {
    if (previous_task._GetImpl()->_HasUserException()) {
        auto holder = previous_task._GetImpl()->_GetExceptionHolder(); // Probably should put in try

        try {
            // Need to make sure you try/catch here, as _RethrowUserException can throw
            holder->_RethrowUserException();
        } catch (std::exception& e) {
            // Do what you need to do here
        }
    }
});

The handling that catches the "unobserved exception" is done in the second then().
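
For comparison, the same "mark it observed" effect can usually be achieved without reaching into pplx internals, by calling get() on the previous task inside a final task-based continuation; this is a sketch of that more conventional pattern, under the same setup as above:

client->request(request).then([=] (web::http::http_response response) mutable {
    // Your code here
}).then([=] (pplx::task<void> previous_task) mutable {
    try {
        // get() re-throws any stored exception and marks it observed,
        // so the task's destructor will not trip the SIGTRAP.
        previous_task.get();
    } catch (const std::exception& e) {
        // Handle or log the error here
    }
});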
