
C++ Rest SDK Casablanca Sigtrap

Last updated: 2023-10-16

I am using the C++ Rest SDK ("Casablanca") to receive feeds from WebSocket servers. Currently I have three separate connections, via the websocket_callback_client class, to three different servers running at the same time.

The program runs for an indeterminate amount of time and then suddenly receives a SIGTRAP, Trace/Breakpoint trap. This is the output from GDB:

#0  0x00007ffff5abec37 in __GI_raise (sig=5) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x000000000047bb8e in pplx::details::_ExceptionHolder::~_ExceptionHolder() ()
#2  0x000000000044be29 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
#3  0x000000000047fa39 in pplx::details::_Task_impl<unsigned char>::~_Task_impl() ()
#4  0x000000000044be29 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
#5  0x00007ffff6feb09f in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x7fffc8021420, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/shared_ptr_base.h:546
#6  0x00007ffff6fffa38 in std::__shared_ptr<pplx::details::_Task_impl<unsigned char>, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x7fffc8021418, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/shared_ptr_base.h:781
#7  0x00007ffff6fffa52 in std::shared_ptr<pplx::details::_Task_impl<unsigned char> >::~shared_ptr (this=0x7fffc8021418, __in_chrg=<optimized out>) at /usr/include/c++/4.8/bits/shared_ptr.h:93
#8  0x00007ffff710f766 in pplx::details::_PPLTaskHandle<unsigned char, pplx::task<unsigned char>::_InitialTaskHandle<void, void web::websockets::client::details::wspp_callback_client::shutdown_wspp_impl<websocketpp::config::asio_tls_client>(std::weak_ptr<void> const&, bool)::{lambda()#1}, pplx::details::_TypeSelectorNoAsync>, pplx::details::_TaskProcHandle>::~_PPLTaskHandle() (this=0x7fffc8021410, __in_chrg=<optimized out>)
    at /home/cpprestsdk/Release/include/pplx/pplxtasks.h:1631
#9  0x00007ffff716e6f2 in pplx::task<unsigned char>::_InitialTaskHandle<void, void web::websockets::client::details::wspp_callback_client::shutdown_wspp_impl<websocketpp::config::asio_tls_client>(std::weak_ptr<void> const&, bool)::{lambda()#1}, pplx::details::_TypeSelectorNoAsync>::~_InitialTaskHandle() (this=0x7fffc8021410, __in_chrg=<optimized out>) at /home/cpprestsdk/Release/include/pplx/pplxtasks.h:3710
#10 0x00007ffff716e722 in pplx::task<unsigned char>::_InitialTaskHandle<void, void web::websockets::client::details::wspp_callback_client::shutdown_wspp_impl<websocketpp::config::asio_tls_client>(std::weak_ptr<void> const&, bool)::{lambda()#1}, pplx::details::_TypeSelectorNoAsync>::~_InitialTaskHandle() (this=0x7fffc8021410, __in_chrg=<optimized out>) at /home/cpprestsdk/Release/include/pplx/pplxtasks.h:3710
#11 0x00007ffff71f9cdd in boost::_bi::list1<boost::_bi::value<void*> >::operator()<void (*)(void*), boost::_bi::list0> (this=0x7fffdc7d7d28, f=@0x7fffdc7d7d20: 0x479180 <pplx::details::_TaskProcHandle::_RunChoreBridge(void*)>, a=...)
    at /usr/local/include/boost/bind/bind.hpp:259
#12 0x00007ffff71f9c8f in boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > >::operator() (this=0x7fffdc7d7d20) at /usr/local/include/boost/bind/bind.hpp:1222
#13 0x00007ffff71f9c54 in boost::asio::asio_handler_invoke<boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > > > (function=...) at /usr/local/include/boost/asio/handler_invoke_hook.hpp:69
#14 0x00007ffff71f9bea in boost_asio_handler_invoke_helpers::invoke<boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > >, boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > > > (function=..., context=...) at /usr/local/include/boost/asio/detail/handler_invoke_helpers.hpp:37
#15 0x00007ffff71f9b2e in boost::asio::detail::completion_handler<boost::_bi::bind_t<void, void (*)(void*), boost::_bi::list1<boost::_bi::value<void*> > > >::do_complete (owner=0x7488d0, base=0x7fffc801ecd0)
    at /usr/local/include/boost/asio/detail/completion_handler.hpp:68
#16 0x00000000004c34c1 in boost::asio::detail::task_io_service::run(boost::system::error_code&) ()
#17 0x00007ffff709fb27 in boost::asio::io_service::run (this=0x7ffff759ab78 <crossplat::threadpool::shared_instance()::s_shared+24>) at /usr/local/include/boost/asio/impl/io_service.ipp:59
#18 0x00007ffff7185a81 in crossplat::threadpool::thread_start (arg=0x7ffff759ab60 <crossplat::threadpool::shared_instance()::s_shared>) at /home/cpprestsdk/Release/include/pplx/threadpool.h:133
#19 0x00007ffff566e184 in start_thread (arg=0x7fffdc7d8700) at pthread_create.c:312
#20 0x00007ffff5b8237d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Frame #18 points to source/pplx/threadpool.h:133. This is the source code around those lines:

  123     static void* thread_start(void *arg)
  124     {
  125 #if (defined(ANDROID) || defined(__ANDROID__))
  126         // Calling get_jvm_env() here forces the thread to be attached.
  127         get_jvm_env();
  128         pthread_cleanup_push(detach_from_java, nullptr);
  129 #endif
  130         threadpool* _this = reinterpret_cast<threadpool*>(arg);
  131         try
  132         {
  133             _this->m_service.run();
  134         }
  135         catch (const _cancel_thread&)
  136         {
  137             // thread was cancelled
  138         }
  139         catch (...)
  140         {
  141             // Something bad happened
  142 #if (defined(ANDROID) || defined(__ANDROID__))
  143             // Reach into the depths of the 'droid!
  144             // NOTE: Uses internals of the bionic library
  145             // Written against android ndk r9d, 7/26/2014
  146             __pthread_cleanup_pop(&__cleanup, true);
  147             throw;
  148 #endif
  149         }
  150 #if (defined(ANDROID) || defined(__ANDROID__))
  151         pthread_cleanup_pop(true);
  152 #endif
  153         return arg;
  154     }

For clarification, m_service is a boost::asio::io_service. To me it looks like line 133 throws an exception, which is caught at line 139 and then rethrown. At that point I would have to catch it myself, because if I don't, and the pplx object is destroyed while still holding an uncaught exception, it raises SIGTRAP.

That is as far as my research has gotten. The problem is that I do not know where this is happening. I have surrounded every place that sends or receives data through a websocket_callback_client with try {} catch (...) {}, and it still happens.

Perhaps someone who has used this library before can help me.

In my experience, this happens because of a separate problem.
When the close handler of a websocket_callback_client is invoked, most people try to delete the websocket_callback_client. This internally calls the close function.
When that happens, the websocket_callback_client waits for the close to complete. If another thread notices that the connection is dead and tries to clean up, you end up deleting the same object from two different places, which causes serious problems.
"How to reconnect to a server which is not answering to close()" gives a fairly thorough review of what happens when close is called on cpprestsdk.

Hope this helps :)

Edit: As it turns out (my answer to the linked question covers this), if you try to close or delete the websocket_callback_client from inside its close handler, that itself invokes the close handler, which deadlocks the thread.
The solution that worked best for me was to set a flag in the close handler and do the cleanup from the main thread, or at least from a different thread.

Revisiting this: I found a workaround, which I have posted on the cpprestsdk GitHub (https://github.com/Microsoft/cpprestsdk/issues/427).

The SDK does a poor job of surfacing exceptions. In the issue I pointed out that they need to improve the documentation around this and provide a clean public interface for doing it (you will notice the workaround has a code smell).

What needs to be done is to rethrow the user exception.

This is in the context of making an http_client request, but it should apply to any use of pplx.

client->request(request).then([=] (web::http::http_response response) mutable {
    // Your code here
}).then([=] (pplx::task<void> previous_task) mutable {
    if (previous_task._GetImpl()->_HasUserException()) {
        auto holder = previous_task._GetImpl()->_GetExceptionHolder(); // Probably should put in try
        try {
            // Need to make sure you try/catch here, as _RethrowUserException can throw
            holder->_RethrowUserException();
        } catch (std::exception& e) {
            // Do what you need to do here
        }
    }
});

The handling that catches the "unobserved exception" is done in the second then().