运行"OpenPose C++ API Tutorial - Example 3 - Body from image"失败

Running "OpenPose C++ API Tutorial - Example 3 - Body from image" fails

I have successfully installed OpenPose (with OpenCV, CUDA, and all the required libraries). First, I tried OpenPose by running the demo application (I had to use the --net-resolution flag):

./build/examples/openpose/openpose.bin --net-resolution "-1x64"

It worked fine.

Now I want to use OpenPose from my own source code, so I tried to build and run one of the examples from the OpenPose tutorial (03 keypoints from image). But when I run it, I get the following output:

Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
F0806 22:47:55.485319 12505 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0)  out of memory
*** Check failure stack trace: ***
    @     0x7f35145810cd  google::LogMessage::Fail()
    @     0x7f3514582f33  google::LogMessage::SendToLog()
    @     0x7f3514580c28  google::LogMessage::Flush()
    @     0x7f3514583999  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f3513c92c28  caffe::SyncedMemory::mutable_gpu_data()
    @     0x7f3513b1c202  caffe::Blob<>::mutable_gpu_data()
    @     0x7f3513cdfd10  caffe::ConvolutionLayer<>::Forward_gpu()
    @     0x7f3513c58f11  caffe::Net<>::ForwardFromTo()
    @     0x7f3517a89e9d  op::NetCaffe::forwardPass()
    @     0x7f3517ab244a  op::PoseExtractorCaffe::forwardPass()
    @     0x7f3517aab2d5  op::PoseExtractor::forwardPass()
    @     0x7f3517aa829f  op::WPoseExtractor<>::work()
    @     0x7f3517aebb79  op::Worker<>::checkAndWork()
    @     0x7f3517aebd03  op::SubThread<>::workTWorkers()
    @     0x7f3517af5968  op::SubThreadQueueInOut<>::work()
    @     0x7f3517aedfe1  op::Thread<>::threadFunction()
    @     0x7f351549766f  (unknown)
    @     0x7f3514bb96db  start_thread
    @     0x7f3514ef288f  clone
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

I have disabled every other option I could; here is my source:

#define OPENPOSE_FLAGS_DISABLE_PRODUCER
#define OPENPOSE_FLAGS_DISABLE_DISPLAY
#include <openpose/flags.hpp>
#include <openpose/headers.hpp>
DEFINE_string(image_path, "/home/user/Downloads/myimage.jpg",
              "Process an image.");
// Display
DEFINE_bool(no_display, false,
            "Enable to disable the visual display.");
// Display the processed image with the rendered pose
void display(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
    try
    {
        if (datumsPtr != nullptr && !datumsPtr->empty())
        {
            // Display image
            cv::imshow(OPEN_POSE_NAME_AND_VERSION + " - Tutorial C++ API", datumsPtr->at(0)->cvOutputData);
            cv::waitKey(0);
        }
        else
            op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
    }
    catch (const std::exception& e)
    {
        op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
    }
}
void printKeypoints(const std::shared_ptr<std::vector<std::shared_ptr<op::Datum>>>& datumsPtr)
{
    try
    {
        // Example: How to use the pose keypoints
        if (datumsPtr != nullptr && !datumsPtr->empty())
        {
            op::log("Body keypoints: " + datumsPtr->at(0)->poseKeypoints.toString(), op::Priority::High);
            op::log("Face keypoints: " + datumsPtr->at(0)->faceKeypoints.toString(), op::Priority::High);
            op::log("Left hand keypoints: " + datumsPtr->at(0)->handKeypoints[0].toString(), op::Priority::High);
            op::log("Right hand keypoints: " + datumsPtr->at(0)->handKeypoints[1].toString(), op::Priority::High);
        }
        else
            op::log("Nullptr or empty datumsPtr found.", op::Priority::High);
    }
    catch (const std::exception& e)
    {
        op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
    }
}
void configureWrapper(op::Wrapper& opWrapper)
{
    try
    {
        // Configuring OpenPose
        // logging_level
        op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.",
                  __LINE__, __FUNCTION__, __FILE__);
        op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level);
        op::Profiler::setDefaultX(FLAGS_profile_speed);
        // Applying user defined configuration - GFlags to program variables
        // outputSize
        const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
        // netInputSize
        const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x64");
        // poseMode
        const auto poseMode = op::flagsToPoseMode(FLAGS_body);
        // poseModel
        const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
        // JSON saving
        if (!FLAGS_write_keypoint.empty())
            op::log("Flag `write_keypoint` is deprecated and will eventually be removed."
                    " Please, use `write_json` instead.", op::Priority::Max);
        // keypointScaleMode
        const auto keypointScaleMode = op::flagsToScaleMode(FLAGS_keypoint_scale);
        // heatmaps to add
        const auto heatMapTypes = op::flagsToHeatMaps(FLAGS_heatmaps_add_parts, FLAGS_heatmaps_add_bkg,
                                                      FLAGS_heatmaps_add_PAFs);
        const auto heatMapScaleMode = op::flagsToHeatMapScaleMode(FLAGS_heatmaps_scale);
        // >1 camera view?
        const auto multipleView = (FLAGS_3d || FLAGS_3d_views > 1);
        // Enabling Google Logging
        const bool enableGoogleLogging = true;
        // Pose configuration (use WrapperStructPose{} for default and recommended configuration)
        const op::WrapperStructPose wrapperStructPose{
                poseMode, netInputSize, outputSize, keypointScaleMode, FLAGS_num_gpu, FLAGS_num_gpu_start,
                FLAGS_scale_number, (float)FLAGS_scale_gap, op::flagsToRenderMode(FLAGS_render_pose, multipleView),
                poseModel, !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap,
                FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, heatMapScaleMode, FLAGS_part_candidates,
                (float)FLAGS_render_threshold, FLAGS_number_people_max, FLAGS_maximize_positives, FLAGS_fps_max,
                FLAGS_prototxt_path, FLAGS_caffemodel_path, (float)FLAGS_upsampling_ratio, enableGoogleLogging};
        opWrapper.configure(wrapperStructPose);
        // Face configuration (use op::WrapperStructFace{} to disable it)
        opWrapper.configure(op::WrapperStructFace{});
        // Hand configuration (use op::WrapperStructHand{} to disable it)
        opWrapper.configure(op::WrapperStructHand{});
        // Extra functionality configuration (use op::WrapperStructExtra{} to disable it)
        opWrapper.configure(op::WrapperStructExtra{});
        // No GUI. Equivalent to: opWrapper.configure(op::WrapperStructGui{});
        opWrapper.configure(op::WrapperStructGui{});
        // Set to single-thread (for sequential processing and/or debugging and/or reducing latency)
        if (FLAGS_disable_multi_thread)
            opWrapper.disableMultiThreading();
    }
    catch (const std::exception& e)
    {
        op::error(e.what(), __LINE__, __FUNCTION__, __FILE__);
    }
}
int tutorialApiCpp()
{
    try
    {
        op::log("Starting OpenPose demo...", op::Priority::High);
        const auto opTimer = op::getTimerInit();
        // Configuring OpenPose
        op::log("Configuring OpenPose...", op::Priority::High);
        op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
        configureWrapper(opWrapper);
        // Starting OpenPose
        op::log("Starting thread(s)...", op::Priority::High);
        opWrapper.start();
        // Process and display image
        const auto imageToProcess = cv::imread(FLAGS_image_path);
        auto datumProcessed = opWrapper.emplaceAndPop(imageToProcess);
        if (datumProcessed != nullptr)
        {
            printKeypoints(datumProcessed);
            if (!FLAGS_no_display)
                display(datumProcessed);
        }
        else
            op::log("Image could not be processed.", op::Priority::High);
        // Measuring total time
        op::printTime(opTimer, "OpenPose demo successfully finished. Total time: ", " seconds.", op::Priority::High);
        // Return
        return 0;
    }
    catch (const std::exception& e)
    {
        return -1;
    }
}
int main(int argc, char *argv[])
{
    // Parsing command line flags
    gflags::ParseCommandLineFlags(&argc, &argv, true);
    // Running tutorialApiCpp
    return tutorialApiCpp();
}

Can you help me figure out what might be wrong? The error == cudaSuccess (2 vs. 0) out of memory message was also shown by the OpenPose demo application before I set --net-resolution; after setting that flag, the demo started working correctly.

Thank you very much for any suggestions; any help or comment that points me toward a solution is appreciated.

I am almost embarrassed now. The problem had nothing to do with disabling other options or anything like that. The problem was in this line:

const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "-1x368");

As I wrote, I wanted to lower the net resolution because I only have 2 GB of GPU memory. When I copy-pasted the example code, I saw the snippet above and assumed it meant:

const auto netInputSize = op::flagsToPoint(<flag_name>, <flag_value>);

But that is not the case. After some exploring and debugging, I found that the value of the FLAGS_net_resolution flag is the same before and after this line. It turns out that if you want to change a flag's default value in code (rather than via the command line), you have to do it in main() before ParseCommandLineFlags() is called.
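
To make the ordering requirement concrete, here is a minimal standalone sketch using only gflags (it reuses the net_resolution flag name purely for illustration and is not part of OpenPose): assigning to the flag before ParseCommandLineFlags() effectively changes its default, while a value passed on the command line still takes precedence.

// Standalone gflags sketch (hypothetical program, not OpenPose code).
#include <iostream>
#include <gflags/gflags.h>

DEFINE_string(net_resolution, "-1x368", "Net input resolution in WxH format.");

int main(int argc, char *argv[])
{
    // Overwrite the compiled-in default BEFORE parsing; a value passed on the
    // command line (e.g. --net_resolution "-1x128") still takes precedence.
    FLAGS_net_resolution = "-1x256";
    gflags::ParseCommandLineFlags(&argc, &argv, true);
    // Prints "-1x256" unless --net_resolution was given on the command line.
    std::cout << "net_resolution = " << FLAGS_net_resolution << std::endl;
    return 0;
}
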

So, the solution to my problem was:

  1. Copy the example source code
  2. Paste the example source code into my own source file
  3. Change the net resolution flag in main():
int main(int argc, char *argv[])
{
    FLAGS_net_resolution = "-1x256";
    // Parsing command line flags
    gflags::ParseCommandLineFlags(&argc, &argv, true);
    // Running tutorialApiCpp
    return tutorialApiCpp();
}

Note: this only changes the default value of the net_resolution flag; the value can still be set via the command line.
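
For example, assuming the code above is built into a binary called my_openpose_example (a hypothetical name; yours will differ), a flag given on the command line still overrides the default set in main():

./my_openpose_example --net_resolution "-1x128"
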

Thanks again.