OpenCV match image from camera with same image does not produce 100% matching

My goal is to match an image captured from the camera against some models and find the closest one. However, I think I am missing something. This is what I am doing: first I grab a frame from the camera, select a portion of it, extract keypoints and compute descriptors with SURF, and store them in an xml file (I also store the model as model.png). This is my model. Then I take another frame (a few seconds later), select the same portion, compute its descriptors and match them against the previously stored ones. The result is not close to 100% as I would expect (I use the ratio between good matches and the number of keypoints). For comparison, if I load model.png, compute its descriptors and match them against the stored descriptors, I get a 100% match (more or less), which is reasonable. This is my code:

#include <iostream>
#include <string>
#include <vector>
#include "opencv2/opencv.hpp"
#include "opencv2/nonfree/nonfree.hpp" // SURF lives in the nonfree module in OpenCV 2.x
using namespace std;
// Detect SURF keypoints in the given image
std::vector<cv::KeyPoint> detectKeypoints(cv::Mat image, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    std::vector<cv::KeyPoint> keypoints;
    cv::SurfFeatureDetector detector(hessianTh, nOctaves, nOctaveLayers, extended, upright);
    detector.detect(image, keypoints);
    return keypoints;
}
// Compute SURF descriptors for the given keypoints
cv::Mat computeDescriptors(cv::Mat image, std::vector<cv::KeyPoint> keypoints, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    cv::SurfDescriptorExtractor extractor(hessianTh, nOctaves, nOctaveLayers, extended, upright);
    cv::Mat imageDescriptors;
    extractor.compute(image, keypoints, imageDescriptors);
    return imageDescriptors;
}
int main(int argc, char *argv[]) {
    cv::VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 2304); 
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1536); 
    cv::Mat frame;
    cap >> frame;
    // Always work on the same fixed region of the frame
    cv::Rect selection(939,482,1063-939,640-482);
    cv::Mat roi = frame(selection).clone();
    //cv::Mat roi=cv::imread("model.png");  
    cv::cvtColor(roi,roi,CV_BGR2GRAY);
    cv::equalizeHist(roi,roi);
    // argv[1]==1: build and store the model; otherwise: match the current ROI against the stored model
    if (std::stoi(argv[1])==1)
    {
        std::vector<cv::KeyPoint> keypoints = detectKeypoints(roi,400,4,2,true,false);
        cv::FileStorage fs("model.xml", cv::FileStorage::WRITE);
        cv::write(fs,"keypoints",keypoints);
        cv::write(fs,"descriptors",computeDescriptors(roi,keypoints,400,4,2,true,false));
        fs.release();
        cv::imwrite("model.png",roi);
    }
    else
    {
        cv::FileStorage fs("model.xml", cv::FileStorage::READ);
        std::vector<cv::KeyPoint> modelkeypoints;
        cv::Mat modeldescriptor;
        cv::FileNode filenode = fs["keypoints"];
        cv::read(filenode,modelkeypoints);
        filenode = fs["descriptors"];
        cv::read(filenode, modeldescriptor);
        fs.release();
        std::vector<cv::KeyPoint> roikeypoints = detectKeypoints(roi,400,4,2,true,false);
        cv::Mat roidescriptor = computeDescriptors(roi,roikeypoints,400,4,2,true,false);
        std::vector<std::vector<cv::DMatch>> matches;
        cv::BFMatcher matcher(cv::NORM_L2);
        // Query with the smaller descriptor set so every query has two neighbours to compare
        if(roikeypoints.size()<modelkeypoints.size())
            matcher.knnMatch(roidescriptor, modeldescriptor, matches, 2);  // Find two nearest matches
        else
            matcher.knnMatch(modeldescriptor, roidescriptor, matches, 2);
        // Lowe's ratio test: keep a match only if it is clearly better than the second best
        vector<cv::DMatch> good_matches;
        const float ratio = 0.7f;
        for (size_t i = 0; i < matches.size(); ++i)
        {
            if (matches[i][0].distance < ratio * matches[i][1].distance)
            {
                good_matches.push_back(matches[i][0]);
            }
        }
        cv::Mat matching;
        cv::Mat model = cv::imread("model.png");
        if(roikeypoints.size()<modelkeypoints.size())
            cv::drawMatches(roi,roikeypoints,model,modelkeypoints,good_matches,matching);
        else
            cv::drawMatches(model,modelkeypoints,roi,roikeypoints,good_matches,matching);
        cv::imwrite("matches.png",matching);
        // Matching score: fraction of ROI keypoints that survived the ratio test
        float result = static_cast<float>(good_matches.size())/static_cast<float>(roikeypoints.size());
        std::cout << result << std::endl;
    }
    return 0;
}

Any suggestion would be appreciated, this is driving me crazy.

This is to be expected: the small changes between the two frames are why you do not get a 100% match. On the very same image, however, the SURF features will be exactly the same and the computed descriptors will be identical. So with your camera setup, plot the distances between features that should be identical, and set a threshold on that distance so that most (perhaps 95%) of the matches are accepted. That way you will have a low false-match rate while still keeping a high true-match rate.
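
To make the thresholding idea concrete, here is a minimal sketch (not part of the original answer) that calibrates an absolute distance threshold from a pair of captures whose features should be identical, using the same OpenCV 2.x BFMatcher API as the question. The 95% percentile value and the helper name pickDistanceThreshold are illustrative assumptions, not values taken from the answer.

#include <algorithm>
#include <vector>
#include "opencv2/opencv.hpp"

// Illustrative helper (assumed name): 1-NN match two descriptor sets that should
// describe the same scene, sort the resulting L2 distances, and return the distance
// below which acceptRatio (e.g. ~95%) of the matches fall. That value can later be
// used as an absolute acceptance threshold instead of the ratio test.
float pickDistanceThreshold(const cv::Mat &modelDescriptors,
                            const cv::Mat &frameDescriptors,
                            float acceptRatio = 0.95f)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(modelDescriptors, frameDescriptors, matches); // nearest neighbour only
    if (matches.empty())
        return 0.0f;
    std::vector<float> distances;
    for (size_t i = 0; i < matches.size(); ++i)
        distances.push_back(matches[i].distance);
    std::sort(distances.begin(), distances.end());
    size_t idx = static_cast<size_t>(acceptRatio * (distances.size() - 1));
    return distances[idx];
}

Once calibrated against a second capture of the model, later frames can be scored by counting how many of their nearest-neighbour matches fall below this threshold; a lower percentile tightens the false-match rate, a higher one keeps more of the true matches.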