Matching images with SURF and determining the best match

Date: 2016-03-15 08:54:50

Tags: c# image opencv emgucv surf

I have been trying to use the EMGU SURFFeature example to determine whether an image is present in a collection of images, but I am having trouble understanding how to decide whether a match was found.

[Image: original] .................. [Image: Scene_1 (match)] .................. [Image: Scene_2 (no match)]

I have been going through the documentation and have spent hours looking for a way to determine whether two images are the same. As you can see in the pictures below, both scenes produce "matches".

[Images: match visualizations for Scene_1 and Scene_2]

Obviously, the image I am trying to find gets more matches (connecting lines), but how do I check for that in code?

Question: how do I filter for good matches?

My goal is to compare an input image (captured from a webcam) against a collection of images in a database. But before I save all the images to the database, I need to know which values I should compare the input against (e.g., store objectKeypoints in the DB).
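If you do store descriptors in a database, the comparison itself is nearest-neighbour search over descriptor vectors. Here is a minimal pure-Python sketch of the k-nearest-neighbour matching that BFMatcher.KnnMatch performs (illustrative only, not the EmguCV API; real SURF descriptors are 64- or 128-dimensional, and the vectors below are made up):

```python
import math

def l2(a, b):
    # Euclidean distance between two descriptor vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_match(observed, model, k=2):
    # For each observed descriptor, return the distances to the
    # k closest model descriptors (what BFMatcher.KnnMatch computes
    # with DistanceType.L2).
    matches = []
    for d in observed:
        dists = sorted(l2(d, m) for m in model)
        matches.append(dists[:k])
    return matches

# Made-up 2-D "descriptors" for illustration
model = [[0.0, 0.0], [10.0, 10.0], [0.9, 1.1]]
observed = [[1.0, 1.0]]
print(knn_match(observed, model))  # best and second-best distance per descriptor
```

Keeping the two best distances (k = 2) rather than just the best is what makes the uniqueness vote in the code below possible.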

Here is my sample code (the matching part):

private void match_test()
{
    long matchTime;
    using (Mat modelImage = CvInvoke.Imread(@"images\input.jpg", LoadImageType.Grayscale))
    using (Mat observedImage = CvInvoke.Imread(@"images\2.jpg", LoadImageType.Grayscale))
    {
        Mat result = DrawMatches.Draw(modelImage, observedImage, out matchTime);
        //ImageViewer.Show(result, String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime));
        ib_output.Image = result;
        label7.Text = String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime);
    }
}

public static void FindMatch(Mat modelImage, Mat observedImage, out long matchTime, out VectorOfKeyPoint modelKeyPoints, out VectorOfKeyPoint observedKeyPoints, VectorOfVectorOfDMatch matches, out Mat mask, out Mat homography)
{
    int k = 2;
    double uniquenessThreshold = 0.9;
    double hessianThresh = 800;

    Stopwatch watch;
    homography = null;

    modelKeyPoints = new VectorOfKeyPoint();
    observedKeyPoints = new VectorOfKeyPoint();

    using (UMat uModelImage = modelImage.ToUMat(AccessType.Read))
    using (UMat uObservedImage = observedImage.ToUMat(AccessType.Read))
    {
        SURF surfCPU = new SURF(hessianThresh);
        //extract features from the object image
        UMat modelDescriptors = new UMat();
        surfCPU.DetectAndCompute(uModelImage, null, modelKeyPoints, modelDescriptors, false);

        watch = Stopwatch.StartNew();

        // extract features from the observed image
        UMat observedDescriptors = new UMat();
        surfCPU.DetectAndCompute(uObservedImage, null, observedKeyPoints, observedDescriptors, false);

        //Match the two SURF descriptors
        BFMatcher matcher = new BFMatcher(DistanceType.L2);
        matcher.Add(modelDescriptors);

        matcher.KnnMatch(observedDescriptors, matches, k, null);

        mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
        mask.SetTo(new MCvScalar(255));

        Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);
        int nonZeroCount = CvInvoke.CountNonZero(mask);

        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
               matches, mask, 1.5, 20);

            if (nonZeroCount >= 4)
                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                   observedKeyPoints, matches, mask, 2);
        }

        watch.Stop();
    }

    matchTime = watch.ElapsedMilliseconds;
}
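For reference, the VoteForUniqueness step above is essentially Lowe's ratio test: a k = 2 match is kept only when the best distance is clearly smaller than the second-best, so ambiguous matches get masked out. A pure-Python sketch of that vote, assuming a list of (best, second-best) distance pairs (the logic only, not the EmguCV API):

```python
def vote_for_uniqueness(knn_distances, ratio=0.9):
    # knn_distances: one (best, second_best) distance pair per observed
    # keypoint. A match survives the vote only if best < ratio * second_best,
    # mirroring Features2DToolbox.VoteForUniqueness with threshold 0.9.
    return [1 if d1 < ratio * d2 else 0 for (d1, d2) in knn_distances]

mask = vote_for_uniqueness([(0.2, 0.9), (0.85, 0.9), (0.5, 0.6)])
print(mask)  # the middle match is too ambiguous and is masked out
```

Lowering the ratio (e.g. to 0.7–0.8) makes the filter stricter and tends to leave fewer but better matches.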

I really feel like I am not far from the solution... I hope someone can help me.

1 Answer:

Answer 0 (score: 5)

When Features2DToolbox.GetHomographyMatrixFromMatchedFeatures returns, the mask matrix of matches is updated to have zeros where the matches are outliers (i.e., where they do not correspond well under the computed homography). So calling CountNonZero on mask again after that gives an indication of match quality.

I see that you want to classify a match as "good" or "bad", not just compare several candidate matches against a single image; judging by the examples in your question, a reasonable threshold might be 1/4 of the keypoints found in the input image. You probably also want an absolute minimum, on the grounds that you cannot really consider something a good match without a certain amount of evidence. So, for example, something like:

bool FindMatch(...) {
    bool goodMatch = false;
    // ...
    homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(...);
    int nInliers = CvInvoke.CountNonZero(mask);
    goodMatch = nInliers >= 10 && nInliers >= observedKeyPoints.Size / 4;
    // ...
    return goodMatch;
}

Compute goodMatch on the branch where homography is currently computed; otherwise it simply stays false from its initialization. The numbers 10 and 1/4 are somewhat arbitrary and depend on your application.
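The resulting decision rule is tiny; a pure-Python sketch, using the (admittedly arbitrary) thresholds 10 and 1/4 suggested above:

```python
def is_good_match(inlier_count, observed_keypoint_count,
                  min_inliers=10, min_fraction=0.25):
    # Require both an absolute minimum amount of evidence (min_inliers)
    # and a minimum fraction of the observed image's keypoints surviving
    # as homography inliers.
    return (inlier_count >= min_inliers and
            inlier_count >= observed_keypoint_count * min_fraction)

print(is_good_match(40, 120))  # strong match: many inliers, high fraction
print(is_good_match(8, 20))    # fails the absolute minimum
```

For the database scenario in the question, you would run this per stored image and pick the one with the highest inlier count among those that pass.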

(Caveat: the above comes entirely from reading the documentation; I have not actually tried it.)