imgIdx problem with DescriptorMatcher in mexopencv

Posted: 2013-12-21 08:19:53

Tags: matlab opencv image-processing computer-vision mex

My idea is simple. I am using mexopencv and trying to see whether any object currently in view matches any of the images stored in my database. I am using the OpenCV DescriptorMatcher to train on my images. Here is a snippet; I hope to build on top of this, which does one-to-one image matching with mexopencv, and to extend it to a stream of images as well.

function hello

    detector = cv.FeatureDetector('ORB');
    extractor = cv.DescriptorExtractor('ORB');
    matcher = cv.DescriptorMatcher('BruteForce-Hamming');

    train = [];
    for i=1:3
        train(i).img = [];
        train(i).points = [];
        train(i).features = [];    
    end;

    train(1).img = imread('D:\test\1.jpg');
    train(2).img = imread('D:\test\2.png');
    train(3).img = imread('D:\test\3.jpg');


    for i=1:3

        frameImage = train(i).img;
        framePoints = detector.detect(frameImage);
        frameFeatures = extractor.compute(frameImage , framePoints);

       train(i).points = framePoints;
       train(i).features = frameFeatures;

    end;

    for i = 1:3 
        boxfeatures = train(i).features;
        matcher.add(boxfeatures);
    end;
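    % for FLANN-based matchers, train() builds the search index; for a
    % brute-force matcher it is effectively a no-op, but calling it keeps
    % the code matcher-agnostic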
    matcher.train();

    camera = cv.VideoCapture;
    pause(3);%Sometimes necessary 

    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);

    while(true)

      sceneImage = camera.read; 
      sceneImage = rgb2gray(sceneImage);

      scenePoints = detector.detect(sceneImage);
      sceneFeatures = extractor.compute(sceneImage,scenePoints);

      m = matcher.match(sceneFeatures);
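      % m is a 1xN struct array of DMatch results; each element has the
      % zero-based fields queryIdx, trainIdx, imgIdx and a distance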

      %{
      %Comments in
      img_no = m.imgIdx;
      img_no = img_no(1);

      %I am planning to do this based on the fact that,
      %on a perfect match, imgIdx (a 1xN array) will be
      %filled with the index of the matched training
      %example: 1, 2 or 3

      objPoints = train(img_no+1).points;
      boxImage = train(img_no+1).img;

      ptsScene = cat(1,scenePoints([m.queryIdx]+1).pt);
      ptsScene = num2cell(ptsScene,2);

      ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
      ptsObj = num2cell(ptsObj,2);

      %This is where the problem starts. Assuming the
      %above is correct, MATLAB yells at me:
      %index exceeds matrix dimensions.

      [H,inliers] = cv.findHomography(ptsScene,ptsObj,'Method','Ransac');
      m = m(inliers);

      imgMatches = cv.drawMatches(sceneImage,scenePoints,boxImage,objPoints,m,...
       'NotDrawSinglePoints',true);
      imshow(imgMatches);

     %Comment out
     %}

      flag = getappdata(window,'flag');
      if isempty(flag) || flag, break; end
      pause(0.0001);

end

Now the problem is that imgIdx is a 1xN matrix holding the indices of the different training images, which is obvious. Only on a perfect match is imgIdx filled entirely with the index of the matching image. So how can I use this matrix to select the correct image index? Also, on these two lines I get "index exceeds matrix dimensions":

ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2); 

This is obvious, because while debugging I can clearly see that the size of m.trainIdx is greater than that of objPoints, i.e. I am accessing points I should not, hence the index exceeds the dimensions. There is very little documentation on the use of imgIdx, so help from anyone familiar with this topic is needed (see the sketch after the images below). These are the images I used.

Image1

Image2

Image3
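For the record, the fix worked out in the answer below is to pick one training image by majority vote over imgIdx, and to filter the matches down to that image before indexing its keypoints. A minimal sketch using the question's variable names (note the +1 shifts from OpenCV's zero-based indices to MATLAB's one-based ones):

m = matcher.match(sceneFeatures);

% majority vote: the training image that received the most matches
img_no = mode([m.imgIdx]);

% keep only the matches whose descriptors came from that image
mm = m([m.imgIdx] == img_no);

% trainIdx now safely indexes into that image's keypoints
ptsObj   = cat(1, train(img_no+1).points([mm.trainIdx]+1).pt);
ptsScene = cat(1, scenePoints([mm.queryIdx]+1).pt);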

First update, after @Amro's reply:

With the ratio of min distance to distance at 3.6, I get the following response.

For 3.6

With the ratio of min distance to distance at 1.6, I get the following response.

For 1.6

1 Answer:

Answer 0 (score: 3):

I think it is easier to explain with code, so here goes :)

%% init
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');

urls = {
    'http://i.imgur.com/8Pz4M9q.jpg?1'
    'http://i.imgur.com/1aZj0MI.png?1'
    'http://i.imgur.com/pYepuzd.jpg?1'
};

N = numel(urls);
train = struct('img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));

%% training
for i=1:N
    % read image
    train(i).img = imread(urls{i});
    if ~ismatrix(train(i).img)
        train(i).img = rgb2gray(train(i).img);
    end

    % extract keypoints and compute features
    train(i).pts = detector.detect(train(i).img);
    train(i).feat = extractor.compute(train(i).img, train(i).pts);

    % add to training set to match against
    matcher.add(train(i).feat);
end
% build index
matcher.train();

%% testing
% lets create a distorted query image from one of the training images
% (rotation+shear transformations)
t = -pi/3;    % -60 degrees angle
tform = [cos(t) -sin(t) 0; 0.5*sin(t) cos(t) 0; 0 0 1];
img = imwarp(train(3).img, affine2d(tform));    % try all three images here!

% detect features in query image
pts = detector.detect(img);
feat = extractor.compute(img, pts);

% match against training images
m = matcher.match(feat);

% keep only good matches
%hist([m.distance])
m = m([m.distance] < 3.6*min([m.distance]));

% sort by distances, and keep at most the first/best 200 matches
[~,ord] = sort([m.distance]);
m = m(ord);
m = m(1:min(200,numel(m)));

% naive classification (majority vote)
tabulate([m.imgIdx])    % how many matches each training image received
idx = mode([m.imgIdx]);

% matches with keypoints belonging to chosen training image
mm = m([m.imgIdx] == idx);

% estimate homography (used to locate object in query image)
ptsQuery = num2cell(cat(1, pts([mm.queryIdx]+1).pt), 2);
ptsTrain = num2cell(cat(1, train(idx+1).pts([mm.trainIdx]+1).pt), 2);
[H,inliers] = cv.findHomography(ptsTrain, ptsQuery, 'Method','Ransac');

% show final matches
imgMatches = cv.drawMatches(img, pts, ...
    train(idx+1).img, train(idx+1).pts, ...
    mm(logical(inliers)), 'NotDrawSinglePoints',true);

% apply the homography to the corner points of the training image
[h,w] = size(train(idx+1).img);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);

% show where the training object is located in the query image
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)

Result:

object_detection

Note that since you did not post any test images (in your code you are grabbing input from a webcam), I created one by distorting one of the training images and using it as the query image. I am using functions from certain MATLAB toolboxes (imwarp, etc.), but those are not essential to the demo and could be replaced with equivalent OpenCV functions...
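For example, a rough equivalent of the imwarp call above using mexopencv's cv.warpAffine could look like this (OpenCV applies the transform as M*[x;y;1], so the 2x3 matrix is the transpose of the affine2d one, and the output canvas defaults to the input size rather than auto-expanding as imwarp does):

% same rotation+shear as above, written in OpenCV's 2x3 convention
t = -pi/3;
M = [cos(t) 0.5*sin(t) 0; -sin(t) cos(t) 0];
img = cv.warpAffine(train(3).img, M);   % output is input-sized, so corners may clip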

I must say that this approach is not the most robust one. Consider using other techniques, such as the bag-of-words model, which OpenCV already implements.
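As a rough outline of that direction, here is a hypothetical sketch, assuming your mexopencv build wraps OpenCV's BOW classes as cv.BOWKMeansTrainer and cv.BOWImgDescriptorExtractor (the names mirror the C++ API and should be verified against your version); note that k-means clustering needs float descriptors, so SIFT is used instead of binary ORB:

% hypothetical sketch -- verify class/method names in your mexopencv build
detector  = cv.FeatureDetector('SIFT');
extractor = cv.DescriptorExtractor('SIFT');

% 1) pool descriptors from all training images, cluster a visual vocabulary
bowTrainer = cv.BOWKMeansTrainer(50);        % 50 visual words (arbitrary)
for i=1:N
    pts = detector.detect(train(i).img);
    bowTrainer.add(extractor.compute(train(i).img, pts));
end
vocab = bowTrainer.cluster();

% 2) represent every image as a histogram over the visual words
bowExtractor = cv.BOWImgDescriptorExtractor('SIFT', 'BruteForce');
bowExtractor.Vocabulary = vocab;             % or setVocabulary, per version
hists = zeros(N, 50);
for i=1:N
    pts = detector.detect(train(i).img);
    hists(i,:) = bowExtractor.compute(train(i).img, pts);
end

% a query image is then classified by comparing its histogram against
% hists (e.g. nearest neighbor), or by training a classifier on them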
