MATLAB image stitching error

Time: 2019-05-15 19:15:28

Tags: matlab image-processing functional-programming matlab-cvst image-stitching

So, I am using the code provided on the MathWorks website to try to stitch 20 images. The MathWorks example comes with 5 given images and works well; the stitch is reliable. However, when I change the image directory to my own, the program throws various errors, the main one being that not enough points are matched. When I montage the images I can see them all, so reading them in works fine.

I have narrowed it down, and the problem is here:

tforms(n) = estimateGeometricTransform(matchedPoints, matchedPointsPrev,...
    'projective', 'Confidence', 99.9, 'MaxNumTrials', 4000);

I think so because when I comment that line out, the code runs without errors. I have fiddled with the input arguments, but nothing changed.
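For what it's worth, one way to see how many matches actually survive for my own images would be a quick check like this (just a diagnostic sketch reusing the variable names from the loop below; as far as I know, estimateGeometricTransform needs at least 4 matched pairs for a 'projective' fit):

% Diagnostic sketch, not part of the original script: report how many
% matched pairs each image pair produces before the transform is estimated.
indexPairs = matchFeatures(features, featuresPrevious, 'Unique', true);
fprintf('Images %d -> %d: %d matched pairs\n', n-1, n, size(indexPairs, 1));
if size(indexPairs, 1) < 4
    warning('Not enough matches between images %d and %d for a projective fit.', n-1, n);
end

Here is the full script: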

clear all;

%buildingDir = fullfile('C:\ changed to my directory');
buildingDir = fullfile(toolboxdir('vision'), 'visiondata', 'building');
buildingScene = imageDatastore(buildingDir);

montage(buildingScene.Files)

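% Read the first image from the set and extract SURF features for I(1).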
I = readimage(buildingScene, 1);

grayImage = rgb2gray(I);
points = detectSURFFeatures(grayImage);
[features, points] = extractFeatures(grayImage, points);

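% Initialize all the transforms to the identity matrix and preallocate image sizes.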
numImages = numel(buildingScene.Files);
tforms(numImages) = projective2d(eye(3));
%tforms(numImages) = affine2d(eye(3));

imageSize = zeros(numImages,2);

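% Iterate over the remaining images, estimating the transform between each image and the previous one.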
for n = 2:numImages

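    % Store the points and features from I(n-1).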
    pointsPrevious = points;
    featuresPrevious = features;

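    % Read I(n), convert it to grayscale, and save its size.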
    I = readimage(buildingScene, n);

    grayImage = rgb2gray(I);

    imageSize(n,:) = size(grayImage);

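    % Detect and extract SURF features for I(n).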
    points = detectSURFFeatures(grayImage);
    [features, points] = extractFeatures(grayImage, points);

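    % Find correspondences between I(n) and I(n-1).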
    indexPairs = matchFeatures(features, featuresPrevious, 'Unique', true);

    matchedPoints = points(indexPairs(:,1), :);
    matchedPointsPrev = pointsPrevious(indexPairs(:,2), :);

    % Estimate the transformation between I(n) and I(n-1).
    tforms(n) = estimateGeometricTransform(matchedPoints, matchedPointsPrev, ...
        'projective', 'Confidence', 99.9, 'MaxNumTrials', 4000);

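    % Compose with the accumulated transform so T(n) maps I(n) into the frame of the first image.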
    tforms(n).T = tforms(n).T * tforms(n-1).T;
end

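% Compute the output limits for each transform.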
for i = 1:numel(tforms)
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms(i), [1 imageSize(i,2)], [1 imageSize(i,1)]);
end

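% Find the image roughly in the center of the set by its average x-limit.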
avgXLim = mean(xlim, 2);

[~, idx] = sort(avgXLim);

centerIdx = floor((numel(tforms)+1)/2);

centerImageIdx = idx(centerIdx);

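% Apply the inverse of the center image's transform to all transforms so the center image stays unwarped.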
Tinv = invert(tforms(centerImageIdx));

for i = 1:numel(tforms)
    tforms(i).T = tforms(i).T * Tinv.T;
end

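% Recompute the output limits using the adjusted transforms.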
for i = 1:numel(tforms)
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms(i), [1 imageSize(i,2)], [1 imageSize(i,1)]);
end

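% Find the extents of the panorama and initialize an empty canvas.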
maxImageSize = max(imageSize);

xMin = min([1; xlim(:)]);
xMax = max([maxImageSize(2); xlim(:)]);

yMin = min([1; ylim(:)]);
yMax = max([maxImageSize(1); ylim(:)]);

width  = round(xMax - xMin);
height = round(yMax - yMin);

panorama = zeros([height width 3], 'like', I);

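% Create an alpha blender and a spatial reference that defines the panorama limits.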
blender = vision.AlphaBlender('Operation', 'Binary mask', 'MaskSource', 'Input port');

xLimits = [xMin xMax];
yLimits = [yMin yMax];
panoramaView = imref2d([height width], xLimits, yLimits);

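% Warp each image into the panorama view and blend it in using its mask.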
for i = 1:numImages

    I = readimage(buildingScene, i);

    warpedImage = imwarp(I, tforms(i), 'OutputView', panoramaView);

    mask = imwarp(true(size(I,1),size(I,2)), tforms(i), 'OutputView', panoramaView);

    panorama = step(blender, panorama, warpedImage, mask);
end

figure, imshow(panorama)


I was wondering whether the highlighted code above is the problem or whether something else is causing it. From the error messages I get, there are not enough matched points, but I don't know why.
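If it helps, a quick way to eyeball the matches for one pair would be something like the sketch below (showMatchedFeatures is from the Computer Vision Toolbox; Iprev is a hypothetical copy of the previous image, which the script above does not keep around):

% Diagnostic sketch, not part of the script above: visualize the matched
% SURF points between the previous image (Iprev) and the current image (I).
figure;
showMatchedFeatures(Iprev, I, matchedPointsPrev, matchedPoints, 'montage');
title(sprintf('Matched points between image %d and image %d', n-1, n));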

0 answers:

No answers