Incorrect camera calibration

Time: 2019-07-24 09:48:39

Tags: c++ c opencv camera-calibration

I am performing camera calibration with OpenCV using the checkerboard method. This is my input image:

Input checkerboard before calibration

This is the result after calibration:

Calibrated image

I added a red rectangle to visualize where the error lies: the spatial distortion has been corrected, but not very accurately, and the result still contains some additional nonlinear distortion that keeps the edges of the square from being parallel. The dots and crosshairs visible in the image were not used during calibration, and the black-to-white gradient in the background has no influence either; when I remove it, the result is the same.

What could be causing this?

This is how the correction is currently computed (simplified pseudocode without any error checking, and with some irrelevant parts omitted). First, the correction values are computed from the checkerboard:

CvSize board_sz = cvSize( board_w, board_h );
//Allocate storage for the parameters according to total number of corners and number of snapshots
CvMat* image_points      = cvCreateMat(n_boards*board_total,2,CV_32FC1);
CvMat* object_points     = cvCreateMat(n_boards*board_total,3,CV_32FC1);
CvMat* point_counts      = cvCreateMat(n_boards,1,CV_32SC1);
CvMat* intrinsic_matrix  = cvCreateMat(3,3,CV_32FC1);
CvMat* distortion_coeffs = cvCreateMat(4,1,CV_32FC1);

CvPoint2D32f* corners = new CvPoint2D32f[ board_total ];

//Find chessboard corners:
cvFindChessboardCorners(gray_image, board_sz, corners, corner_count,
                        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS | CV_CALIB_CB_NORMALIZE_IMAGE);
if (*corner_count != board_total)
   cvFindChessboardCorners(gray_image, board_sz, corners, corner_count,
                           CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_NORMALIZE_IMAGE);

cvFindCornerSubPix(gray_image,corners, *corner_count, cvSize(11,11), cvSize(-1,-1), cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));

// Initialize the intrinsic matrix with the two focal lengths in a ratio of 1.0
CV_MAT_ELEM(*intrinsic_matrix,float,0,0)=1.0f;   CV_MAT_ELEM(*intrinsic_matrix,float,0,1)=0.0f;   CV_MAT_ELEM(*intrinsic_matrix,float,0,2)=0.0f;
CV_MAT_ELEM(*intrinsic_matrix,float,1,0)=0.0f;   CV_MAT_ELEM(*intrinsic_matrix,float,1,1)=1.0f;   CV_MAT_ELEM(*intrinsic_matrix,float,1,2)=0.0f;
CV_MAT_ELEM(*intrinsic_matrix,float,2,0)=0.0f;   CV_MAT_ELEM(*intrinsic_matrix,float,2,1)=0.0f;   CV_MAT_ELEM(*intrinsic_matrix,float,2,2)=0.0f;

CV_MAT_ELEM(*intrinsic_matrix,float,0,0)=1.0; // fx
CV_MAT_ELEM(*intrinsic_matrix,float,1,1)=(1.0*calib_data->height)/calib_data->width; // fy
cvCalibrateCamera2(object_points, image_points, point_counts, cvGetSize(gray_image),
                     intrinsic_matrix, distortion_coeffs,
                     NULL,NULL,
                     CV_CALIB_FIX_ASPECT_RATIO);

cornerPoints=(struct camera_calib_point*)malloc(*corner_count*sizeof(struct camera_calib_point));
// calculate spatial correction matrix
{
   cv::Point2f src_vertices[4],dst_vertices[4];
   float       minx=1000000.0,miny=1000000.0,maxx=-1000000.0,maxy=-1000000.0;
   double      d;
   cv::Mat     warpMatrix;

   src_vertices[0]=corners[0];
   src_vertices[1]=corners[board_w-1];
   src_vertices[2]=corners[board_w*board_h-board_w];
   src_vertices[3]=corners[board_w*board_h-1];
   for (int i = 0; i < 4; i++)
   {
      if (src_vertices[i].x<minx) minx=src_vertices[i].x;
      if (src_vertices[i].x>maxx) maxx=src_vertices[i].x;
      if (src_vertices[i].y<miny) miny=src_vertices[i].y;
      if (src_vertices[i].y>maxy) maxy=src_vertices[i].y;
   }
   dst_vertices[0].x=minx; dst_vertices[0].y=maxy;
   dst_vertices[1].x=maxx; dst_vertices[1].y=maxy;
   dst_vertices[2].x=minx; dst_vertices[2].y=miny;
   dst_vertices[3].x=maxx; dst_vertices[3].y=miny;
   warpMatrix=getPerspectiveTransform(src_vertices, dst_vertices);
}
// end of calculate spatial correction matrix
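
For reference, getPerspectiveTransform solves a small linear system for the 3×3 homography defined by the four point pairs. Below is a dependency-free sketch of that computation (the helper names homographyFrom4Points and warpPoint are made up for illustration; OpenCV solves the equivalent system internally):

```cpp
#include <array>
#include <cmath>
#include <utility>

struct Pt { double x, y; };

// Solve the 8x8 linear system for the homography H (with h22 fixed to 1)
// from four source/destination point pairs.
std::array<double, 9> homographyFrom4Points(const Pt src[4], const Pt dst[4]) {
    double A[8][9] = {};                 // augmented system A * h = b
    for (int i = 0; i < 8; ++i) {
        const Pt& s = src[i / 2];
        const Pt& d = dst[i / 2];
        if (i % 2 == 0) {                // equation for the x coordinate
            double row[9] = { s.x, s.y, 1, 0, 0, 0, -s.x * d.x, -s.y * d.x, d.x };
            for (int j = 0; j < 9; ++j) A[i][j] = row[j];
        } else {                         // equation for the y coordinate
            double row[9] = { 0, 0, 0, s.x, s.y, 1, -s.x * d.y, -s.y * d.y, d.y };
            for (int j = 0; j < 9; ++j) A[i][j] = row[j];
        }
    }
    // Gauss-Jordan elimination with partial pivoting
    for (int c = 0; c < 8; ++c) {
        int p = c;
        for (int r = c + 1; r < 8; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[p][c])) p = r;
        for (int j = 0; j < 9; ++j) std::swap(A[c][j], A[p][j]);
        for (int r = 0; r < 8; ++r) {
            if (r == c) continue;
            double f = A[r][c] / A[c][c];
            for (int j = c; j < 9; ++j) A[r][j] -= f * A[c][j];
        }
    }
    std::array<double, 9> H;
    for (int i = 0; i < 8; ++i) H[i] = A[i][8] / A[i][i];
    H[8] = 1.0;                          // h22 normalized to 1
    return H;
}

// Apply H to a point, including the homogeneous divide.
Pt warpPoint(const std::array<double, 9>& H, Pt p) {
    double w = H[6] * p.x + H[7] * p.y + H[8];
    return { (H[0] * p.x + H[1] * p.y + H[2]) / w,
             (H[3] * p.x + H[4] * p.y + H[5]) / w };
}
```

Mapping the four source vertices through the resulting matrix reproduces the four destination vertices exactly; interior points are interpolated projectively.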

The correction data is then applied to the image as follows:

/* Edit: missing part added */
cv::Size imageSize(handle->width,handle->height);

cv::initUndistortRectifyMap(handle->intrinsic, handle->distortion,cv::Mat(),
                            cv::getOptimalNewCameraMatrix(handle->intrinsic, handle->distortion,imageSize,1,imageSize,0),
                            imageSize, CV_16SC2,*handle->mapx,*handle->mapy);
/* End of edit: missing part added */


cv::remap(cv::cvarrToMat(sImage),dImage,*handle->mapx,*handle->mapy,cv::INTER_LINEAR);
cv::warpPerspective(dImage, dImage2,*handle->warpMtx,dImage.size(),cv::INTER_LINEAR,cv::BORDER_CONSTANT);

So... does anyone know what might be going wrong here?

0 Answers:

No answers