Comparing openCv PnP and openGv PnP

Date: 2018-03-30 19:17:30

Tags: opencv opencv-solvepnp

I am trying to build a test project to compare the openCv solvePnP implementation with the openGv one.

The opencv function is documented in detail here:

https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp

and openGv here:

https://laurentkneip.github.io/opengv/page_how_to_use.html

Using the opencv sample code, I find a chessboard in the image and construct the matching 3D points. I run the cv pnp and then set up the Gv solver. The cv pnp runs fine and prints the values:

//rotation
 -0.003040771263293328, 0.9797142824436152, -0.2003763421317906;
 0.0623096853748876, 0.2001735322445355, 0.977777101438374]

//translation
[-12.06549797067309;
 -9.533070368412945;
 37.6825295047483]

I tested this by re-projecting the 3D points, and it looks good.
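Such a reprojection check can be sketched without OpenCV, assuming a plain pinhole model with no distortion; the K, R, t values used with it are made-up examples, not the calibration from the question:

```cpp
#include <cassert>
#include <cmath>

// Minimal pinhole reprojection check (no lens distortion).
struct Vec3 { double x, y, z; };

// Project a world point X into pixel coordinates: p ~ K * (R * X + t),
// with K and R passed as row-major 3x3 arrays.
inline void projectPoint(const double K[9], const double R[9], const Vec3& t,
                         const Vec3& X, double& u, double& v) {
    // camera-frame point c = R * X + t
    Vec3 c{ R[0]*X.x + R[1]*X.y + R[2]*X.z + t.x,
            R[3]*X.x + R[4]*X.y + R[5]*X.z + t.y,
            R[6]*X.x + R[7]*X.y + R[8]*X.z + t.z };
    u = K[0] * c.x / c.z + K[2];   // fx * xc/zc + cx
    v = K[4] * c.y / c.z + K[5];   // fy * yc/zc + cy
}
```

Reprojecting each 3D model point with the recovered R and t and comparing against the detected image points gives a per-point pixel error, which is the kind of test described above.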

However, the Gv Pnp prints nan for all values. I have tried to follow the example code, but I must be making a mistake somewhere. The code is:

    #include <iostream>
    #include <vector>

    #include <opencv2/opencv.hpp>

    #include <opengv/types.hpp>
    #include <opengv/absolute_pose/methods.hpp>
    #include <opengv/absolute_pose/CentralAbsoluteAdapter.hpp>

    using namespace opengv;

    int main(int argc, char **argv) {

        cv::Mat matImg = cv::imread("chess.jpg");

        cv::Size boardSize(8, 6);

        // Construct the chessboard model
        double squareSize = 2.80;
        std::vector<cv::Point3f> objectPoints;
        for (int i = 0; i < boardSize.height; i++) {
            for (int j = 0; j < boardSize.width; j++) {
                objectPoints.push_back(
                    cv::Point3f(float(j * squareSize), float(i * squareSize), 0));
            }
        }

        cv::Mat rvec, tvec;
        cv::Mat cameraMatrix, distCoeffs;
        cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
        fs["cameraMatrix"] >> cameraMatrix;
        fs["dist_coeffs"] >> distCoeffs;

        // Find chessboard corners
        std::vector<cv::Point2f> imagePoints;
        bool found = cv::findChessboardCorners(matImg, boardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);

        if (found) {
            cv::drawChessboardCorners(matImg, boardSize, cv::Mat(imagePoints), found);

            // SolvePnP
            cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

            // drawAxis: user helper for visualizing the pose (not shown)
            drawAxis(matImg, cameraMatrix, distCoeffs, rvec, tvec, squareSize);
        }

        // cv rotation vector to rotation matrix
        cv::Mat R;
        cv::Rodrigues(rvec, R);

        std::cout << "results from cv:" << R << tvec << std::endl;

        // START OPEN GV

        // vars
        bearingVectors_t bearingVectors;
        points_t points;
        rotation_t rotation;

        // add points to the gv type
        for (size_t i = 0; i < objectPoints.size(); ++i)
        {
            point_t pnt;
            pnt.x() = objectPoints[i].x;
            pnt.y() = objectPoints[i].y;
            pnt.z() = objectPoints[i].z;

            points.push_back(pnt);
        }

        /*
        K is the common 3x3 camera matrix that you can compose with cx, cy, fx, and fy.
        You put the image point into homogeneous form (append a 1),
        multiply it with the inverse of K from the left, which gives you a normalized
        image point (a spatial direction vector).
        You normalize that to norm 1.
        */

        // to homogeneous
        std::vector<cv::Point3f> imagePointsH;
        cv::convertPointsToHomogeneous(imagePoints, imagePointsH);

        // multiply by K.inv
        for (size_t i = 0; i < imagePointsH.size(); i++)
        {
            cv::Point3f pt = imagePointsH[i];
            cv::Mat ptMat(3, 1, cameraMatrix.type());
            ptMat.at<double>(0, 0) = pt.x;
            ptMat.at<double>(1, 0) = pt.y;
            ptMat.at<double>(2, 0) = pt.z;

            cv::Mat dstMat = cameraMatrix.inv() * ptMat;

            // store as bearing vector
            bearingVector_t bvec;
            bvec.x() = dstMat.at<double>(0, 0);
            bvec.y() = dstMat.at<double>(1, 0);
            bvec.z() = dstMat.at<double>(2, 0);

            bvec.normalize();
            bearingVectors.push_back(bvec);
        }

        // create a central absolute adapter
        absolute_pose::CentralAbsoluteAdapter adapter(
            bearingVectors,
            points,
            rotation);

        size_t iterations = 50;

        std::cout << "running epnp (all correspondences)" << std::endl;
        transformation_t epnp_transformation;
        for (size_t i = 0; i < iterations; i++)
            epnp_transformation = absolute_pose::epnp(adapter);

        std::cout << "results from epnp algorithm:" << std::endl;
        std::cout << epnp_transformation << std::endl << std::endl;

        return 0;
    }
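As a side note, the pixel-to-bearing conversion described in the comment block above has a simple closed form for an undistorted pinhole camera K = [fx 0 cx; 0 fy cy; 0 0 1]: K⁻¹ · [u; v; 1] = [(u − cx)/fx; (v − cy)/fy; 1], followed by normalization to unit length. This makes it easy to sanity-check the cv::Mat pipeline; the intrinsics in the usage below are assumed example values, not those from the question's calibration file:

```cpp
#include <cassert>
#include <cmath>

struct Bearing { double x, y, z; };

// Pixel -> unit bearing vector for an undistorted pinhole camera:
// K^{-1} * [u; v; 1] = [(u-cx)/fx; (v-cy)/fy; 1], then normalize to norm 1.
inline Bearing pixelToBearing(double u, double v,
                              double fx, double fy, double cx, double cy) {
    double x = (u - cx) / fx;      // normalized image coordinates
    double y = (v - cy) / fy;
    double z = 1.0;
    double n = std::sqrt(x*x + y*y + z*z);
    return Bearing{ x/n, y/n, z/n };
}
```

For example, a pixel at the principal point (u = cx, v = cy) must map to the bearing (0, 0, 1); if the cv::Mat version disagrees on such a point, the conversion is broken.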

Am I making a mistake somewhere in setting up the openGv Pnp solver?

1 Answer:

Answer 0 (score: 0)

Several years later, I ran into the same problem and solved it. To convert openCv points to openGV bearing vectors, you can do the following:

bearingVectors_t bearingVectors;
std::vector<cv::Point2f> dd2;   // your 2D image points


const int N1 = static_cast<int>(dd2.size());
cv::Mat points1_mat = cv::Mat(dd2).reshape(1);

// construct homogeneous points [u v 1]
cv::Mat ones_col1 = cv::Mat::ones(N1, 1, CV_32F);
cv::hconcat(points1_mat, ones_col1, points1_mat);

// normalize points: with row vectors [u v 1], multiplying by K^{-1} on the
// right requires the transpose, (K^{-1})^T, to match K^{-1} * [u; v; 1];
// convert K to CV_32F first so the types agree with the point matrix
cv::Mat K32;
cameraMatrix.convertTo(K32, CV_32F);
cv::Mat points1_rect = points1_mat * K32.inv().t();

// compute bearings
points2bearings3(points1_rect, &bearingVectors);

using this function for the final conversion:

// Convert a set of points to bearing
// points Matrix of size Nx3 with the set of points.
// bearings Vector of bearings.
void points2bearings3(const cv::Mat& points,
    opengv::bearingVectors_t* bearings) {
    double l;
    cv::Vec3f p;
    opengv::bearingVector_t bearing;
    for (int i = 0; i < points.rows; ++i) {
        p = cv::Vec3f(points.row(i));
        l = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        for (int j = 0; j < 3; ++j) bearing[j] = p[j] / l;
        bearings->push_back(bearing);
    }
}
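One subtlety in the snippet above: the points are stored as Nx3 row vectors, and [u v 1] · M equals the transpose of Mᵀ · [u; v; 1], so multiplying by K⁻¹ on the right only reproduces K⁻¹ applied to each point if the matrix is transposed first. A small standalone check (fx = fy = 800, cx = 320, cy = 240 are assumed example intrinsics):

```cpp
#include <cassert>
#include <cmath>

// Multiply a 1x3 row vector by a 3x3 matrix: out[j] = sum_i row[i] * M[i][j].
// To apply K^{-1} to a row-vector point you must multiply by (K^{-1})^T
// on the right; multiplying by K^{-1} itself gives a different result.
inline void rowTimesMat(const double row[3], const double M[3][3], double out[3]) {
    for (int j = 0; j < 3; ++j)
        out[j] = row[0]*M[0][j] + row[1]*M[1][j] + row[2]*M[2][j];
}
```

With these intrinsics, the pixel (720, 240) should map to normalized coordinates (0.5, 0, 1); multiplying the row by K⁻¹ directly instead of (K⁻¹)ᵀ yields a wildly different third component, which is one way this kind of bug shows up as nan or garbage poses downstream.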