VTK camera from an OpenCV pose estimated with solvePnP

Date: 2014-03-16 19:30:54

Tags: c++ opencv graphics 3d vtk

I am having a lot of trouble applying the rvec and tvec (the estimated camera pose) from OpenCV's cv::solvePnP to a vtkCamera in a virtual 3D scene. I hope someone can show me the mistake I am making.

I am trying to take a vtkActor (a 3D DICOM rendering of my chest, with fiducial markers placed on my torso):

3D Rendering of my chest with fiduciary markers placed on my torso

and use cv::solvePnP to align the fiducial markers to the red circles shown in the image below (note: the red circles are hard-coded coordinates for the fiducial markers in a picture taken from a particular camera perspective):

The OpenCV Scene

As you can see, after applying the resulting vtkTransform to the vtkCamera, the overlaid volume rendering is mis-aligned. Here is the relevant code:

  cv::Mat op(model_points);

  cv::Mat rvec;
  cv::Mat tvec;

  // op = the 3D coordinates of the markers in the scene
  // ip = the 2D coordinates of the markers in the image
  // camera = the intrinsic camera parameters (for calibration)
  // dists = the camera distortion coefficients
  cv::solvePnP(op, *ip, camera, dists, rvec, tvec, false, CV_ITERATIVE);

  cv::Mat rotM;
  cv::Rodrigues(rvec, rotM);

  // Invert the pose: solvePnP returns the world->camera transform,
  // but placing the camera in the scene needs camera->world.
  rotM = rotM.t();                 // R^-1 == R^T for a rotation

  cv::Mat rtvec = -(rotM * tvec);  // camera center: -R^T * t

  std::cout << "rotM: \n" << rotM << std::endl;
  std::cout << "tvec: \n" << tvec << std::endl;
  std::cout << "rtvec: \n" << rtvec << std::endl;

  // vtkTransform::SetMatrix takes a row-major 4x4 (here camera->world)
  double cam[16] = {
    rotM.at<double>(0), rotM.at<double>(1), rotM.at<double>(2), rtvec.at<double>(0),
    rotM.at<double>(3), rotM.at<double>(4), rotM.at<double>(5), rtvec.at<double>(1),
    rotM.at<double>(6), rotM.at<double>(7), rotM.at<double>(8), rtvec.at<double>(2),
    0, 0, 0, 1
  };

  vtkSmartPointer<vtkTransform> T = vtkSmartPointer<vtkTransform>::New();
  T->SetMatrix(cam);

  vtkSmartPointer<vtkRenderer> renderer = v->renderer();

  double b_p[3];
  double a_p[3];
  double *b_o;
  double b_o_store[3];
  double *a_o;
  double b_f[3];
  double a_f[3];
  vtkSmartPointer<vtkCamera> scene_camera = v->camera();

  // Reset Position/Focal/Orientation before applying transformation
  // so the transform does not compound
  v->ResetCameraPositionOrientation();

  // Apply the transformation
  scene_camera->ApplyTransform(T);
  scene_camera->SetClippingRange(1, 2000);

This is highlighted in the scene capture below (the chest is angled toward the screen; you can see the three upper-most fiducial markers at the very bottom of the actor in the scene):

The mis-aligned scene

The following screenshot shows the rvec and tvec I get, along with the camera's position/orientation/focal point before and after the transformation:

enter image description here

The scene is initialized as follows:

  this->actor_ = vtkVolume::New();
  this->actor_->SetMapper(mapper);
  this->actor_->SetProperty(volumeProperty);
  this->actor_->SetPosition(0,0,0);
  this->actor_->RotateX(90.0);

  this->renderer_ = vtkRenderer::New();
  this->renderer_->AddViewProp(this->actor_);
  this->renderer_->SetBackground(0.3,0.3,0.3);

  this->camera_ = this->renderer_->GetActiveCamera();

  // Center the scene so that we can grab the position/focal-point for later
  // use.
  this->renderer_->ResetCamera();

  // Get the position/focal-point for later use.
  double pos[3];
  double orientation[3];
  this->camera_->GetPosition(pos);
  this->camera_->GetFocalPoint(this->focal_);
  double *_o = this->camera_->GetOrientation();

  this->orientation_[0] = _o[0];
  this->orientation_[1] = _o[1];
  this->orientation_[2] = _o[2];

  this->position_[0] = pos[0];
  this->position_[1] = pos[1];
  this->position_[2] = pos[2];

  // Set the camera in hopes of it "sticking"
  this->camera_->SetPosition(pos);
  this->camera_->SetFocalPoint(this->focal_);
  this->camera_->SetViewUp(0, 1, 0);
  this->camera_->SetFreezeFocalPoint(true);

I apologize for such a long question, but I wanted to provide as much information as possible. I have been working on this problem for days and cannot figure it out!

2 Answers:

Answer 0 (score: 0):

This may come too late, but I am currently doing almost exactly the same thing and we just solved this problem — I am even using VTK and everything! What solvePnP returns (and, assuming I am not mistaken, the similar matrix produced via OpenCV's Rodrigues) is a global transformation matrix: it expresses the rotation and translation in the global frame. Because of how transforms compose, a global transform must be pre-multiplied, whereas a local transform is post-multiplied, so that is what needs to happen in the code. The way we did it was with

((vtkMRMLLinearTransformNode*)(d->CameraVideoTransformComboBox->currentNode()))->SetMatrixTransformFromParent(staticVideoTransform4x4);

where:

d is a reference to the UI of a 3D Slicer module, which can essentially be treated as a big VTK toolbox

CameraVideoTransformComboBox is just a UI combo box that stores transforms

staticVideoTransform4x4 is our transformation matrix.

We then applied the transform through the UI rather than in code, so unfortunately there is some black-boxing involved that keeps me from giving you an exact answer on how you would code it yourself. If you (or, more likely, someone who has read this) run into the same problem, I would suggest looking at vtkMRMLTransformNode::SetMatrixTransformFromParent(). And if that does not quite work, try inverting the matrix!

Answer 1 (score: -1):

You can invert the matrix returned from solvePnP() and use cv::viz::Viz3d::setViewerPose(), as an example.