Sending and receiving arrays in MPI

Date: 2018-06-01 10:20:54

Tags: c++ c arrays mpi dynamic-arrays

I am new to MPI, and I am writing a simple MPI program to compute the product of a matrix and a vector, that is, A * b = c. However, my code does not work. The source code is listed below.

If I replace the declarations of A, b, c, and buffer with

double A[16], b[4], c[4], buffer[8];

and comment out the lines related to allocation and deallocation, my code works and the results are correct. In that case, I suppose the problem has to do with the pointers, but I have no idea how to fix it.

One more thing: in my code, buffer only needs 4 elements, but its size has to be larger than 8, otherwise the program does not work.

#include<mpi.h>
#include<iostream>
#include<stdlib.h>

using namespace std;

int nx = 4, ny = 4, nxny;
int ix, iy;
double *A = nullptr, *b = nullptr, *c = nullptr, *buffer = nullptr;
double ans;

// info MPI
int myGlobalID, root = 0, numProc;
int numSent;
MPI_Status status;

// functions
void get_ixiy(int);

int main(){

  MPI_Init(NULL, NULL);
  MPI_Comm_size(MPI_COMM_WORLD, &numProc);
  MPI_Comm_rank(MPI_COMM_WORLD, &myGlobalID);

  nxny = nx * ny;

  A = new double(nxny);
  b = new double(ny);
  c = new double(nx);
  buffer = new double(ny);

  if(myGlobalID == root){
    // init A, b
    for(int k = 0; k < nxny; ++k){
      get_ixiy(k);
      b[iy] = 1;
      A[k] = k;
    }
    numSent = 0;

    // send b to each worker processor
    MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);

    // send a row of A to each worker processor, tag with row number
    for(ix = 0; ix < min(numProc - 1, nx); ++ix){
      for(iy = 0; iy < ny; ++iy){
        buffer[iy] = A[iy + ix * ny];
      }
      MPI_Send(&buffer, ny, MPI_DOUBLE, ix+1, ix+1, MPI_COMM_WORLD);
      numSent += 1;
    }

    for(ix = 0; ix < nx; ++ix){
      MPI_Recv(&ans, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      int sender = status.MPI_SOURCE;
      int ansType = status.MPI_TAG;
      c[ansType] = ans;

      // send another row to worker process
      if(numSent < nx){
        for(iy = 0; iy < ny; ++iy){
          buffer[iy] = A[iy + numSent * ny];
        }
        MPI_Send(&buffer, ny, MPI_DOUBLE, sender, numSent+1, MPI_COMM_WORLD);
        numSent += 1;
      }
      else
        MPI_Send(MPI_BOTTOM, 0, MPI_DOUBLE, sender, 0, MPI_COMM_WORLD);
    }

    for(ix = 0; ix < nx; ++ix){
      std::cout << c[ix] << " ";
    }
    std::cout << std::endl;

    delete [] A;
    delete [] b;
    delete [] c;
    delete [] buffer;
  }
  else{
    MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);
    if(myGlobalID <= nx){
      while(1){
        MPI_Recv(&buffer, ny, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if(status.MPI_TAG == 0) break;
        int row = status.MPI_TAG - 1;
        ans = 0.0;

        for(iy = 0; iy < ny; ++iy) ans += buffer[iy] * b[iy];

        MPI_Send(&ans, 1, MPI_DOUBLE, root, row, MPI_COMM_WORLD);
      }
    }
  }

  MPI_Finalize();
  return 0;
} // main

void get_ixiy(int k){
  ix = k / ny;
  iy = k % ny;
}

The error message is shown below.

=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 7455 RUNNING AT ***
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES

YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault: 11 (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

1 Answer:

Answer 0 (score: 1)

There are several problems in your code, and you have to fix them first.

First, you are accessing elements of b[] that do not exist in this for loop:

for(int k = 0; k < nxny; ++k){
  get_ixiy(k);
  b[k] = 1;     // WARNING: this is an error
  A[k] = k;
}
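One way to fix it (a minimal sketch, keeping the apparent intent of filling b with ones) is to bound the index for b by ny and fill A separately:

for(int iy = 0; iy < ny; ++iy){
  b[iy] = 1;   // b has exactly ny elements
}
for(int k = 0; k < nxny; ++k){
  A[k] = k;    // A has nx * ny elements, so k may run up to nxny
}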

Second, you delete the allocated memory only in the root process, which causes a memory leak:

if(myGlobalID == root){
  // ...
  delete [] A;
  delete [] b;
  delete [] c;
  delete [] buffer;
}

You must delete the allocated memory in all processes.
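One way to arrange that (a sketch) is to move the cleanup out of the root-only branch, so that every rank frees what it allocated, right before MPI_Finalize():

// every rank allocated A, b, c, and buffer at the top of main(),
// so every rank must free them
delete [] A;
delete [] b;
delete [] c;
delete [] buffer;

MPI_Finalize();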

Third, you have a useless function, void get_ixiy(int), that changes the global variables ix and iy. It is useless because, after calling it, you never use ix or iy before manually changing them again. See here:

for(ix = 0; ix < min(numProc - 1, nx); ++ix){
    for(iy = 0; iy < ny; ++iy){
        // ...
    }
}

Fourth, you are using MPI_Send() and MPI_Recv() in a completely wrong way. You are lucky that you did not get more errors.
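MPI_Send(), MPI_Recv(), and MPI_Bcast() expect a pointer to the first element of the data. Since b and buffer are already pointers (double *), passing &b or &buffer hands MPI the address of the pointer variable itself, so a receive overwrites the pointer and the memory next to it; that is what triggers the segmentation fault. (With the static arrays double A[16], b[4], ..., the expression &buffer happens to yield the same address as buffer, which is why that version appeared to work.) The allocations have a related bug: new double(nxny) creates a single double initialized to the value nxny, whereas new double[nxny] creates an array of nxny doubles. Here is a sketch of the corrected allocations and calls, with the same arguments otherwise as in the question's code:

// allocate arrays, not single values
A = new double[nxny];
b = new double[ny];
c = new double[nx];
buffer = new double[ny];

// pass the data pointer itself, not the address of the pointer
MPI_Bcast(b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);
MPI_Send(buffer, ny, MPI_DOUBLE, ix + 1, ix + 1, MPI_COMM_WORLD);
MPI_Recv(buffer, ny, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

// &ans is fine: ans is a plain double, so &ans is the address of its data
MPI_Send(&ans, 1, MPI_DOUBLE, root, row, MPI_COMM_WORLD);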
