Hybrid OpenMP + MPI: I need an explanation of this example

Asked: 2015-05-27 21:56:57

Tags: c mpi openmp hybrid

I found this example on the Internet, but I can't work out exactly what the master node sends. If it is A[5], for example, what goes to the other worker ranks? Only row 5, all elements up to row 5, or all elements from row 5 onwards?

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <omp.h>

#define TAG 13

int main(int argc, char *argv[]) {
  //double **A, **B, **C, *tmp;
  double **A, **B, **C, *tmp;
  double startTime, endTime;
  int numElements, offset, stripSize, myrank, numnodes, N, i, j, k;
  int numThreads, chunkSize = 10;

  MPI_Init(&argc, &argv);

  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  MPI_Comm_size(MPI_COMM_WORLD, &numnodes);

  N = atoi(argv[1]);
  numThreads = atoi(argv[2]);  // difference from MPI: how many threads/rank?

  omp_set_num_threads(numThreads);  // OpenMP call to set threads per rank

  if (myrank == 0) {
    tmp = (double *) malloc (sizeof(double ) * N * N);
    A = (double **) malloc (sizeof(double *) * N);
    for (i = 0; i < N; i++)
      A[i] = &tmp[i * N];
  }
  else {
    tmp = (double *) malloc (sizeof(double ) * N * N / numnodes);
    A = (double **) malloc (sizeof(double *) * N / numnodes);
    for (i = 0; i < N / numnodes; i++)
      A[i] = &tmp[i * N];
  }


  tmp = (double *) malloc (sizeof(double ) * N * N);
  B = (double **) malloc (sizeof(double *) * N);
  for (i = 0; i < N; i++)
    B[i] = &tmp[i * N];


  if (myrank == 0) {
    tmp = (double *) malloc (sizeof(double ) * N * N);
    C = (double **) malloc (sizeof(double *) * N);
    for (i = 0; i < N; i++)
      C[i] = &tmp[i * N];
  }
  else {
    tmp = (double *) malloc (sizeof(double ) * N * N / numnodes);
    C = (double **) malloc (sizeof(double *) * N / numnodes);
    for (i = 0; i < N / numnodes; i++)
      C[i] = &tmp[i * N];
  }

  if (myrank == 0) {
    // initialize A and B
    for (i=0; i<N; i++) {
      for (j=0; j<N; j++) {
        A[i][j] = 1.0;
        B[i][j] = 1.0;
      }
    }
  }

  // start timer
  if (myrank == 0) {
    startTime = MPI_Wtime();
  }

  stripSize = N/numnodes;

  // send each node its piece of A -- note could be done via MPI_Scatter
  if (myrank == 0) {
    offset = stripSize;
    numElements = stripSize * N;
    for (i=1; i<numnodes; i++) {

This is the send I cannot understand:

MPI_Send(A[offset], numElements, MPI_DOUBLE, i, TAG, MPI_COMM_WORLD);
      offset += stripSize;
    }
  }
  else {  // receive my part of A

And also this receive:

MPI_Recv(A[0], stripSize * N, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }

Likewise, what exactly does the broadcast send from B?

// everyone gets B
  MPI_Bcast(B[0], N*N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  // Let each process initialize C to zero 
  for (i=0; i<stripSize; i++) {
    for (j=0; j<N; j++) {
      C[i][j] = 0.0;
    }
  }

  // do the work---this is the primary difference from the pure MPI program
  #pragma omp parallel for shared(A,B,C,numThreads) private(i,j,k) schedule (static, chunkSize)
  for (i=0; i<stripSize; i++) {
    for (j=0; j<N; j++) {
      for (k=0; k<N; k++) {
        C[i][j] += A[i][k] * B[k][j];
      }
    }
  }

  // master receives from workers  -- note could be done via MPI_Gather
  if (myrank == 0) {
    offset = stripSize; 
    numElements = stripSize * N;
    for (i=1; i<numnodes; i++) {
      MPI_Recv(C[offset], numElements, MPI_DOUBLE, i, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      offset += stripSize;
    }
  }
  else { // send my contribution to C
    MPI_Send(C[0], stripSize * N, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD);
  }



  MPI_Finalize();
  return 0;
}

1 Answer:

Answer 0 (score: 1)

A, B, and C are dynamic 2D arrays built with the pointer-to-pointer approach. They are indexed as A[row][col]. When you omit the last index, you get the address of the first element (column 0) of that row. This is useful because it lets you pass a single row using just that address and the "width" of the matrix. This is how the 2D arrays are stored in memory:

[image: memory layout of the row-pointer 2D array, with the rows stored one after another in a single contiguous block]
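
As a minimal standalone sketch (nrows and ncols are hypothetical sizes), this is the same allocation pattern the example uses for A, B and C; it shows that A[i] is simply the address of row i inside one contiguous block:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
  int nrows = 4, ncols = 3;                       /* hypothetical dimensions */
  double *block = malloc(sizeof(double) * nrows * ncols);  /* one contiguous block */
  double **A = malloc(sizeof(double *) * nrows);  /* array of row pointers */
  int i;
  for (i = 0; i < nrows; i++)
    A[i] = &block[i * ncols];                     /* A[i] points at the start of row i */

  /* A[i] == &A[i][0], and because all rows live in one block,
     row i+1 begins exactly ncols doubles after row i. */
  printf("%d %d\n", A[2] == &A[2][0], A[1] + ncols == A[2]);

  free(A);
  free(block);
  return 0;
}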

To send row 5 of matrix A, which has num_cols columns:

MPI_Send(A[5], num_cols, MPI_DOUBLE, ...);

Equivalently, you could write &A[5][0] to get the same address, but that is just more cluttered.
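
For completeness, a sketch of the matching receive on the destination rank (rowbuf is a hypothetical buffer name; this assumes the row was sent from rank 0 with the TAG used in the question):

double *rowbuf = malloc(sizeof(double) * num_cols);  /* room for one row */
MPI_Recv(rowbuf, num_cols, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);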

Also, if you want to send the complete 2D matrix, that is easy here because every row is stored contiguously in one block of memory. Just pass the first row B[0] (which also points to the first column) with a linearized length of N*N (assuming a square matrix). The same reasoning answers your question about the strip sends: MPI_Send(A[offset], numElements, MPI_DOUBLE, ...) with numElements = stripSize * N transmits the stripSize consecutive rows that start at row offset, not just a single row.

To broadcast the complete N*N square matrix B, as the example does:

MPI_Bcast(B[0], N*N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
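
The comments in the question's code already note that the hand-written loops "could be done via MPI_Scatter" and "MPI_Gather". Here is a sketch of that alternative, reusing the variables from the program above and assuming N is divisible by numnodes (which the original code assumes anyway):

  /* Scatter strips of A from rank 0 (replaces the MPI_Send/MPI_Recv loop).
     Rank 0 already holds the full matrix, so it receives "in place";
     every other rank receives its stripSize rows into A[0]. */
  if (myrank == 0)
    MPI_Scatter(A[0], stripSize * N, MPI_DOUBLE,
                MPI_IN_PLACE, stripSize * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
  else
    MPI_Scatter(NULL, stripSize * N, MPI_DOUBLE,
                A[0], stripSize * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  /* Gather the computed strips of C back onto rank 0 (replaces the second loop). */
  if (myrank == 0)
    MPI_Gather(MPI_IN_PLACE, stripSize * N, MPI_DOUBLE,
               C[0], stripSize * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
  else
    MPI_Gather(C[0], stripSize * N, MPI_DOUBLE,
               NULL, stripSize * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

Either way, the data that moves per rank is the same: stripSize * N contiguous doubles.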