Getting values into spawned processes

Asked: 2017-01-31 11:28:24

Tags: c, mpi

I am trying to use collective MPI functions to get values into my spawned processes.

In this case I have an N×N matrix, and I want to send one row to each spawned process, read the values in each process, and add them up.

I based my code on this example:

MPI_Scatter of 2D array and malloc

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 3  /* matrix is N x N */

/* createMatrix and printArray are helper functions defined elsewhere */

int main(int argc, char *argv[]){
  int *matriz;

  // MPI section
  int rank, size;
  MPI_Comm hijos;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  matriz = createMatrix(N, N);
  printArray(matriz, N * N);

  // Spawn N child processes running the "slave" executable
  MPI_Comm_spawn("slave", MPI_ARGV_NULL, N, MPI_INFO_NULL, 0,
                 MPI_COMM_SELF, &hijos, MPI_ERRCODES_IGNORE);

  // received row will contain N integers
  int *procRow = malloc(sizeof(int) * N);

  MPI_Scatter(matriz, N, MPI_INT,   // send one row, which contains N integers
              procRow, N, MPI_INT,  // receive one row, which contains N integers
              MPI_ROOT, hijos);

  MPI_Finalize();
  return 0;
}

And the slave:

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &pid);
   MPI_Comm_size(MPI_COMM_WORLD, &size);

   MPI_Comm_get_parent(&parent);

   if (parent != MPI_COMM_NULL) {
        printf("This is a child process\n");
   }

   // number of processes in the remote group of comm (integer)
   MPI_Comm_remote_size(parent, &size);

   int *procRow = malloc(sizeof(int) * N);

   // UNABLE TO GET VALUES FROM THE PARENT
   // I need to sum all the values in every portion of the matrix
   // passed to every child process
   MPI_Reduce(procRow, &resultado_global, N, MPI_INT, MPI_SUM, 0, parent);

UPDATE

(figure: a 3×3 matrix scattered row by row to three spawned child processes)

With MPI_Comm_spawn I create 3 children. In each child I want to get one row of the matrix (I use scatter in the master). Later I use MPI_Reduce to sum each row in the children (that is what I mean by getting the values).

UPDATE 2

On the slave I modified the code, and now I receive the row in every process.

if (parent != MPI_COMM_NULL) {

    // number of processes in the remote group of comm (integer)
    MPI_Comm_remote_size(parent, &size_remote);

    int *matrix = malloc(sizeof(int) * size);
    int *procRow = malloc(sizeof(int) * size);

    MPI_Scatter(matrix, N, MPI_INT, procRow, N, MPI_INT, 0, parent);

    // procRow now correctly holds one row of the matrix
    if (procRow != NULL) {
        printf("Process %d; %d %d %d \n", pid, procRow[0], procRow[1], procRow[2]);
    }

    // Unable to sum each row
    MPI_Reduce(procRow, &resultado_global, size, MPI_INT, MPI_SUM, ROOT, parent);
    //MPI_Reduce(procRow, &resultado_global, size, MPI_INT, MPI_SUM, ROOT, MPI_COMM_WORLD);
}

UPDATE 3 (SOLVED)

IN SLAVE

if (parent != MPI_COMM_NULL) {

    // number of processes in the remote group of comm (integer)
    MPI_Comm_remote_size(parent, &size_remote);

    int *matrix = malloc(sizeof(int) * size);
    int *procRow = malloc(sizeof(int) * size);

    MPI_Scatter(matrix, N, MPI_INT, procRow, N, MPI_INT, 0, parent);

    if (procRow != NULL) {
        printf("Process %d; %d %d %d \n", pid, procRow[0], procRow[1], procRow[2]);
        sumaParcial = 0;
        for (int i = 0; i < N; i++)
            sumaParcial = sumaParcial + procRow[i];  // local sum of this row
    }

    MPI_Reduce(&sumaParcial, &resultado_global, 1, MPI_INT, MPI_SUM, ROOT, parent);
}

IN MASTER

  // received row will contain N integers
  int *procRow = malloc(sizeof(int) * N); 

  MPI_Scatter(matriz, N, MPI_INT, // send one row, which contains N integers
              procRow, N, MPI_INT, // receive one row, which contains N integers
              MPI_ROOT, hijos);


  MPI_Reduce(&sumaParcial, &resultado_global, 1, MPI_INT, MPI_SUM, MPI_ROOT, hijos);

  printf("\n GLOBAL RESULT: %d\n", resultado_global);

Any ideas? Thanks.

1 answer:

Answer 0 (score: 2)

Judging from your edits, the scatter is working correctly.

Your main confusion seems to be about MPI_Reduce. It does not do any local reduction. According to your graphic, you want the values 6, 15, 24 in the slave ranks 0, 1, 2. That is computed entirely without MPI, simply by iterating over the local row.

An MPI_Reduce over the rows yields [12, 15, 18] at the root. If you instead want the total sum 45 at the root, you should first sum up the values locally, and then MPI_Reduce the single value from each rank into a single global value.
