MPI segmentation fault in MPI_Isend()

Date: 2012-06-18 20:19:28

标签: c debugging segmentation-fault mpi

I am new to MPI programming! I tried to measure the point-to-point communication bandwidth between two processors, but now I am getting a segmentation fault! I don't understand why this happens. I also tried valgrind on Ubuntu, but it didn't tell me anything. So maybe someone can help me :D

  

Thanks for the quick response, but it didn't change the problem :( I have just updated the error!

Here is the source code:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]){

 int myrank, size;
 MPI_Init(&argc, &argv);
 MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
 MPI_Comm_size(MPI_COMM_WORLD, &size);

 int *arraySend = (int *)malloc(25000*sizeof(int));
 int *arrayRecv = (int *)malloc(25000*sizeof(int));
 double startTime = 0.0, endTime = 0.0;
 MPI_Status status,statusSend, statusRecv;
 MPI_Request requestSend, requestRecv;

 if(size != 2){
   if(myrank == 0){
       printf("only two processors!\n");
       MPI_Finalize();  
       return 0;
    }
 }

 if(myrank == 0){
     startTime = MPI_Wtime();
     MPI_Send(&arraySend, 25000, MPI_INT, 1, 0,MPI_COMM_WORLD);
 }else{
     MPI_Recv(&arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
 } 

 if(myrank == 0){
   endTime = MPI_Wtime();
   printf("100k Bytes blocking: %f Mb/s\n", 0.1/(endTime-startTime));
   startTime = MPI_Wtime();
   MPI_Isend(&arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD, &requestSend);
   MPI_Wait(&requestSend, &statusSend);
  }else{
   MPI_Irecv(&arrayRecv,25000,MPI_INT,0,0,MPI_COMM_WORLD, &requestRecv);
   MPI_Wait(&requestRecv, &statusRecv);
  }

 if(myrank == 0){
    endTime = MPI_Wtime();
    printf("100k Bytes non-blocking: %f Mb/s\n", 0.1/(endTime-startTime));
 }
 free(arraySend);
 free(arrayRecv);
 MPI_Finalize();
 return 0;
}

Here is the updated error:

$ mpirun -np 2 nr2
[P90:05046] *** Process received signal ***
[P90:05046] Signal: Segmentation fault (11)
[P90:05046] Signal code: Address not mapped (1)
[P90:05046] Failing at address: 0x7fff54fd8000
[P90:05046] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10060) [0x7f8474777060]
[P90:05046] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x131b99) [0x7f84744f7b99]
[P90:05046] [ 2] /usr/lib/libmpi.so.0(ompi_convertor_pack+0x14d) [0x7f84749c75dd]
[P90:05046] [ 3] /usr/lib/openmpi/lib/openmpi/mca_btl_sm.so(+0x1de8) [0x7f846fe14de8]
[P90:05046] [ 4] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0xd97e) [0x7f8470c6c97e]
[P90:05046] [ 5] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x8900) [0x7f8470c67900]
[P90:05046] [ 6] /usr/lib/openmpi/lib/openmpi/mca_btl_sm.so(+0x4188) [0x7f846fe17188]
[P90:05046] [ 7] /usr/lib/libopen-pal.so.0(opal_progress+0x5b) [0x7f8473f330db]
[P90:05046] [ 8] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x6fd5) [0x7f8470c65fd5]
[P90:05046] [ 9] /usr/lib/libmpi.so.0(PMPI_Send+0x195) [0x7f84749e1805]
[P90:05046] [10] nr2(main+0xe1) [0x400c55]
[P90:05046] [11] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f84743e730d]
[P90:05046] [12] nr2() [0x400ab9]
[P90:05046] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 5046 on node P90 exited on signal 11 
(Segmentation fault).

2 Answers:

Answer 0 (score: 5)

The size of the array you pass is wrong.

The count argument should simply be 25000 rather than sizeof(arraySend), because MPI deduces the element size from the datatype you specify (here MPI_INT). Only if you had a bit array would you normally need sizeof(...) in your code.

Try allocating the memory on the stack instead of the heap, e.g. instead of:

 int *arraySend = (int *)malloc(25000*sizeof(int));

use

 int arraySend[25000];

and then use arraySend instead of &arraySend in your MPI calls.
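
If you keep the heap allocation, the same fix applies: drop the extra & so the pointer to the data itself is passed. A minimal sketch of the corrected calls (note that output arguments such as &status and &requestSend do keep their &):

 /* pass the data pointer, not the address of the pointer variable */
 MPI_Send(arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD);
 MPI_Recv(arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

 /* the non-blocking variants change in the same way */
 MPI_Isend(arraySend, 25000, MPI_INT, 1, 0, MPI_COMM_WORLD, &requestSend);
 MPI_Irecv(arrayRecv, 25000, MPI_INT, 0, 0, MPI_COMM_WORLD, &requestRecv);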

If you can use C++, you could also use the nice Boost.MPI headers, where the size is computed automatically from the data you pass.

Answer 1 (score: 0)

If you are using a decent MPI implementation, you can use mpirun -gdb; more documentation here.
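
For example, a hedged sketch of how that can look; -gdb is an MPICH-style mpiexec option and may not exist in your launcher, while with Open MPI a common alternative is to start one gdb session per rank:

 # MPICH-style launcher (assuming your mpiexec supports -gdb)
 mpiexec -gdb -n 2 ./nr2

 # Open MPI style: one gdb in its own xterm per rank
 mpirun -np 2 xterm -e gdb ./nr2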
