Using persistent communication in Fortran MPI

Asked: 2017-05-30 17:05:08

Tags: fortran mpi

I am using persistent communication in my CFD code. The communication setup is done in a separate subroutine, and in the main program, where I have the do loop, I call MPI_STARTALL() and MPI_WAITALL(). To keep this short I am only showing the first part of the setup; the remaining arrays are set up in exactly the same way.
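
For reference, the lifecycle I am trying to follow is: create the requests once, then re-activate and complete the same requests every iteration (a minimal sketch of the pattern with placeholder names sendbuf, recvbuf, n, dest, src, statuses and nsteps, not my actual code):

call MPI_SEND_INIT(sendbuf,n,MPI_DOUBLE_PRECISION,dest,tag,MPI_COMM_WORLD,req(1),ierr)
call MPI_RECV_INIT(recvbuf,n,MPI_DOUBLE_PRECISION,src,tag,MPI_COMM_WORLD,req(2),ierr)
do step = 1, nsteps
  call MPI_STARTALL(2,req,ierr)          ! re-activate both persistent requests
  call MPI_WAITALL(2,req,statuses,ierr)  ! complete this iteration's exchange
end do
call MPI_REQUEST_FREE(req(1),ierr)       ! release the requests when done
call MPI_REQUEST_FREE(req(2),ierr)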

My setup subroutine looks like this:

Subroutine MPI_Subroutine
use Variables
use mpi
implicit none

!Starting up MPI
call MPI_INIT(ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,npes,ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,MyRank,ierr)

!Compute the size of local block (1D Decomposition)
Jmax = JmaxGlobal
Imax = ImaxGlobal/npes
if (MyRank.lt.(ImaxGlobal - npes*Imax)) then
  Imax = Imax + 1
end if
if (MyRank.ne.0.and.MyRank.ne.(npes-1)) then
  Imax = Imax + 2
else
  Imax = Imax + 1
end if
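
! Worked example of what this decomposition gives: with ImaxGlobal = 10 and
! npes = 3, integer division gives Imax = 3; rank 0 (the only rank below
! 10 - 3*3 = 1) takes the remainder, so the interior sizes are 4,3,3.
! Interior ranks then add two ghost layers and boundary ranks one,
! giving local Imax of 5, 5 and 4.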

! Computing neighbors
if (MyRank.eq.0) then
  Left = MPI_PROC_NULL
else
  Left = MyRank - 1
end if

if (MyRank.eq.(npes -1)) then
  Right = MPI_PROC_NULL
else
  Right = MyRank + 1
end if
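
! (With MPI_PROC_NULL as a neighbor, any send or receive posted to it
! completes immediately and transfers no data, so the physical boundary
! ranks need no special-casing in the exchange itself.)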


! Initializing the Arrays in each processor, according to the number of local nodes
Call InitializeArrays

!Creating the channel of communication for this computation,
!Sending and receiving the u_old (Ghost cells)
Call MPI_SEND_INIT(u_old(2,:),Jmax,MPI_DOUBLE_PRECISION,Left,tag,MPI_COMM_WORLD,req(1),ierr)
Call MPI_RECV_INIT(u_old(Imax,:),Jmax,MPI_DOUBLE_PRECISION,Right,tag,MPI_COMM_WORLD,req(2),ierr)
Call MPI_SEND_INIT(u_old(Imax-1,:),Jmax,MPI_DOUBLE_PRECISION,Right,tag,MPI_COMM_WORLD,req(3),ierr)
Call MPI_RECV_INIT(u_old(1,:),Jmax,MPI_DOUBLE_PRECISION,Left,tag,MPI_COMM_WORLD,req(4),ierr)
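
One thing I am not sure about here: u_old(2,:) is a strided slice of a column-major array, so what reaches MPI_SEND_INIT may be a compiler-generated temporary rather than u_old itself. A sketch of the alternative I am considering, describing one row with a derived datatype so the persistent requests refer to the original storage (assuming u_old is declared as u_old(Imax,Jmax); RowType is a made-up name, and only two of the four requests are shown):

integer :: RowType

! One row of u_old(Imax,Jmax): Jmax elements spaced Imax apart in memory
call MPI_TYPE_VECTOR(Jmax,1,Imax,MPI_DOUBLE_PRECISION,RowType,ierr)
call MPI_TYPE_COMMIT(RowType,ierr)

! Pass the first element of each row instead of an array section
Call MPI_SEND_INIT(u_old(2,1),1,RowType,Left,tag,MPI_COMM_WORLD,req(1),ierr)
Call MPI_RECV_INIT(u_old(Imax,1),1,RowType,Right,tag,MPI_COMM_WORLD,req(2),ierr)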

Since I am debugging the code, I am simply inspecting these arrays. When I check the ghost cells they are full of zeros, so I guess I am getting these calls wrong somewhere.
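
The check itself is just a print of the ghost rows after the exchange, roughly like this (a sketch of the kind of check, not my literal code):

call MPI_WAITALL(4,req,status,ierr)
write(*,*) 'rank', MyRank, 'left ghost: ', u_old(1,1:Jmax)
write(*,*) 'rank', MyRank, 'right ghost:', u_old(Imax,1:Jmax)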

The main code, where I call MPI_STARTALL and MPI_WAITALL, looks like this:

Program Main
use Variables
use mpi
implicit none
open(32, file = 'error.dat')

Call MPI_Subroutine


do kk = 1, 2001
! A lot of calculation

! communicating the maximum error among the processes and delta t
call MPI_REDUCE(eps,epsGlobal,1,MPI_DOUBLE_PRECISION,MPI_MAX,0,MPI_COMM_WORLD,ierr)
call MPI_BCAST(epsGlobal,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
call MPI_REDUCE(delta_t,delta_tGlobal,1,MPI_DOUBLE_PRECISION,MPI_MIN,0,MPI_COMM_WORLD,ierr)


if(MyRank.eq.0) delta_t = delta_tGlobal

call MPI_BCAST(delta_t,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)


if(MyRank.eq.0) then
  write(*,*) kk,epsGlobal,(kk*delta_t)
  write(32,*) kk,epsGlobal
endif

Call Swap
Call MPI_STARTALL(4,req,ierr)
Call MPI_WAITALL(4,req,status,ierr) 
enddo
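
(As an aside, I know each reduce/broadcast pair above could equally be a single MPI_ALLREDUCE; a minimal equivalent sketch:

call MPI_ALLREDUCE(eps,epsGlobal,1,MPI_DOUBLE_PRECISION,MPI_MAX,MPI_COMM_WORLD,ierr)
call MPI_ALLREDUCE(delta_t,delta_tGlobal,1,MPI_DOUBLE_PRECISION,MPI_MIN,MPI_COMM_WORLD,ierr)
delta_t = delta_tGlobal

but that is not the part I am worried about.)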

The variables are set in another module. The MPI-related ones are:

! MPI variables
INTEGER :: npes, MyRank, ierr, Left, Right, tag
INTEGER :: status(MPI_STATUS_SIZE,4)
INTEGER, dimension(4) :: req
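
(For MPI_STATUS_SIZE to be visible here, this module also pulls in use mpi, or an equivalent include 'mpif.h'.)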

Thanks for your time and any suggestions on this problem.
