MPI_Scatterv only scatters part of a custom MPI_Datatype

Time: 2014-03-23 03:37:44

Tags: c++ c mpi

This question may be related to this one.

I have the following struct:

#include <stdexcept>  // for std::domain_error

struct Particle {

    double x;
    double y;
    double vx;
    double vy;
    double ax;
    double ay;
    int i;
    int j;

    Particle():
        x(-1.0),
        y(-1.0),
        vx(0.0),
        vy(0.0),
        ax(0.0),
        ay(0.0),
        i(-1),
        j(-1) { }

    Particle& operator=(const Particle& right) {

        if (&right == this)
            throw std::domain_error("Particle self-assignment!");

        x = right.x;
        y = right.y;
        vx = right.vx;
        vy = right.vy;
        ax = right.ax;
        ay = right.ay;
        i = right.i;
        j = right.j;

        return *this;
    }
};

I build an MPI_Datatype on every processor as follows:

//
// Build MPI_Datatype PARTICLE
//
MPI_Datatype PARTICLE;
Particle p;                 // needed for displacement computation
int block_len[8];           // the number of elements in each "block" will be 1 for us
MPI_Aint displacements[8];  // displacement of each element from start of new type
MPI_Datatype typelist[8];   // MPI types of the elements
MPI_Aint start_address;     // used in calculating the displacements
MPI_Aint address;

//
// Set up
//
for(int i = 0; i < 8; ++i) {
    block_len[i] = 1;
}

typelist[0] = MPI_FLOAT;
typelist[1] = MPI_FLOAT;
typelist[2] = MPI_FLOAT;
typelist[3] = MPI_FLOAT;
typelist[4] = MPI_FLOAT;
typelist[5] = MPI_FLOAT;
typelist[6] = MPI_INT;
typelist[7] = MPI_INT;

MPI_Address(&p.x, &start_address);          // getting starting address
displacements[0] = 0;                       // first element is at displacement 0

MPI_Address(&p.y, &address);
displacements[1] = address - start_address;

MPI_Address(&p.vx, &address);
displacements[2] = address - start_address;

MPI_Address(&p.vy, &address);
displacements[3] = address - start_address;

MPI_Address(&p.ax, &address);
displacements[4] = address - start_address;

MPI_Address(&p.ay, &address);
displacements[5] = address - start_address;

MPI_Address(&p.i, &address);
displacements[6] = address - start_address;

MPI_Address(&p.j, &address);
displacements[7] = address - start_address;

//
// Building new MPI type
//
MPI_Type_struct(8, block_len, displacements, typelist, &PARTICLE);
MPI_Type_commit(&PARTICLE);
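
One way to sanity-check a hand-built datatype (a debugging sketch, not part of the original post) is to compare the committed type's size against the compiler's view of the struct. For a struct with internal padding the two can legitimately differ, so treat this as a heuristic:

#include <iostream>

// MPI_Type_size reports how many bytes of actual data the type describes;
// for a padding-free struct like Particle this should equal sizeof(Particle)
// when every member is listed with its matching MPI type.
int type_size;
MPI_Type_size(PARTICLE, &type_size);
if (type_size != static_cast<int>(sizeof(Particle))) {
    std::cerr << "PARTICLE describes " << type_size << " bytes, but "
              << "sizeof(Particle) is " << sizeof(Particle) << std::endl;
}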

I then want to scatter it like this:

MPI_Scatterv(particles.data(), partition_sizes.data(), partition_offsets.data(),
             PARTICLE, local_particles.data(), n_local, PARTICLE,
             0, MPI_COMM_WORLD);

The arguments to MPI_Scatterv are as follows:

int n_local;                                // number of particles on each processor
std::vector<Particle> particles;            // allocated on all processors, but filled
                                            // with particles only on processor 0 and
                                            // then scattered to the other processors
std::vector<int> partition_sizes(n_proc);   // send count per processor
std::vector<int> partition_offsets(n_proc); // displacement per processor
std::vector<Particle> local_particles(n);   // receive buffer
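
For reference, here is a minimal sketch (my assumption of an even split, not code from the original post) of how the counts and displacements might be filled; n is the total particle count and rank is this process's rank. Note that the counts and offsets are in units of PARTICLE elements, not bytes:

#include <algorithm>  // std::min

// Hypothetical setup: divide n particles as evenly as possible over n_proc
// ranks. partition_sizes[r] is the number of Particles sent to rank r;
// partition_offsets[r] is the index of rank r's first Particle in particles.
for (int r = 0; r < n_proc; ++r) {
    partition_sizes[r]   = n / n_proc + (r < n % n_proc ? 1 : 0);
    partition_offsets[r] = r * (n / n_proc) + std::min(r, n % n_proc);
}
n_local = partition_sizes[rank];  // receive count used by this rank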

Interestingly, the int members of struct Particle (i, j) are scattered correctly, so every local_particles[k] has the right i and j values. However, all of the double members (x, y, vx, vy, ax, ay) arrive with their default-constructor values.

Has anyone else run into this? Any ideas? Can anyone point me to detailed Scatterv documentation where custom MPI_Datatypes are scattered?

Thanks a lot!

1 Answer:

Answer 0 (score: 0)

As Jonathan pointed out, I was using MPI_FLOAT instead of MPI_DOUBLE. After changing the typelist elements from MPI_FLOAT to MPI_DOUBLE, the problem was solved.
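
For clarity, the fix only touches the six floating-point entries of the typelist:

// Corrected typelist: the members x..ay are doubles, so they must be
// described as MPI_DOUBLE. With MPI_FLOAT, only part of each 8-byte
// double was transferred, while the correctly described ints (i, j)
// came through fine.
typelist[0] = MPI_DOUBLE;  // x
typelist[1] = MPI_DOUBLE;  // y
typelist[2] = MPI_DOUBLE;  // vx
typelist[3] = MPI_DOUBLE;  // vy
typelist[4] = MPI_DOUBLE;  // ax
typelist[5] = MPI_DOUBLE;  // ay
typelist[6] = MPI_INT;     // i (unchanged)
typelist[7] = MPI_INT;     // j (unchanged)

As an aside, MPI_Address and MPI_Type_struct, used in the question, were deprecated in MPI-2 and removed from MPI-3; MPI_Get_address and MPI_Type_create_struct are the drop-in replacements.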
