MPI_Scatter & MPI_Bcast in one application: how do I get each node to print its partition?

Asked: 2015-03-22 04:57:34

Tags: c++ mpi

I am new to MPI, and this application involves implementing MPI_Bcast and MPI_Scatter. The requirement is that the root should first broadcast the size of the partitions to the nodes using MPI_Bcast, and then scatter the portions of the array to the nodes. My root works fine, but the nodes do not receive the values of the array, so the computed averages are skewed. Below is the code I have so far.

/** includes **/
#include <iostream>
#include <mpi.h>

// function that will implement the coordinator job of this application
void coordinator(int world_size) {

    std::cout << " coordinator rank [0] starting " << std::endl;

    // generate 40 random integers and store them in an array

    int values[40];
    for (unsigned int i = 0; i < 40; i++){
        values[i] = rand() % 10;
        std::cout << values[i] << ", ";
        if (i % 10 == 9) std::cout << std::endl;
    }

    // determine the size of each partition by dividing 40 by the world size
    // it is imperative that the world_size divides this evenly

    int partition_size = 40 / world_size;
    std::cout << " coordinator rank [0] partition size is " << partition_size  << "\n" << std::endl;

    // broadcast the partition size to each node so they can setup up memory as appropriate

    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << " coordinator rank [0] broadcasted partition size\n" << std::endl;

    // generate an average for our partition

    int total = 0;
    for (unsigned int i = 0; i < (40 / world_size); i++)
        total += values[i];
    float average = (float)total / (40 / world_size);
    std::cout << " coordinator rank [0] average is " << average << "\n" << std::endl;

    // call a reduce operation to get the total average and then divide that by the world size

    float total_average = 0;
    MPI_Reduce(&average, &total_average, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    std::cout << " total average is " << total_average / world_size << std::endl;
}
// function that will implement the participant job of this application

void participant(int world_rank, int world_size) {

    std::cout << " participant rank [" << world_rank << "] starting" << std::endl;

    // get the partition size from the root and allocate memory as necessary

    int partition_size = 0;
    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << " participant rank [" << world_rank << "] recieved partition size of " <<
        partition_size << std::endl;

    // allocate the memory for our partition

    int *partition = new int[partition_size];

    // generate an average for our partition

    int total = 0;
    for (unsigned int i = 0; i < partition_size; i++)
        total += partition[i];
    float average = (float)total / partition_size;
    std::cout << " participant rank [" << world_rank << "] average is " << average << std::endl;

    // call a reduce operation to get the total average and then divide that by the world size

    float total_average = 0;
    MPI_Reduce(&average, &total_average, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    // as we are finished with the memory we should free it

    delete[] partition;
}

int main(int argc, char** argv) {

    // initialise the MPI library

    MPI_Init(NULL, NULL);


    // determine the world size
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // determine our rank in the world

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // print out the rank and size

    std::cout << " rank [" << world_rank << "] size [" << world_size << "]" << std::endl;

    // if we have a rank of zero then we are the coordinator. if not we are a participant
    // in the task

    if (world_rank == 0){
        coordinator(world_size);
    } 
    else{
        participant(world_rank, world_size);
    }

    int *values = new int[40];
    int *partition_size = new int[40 / world_size];

    // run the scatter operation and then display the contents of all 4 nodes

    MPI_Scatter(values, 40 / world_size, MPI_INT, partition_size, 40 / world_size,
        MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << "rank " << world_rank << " partition: ";
    for (unsigned int i = 0; i < 40 / world_size; i++)
        std::cout << partition_size[i] << ", ";
    std::cout << std::endl;

    // finalise the MPI library
    MPI_Finalize();

}

Here is what happens when I run the code.

I need to get this:

1,7,4,0,9,4,8,8,2,4,

5,5,1,7,1,1,5,2,7,6,

1,4,2,3,2,2,1,6,8,5,

7,6,1,8,9,2,7,9,5,4,

but I am getting this:

rank 0 partition: -842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,

rank 3 partition: -842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,

rank 2 partition: -842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,

rank 1 partition: -842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,-842150451,

1 Answer

Answer (score: 1):

You are scattering an array of uninitialized data:

int *values = new int[40];
int *partition_size = new int[40 / world_size];

// values is never initialised

MPI_Scatter(values, 40 / world_size, MPI_INT, partition_size, 40 / world_size,
    MPI_INT, 0, MPI_COMM_WORLD);

-842150451 is 0xCDCDCDCD, the value the Microsoft CRT uses to fill newly allocated memory in debug builds (in release builds the contents of the memory are simply left as they are after allocation).

You have to move the call to MPI_Scatter into the corresponding coordinator/participant functions, as sketched below.
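A minimal sketch of that restructuring, assuming the same 40-element array and that world_size divides 40 evenly; the buffer name partition and the elided averaging code are illustrative, not the poster's exact implementation:

#include <cstdlib>
#include <mpi.h>

// coordinator: broadcast the partition size, then scatter the initialised data
void coordinator(int world_size) {
    int values[40];
    for (int i = 0; i < 40; i++)
        values[i] = rand() % 10;

    int partition_size = 40 / world_size;
    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    // the root receives its own slice into 'partition' as well
    int *partition = new int[partition_size];
    MPI_Scatter(values, partition_size, MPI_INT,
                partition, partition_size, MPI_INT, 0, MPI_COMM_WORLD);

    // ... compute the local average from 'partition' and MPI_Reduce as before ...

    delete[] partition;
}

// participant: matching MPI_Bcast and MPI_Scatter calls
// (the send arguments of MPI_Scatter are ignored on non-root ranks)
void participant(int world_rank, int world_size) {
    int partition_size = 0;
    MPI_Bcast(&partition_size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int *partition = new int[partition_size];
    MPI_Scatter(nullptr, 0, MPI_INT,
                partition, partition_size, MPI_INT, 0, MPI_COMM_WORLD);

    // ... compute the local average from 'partition' and MPI_Reduce as before ...

    delete[] partition;
}

With both calls inside those functions, every rank (including rank 0) receives a slice of the initialised values array, so the per-rank averages and the reduced total are computed on real data.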
