MPI_Scatterv scatters only part of custom MPI_Datatype

This question may be related to this one.

I have the following struct:

#include <stdexcept>   // for std::domain_error used in operator=

struct Particle {
    double x;
    double y;
    double vx;
    double vy;
    double ax;
    double ay;
    int i;
    int j;
    Particle():
        x(-1.0),
        y(-1.0),
        vx(0.0),
        vy(0.0),
        ax(0.0),
        ay(0.0),
        i(-1),
        j(-1) { }
    Particle& operator=(const Particle& right) {
        if(&right == this)
            throw std::domain_error("Particle self-assignment!");
        x = right.x;
        y = right.y;
        vx = right.vx;
        vy = right.vy;
        ax = right.ax;
        ay = right.ay;
        i = right.i;
        j = right.j;
        return *this;
    }
};

I build an MPI_Datatype on each processor like this:

//
// Build MPI_Datatype PARTICLE
//
MPI_Datatype PARTICLE;
Particle p;                 // needed for displacement computation
int block_len[8];           // the number of elements in each "block" will be 1 for us
MPI_Aint displacements[8];  // displacement of each element from start of new type
MPI_Datatype typelist[8];   // MPI types of the elements
MPI_Aint start_address;     // used in calculating the displacements
MPI_Aint address;
//
// Set up
//
for(int i = 0; i < 8; ++i) {
    block_len[i] = 1;
}
typelist[0] = MPI_FLOAT;
typelist[1] = MPI_FLOAT;
typelist[2] = MPI_FLOAT;
typelist[3] = MPI_FLOAT;
typelist[4] = MPI_FLOAT;
typelist[5] = MPI_FLOAT;
typelist[6] = MPI_INT;
typelist[7] = MPI_INT;
MPI_Address(&p.x, &start_address);          // getting starting address
displacements[0] = 0;                       // first element is at displacement 0
MPI_Address(&p.y, &address);
displacements[1] = address - start_address;
MPI_Address(&p.vx, &address);
displacements[2] = address - start_address;
MPI_Address(&p.vy, &address);
displacements[3] = address - start_address;
MPI_Address(&p.ax, &address);
displacements[4] = address - start_address;
MPI_Address(&p.ay, &address);
displacements[5] = address - start_address;
MPI_Address(&p.i, &address);
displacements[6] = address - start_address;
MPI_Address(&p.j, &address);
displacements[7] = address - start_address;
//
// Building new MPI type
//
MPI_Type_struct(8, block_len, displacements, typelist, &PARTICLE);
MPI_Type_commit(&PARTICLE);
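
As an aside, a committed struct type can be sanity-checked by comparing what MPI reports against the C++ struct. Below is a minimal sketch, assuming (as holds for struct Particle, six doubles followed by two ints) that the struct has no internal padding; MPI_Type_size and MPI_Type_get_extent are standard MPI-2 calls:

int type_size;
MPI_Aint lb, extent;
MPI_Type_size(PARTICLE, &type_size);          // bytes of data in one PARTICLE
MPI_Type_get_extent(PARTICLE, &lb, &extent);  // bytes one PARTICLE spans in an array
if(type_size != (int) sizeof(Particle) || extent != (MPI_Aint) sizeof(Particle))
    std::printf("PARTICLE mismatch: size %d, extent %ld, sizeof %zu\n",
                type_size, (long) extent, sizeof(Particle));   // needs <cstdio>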

Then I scatter like this:

MPI_Scatterv(particles.data(), partition_sizes.data(), partition_offsets.data(), PARTICLE,
             local_particles.data(), n_local, PARTICLE, 0, MPI_COMM_WORLD);

The arguments to MPI_Scatterv are the following:

int n_local;                                   // number of particles on this processor
std::vector<Particle> particles;               // allocated on all processors, but filled with particles only on processor 0 and then scattered to the other processors
std::vector<int> partition_sizes(n_proc);
std::vector<int> partition_offsets(n_proc);
std::vector<Particle> local_particles(n);
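
Note that the counts and displacements passed to MPI_Scatterv are measured in units of the send datatype, i.e. whole PARTICLEs rather than bytes. Below is a minimal sketch of one way the partition arrays could be filled, assuming an even split of the n particles over n_proc ranks and a variable rank holding this process's rank (both are assumptions, not shown above):

// Hypothetical fill: split n particles as evenly as possible over n_proc ranks;
// counts and offsets are in PARTICLE elements, not bytes.
for(int r = 0; r < n_proc; ++r) {
    partition_offsets[r] = r * n / n_proc;
    partition_sizes[r]   = (r + 1) * n / n_proc - partition_offsets[r];
}
n_local = partition_sizes[rank];      // this rank's share of the particles
local_particles.resize(n_local);      // make room for the received block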

Interestingly, the int part of struct Particle (i, j) is scattered correctly, so I get the right i, j values in every local_particles[k]. However, all the double values (x, y, vx, vy, ax, ay) end up with their default-constructor values.

Has anyone else run into this? Any ideas? Can someone point me to detailed Scatterv documentation that covers scattering custom MPI_Datatypes?

Thanks a lot!

As Jonathan pointed out, I was using MPI_FLOAT instead of MPI_DOUBLE. After changing the typelist elements from MPI_FLOAT to MPI_DOUBLE, the problem was solved.
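
A minimal sketch of the corrected type construction follows. Besides using MPI_DOUBLE for the six double members, it swaps the deprecated MPI_Address and MPI_Type_struct for their MPI-2 replacements MPI_Get_address and MPI_Type_create_struct (the old names were removed in MPI-3):

MPI_Datatype PARTICLE;
Particle p;                                   // used only for address computation
int block_len[8] = {1, 1, 1, 1, 1, 1, 1, 1};
MPI_Datatype typelist[8] = {MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE,
                            MPI_DOUBLE, MPI_DOUBLE, MPI_INT, MPI_INT};
MPI_Aint displacements[8];
MPI_Aint start_address, address;
MPI_Get_address(&p.x, &start_address);
displacements[0] = 0;                         // x sits at the start of the struct
MPI_Get_address(&p.y,  &address); displacements[1] = address - start_address;
MPI_Get_address(&p.vx, &address); displacements[2] = address - start_address;
MPI_Get_address(&p.vy, &address); displacements[3] = address - start_address;
MPI_Get_address(&p.ax, &address); displacements[4] = address - start_address;
MPI_Get_address(&p.ay, &address); displacements[5] = address - start_address;
MPI_Get_address(&p.i,  &address); displacements[6] = address - start_address;
MPI_Get_address(&p.j,  &address); displacements[7] = address - start_address;
MPI_Type_create_struct(8, block_len, displacements, typelist, &PARTICLE);
MPI_Type_commit(&PARTICLE);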