MPI derived datatype problem caused by struct padding and a non-blocking communication buffer issue

User 13030001

Hi, I am writing a C++ program in which I want MPI to communicate using a derived datatype, but the receiver does not get the complete message that the sender sent.

Here is how I build my derived datatype:

// dg_derived_datatype.cpp

#include <mpi.h>
#include "dg_derived_datatype.h"

namespace Hash{

    MPI_Datatype Face_type;
};

// forward declaration; MPI_Face_type() is defined below in this file
void MPI_Face_type();

void Construct_data_type(){

    MPI_Face_type();

}

void MPI_Face_type(){

    const int num = 3;

    // Number of elements in each block (array of integers)
    int elem_blocklength[num]{2, 1, 5};

    // Byte displacement of each block (array of integers).
    MPI_Aint array_of_offsets[num];
    MPI_Aint intex, charex;
    MPI_Aint lb;
    MPI_Type_get_extent(MPI_INT, &lb, &intex);
    MPI_Type_get_extent(MPI_CHAR, &lb, &charex);

    array_of_offsets[0] = (MPI_Aint) 0;
    array_of_offsets[1] = array_of_offsets[0] + intex * 2;
    array_of_offsets[2] = array_of_offsets[1] + charex;

    MPI_Datatype array_of_types[num]{MPI_INT, MPI_CHAR, MPI_INT};

    // create and commit the MPI datatype
    MPI_Type_create_struct(num, elem_blocklength, array_of_offsets, array_of_types, &Hash::Face_type);  
    MPI_Type_commit(&Hash::Face_type);

}

void Free_type(){

    MPI_Type_free(&Hash::Face_type);    

}

Here I build my derived datatype Hash::Face_type and commit it. Hash::Face_type is used to send vectors of my struct face_pack (2 int + 1 char + 5 int).

// dg_derived_datatype.h

#ifndef DG_DERIVED_DATA_TYPE_H
#define DG_DERIVED_DATA_TYPE_H

#include <mpi.h>

struct face_pack{

    int owners_key;
    int facei;
    char face_type;
    int hlevel;
    int porderx;
    int pordery;
    int key;
    int rank;

};

namespace Hash{

    extern MPI_Datatype Face_type;
};

void Construct_data_type();

void Free_type();

#endif

Then in my main program:

// dg_main.cpp

#include <iostream>
#include <mpi.h>
#include "dg_derived_datatype.h"
#include <vector>

void Recv_face(int source, int tag, std::vector<face_pack>& recv_face);

int main(){

    // Initialize MPI.
    // some code here.
    // I create a vector of structs, std::vector<face_pack> face_info,
    // to store the info I want the processors to communicate.

    Construct_data_type(); // construct my derived datatype

    MPI_Request request_pre1, request_pre2, request_next1, request_next2;

    // send
    if(num_next > 0){ // if fulfilled, the current processor sends info to the next processor (my_rank + 1)

        std::vector<face_pack> face_info;
        // some code to construct face_info

        // source my_rank, destination my_rank + 1
        MPI_Isend(&face_info[0], num_n, Hash::Face_type, mpi::rank + 1, mpi::rank + 1, MPI_COMM_WORLD, &request_next2);

    }

    // recv
    if(/* some criteria */){ // recv from the former processor (my_rank - 1)

        std::vector<face_pack> recv_face;

        Recv_face(mpi::rank - 1, mpi::rank, recv_face); // recv info from the former processor

    }

    if(num_next > 0){

        MPI_Status status;
        MPI_Wait(&request_next2, &status);

    }

    Free_type();

    // finalize MPI
}

void Recv_face(int source, int tag, std::vector<face_pack>& recv_face){

    MPI_Status status1, status2;

    MPI_Probe(source, tag, MPI_COMM_WORLD, &status1);

    int count;
    MPI_Get_count(&status1, Hash::Face_type, &count);

    recv_face = std::vector<face_pack>(count);

    MPI_Recv(&recv_face[0], count, Hash::Face_type, source, tag, MPI_COMM_WORLD, &status2);
}
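The idea of Recv_face is that the receiver does not know the number of faces in advance: MPI_Probe blocks until a matching message is pending, MPI_Get_count with Hash::Face_type reports how many face_pack elements it contains, the vector is sized accordingly, and only then is the actual MPI_Recv posted.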


The problem is that the receiver sometimes gets incomplete information.

For example, I print out face_info before sending it:

// rank 2
owners_key3658 facei 0 face_type M neighbour 192 n_rank 0
owners_key3658 facei 1 face_type L neighbour 66070 n_rank 1
owners_key3658 facei 1 face_type L neighbour 76640 n_rank 1
owners_key3658 facei 2 face_type M neighbour 2631 n_rank 0
owners_key3658 facei 3 face_type L neighbour 4953 n_rank 1
...
owners_key49144 facei 1 face_type M neighbour 844354 n_rank 2
owners_key49144 facei 1 face_type M neighbour 913280 n_rank 2
owners_key49144 facei 2 face_type L neighbour 41619 n_rank 1
owners_key49144 facei 3 face_type M neighbour 57633 n_rank 2

which is correct.

But on the receiving side, I print out the received message:

owners_key3658 facei 0 face_type M neighbour 192 n_rank 0
owners_key3658 facei 1 face_type L neighbour 66070 n_rank 1
owners_key3658 facei 1 face_type L neighbour 76640 n_rank 1
owners_key3658 facei 2 face_type M neighbour 2631 n_rank 0
owners_key3658 facei 3 face_type L neighbour 4953 n_rank 1

... // at the beginning it's fine, however, at the end it gets messed up

owners_key242560 facei 2 face_type ! neighbour 2 n_rank 2
owners_key217474 facei 2 face_type ! neighbour 2 n_rank 2
owners_key17394 facei 2 face_type ! neighbour 2 n_rank 2
owners_key216815 facei 2 face_type ! neighbour 2 n_rank 2

Notably, it loses the face_type information, which is a char. As far as I know, a std::vector guarantees contiguous storage here, so I am not sure which part of my derived MPI datatype is wrong. The message passing sometimes works and sometimes fails.
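One quick thing to check is whether the extent of the committed type matches sizeof(face_pack). A minimal sketch (the helper name Check_face_type and its file name are only for illustration; call it after Construct_data_type()):

// dg_check_type.cpp (illustrative)

#include <cstdio>
#include <mpi.h>
#include "dg_derived_datatype.h"

void Check_face_type(){

    MPI_Aint lb, extent;
    MPI_Type_get_extent(Hash::Face_type, &lb, &extent);

    // If these two numbers differ, an array of face_pack is strided
    // incorrectly whenever more than one element is sent with this type.
    std::printf("type extent = %ld, sizeof(face_pack) = %zu\n",
                (long)extent, sizeof(face_pack));
}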

User 13030001

OK, I have more or less figured it out. There are two issues.

The first one is the use of MPI_Type_get_extent(). Since the compiler may pad a C/C++ struct, it is OK if you only send one element, but if you send multiple elements, the trailing padding can cause problems (see the figure below).

(figure: struct padding)
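To see the actual numbers, something like the following can be compiled next to the header (a sketch; the exact values depend on the compiler and ABI, but a 4-byte int with 3 bytes of padding after the char is typical):

// dg_padding_check.cpp (illustrative)

#include <cstddef>
#include <cstdio>
#include "dg_derived_datatype.h"   // struct face_pack

int main(){

    std::printf("offsetof face_type = %zu\n", offsetof(face_pack, face_type)); // typically 8
    std::printf("offsetof hlevel    = %zu\n", offsetof(face_pack, hlevel));    // typically 12, not 9
    std::printf("sizeof(face_pack)  = %zu\n", sizeof(face_pack));              // typically 32, not 2*4 + 1 + 5*4 = 29

    // The hand-computed offsets (0, 8, 9) and the resulting extent of 29 ignore
    // the padding bytes after face_type, so every further array element is read
    // a few more bytes off target, which is why the first structs arrive intact
    // and later ones do not.
    return 0;
}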

So a safer and more portable way to define the derived datatype is to use MPI_Get_address(). Here is my approach:

// generate the derived datatype
void MPI_Face_type(){

    const int num = 3;

    int elem_blocklength[num]{2, 1, 5};

    MPI_Datatype array_of_types[num]{MPI_INT, MPI_CHAR, MPI_INT};

    MPI_Aint array_of_offsets[num];
    MPI_Aint baseadd, add1, add2;

    std::vector<face_pack> myface(1);

    MPI_Get_address(&(myface[0].owners_key), &baseadd);
    MPI_Get_address(&(myface[0].face_type), &add1);
    MPI_Get_address(&(myface[0].hlevel), &add2);

    array_of_offsets[0] = 0;
    array_of_offsets[1] = add1 - baseadd;
    array_of_offsets[2] = add2 - baseadd;

    MPI_Type_create_struct(num, elem_blocklength, array_of_offsets, array_of_types, &Hash::Face_type);  

    // check that the extent is correct
    MPI_Aint lb, extent;
    MPI_Type_get_extent(Hash::Face_type, &lb, &extent); 
    if(extent != sizeof(myface[0])){
        MPI_Datatype old = Hash::Face_type;
        MPI_Type_create_resized(old, 0, sizeof(myface[0]), &Hash::Face_type);
        MPI_Type_free(&old);
    }
    MPI_Type_commit(&Hash::Face_type);
}
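With MPI_Get_address() the displacements are measured inside a real face_pack object, so whatever padding the compiler inserts after face_type is included automatically, and the MPI_Type_create_resized() step pins the extent to sizeof(face_pack), so that element i of a contiguous std::vector<face_pack> is picked up at byte i * sizeof(face_pack).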

The second one is the use of the non-blocking send MPI_Isend(). The program works properly after I changed the non-blocking send to a blocking send.

The relevant part of my program looks like this:

if(criteria1){

    // form the vector using my derived datatype
    std::vector<derived_type> my_vector;

    // use MPI_Isend to send the vector to the target rank
    MPI_Isend(... my_vector ...);

}

if(criteria2){

    // need to recv the message
    MPI_Recv(...);
}

if(criteria1){

    // the sender now needs to make sure the message has arrived
    MPI_Wait(...);
}

Although I used MPI_Wait, the receiver did not get the full message. I checked the man page of MPI_Isend(), and it says:

A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called until the send completes.

But I don't think I modified the send buffer, did I? Or could it be that there is not enough space in the send buffer to hold the message? In my understanding, a non-blocking send works like this: the sender puts the message into its own buffer and delivers it to the target rank once that rank reaches MPI_Recv. So could it be that the sender's buffer ran out of space before the message was sent out? Please correct me if I am wrong.
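For reference, the blocking variant that works for me looks roughly like this (same schematic names as in dg_main.cpp). Since MPI_Send does not return until the buffer can safely be reused, no request and no MPI_Wait are needed, and the lifetime of the local face_info vector stops being a concern:

if(num_next > 0){

    std::vector<face_pack> face_info;
    // some code to construct face_info

    // blocking send: returns only once face_info can safely be reused or destroyed
    MPI_Send(&face_info[0], num_n, Hash::Face_type, mpi::rank + 1, mpi::rank + 1, MPI_COMM_WORLD);

}

if(/* some criteria */){ // recv from the former processor (my_rank - 1)

    std::vector<face_pack> recv_face;

    Recv_face(mpi::rank - 1, mpi::rank, recv_face);

}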
