MPI deadlock with collective functions


I'm writing a program in C++ with the MPI library. It deadlocks, and only works when run on a single node! I'm not using any send or receive operations, only two collective functions (MPI_Allreduce and MPI_Bcast). I don't understand what could cause a node to sit waiting for the others to send or receive something.

void ParaStochSimulator::first_reacsimulator() {
    SimulateSingleRun();
}
double ParaStochSimulator::deterMinTau() {
    //calculate minimum tau for this process
    l_nLocalMinTau = calc_tau(); //min tau for each node
    MPI_Allreduce(&l_nLocalMinTau, &l_nGlobalMinTau, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);    
    //min tau for all nodes
    //check if I have the min value
    if (l_nLocalMinTau <= l_nGlobalMinTau && m_nCurrentTime < m_nOutputEndPoint) {
        FireTransition(m_nMinTransPos);
        CalculateAllHazardValues(); 
    }
    return l_nGlobalMinTau;
}
void ParaStochSimulator::SimulateSingleRun() {
    //prepare a run
    PrepareRun();
    while ((m_nCurrentTime < m_nOutputEndPoint) && IsSimulationRunning()) {
        deterMinTau();
        if (mnprocess_id == 0) { //master
            SimulateSingleStep();
            std::cout << "current time:*****" << m_nCurrentTime << std::endl;
            broad_casting(m_nMinTransPos);
            MPI_Bcast(&l_anMarking, l_nMinplacesPos.size(), MPI_DOUBLE, 0, MPI_COMM_WORLD);
            //std::cout << "size of mani place :" << l_nMinplacesPos.size() << std::endl;
        }
    }
    MPI_Bcast(&l_anMarking, l_nMinplacesPos.size(), MPI_DOUBLE, 0, MPI_COMM_WORLD);
    PostProcessRun();
}

While your "master" process is executing the MPI_Bcast, all the other processes are still running the loop: they enter deterMinTau and then execute MPI_Allreduce.

This is a deadlock: MPI collectives have to be called by every process in the communicator, in the same order, so the master's MPI_Bcast never matches the MPI_Allreduce that the other ranks are blocked in.
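
To see the mismatch in isolation, here is a minimal sketch (my own example, not taken from your code) in which rank 0 calls MPI_Bcast while every other rank calls MPI_Allreduce. With two or more processes it hangs exactly the way your simulator does:

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank, global = 0.0;
    if (rank == 0) {
        // Rank 0 blocks here waiting for the others to join the broadcast...
        MPI_Bcast(&local, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    } else {
        // ...while the other ranks block here waiting for rank 0's reduction.
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);
    }
    std::printf("rank %d done\n", rank); // never reached with 2+ processes

    MPI_Finalize();
    return 0;
}

Run with a single process, every collective completes immediately, which is why your program only works on one node.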

I believe what you're looking for is this:

void ParaStochSimulator::SimulateSingleRun() {
    //prepare a run
    PrepareRun();
    while ((m_nCurrentTime < m_nOutputEndPoint) && IsSimulationRunning()) {
        //All the nodes reduce tau at the same time
        deterMinTau();
        if (mnprocess_id == 0) { //master
            SimulateSingleStep();
            std::cout << "current time:*****" << m_nCurrentTime << std::endl;
            broad_casting(m_nMinTransPos);
            //Removed the master-only broadcast here
        }
        //All the nodes broadcast at every loop iteration
        MPI_Bcast(&l_anMarking, l_nMinplacesPos.size(), MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
    PostProcessRun();
}