Non-parallel for loop in a parallel block

Keywords: loop      Updated: 2023-10-16

I have a parallel block that spawns a certain number of threads. All of these threads should then run one "shared" for loop, which itself contains several parallel for loops. Something like this:

// 1. The parallel region spawns a number of threads.
#pragma omp parallel
{
    // 2. Each thread does something before it enters the loop below.
    doSomethingOnEachThreadAsPreparation();
    // 3. This loop should run by all threads synchronously; i belongs 
    // to all threads simultaneously
    // Basically there is only one variable i. When all threads reach this
    // loop i at first is set to zero.
    for (int i = 0; i < 100; i++)
    {
        // 4. Then each thread calls this function (this happens in parallel)
        doSomethingOnEachThreadAtTheStartOfEachIteration();
        // 5. Then all threads work on this for loop in parallel
        #pragma omp for
        for (int k = 0; k < 100000000; k++)
            doSomethingVeryTimeConsumingInParallel(k);
        // 6. After the parallel for loop there is (always) an implicit barrier 
        // 7. When all threads finished the for loop they call this method in parallel.
        doSomethingOnEachThreadAfterEachIteration();
        // 8. Here should be another barrier. Once every thread has finished
        // the call above, they jump back to the top of the for loop, 
        // where i is set to i + 1. If the condition for the loop
        // holds, continue at 4., otherwise go to 9. 
    }
    // 9. When the "non-parallel" loop has finished each thread continues.
    doSomethingMoreOnEachThread();
}

I was thinking of using #pragma omp single together with a shared i variable, but I'm not sure about that anymore.

The actual functionality is irrelevant; this is about the control flow. I've added comments describing what I want. If I understand it correctly, the loop at 3. would normally create a separate i variable for each thread, and the loop header would normally be executed by every thread rather than just a single one. But that is what I want.

You can run the for loop in all threads. Depending on your algorithm, synchronization may be required after every iteration (as below) or only once, at the end of all iterations.

#pragma omp parallel
{
  // enter parallel region
  doSomethingOnEachThreadAsPreparation();
    // done in parallel by all threads
  for (int i = 0; i < 100; i++)
    {
        doSomethingOnEachThreadAtTheStartOfEachIteration();
#       pragma omp for
        // parallelize the for loop
        for (int k = 0; k < 100000000; k++)
            doSomethingVeryTimeConsumingInParallel(k);
        // implicit barrier
        doSomethingOnEachThreadAfterEachIteration();
#       pragma omp barrier
        // A barrier may be required here so that all iterations
        // stay synchronous, but if the algorithm does not need it,
        // performance will be better without the barrier
    }
    doSomethingMoreOnEachThread();
    // still in parallel
}

As Zulan pointed out, enclosing the main for loop in omp single in order to re-enter parallelism later does not work, unless you use nested parallelism. In that case, threads will be re-created at every iteration, and this will lead to a major slowdown.

omp_set_nested(1);
#pragma omp parallel
{
  // enter parallel region
  doSomethingOnEachThreadAsPreparation();
    // done in parallel by all threads
# pragma omp single
  // only one thread runs the loop
  for (int i = 0; i < 100; i++)
    {
#     pragma omp parallel
      {
        // create a new nested parallel section
        // new threads are created and this will 
        // certainly degrade performance
        doSomethingOnEachThreadAtTheStartOfEachIteration();
#       pragma omp for
        // and we parallelize the for loop
        for (int k = 0; k < 100000000; k++)
            doSomethingVeryTimeConsumingInParallel(k);
        // implicit barrier
        doSomethingOnEachThreadAfterEachIteration();
      }
      // we leave the parallel section (implicit barrier)
    }
    // we leave the single section
    doSomethingMoreOnEachThread();
    // and we continue running in parallel
}