Efficiently waiting for all tasks in a threadpool to finish

Updated: 2023-10-16

I currently have a program with x workers in my threadpool. During the main loop, y tasks are assigned to the workers, but after the tasks are dispatched I must wait for all of them to finish before the program can proceed. I think my current solution is inefficient and that there must be a better way to wait for all the tasks to complete, but I'm not sure how to go about it.

// called in main after all tasks are enqueued to 
// std::deque<std::function<void()>> tasks
void ThreadPool::waitFinished()
{
    while(!tasks.empty()) //check if there are any tasks in queue waiting to be picked up
    {
        //do literally nothing
    }
}

More info:

The threadpool structure:
//worker thread objects
class Worker {
    public:
        Worker(ThreadPool& s): pool(s) {}
        void operator()();
    private:
        ThreadPool &pool;
};
//thread pool
class ThreadPool {
    public:
        ThreadPool(size_t);
        template<class F>
        void enqueue(F f);   
        void waitFinished();
        ~ThreadPool();
    private:
        friend class Worker;
        //keeps track of threads so we can join
        std::vector< std::thread > workers;
        //task queue
        std::deque< std::function<void()> > tasks;
        //sync
        std::mutex queue_mutex;
        std::condition_variable condition;
        bool stop;
};

Or here is a gist of my threadpool.hpp

An example of how I'd like to use waitFinished():

while(running)
    //....
    for all particles alive
        push particle position function to threadpool
    end for
    threadPool.waitFinished();
    push new particle position data into openGL buffer
end while

This way I can dispatch thousands of particle-position tasks to be done in parallel, wait for them all to finish, and then put the new data into the OpenGL position buffers.

This is one approach to what you're attempting. Using two condition variables on the same mutex is not for the faint of heart unless you understand what is happening internally. I didn't need the atomic processed member other than to demonstrate how many items were finished between each run.

The sample workload function generates 100,000 random int values, then sorts them (gotta heat my office somehow). waitFinished will not return until the queue is empty and no threads are busy.

#include <iostream>
#include <deque>
#include <vector>
#include <functional>
#include <thread>
#include <condition_variable>
#include <mutex>
#include <random>
#include <algorithm>
#include <atomic>
#include <cstdlib>
//thread pool
class ThreadPool
{
public:
    ThreadPool(unsigned int n = std::thread::hardware_concurrency());
    template<class F> void enqueue(F&& f);
    void waitFinished();
    ~ThreadPool();
    unsigned int getProcessed() const { return processed; }
private:
    std::vector< std::thread > workers;
    std::deque< std::function<void()> > tasks;
    std::mutex queue_mutex;
    std::condition_variable cv_task;
    std::condition_variable cv_finished;
    std::atomic_uint processed;
    unsigned int busy;
    bool stop;
    void thread_proc();
};
ThreadPool::ThreadPool(unsigned int n)
    : processed()
    , busy()
    , stop()
{
    for (unsigned int i=0; i<n; ++i)
        workers.emplace_back(std::bind(&ThreadPool::thread_proc, this));
}
ThreadPool::~ThreadPool()
{
    // set stop-condition
    std::unique_lock<std::mutex> latch(queue_mutex);
    stop = true;
    cv_task.notify_all();
    latch.unlock();
    // all threads terminate, then we're done.
    for (auto& t : workers)
        t.join();
}
void ThreadPool::thread_proc()
{
    while (true)
    {
        std::unique_lock<std::mutex> latch(queue_mutex);
        cv_task.wait(latch, [this](){ return stop || !tasks.empty(); });
        if (!tasks.empty())
        {
            // got work. set busy.
            ++busy;
            // pull from queue
            auto fn = tasks.front();
            tasks.pop_front();
            // release lock. run async
            latch.unlock();
            // run function outside context
            fn();
            ++processed;
            latch.lock();
            --busy;
            cv_finished.notify_one();
        }
        else if (stop)
        {
            break;
        }
    }
}
// generic function push
template<class F>
void ThreadPool::enqueue(F&& f)
{
    std::unique_lock<std::mutex> lock(queue_mutex);
    tasks.emplace_back(std::forward<F>(f));
    cv_task.notify_one();
}
// waits until the queue is empty.
void ThreadPool::waitFinished()
{
    std::unique_lock<std::mutex> lock(queue_mutex);
    cv_finished.wait(lock, [this](){ return tasks.empty() && (busy == 0); });
}
// a cpu-busy task.
void work_proc()
{
    std::random_device rd;
    std::mt19937 rng(rd());
    // build a vector of random numbers
    std::vector<int> data;
    data.reserve(100000);
    std::generate_n(std::back_inserter(data), data.capacity(), [&](){ return rng(); });
    std::sort(data.begin(), data.end(), std::greater<int>());
}
int main()
{
    ThreadPool tp;
    // run five batches of 100 items
    for (int x=0; x<5; ++x)
    {
        // queue 100 work tasks
        for (int i=0; i<100; ++i)
            tp.enqueue(work_proc);
        tp.waitFinished();
        std::cout << tp.getProcessed() << '\n';
    }
    // destructor will close down thread pool
    return EXIT_SUCCESS;
}

100
200
300
400
500
Good luck.