Executing threads simultaneously in C++ using <thread>

Date: 2014-06-14 14:11:07

Tags: c++ multithreading simultaneous

Alright, I've been looking all around and I'm not sure why this is happening. I've seen plenty of tutorials about using threads on Linux, but not much about what I'm sharing right now.

Code:

int j = 0;
while(j <= 10)
{
    myThreads[j] = std::thread(task, j);
    myThreads[j].join();
    j+=1;
}

So I just want to create 10 threads and execute them. The task is simple and handled fine, but the problem is that not all of the threads are being executed.

It's actually executing only 1 thread, waiting for it to finish, and then executing the next one...

PS: I know the main function will exit after launching those threads, but I've read about this and I believe I can fix it in several ways.

So I want to execute all of these threads simultaneously, and that's it.

Many thanks in advance, MarioAda.

3 Answers:

Answer 0 (score: 11):

You are starting the threads and then immediately joining them. You need to create them, do your work, and only then join them in a separate loop. Also, you would usually keep the threads in a vector so you can reference/join them (which you seem to be doing, although in an array; since this is tagged C++, I encourage you to use a vector instead).

The strategy is the same as with pthreads before: you declare an array (or vector) of threads, push them to run, and then join them.

The following code is taken from here:

#include <thread>
#include <iostream>
#include <vector>

void hello(){
    std::cout << "Hello from thread " << std::this_thread::get_id() << std::endl;
}

int main(){
    std::vector<std::thread> threads;

    for(int i = 0; i < 5; ++i){
        threads.push_back(std::thread(hello));
    }

    for(auto& thread : threads){
        thread.join();
    }

    return 0;
}

Answer 1 (score: 2):

That's because join blocks the current thread until your thread finishes. You should only start your threads in the loop you already have, and call the threads' join() function in a second loop.
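
For illustration, here is a minimal sketch of the original snippet restructured that way (the task function is a placeholder standing in for the asker's, and the thread count is assumed to be 10):

#include <thread>
#include <iostream>

// Placeholder task; stands in for the asker's `task` function.
void task(int id)
{
    std::cout << "thread " << id << " running\n";
}

int main()
{
    constexpr int kNumThreads = 10;
    std::thread myThreads[kNumThreads];

    // First loop: launch all the threads without joining them.
    for (int j = 0; j < kNumThreads; ++j)
        myThreads[j] = std::thread(task, j);

    // Second loop: join them only after all of them have been started.
    for (int j = 0; j < kNumThreads; ++j)
        myThreads[j].join();

    return 0;
}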

Answer 2 (score: 0):

There is also a more advanced technique for making these threads start running simultaneously.

The problem with the naive approach is that the threads created first have too much time to run their functions before the last thread is created. So by the time the last thread has just been created, the first one has already executed a significant part of its function.

To avoid this, we can use a counter (protected by a mutex) and a condition variable. Each thread that has been created and is ready to start running its internal function increments the counter and checks whether the counter has become equal to the total number of threads (i.e., whether it was the last thread to increment the counter). If so, it notifies all the other threads (using the condition variable) that it is time to start. Otherwise, it waits on the condition variable until some other thread brings the counter up to the total and notifies the remaining threads (including this one).

This way, all the threads start (almost) simultaneously, and only after every one of them has been created and is actually ready to execute its function.
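
Before the full class below, here is a minimal sketch of that counter-plus-condition-variable start barrier in isolation (the StartBarrier name and the demo in main are illustrative, not part of the implementation that follows):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Illustrative start barrier: every thread blocks in ArriveAndWait() until the last one arrives.
class StartBarrier
{
public:
    explicit StartBarrier(unsigned int total) : _total{total} {}

    void ArriveAndWait()
    {
        std::unique_lock<std::mutex> lock{_mutex};
        if (++_arrived == _total)
            _cv.notify_all(); // the last thread to arrive releases everyone
        else
            _cv.wait(lock, [this]() { return _arrived == _total; });
    }

private:
    const unsigned int _total;
    unsigned int _arrived{0}; // counter of ready threads, protected by the mutex
    std::mutex _mutex;
    std::condition_variable _cv;
};

int main()
{
    constexpr unsigned int kNumThreads = 4;
    StartBarrier barrier{kNumThreads};
    std::vector<std::thread> threads;

    for (unsigned int i = 0; i < kNumThreads; ++i)
        threads.emplace_back([&barrier, i]()
        {
            barrier.ArriveAndWait(); // all threads pass this point (almost) at the same time
            std::cout << "thread " << i << " started\n";
        });

    for (auto& t : threads)
        t.join();

    return 0;
}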

Here is my implementation, a class called ConcurrentRunner.

First, a simplified C++11-compatible version, which is easier to understand:

#include <mutex>
#include <condition_variable>
#include <vector>
#include <functional>
#include <thread>

// Object that runs multiple functions, each in its own thread, starting them as simultaneously as possible.
class ConcurrentRunner final
{
public:
    template<typename... BackgroundThreadsFunctions>
    explicit ConcurrentRunner(const std::function<void()>& this_thread_function, const BackgroundThreadsFunctions&... background_threads_functions)
        : _this_thread_function{this_thread_function}
        , _num_threads_total{1 + sizeof...(BackgroundThreadsFunctions)}
    {
        this->PrepareBackgroundThreads({ background_threads_functions... });
    }

    ConcurrentRunner(const ConcurrentRunner&) = delete;
    ConcurrentRunner& operator=(const ConcurrentRunner&) = delete;

    // Executes `ThreadProc` for this thread's function and waits for all of the background threads to finish.
    void Run()
    {
        this->ThreadProc(_this_thread_function);

        for (auto& background_thread : _background_threads)
            background_thread.join();
    }

private:
    // Creates the background threads: each of them will execute `ThreadProc` with its respective function.
    void PrepareBackgroundThreads(const std::vector<std::function<void()>>& background_threads_functions)
    {
        // Iterate through the vector of the background threads' functions and create a new thread with `ThreadProc` for each of them.
        _background_threads.reserve(background_threads_functions.size());
        for (const auto& background_thread_function : background_threads_functions)
        {
            _background_threads.emplace_back([this, background_thread_function]()
            {
                this->ThreadProc(background_thread_function);
            });
        }
    }

    // Procedure that will be executed by each thread, including the "main" thread and all background ones.
    void ThreadProc(const std::function<void()>& function)
    {
        // Increment the `_num_threads_waiting_for_start_signal` while the mutex is locked, thus signalizing that a new thread is ready to start.
        std::unique_lock<std::mutex> lock{_mutex};
        ++_num_threads_waiting_for_start_signal;
        const bool ready_to_go = (_num_threads_waiting_for_start_signal == _num_threads_total);
        lock.unlock();

        if (ready_to_go)
        {
            // If this thread was the last one of the threads which must start simultaneously, notify all other threads that they are ready to start.
            _cv.notify_all();
        }
        else
        {
            // If this thread was not the last one of the threads which must start simultaneously, wait on `_cv` until all other threads are ready.
            lock.lock();
            _cv.wait(lock, [this]()
                     {
                         return (_num_threads_waiting_for_start_signal == _num_threads_total);
                     });
            lock.unlock();
        }

        // Execute this thread's internal function.
        function();
    }

private:
    std::function<void()> _this_thread_function;
    std::vector<std::thread> _background_threads;

    const unsigned int _num_threads_total;
    unsigned int _num_threads_waiting_for_start_signal{0}; // counter of the threads which are ready to start running their functions
    mutable std::mutex _mutex; // mutex that protects the counter
    std::condition_variable _cv; // waited on by all threads but the last one; notified when the last thread increments the counter
};

//---------------------------------------------------------------------------------------------------------------------------------------------------
// Example of usage:

#include <atomic>

int main()
{
    std::atomic<int> x{0};

    {
        ConcurrentRunner runner{[&]() { x += 1; }, [&]() { x += 10; }, [&]() { x += 100; }};
        runner.Run();
    }

    return (x.load() == 111) ? 0 : -1;
}

Now, the same logic with more templates, fewer allocations, no unnecessary copies, and no type erasure, but somewhat harder to read (requires C++17):

//---------------------------------------------------------------------------------------------------------------------------------------------------
// Helper template `ForEachTupleElement` (meant to be in some other header file).

#include <tuple>
#include <type_traits>
#include <utility>

namespace Detail
{
    template<typename Tuple, typename Function, std::size_t... I>
    constexpr void ForEachTupleElement(Tuple&& tuple, Function function, std::index_sequence<I...>)
    {
        int dummy[] = { 0, (((void)(function(std::get<I>(std::forward<Tuple>(tuple))))), 0)... };
        (void)dummy;
    }
}

// Applies a given function (typically one with a templated operator(), e.g., a generic lambda) to each element of a tuple.
template<typename Tuple, typename Function>
constexpr void ForEachTupleElement(Tuple&& tuple, Function function)
{
    Detail::ForEachTupleElement(std::forward<Tuple>(tuple), function,
                                std::make_index_sequence<std::tuple_size_v<std::remove_cv_t<std::remove_reference_t<Tuple>>>>{});
}

//---------------------------------------------------------------------------------------------------------------------------------------------------

#include <mutex>
#include <condition_variable>
#include <array>
#include <thread>
#include <tuple>
#include <type_traits>
#include <utility>

// Common non-template part of the `ConcurrentRunner` implementation.
class ConcurrentRunnerBase
{
protected:
    inline ConcurrentRunnerBase() = default;
    inline ~ConcurrentRunnerBase() = default;

protected:
    unsigned int _num_threads_waiting_for_start_signal{0}; // protected by `mutex`
    mutable std::mutex _mutex;
    std::condition_variable _cv; // waited on by all threads but the last one; notified when the last thread increments the counter
};

// Object that runs multiple functions, each in its own thread, starting them as simultaneously as possible.
template<typename ThisThreadFunction, std::size_t NumberOfBackgroundThreads>
class ConcurrentRunner final : private ConcurrentRunnerBase
{
public:
    template<typename ThisThreadFunctionArg, typename... BackgroundThreadsFunctions>
    explicit ConcurrentRunner(ThisThreadFunctionArg&& this_thread_function, BackgroundThreadsFunctions&&... background_threads_functions)
        : _this_thread_function{std::forward<ThisThreadFunctionArg>(this_thread_function)}
    {
        static_assert(sizeof...(BackgroundThreadsFunctions) == NumberOfBackgroundThreads);
        this->Prepare(std::forward<BackgroundThreadsFunctions>(background_threads_functions)...);
    }

    ConcurrentRunner(const ConcurrentRunner&) = delete;
    ConcurrentRunner& operator=(const ConcurrentRunner&) = delete;

    // Executes `ThreadProc` for this thread's function and waits for all of the background threads to finish.
    void Run()
    {
        this->ThreadProc(std::move(_this_thread_function));

        for (auto& background_thread : _background_threads)
            background_thread.join();
    }

private:
    // Creates the background threads: each of them will execute `ThreadProc` with its respective function.
    template<typename... BackgroundThreadsFunctions>
    void Prepare(BackgroundThreadsFunctions&&... background_threads_functions)
    {
        // Copies of the argument functions (created by move constructors where possible), collected in a tuple.
        std::tuple<std::decay_t<BackgroundThreadsFunctions>...> background_threads_functions_tuple{
            std::forward<BackgroundThreadsFunctions>(background_threads_functions)...
        };

        // Iterate through the tuple of the background threads' functions and create a new thread with `ThreadProc` for each of them.
        unsigned int index_in_array = 0;
        ForEachTupleElement(std::move(background_threads_functions_tuple), [this, &index_in_array](auto&& function)
                            {
                                auto i = index_in_array++;
                                _background_threads[i] = std::thread{[this, function = std::move(function)]() mutable
                                {
                                    this->ThreadProc(std::move(function));
                                }};
                            });
    }

    // Procedure that will be executed by each thread, including the "main" thread and all background ones.
    template<typename Function>
    void ThreadProc(Function&& function)
    {
        // Increment the `_num_threads_waiting_for_start_signal` while the mutex is locked, thus signalizing that a new thread is ready to start.
        std::unique_lock lock{_mutex};
        ++_num_threads_waiting_for_start_signal;
        const bool ready_to_go = (_num_threads_waiting_for_start_signal == (1 + NumberOfBackgroundThreads));
        lock.unlock();

        if (ready_to_go)
        {
            // If this thread was the last one of the threads which must start simultaneously, notify all other threads that they are ready to start.
            _cv.notify_all();
        }
        else
        {
            // If this thread was not the last one of the threads which must start simultaneously, wait on `_cv` until all other threads are ready.
            lock.lock();
            _cv.wait(lock, [this]() noexcept -> bool
                     {
                         return (_num_threads_waiting_for_start_signal == (1 + NumberOfBackgroundThreads));
                     });
            lock.unlock();
        }

        // Execute this thread's internal function.
        std::forward<Function>(function)();
    }

private:
    ThisThreadFunction _this_thread_function;
    std::array<std::thread, NumberOfBackgroundThreads> _background_threads;
};

template<typename T, typename... U>
ConcurrentRunner(T&&, U&&...) -> ConcurrentRunner<std::decay_t<T>, sizeof...(U)>;

//---------------------------------------------------------------------------------------------------------------------------------------------------
// Example of usage:

#include <atomic>

int main()
{
    std::atomic<int> x{0};

    {
        ConcurrentRunner runner{[&]() { x += 1; }, [&]() { x += 10; }, [&]() { x += 100; }};
        runner.Run();
    }

    return (x.load() == 111) ? 0 : -1;
}