Why does increasing the number of workers (beyond the number of cores) still reduce execution time?

Date: 2017-07-04 10:50:31

Tags: python multithreading parallel-processing multiprocessing

I have always been convinced that there is no point (from a performance perspective) in having more threads/processes than CPU cores. However, my Python example shows a different result.

import concurrent.futures
import random
import time


def doSomething(task_num):
    print("executing...", task_num)
    time.sleep(1)  # simulate heavy operation that takes ~ 1 second    
    return random.randint(1, 10) * random.randint(1, 500)  # real operation, used random to avoid caches and so on...


def main():
    # This part is excluded from the timing because I don't want to
    # measure the worker creation time
    executor = concurrent.futures.ProcessPoolExecutor(max_workers=60)

    start_time = time.time()

    for i in range(100):  # execute 100 tasks
        executor.map(doSomething, [i])
    executor.shutdown(wait=True)

    print("--- %s seconds ---" % (time.time() - start_time))


if __name__ == '__main__':
    main()

Program output:

    1 WORKER   --- 100.28233647346497 seconds ---
    2 WORKERS  --- 50.26122164726257 seconds ---
    3 WORKERS  --- 33.32741022109985 seconds ---
    4 WORKERS  --- 25.399883031845093 seconds ---
    5 WORKERS  --- 20.434186220169067 seconds ---
    10 WORKERS --- 10.903695344924927 seconds ---
    50 WORKERS --- 6.363946914672852 seconds ---
    60 WORKERS --- 4.819359302520752 seconds ---

How can it run faster with only 4 logical processors?

Here are my computer specs (tested on Windows 8 and Ubuntu 14):

    CPU: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
    Sockets: 1
    Cores: 2
    Logical processors: 4

2 answers:

Answer 0 (score: 5):

The reason is that sleep() uses only a negligible amount of CPU. In this case, it is a poor simulation of actual work being performed by a thread.

All sleep() does is suspend the thread until the timer expires. While suspended, the thread uses no CPU cycles.
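A minimal sketch of this point (using threads rather than processes just to keep it lightweight; the effect is the same): sleeping tasks overlap freely, so the total time shrinks with the worker count regardless of how many cores the machine has.

```python
import concurrent.futures
import time


def sleeper(_):
    # An I/O-like task: it blocks without consuming CPU cycles
    time.sleep(0.2)


def timed_run(n_workers, n_tasks=8):
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as executor:
        list(executor.map(sleeper, range(n_tasks)))
    return time.time() - start


if __name__ == '__main__':
    # With 1 worker the sleeps run back to back (~1.6 s);
    # with 8 workers they overlap (~0.2 s), even on a 4-core machine.
    print("1 worker:  %.2f s" % timed_run(1))
    print("8 workers: %.2f s" % timed_run(8))
```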

Answer 1 (score: 2):

I extended your example with a more computationally intensive task (e.g. matrix inversion). Then you will see what you expected: the computation time decreases down to the number of cores and then increases again (because of the cost of context switching).

import concurrent.futures
import random
import time
import numpy as np
import matplotlib.pyplot as plt


def doSomething(task_num):
    print("executing...", task_num)
    for i in range(100000):
        A = np.random.normal(0, 1, (1000, 1000))
        B = np.linalg.inv(A)  # np.inv does not exist; matrix inverse lives in np.linalg

    return random.randint(1, 10) * random.randint(1, 500)  # real operation, used random to avoid caches and so on...

def measureTime(nWorkers: int):
    executor = concurrent.futures.ProcessPoolExecutor(max_workers=nWorkers)
    start_time = time.time()
    for i in range(1, 40):  # execute 39 tasks
        executor.map(doSomething, [i, ])
    executor.shutdown(wait=True)
    return (time.time() - start_time)

def main():
    # This part is excluded from the timing because I don't want to
    # measure the worker creation time
    maxWorkers = 20
    dT = np.zeros(maxWorkers)
    for i in range(maxWorkers):
        dT[i] = measureTime(i+1)
        print("--- %s seconds ---" % dT[i])
    plt.plot(np.linspace(1,maxWorkers, maxWorkers), dT)
    plt.show()

if __name__ == '__main__':
    main()
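As a practical corollary (not part of the original answers): for CPU-bound work there is little reason to request more workers than processors, and when max_workers is omitted, ProcessPoolExecutor already sizes its pool to the machine's processor count. A minimal sketch:

```python
import concurrent.futures
import os


def square(x):
    # A trivial CPU-bound task
    return x * x


if __name__ == '__main__':
    # Omitting max_workers sizes the pool to the number of processors
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(square, range(8)))
    print("processors:", os.cpu_count(), "results:", results)
```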