Parallel Python - too many files

Asked: 2012-02-20 14:00:13

Tags: python parallel-processing

I am trying to use Parallel Python to run some code in parallel that adds a few numbers together. Everything works, but when I iterate the code in a loop it invariably stops after 41 iterations (on my machine) with a "too many files" error. I have looked into this and found a fix that works, but it makes the code much slower than running it without the parallelism.

import sys, time
import pp
import numpy
x = numpy.arange(-20.0,20.0,0.5)
k = numpy.arange(50)
grav = []
nswarm = 4
gravity = numpy.zeros([4,1])
print gravity
def function(raw_input,x,grav,k):
    # f just ends up equal to len(x); it gets added to the sum below
    f = 0
    for i in range(len(x)):
        f+=1
    a=raw_input[0]
    b=raw_input[1]
    c=raw_input[2]
    d=raw_input[3]
    grav.append((a+b+c+d)+f)
    #return grav

jobsList = []

for i in range(len(k)):
    # tuple of all parallel python servers to connect with
    ppservers = ()
    #ppservers = ("10.0.0.1",)

    if len(sys.argv) > 1:
        ncpus = int(sys.argv[1])
        # Creates jobserver with ncpus workers
        job_server = pp.Server(ncpus, ppservers=ppservers)
    else:
        # Creates jobserver with automatically detected number of workers
        job_server = pp.Server(ppservers=ppservers)

    #print "Starting pp with", job_server.get_ncpus(), "workers"
    start_time = time.time()

    # The following submits 4 jobs and then retrieves the results
    puts = ([1,2,3,4], [3,2,3,4],[4,2,3,6],[2,3,4,5])

    jobs = [(raw_input, job_server.submit(function,(raw_input,x,grav,k), (), ())) for raw_input in puts]
    for raw_input, job in jobs:
        r = job()
        jobsList.append(r)
        #print "Sum of numbers", raw_input, "is", r
    #print "Time elapsed: ", time.time() - start_time, "s"
    #job_server.print_stats()
    #for job in jobsList:
    #print job

    #print jobsList
    for n in numpy.arange(nswarm):
        gravity[n] = jobsList[n]
    del grav[0:len(grav)]
    del jobsList[0:len(jobsList)]
    #print gravity,'here' 
    print i
    job_server.destroy()

The problem, I think, is that the loop keeps creating "job_server" over and over without properly shutting it down. Adding job_server.destroy() is the fix I found, since the code then runs to completion, but it is really slow.
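
What I have in mind instead is creating the server once, outside the loop, and only destroying it at the very end. Below is a rough, untested sketch of that restructuring (I have also made function return its result rather than appending to grav, since as far as I can tell the list appended to inside the worker never makes it back anyway):

import sys
import pp
import numpy

def function(raw_input, x, grav, k):
    # same arithmetic as before, but return the value so job() hands it back
    f = 0
    for i in range(len(x)):
        f += 1
    a = raw_input[0]
    b = raw_input[1]
    c = raw_input[2]
    d = raw_input[3]
    return (a + b + c + d) + f

x = numpy.arange(-20.0, 20.0, 0.5)
k = numpy.arange(50)
nswarm = 4
gravity = numpy.zeros([nswarm, 1])
puts = ([1,2,3,4], [3,2,3,4], [4,2,3,6], [2,3,4,5])

# create the job server once, before the loop
ppservers = ()
if len(sys.argv) > 1:
    job_server = pp.Server(int(sys.argv[1]), ppservers=ppservers)
else:
    job_server = pp.Server(ppservers=ppservers)

grav = []
for i in range(len(k)):
    jobs = [(raw_input, job_server.submit(function, (raw_input, x, grav, k), (), ())) for raw_input in puts]
    jobsList = [job() for raw_input, job in jobs]
    for n in numpy.arange(nswarm):
        gravity[n] = jobsList[n]
    print i

# shut the server down once, after all the iterations
job_server.destroy()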

Is there a better way to shut down the server so that the code stays reasonably fast?

0 Answers:

No answers yet.