Sharing an object between worker processes

Asked: 2017-12-15 11:10:21

Tags: r parallel-processing

I want to run f(x) on many different worker processes running on a remote machine (bonus points for multiple remote machines), where x is a large object.

My interactive R session runs on node0 and I use the parallel library, so I do the following:

library(parallel)

## 64 workers, all running on the remote host node1
cl <- makeCluster(rep("node1", times = 64))
clusterExport(cl, "x")   # x is sent to each of the 64 workers separately
clusterExport(cl, "f")

clusterEvalQ(cl, f(x))

The problem is that sending x takes quite a while, because it is transferred to each worker separately over the network connection between the machine running the master process and the remote machine.

Question: Is it possible to send x to each node only once and have the worker processes copy it locally?
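To illustrate the cost (a rough sketch only; cl is the 64-worker cluster from above and the object size is hypothetical):

## Illustrative only: clusterExport() serializes x and sends a separate
## copy to each of the 64 workers, so the transfer time grows with the
## number of workers even though they all live on the same remote host.
x <- rnorm(1e7)                      # stand-in for the large object
system.time(clusterExport(cl, "x"))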

2 answers:

Answer 0 (score: 2):

Assuming the connection between the master and the remote host is the bottleneck, you can transfer a single copy to the first worker, cache it in a file there, and let the remaining workers read the data from that cache file. Something like:

library("parallel")

## Large data object
x <- 1:1e6
f <- function(x) mean(x)

## All N=64 workers are on the same host
cl <- makeCluster(rep("node1", times = 64))

## Send function
clusterExport(cl, "f")

## Send data to first worker (over slow connection)
clusterExport(cl[1], "x")

## Save to cache file (on remote machine)
cachefile <- clusterEvalQ(cl[1], {
  saveRDS(x, file = (f <- tempfile())); f
})[[1]]

## Load cache file into remaining workers
clusterExport(cl[-1], "cachefile")
clusterEvalQ(cl[-1], { x <- readRDS(file = cachefile); TRUE })

# Resolve function on all workers
y <- clusterEvalQ(cl, f(x))
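If you do not want the cache file to linger on the remote machine, you can remove it once every worker has loaded the data (a small optional follow-up, reusing the cachefile path from above):

## Optional cleanup: each worker now holds its own in-memory copy of x,
## so the cache file on the remote host is no longer needed
clusterEvalQ(cl[1], file.remove(cachefile))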

Answer 1 (score: 0):

Here is a version that uses FIFOs. I am not sure how portable it is (it works under Linux), and I am not sure how it compares performance-wise with @HenrikB's answer:

library(parallel)

# create a cluster on a single node
# (3 local workers here for illustration; use remote host names in practice)
cl <- makePSOCKcluster(3)

# create a very large object
o <- 1:10

# create a fifo on the node and retrieve the name
fifo_name <- clusterEvalQ(cl[1], {
                        fifo_name <- tempfile()
                        system2("mkfifo", fifo_name)
                        fifo_name
})[[1]]

# send the very large object to one process on the node and the name of the fifo to all nodes
clusterExport(cl[1], "o")
clusterExport(cl, "fifo_name")

# does the actual sharing through the fifo
# note that a fifo has to be opened for reading 
# before writing on it
for(i in 2:length(cl)) {
  clusterEvalQ(cl[i], { ff <- fifo(fifo_name, "rb")  })
  clusterEvalQ(cl[1], { ff <- fifo(fifo_name, "wb")
                        saveRDS(o, ff)
                        close(ff)                    })
  clusterEvalQ(cl[i], { o <- readRDS(ff)
                        close(ff)                    })
}

# cleanup
clusterEvalQ(cl[1], {   unlink(fifo_name)            })

# check if everything is there
clusterEvalQ(cl, exists("o"))

# now you can do the actual work
...
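For instance (purely illustrative; any per-worker computation on o works the same way):

## hypothetical example of the "actual work": every worker uses its local copy of o
y <- clusterEvalQ(cl, mean(o))

## shut down the workers when finished
stopCluster(cl)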