Distributing TensorFlow across GPU cards

Time: 2018-03-27 20:07:09

Tags: python tensorflow distributed-computing

I am playing around with and learning distributed TensorFlow.

I recently created a cluster with one GPU server (two cards) and one CPU server.

I have been going through various articles, and in the TensorFlow distributed guide I saw that work is distributed across cards by calling them explicitly by name: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py However, no cluster is created there.

Can I create a TensorFlow cluster and then specify which card the code should run on?

If so, does the following look correct?

In a GitHub issue (I don't have the link right now, but the code below comes from it), the card is specified with tf.device(replica_device_setter). However, when I try this, my code throws an error saying: "Cannot assign a device for operation 'dummy_queue_Close_1': Could not satisfy explicit device specification '/job:ps/task:0/device:GPU:0' because no supported kernel for GPU devices is available."

Is this because I am assigning work that should happen on the CPU, but instead I am giving it tf.device('/gpu:0'), so it throws the error?
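To make this concrete, here is a minimal sketch (not my real code; the hosts, ports, and shapes are placeholders) of the kind of placement I am asking about: keep the parameter-server side on the CPU, send compute ops to a specific card, and let allow_soft_placement move anything without a GPU kernel back to the CPU:

import tensorflow as tf

# Placeholder cluster: one ps task and one worker task.
cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223"],
})

# Variables and queues stay on the ps CPU; everything else targets a worker GPU.
with tf.device(tf.train.replica_device_setter(
        cluster=cluster_spec,
        ps_device="/job:ps/cpu:0",
        worker_device="/job:worker/task:0/gpu:0")):
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.get_variable("W", [784, 10])
    y = tf.matmul(x, W)

# Ops that have no GPU kernel (e.g. some queue ops) silently fall back to the CPU.
config = tf.ConfigProto(allow_soft_placement=True)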

Also, I cannot share my actual code, but it looks very similar to the code below, which I used as a reference.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function


import numpy
import tensorflow as tf

tf.app.flags.DEFINE_string("ps_hosts", "localhost:2222", "...")
tf.app.flags.DEFINE_string("worker_hosts", "localhost:2223", "...")
tf.app.flags.DEFINE_string("job_name", "", "...")
tf.app.flags.DEFINE_integer("task_index", 0, "...")
tf.app.flags.DEFINE_integer('gpu_cards', 4, 'Number of GPU cards in a machine to use.')
FLAGS = tf.app.flags.FLAGS

def dense_to_one_hot(labels_dense, num_classes = 10) :
    """Convert class labels from scalars to one-hot vectors."""
    num_labels = labels_dense.shape[0]
    index_offset = numpy.arange(num_labels) * num_classes
    labels_one_hot = numpy.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

def run_training(server, cluster_spec, num_workers) :
    is_chief = (FLAGS.task_index == 0)
    with tf.Graph().as_default():        
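        # replica_device_setter places variables on the ps job and other ops on this worker by default.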
        with tf.device(tf.train.replica_device_setter(cluster = cluster_spec)) :            
            with tf.device('/cpu:0') :
                global_step = tf.get_variable('global_step', [],
                    initializer = tf.constant_initializer(0), trainable = False)
            with tf.device('/gpu:%d' % (FLAGS.task_index % FLAGS.gpu_cards)) :                            
                # Create the model
                x = tf.placeholder("float", [None, 784])
                W = tf.Variable(tf.zeros([784, 10]))
                b = tf.Variable(tf.zeros([10]))
                y = tf.nn.softmax(tf.matmul(x, W) + b)

                # Define loss and optimizer
                y_ = tf.placeholder("float", [None, 10])
                cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
                opt = tf.train.GradientDescentOptimizer(0.01)
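                # Wrap the optimizer so each step aggregates gradients from all replicas before applying them.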
                opt = tf.train.SyncReplicasOptimizer(opt, replicas_to_aggregate = num_workers,
                    replica_id = FLAGS.task_index, total_num_replicas = num_workers)
                train_step = opt.minimize(cross_entropy, global_step = global_step)

                # Test trained model
                correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
                accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

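            # Extra ops the chief must run to coordinate synchronous training.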
            init_token_op = opt.get_init_tokens_op()
            chief_queue_runner = opt.get_chief_queue_runner()

            init = tf.initialize_all_variables()
            sv = tf.train.Supervisor(is_chief = is_chief,
                init_op = init,
                global_step = global_step)
            # Create a session for running Ops on the Graph.
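            # allow_soft_placement lets ops without a GPU kernel fall back to the CPU.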
            config = tf.ConfigProto(allow_soft_placement = True)
            sess = sv.prepare_or_wait_for_session(server.target, config = config)

            if is_chief:
                sv.start_queue_runners(sess, [chief_queue_runner])                
                sess.run(init_token_op)

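            # Train on synthetic data: random features with labels derived from the row sums.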
            for i in range(100000):
                source_data = numpy.random.normal(loc = 0.0, scale = 1.0, size = (100, 784))
                labels_dense = numpy.clip(numpy.sum(source_data, axis = 1) / 5 + 5, 0, 9).astype(int)
                labels_one_hot = dense_to_one_hot(labels_dense)
                _, cost, acc, step = sess.run([train_step, cross_entropy, accuracy, global_step], feed_dict = { x: source_data, y_ : labels_one_hot })
                print("[%d]: cost=%.2f, accuracy=%.2f" % (step, cost, acc))


def main(_) :
    ps_hosts = FLAGS.ps_hosts.split(",")
    worker_hosts = FLAGS.worker_hosts.split(",") 
    num_workers = len(worker_hosts)    
    print("gup_cards=%d; num_worders=%d" % (FLAGS.gpu_cards, num_workers))
    cluster_spec = tf.train.ClusterSpec({ "ps":ps_hosts, "worker" : worker_hosts })    
    server = tf.train.Server(cluster_spec, job_name = FLAGS.job_name, task_index = FLAGS.task_index)
    if FLAGS.job_name == "ps":
      server.join()
    elif FLAGS.job_name == "worker" :
      run_training(server, cluster_spec, num_workers)

if __name__ == '__main__' :
  tf.app.run()

1 Answer:

Answer 0 (score: 0):

I found a way to do this which sounds simple, and it really is simple.

I created the TensorFlow cluster in the same way and passed the n_workers parameter to it, but I launched the different instances of the code with one extra environment variable, CUDA_VISIBLE_DEVICES, as sketched below.

CUDA_VISIBLE_DEVICES is an environment variable that can be used to restrict the cards that TensorFlow, or any other DL framework, can see.

The value of CUDA_VISIBLE_DEVICES can be -1 or any card index from 0 to n-1 (where n is the number of GPUs):

-1 indicates that no cards should be used
 i  indicates that card i should be used (a comma-separated list such as 0,1 selects several cards)
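As an illustration only (the script name, flags, and argument handling are placeholders, not the actual code above), each instance can be pinned to its own card either from the shell or by setting the variable in Python before TensorFlow is imported:

# Shell usage (placeholders):
#   CUDA_VISIBLE_DEVICES=0 python train.py --job_name=worker --task_index=0
#   CUDA_VISIBLE_DEVICES=1 python train.py --job_name=worker --task_index=1
import os
import sys

# Pick the card from the command line and hide all other GPUs from this process.
task_index = int(sys.argv[1]) if len(sys.argv) > 1 else 0
os.environ["CUDA_VISIBLE_DEVICES"] = str(task_index)  # "-1" would hide every GPU

import tensorflow as tf  # imported after setting the variable so only that card is visible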

I hope this helps anyone who is looking for a similar answer.