TensorFlow: pre-trained embedding initialization problem during retraining

Asked: 2017-07-30 15:19:38

Tags: python tensorflow deep-learning word2vec

My goals are to (1) load a pre-trained word embedding matrix from a file as the initial value; (2) fine-tune the word embeddings rather than keep them fixed; and (3) every time the model is restored, load the fine-tuned embeddings instead of the pre-trained ones.

I tried something like this:

class Model:
    def __init__(self):
        # ...
    def _add_word_embed(self):
        W = tf.get_variable('W', [self._vsize, self._emb_size],
                            initializer=tf.truncated_normal_initializer(stddev=1e-4))
        W.assign(load_and_read_w2v())
        # ...
    def _add_seq2seq(self):
        # ...
    def build_graph(self):
        self._add_word_embed()
        self._add_seq2seq()

However, this approach overwrites the fine-tuned word embeddings every time I stop and restart training. I also tried running sess.run(W.assign(...)) after calling model.build_graph, but it raised an error saying the graph had been finalized and could no longer be modified. Could you tell me the right way to implement this? Thanks in advance!

EDIT:

This question is not a duplicate: it adds a new requirement, namely using the pre-trained embeddings at the start of training and fine-tuning them afterwards. I also asked how to do this efficiently. The accepted answer of the other question does not meet this requirement. Could you think twice before marking any question as a duplicate?

1 Answer:

Answer 0 (score: 3)

Here is a toy example of how to do it:

import numpy as np
import tensorflow as tf

# The graph

# Inputs
vocab_size = 2
embed_dim = 2
# float32 so the dtype matches the tf.zeros-initialized variable below
embedding_matrix = np.ones((vocab_size, embed_dim), dtype=np.float32)

# The weight matrix to initialize with the embeddings
W = tf.get_variable(initializer=tf.zeros([vocab_size, embed_dim]),
                    name='embed', trainable=True)

# Global step, used so that the weights are loaded from the numpy array
# only the first time, and not again during retraining.
global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')

# Initialization of the weights based on global_step
initW = tf.cond(tf.equal(global_step, 0),
                lambda: W.assign(embedding_matrix),
                lambda: W)
inc = tf.assign_add(W, [[1, 1], [1, 1]])

# Update the global step
update = tf.assign_add(global_step, 1)
op = tf.group(inc, update)

# init_fn, run by the Supervisor after variables are initialized/restored
def init_embed(sess):
    sess.run(initW)

Now, if we run the above in a session:

sv = tf.train.Supervisor(logdir='tmp', init_fn=init_embed)
with sv.managed_session() as sess:
    print('global step:', sess.run(global_step))
    print('Initial weight:')
    print(sess.run(W))
    for i in range(2):
        sess.run(op)
    _W, g_step = sess.run([W, global_step])
    print('Final weight:')
    print(_W)
    sv.saver.save(sess, sv.save_path, global_step=g_step)

# Output of the first run
Initial weight:
[[ 1.  1.]
 [ 1.  1.]]
Final weight:
[[ 3.  3.]
 [ 3.  3.]]

# Output of the second run
Initial weight:
[[ 3.  3.]
 [ 3.  3.]]
Final weight:
[[ 5.  5.]
 [ 5.  5.]]