How to make certain weights of a layer non-trainable in Keras

Date: 2019-01-07 08:36:01

Tags: tensorflow keras eager-execution

I am using TensorFlow eager execution and building a model with the Keras API. My goal is to build a neural network in which certain weights of the input layer are set to zero and should therefore not be trained.

So far, my approach is to manually set those weights of the layer back to zero after every optimization step.

Here is the code as a toy example:

import tensorflow as tf
import tensorflow.contrib.eager as tfe
import numpy as np

tf.enable_eager_execution()

model = tf.keras.Sequential([
  tf.keras.layers.Dense(2, activation=tf.sigmoid, input_shape=(2,)), 
  tf.keras.layers.Dense(2, activation=tf.sigmoid)
])

# set the weights
weights = [np.array([[0, 0.25], [0.2, 0.3]]), np.array([0.35, 0.35]),
           np.array([[0.4, 0.5], [0.45, 0.55]]), np.array([0.6, 0.6])]

model.set_weights(weights)

model.get_weights()

features = tf.convert_to_tensor([[0.05,0.10 ]])
labels =  tf.convert_to_tensor([[0.01,0.99 ]])

mask = np.array([[0, 1], [1, 1]])

# define the loss function
def loss(model, x, y):
  y_ = model(x)
  return tf.losses.mean_squared_error(labels=y, predictions=y_)

# define the gradient calculation
def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

# create optimizer and global step
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
global_step = tf.train.get_or_create_global_step()

# optimization step: the gradients were taken w.r.t. the trainable variables,
# so they must be zipped with the same list
loss_value, grads = grad(model, features, labels)
optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)

# masking the optimized weights: set_weights expects the full list of
# weight arrays, so mask the first kernel and write the whole list back
weights = model.get_weights()
weights[0] = np.multiply(weights[0], mask)
model.set_weights(weights)
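An alternative to re-zeroing the weights after every step would be to mask the gradients themselves before they are applied, so the frozen entries never receive an update in the first place. A minimal NumPy sketch of that idea (the weight matrix, gradient values, and learning rate here are illustrative, not taken from the model above):

```python
import numpy as np

# A 0 in the mask marks an entry that must stay frozen at zero.
mask = np.array([[0., 1.], [1., 1.]])
W = np.array([[0., 0.25], [0.2, 0.3]])     # the masked entry starts at zero
grad = np.array([[0.1, 0.2], [0.3, 0.4]])  # hypothetical gradient w.r.t. W
lr = 0.01

# masked SGD update: frozen entries get a zero update and stay zero
W -= lr * (grad * mask)
```

In the TensorFlow code above, the same effect could presumably be achieved by multiplying the first entry of `grads` by the mask before calling `apply_gradients`, which would make the explicit re-masking step unnecessary.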

What effect does setting certain weights to zero have on the gradient descent algorithm? And is there another way to achieve this in Keras?

0 Answers:

No answers yet