I am trying to update the weights of an ANN without using a built-in optimizer or loss (I have reasons for doing this and can explain them if necessary). Since model.compile requires a loss function and an optimizer, I defined a custom dummy loss and a dummy optimizer and used those. I believe these should have no effect on the weights (please correct me if I am wrong about that).
From my point of view, updating the weights from inside a custom layer's method seems reasonable, but the weights are not updated at all.
Here is a minimal working example:
import numpy as np
import keras
from keras import backend as K
from keras.layers import Layer
from keras.models import Sequential
from keras.callbacks import LambdaCallback

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        self.kernel = self.kernel * 0
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

model = Sequential()
model.add(MyLayer(input_shape=(1, 1), output_dim=1))
model.summary()

def dummy_loss(y_true, y_pred):
    return y_pred

class dummy_opt(keras.optimizers.Optimizer):
    def __init__(self): return None
    def get_updates(self, loss, params): return np.array(0)
    def get_config(self): return 0

dummyOpt = dummy_opt()
model.compile(optimizer=dummyOpt, loss=dummy_loss)

print_weights = LambdaCallback(
    on_epoch_end=lambda epoch, logs: print(model.layers[0].get_weights()))

model.fit(x=np.ones(10).reshape(10, 1, 1), y=np.zeros(10),
          epochs=5, verbose=2, callbacks=[print_weights])
I expected the weights to become zero, because I multiply self.kernel by zero in the custom layer's call method.
This is the output (the weights do not change):
Epoch 1/5
4s - loss: 0.0000e+00
[array([[-0.03051628]], dtype=float32)]
Epoch 2/5
0s - loss: 0.0000e+00
[array([[-0.03051628]], dtype=float32)]
Epoch 3/5
0s - loss: 0.0000e+00
[array([[-0.03051628]], dtype=float32)]
Epoch 4/5
0s - loss: 0.0000e+00
[array([[-0.03051628]], dtype=float32)]
Epoch 5/5
0s - loss: 0.0000e+00
[array([[-0.03051628]], dtype=float32)]
<keras.callbacks.History at 0x1cb0319a908>
So why can't I change the weights inside the custom layer's call method?
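My current suspicion (which may well be wrong, and which I would like confirmed or corrected) is that `self.kernel = self.kernel * 0` only rebinds the Python attribute `kernel` to a new object, while the variable originally registered by `add_weight` (the one `get_weights()` reads) is never touched. Here is a plain-numpy sketch of that distinction; `LayerSketch` is a hypothetical stand-in I made up, not real Keras code:

```python
import numpy as np

class LayerSketch:
    """Hypothetical stand-in for a layer; illustrates rebinding vs. mutation."""

    def __init__(self):
        self.kernel = np.array([[0.5]])  # stands in for add_weight's variable
        self._registered = self.kernel   # what get_weights() would read

    def call_rebinding(self):
        # `*` builds a brand-new array and `kernel` is rebound to it;
        # the registered variable is never modified
        self.kernel = self.kernel * 0

    def call_in_place(self):
        # `*=` on an ndarray mutates the same buffer the registry points to
        # (analogous, I assume, to an explicit update/assign op in Keras)
        self.kernel *= 0

rebound = LayerSketch()
rebound.call_rebinding()
print(rebound._registered)   # [[0.5]] -- unchanged, like my layer above

mutated = LayerSketch()
mutated.call_in_place()
print(mutated._registered)   # [[0.]] -- the registered variable did change
```

If this is the right mental model, then what I would actually need is whatever Keras primitive performs the in-place variant on the backend variable.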