Getting very different scores after converting a simple test model from Keras to PyTorch

Date: 2019-04-19 15:12:19

Tags: keras pytorch

I am trying to transition from Keras to PyTorch.

After reading tutorials and similar questions, I put together the simple model below as a test. However, the two models below give me very different scores: Keras (0.9) vs. PyTorch (0.03).

Can someone give me some guidance?

Basically, my dataset has 120 features and multi-label targets over 3 classes, which look like this:

[
    [1,1,1],
    [0,1,1],
    [1,0,0],
    ...
]
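(For a self-contained run, here is a minimal sketch of stand-in data with this shape. The arrays below are assumptions, since the real dataset isn't shown; the labels come from a random linear rule so the models have something learnable.)

import numpy as np

# Hypothetical stand-in data (assumption; the asker's real dataset is not shown):
# 120 features, 3 binary labels derived from a random linear rule.
rng = np.random.RandomState(0)
w = rng.randn(120, 3)
x_train = rng.rand(1000, 120).astype(np.float32)
y_train = (x_train @ w > 0).astype(np.float32)
x_test = rng.rand(200, 120).astype(np.float32)
y_test = (x_test @ w > 0).astype(np.float32)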

from sklearn.metrics import (label_ranking_loss,
                             label_ranking_average_precision_score)

def score(true, pred):
    # LRL: lower is better; LRAP: higher is better.
    lrl = label_ranking_loss(true, pred)
    lrap = label_ranking_average_precision_score(true, pred)
    print('LRL:', round(lrl, 2), 'LRAP:', round(lrap, 2))
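As a quick sanity check of these sklearn ranking metrics (an illustrative call, not from the original post), a perfect ranking should give LRL 0.0 and LRAP 1.0:

score(np.array([[1, 0, 1]]), np.array([[0.9, 0.1, 0.8]]))
# prints: LRL: 0.0 LRAP: 1.0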

#Keras:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(60, activation="relu", input_shape=(120,)))
model.add(Dense(30, activation="relu"))
model.add(Dense(3, activation="sigmoid"))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=100)
pred = model.predict(x_test)
score(y_test, pred)

#PyTorch
import torch
from torch.autograd import Variable

model = torch.nn.Sequential(
    torch.nn.Linear(120, 60),
    torch.nn.ReLU(),
    torch.nn.Linear(60, 30),
    torch.nn.ReLU(),
    torch.nn.Linear(30, 3),
    torch.nn.Sigmoid())
loss_fn = torch.nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

epochs = 100
batch_size = 32
n_batch = int(x_train.shape[0]/batch_size)
for epoch in range(epochs):
    avg_cost = 0
    for i in range(n_batch):
        x_batch = x_train[i*batch_size:(i+1)*batch_size]
        y_batch = y_train[i*batch_size:(i+1)*batch_size]
        x, y = Variable(torch.from_numpy(x_batch).float()), Variable(torch.from_numpy(y_batch).float(), requires_grad=False)
        pred = model(x)
        loss = loss_fn(pred, y)
        loss.backward()
        optimizer.step()
        avg_cost += loss.item()/n_batch
    print(epoch, avg_cost)

x, y = Variable(torch.from_numpy(x_test).float()), Variable(torch.from_numpy(y_test).float(), requires_grad=False)
pred = model(x)
score(y_test, pred.data.numpy())

1 Answer:

Answer 0 (score: 0):

You need to call optimizer.zero_grad() at the start of each iteration; otherwise the gradients from different batches keep accumulating.
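A minimal sketch of the corrected training loop (the same code as in the question, with only the optimizer.zero_grad() call added):

for epoch in range(epochs):
    avg_cost = 0
    for i in range(n_batch):
        x_batch = x_train[i*batch_size:(i+1)*batch_size]
        y_batch = y_train[i*batch_size:(i+1)*batch_size]
        x = Variable(torch.from_numpy(x_batch).float())
        y = Variable(torch.from_numpy(y_batch).float(), requires_grad=False)
        optimizer.zero_grad()  # reset gradients accumulated by the previous batch
        pred = model(x)
        loss = loss_fn(pred, y)
        loss.backward()
        optimizer.step()
        avg_cost += loss.item()/n_batch
    print(epoch, avg_cost)

Keras resets gradients for each batch internally during fit, which is why only the PyTorch version was affected.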
