Converting a tf.keras model to an estimator makes the loss much worse

Time: 2019-09-10 14:06:09

Tags: tensorflow keras deep-learning tf.keras

I ran an experiment comparing the performance of a tf.keras model before and after converting it to an estimator, and got very different losses:

1) tf.keras model (no estimator): 1706.100 RMSE (+/- 260.064)
2) tf.keras converted to estimator: 3912.574 RMSE (+/- 132.833)

I am sure I did something wrong in the estimator conversion or in the Dataset API, but I cannot find the root cause. Any help is appreciated.

This is related to my other post, "Converting CNN-LSTM from keras to tf.keras drops accuracy".

Partial code without the estimator


Partial code with the estimator conversion

import tensorflow as tf
from math import sqrt
from sklearn.metrics import mean_squared_error
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import ConvLSTM2D
from matplotlib import pyplot
import numpy as np
import pandas as pd
import warnings

def model_fit_convlstm(train_supv, config, first_round):
    # train X shape = (48, 36) 
    train_x, train_y = train_supv[:, :-1], train_supv[:, -1]
    # train X convlstm shape = (48, n_seq, 1, n_steps, n_feature)
    train_x_convlstm = train_x.reshape(train_x.shape[0], config['n_seq'], 1, config['n_steps'], config['n_feature'])
    #
    input_shape = (config['n_seq'], 1, config['n_steps'], config['n_feature'])
    # relu = tf.nn.relu
    relu = 'relu'
    model = Sequential()
    model.add(ConvLSTM2D(config['n_filters'], (1,config['n_kernel']), activation=relu, input_shape=input_shape))
    model.add(Flatten())
    model.add(Dense(config['n_nodes'], activation=relu))
    model.add(Dense(1))    
    adam = tf.train.AdamOptimizer()
    # adam = 'adam'
    mse = tf.keras.losses.mean_squared_error
    # mse = 'mse'
    model.compile(optimizer=adam, loss=mse) 
    model.fit(train_x_convlstm, train_y, epochs=config['n_epochs'], batch_size=config['n_batch'], verbose=0)
    return model
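The reshape step in model_fit_convlstm splits each flat window of 36 values into n_seq subsequences for ConvLSTM2D. A minimal numpy sketch of just that step, using the hypothetical values n_seq=4, n_steps=9, n_feature=1 (the question does not give the config; any factorization with n_seq * 1 * n_steps * n_feature == 36 would work):

```python
import numpy as np

# 48 supervised samples, each a flat window of 36 values, as in the question
train_x = np.arange(48 * 36, dtype=np.float32).reshape(48, 36)

# hypothetical config: 4 subsequences of 9 steps with 1 feature (4 * 1 * 9 * 1 == 36)
n_seq, n_steps, n_feature = 4, 9, 1

# 5D input expected by ConvLSTM2D: (samples, time, rows, cols, channels)
train_x_convlstm = train_x.reshape(train_x.shape[0], n_seq, 1, n_steps, n_feature)

print(train_x_convlstm.shape)  # (48, 4, 1, 9, 1)
```

The reshape is a pure view change: flattening it back with reshape(48, 36) recovers the original windows exactly, so the conversion itself cannot explain the loss gap.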

def model_predict_convlstm(model, test_supv, config, first_round):
    test_x, test_y = test_supv[:, :-1], test_supv[:, -1]
    test_x_convlstm = test_x.reshape(test_x.shape[0], config['n_seq'], 1, config['n_steps'], config['n_feature'])
    if first_round.state:
        print('test X shape = {}'.format(test_x.shape))
        print('test X convlstm shape = {}'.format(test_x_convlstm.shape))
    #
    predictions = np.array([])
    for row in test_x_convlstm:
        test_x_convlstm_row = row.reshape(1, config['n_seq'], 1, config['n_steps'], config['n_feature'])
        yhat = model.predict(test_x_convlstm_row, verbose=0)
        # print(yhat)
        predictions = np.append(predictions, yhat)
    #
    return predictions
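The RMSE figures quoted in the question are presumably computed from the output of model_predict_convlstm against test_y; a minimal sketch of that evaluation in plain numpy (equivalent to sqrt of sklearn's mean_squared_error, which the question imports):

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error between true values and predictions
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# toy check: per-sample errors of 3 and 4 give sqrt((9 + 16) / 2)
print(rmse([0.0, 0.0], [3.0, 4.0]))  # 3.5355...
```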

1 Answer:

Answer 0 (score: 0)

More from my observations: I tried another neural network approach (an MLP). In that case the predictions from keras and tf.keras were very close. The conclusion is that the gap introduced by the estimator conversion only occurs with certain tf.keras layers; the culprits are ConvLSTM2D and Flatten, and I suspect tf.keras may use different defaults or a slightly different algorithm for them.

The following example shows no gap in prediction performance.

tf.keras

import tensorflow as tf
import numpy as np
import warnings
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
relu = tf.keras.activations.relu
model = Sequential()
model.add(Dense(config['n_nodes'], activation=relu, input_dim=config['n_input']))
model.add(Dense(1))
adam = tf.keras.optimizers.Adam()
mse = tf.keras.losses.mean_squared_error
model.compile(loss=mse, optimizer=adam)  # compile() takes no verbose argument
# model.fit(train_x, train_y, epochs=config['n_epochs'], batch_size=config['n_batch'], verbose=0)
with warnings.catch_warnings():
  warnings.filterwarnings("ignore")
  estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x = {model.input_names[0]: train_x.astype(np.float32)},
    y = train_y.astype(np.float32),
    num_epochs = config['n_epochs'],
    batch_size = config['n_batch'],
    shuffle = False
)
estimator.train(input_fn=train_input_fn)

keras

from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(config['n_nodes'], activation='relu', input_dim=config['n_input']))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(train_x, train_y, epochs=config['n_epochs'], batch_size=config['n_batch'], verbose=0)