How to build an input function for an Estimator created from a Keras model

Time: 2019-03-25 13:37:50

标签: tensorflow tensorflow-datasets tensorflow-estimator tf.keras

I am creating an Estimator from the following Keras model:

estimator = tf.keras.estimator.model_to_estimator(keras_model=keras_model, model_dir=model_dir)

My model looks like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
main_input (InputLayer)      (None, 8)                 0         
_________________________________________________________________
dense1 (Dense)               (None, 50)                450       
_________________________________________________________________
dense2 (Dense)               (None, 40)                2040      
_________________________________________________________________
dense3 (Dense)               (None, 30)                1230      
_________________________________________________________________
dense4 (Dense)               (None, 20)                620       
_________________________________________________________________
dense5 (Dense)               (None, 10)                210       
_________________________________________________________________
main_output (Dense)          (None, 8)                 88        
=================================================================
Total params: 4,638
Trainable params: 4,638
Non-trainable params: 0

Then I tried to create an input_fn for the Estimator:

def train_input_fn():
    dataset = csv_input_fn(training_data_path)
    dataset = dataset.batch(128).repeat(-1)
    train_iterator = dataset.make_one_shot_iterator()
    features, labels = train_iterator.get_next()
    return features, labels

def csv_input_fn(csv_path, batch_size=None, buffer_size=None, repeat=None):
    dataset = tf.data.TextLineDataset(csv_path)
    dataset = dataset.map(_parse_line)

    if buffer_size is not None:
        dataset = dataset.shuffle(buffer_size=buffer_size)

    if batch_size is not None:
        dataset = dataset.batch(batch_size)

    if repeat is not None:
        dataset = dataset.repeat(repeat)

    return dataset

def _parse_line(line):
    fields = tf.decode_csv(line, FIELD_DEFAULTS)
    features = dict(zip(COLUMNS, fields))
    features.pop("DATE")
    label = features.pop("LABEL")
    return features, label

But I get this error:

KeyError: "The dictionary passed into features does not have the expected inputs keys defined in the keras model.
Expected keys: {'main_input'}
features keys: {'TURNOVER', 'VOLUME', 'CLOSE', 'P_CHANGE', 'OPEN', 'PRICE_CHANGE', 'LOW', 'HIGH'}
Difference: {'VOLUME', 'CLOSE', 'LOW', 'P_CHANGE', 'main_input', 'OPEN', 'PRICE_CHANGE', 'TURNOVER', 'HIGH'}"

In the Keras model, it looks like {'main_input'} is the input name, while {'TURNOVER', 'VOLUME', 'CLOSE', 'P_CHANGE', 'OPEN', 'PRICE_CHANGE', 'LOW', 'HIGH'} are the features in my dataset, so they do not match each other. Does anyone know how to convert between them?
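
For reference, a minimal sketch of one possible fix (not from the original post): stack the parsed feature columns into a single tensor keyed by the model's input name inside _parse_line. COLUMNS and FIELD_DEFAULTS are the asker's own definitions; the stacking order is assumed to match the order the model was trained on.

FEATURE_COLUMNS = [c for c in COLUMNS if c not in ("DATE", "LABEL")]

def _parse_line(line):
    fields = tf.decode_csv(line, FIELD_DEFAULTS)
    features = dict(zip(COLUMNS, fields))
    label = features.pop("LABEL")
    # Stack the eight scalar tensors into one (8,) tensor so that, after
    # batching, the model sees a (batch, 8) input named 'main_input'.
    main_input = tf.stack([features[c] for c in FEATURE_COLUMNS])
    return {"main_input": main_input}, label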

2 Answers:

Answer 0 (score: 0)

Yes, you can convert the feature columns to a numpy array and feed it into the model.

import numpy as np
import pandas as pd
import tensorflow as tf

# Simulate csv data
x = np.random.randn(100, 8)
df = pd.DataFrame(data=x, columns=['TURNOVER', 'VOLUME', 'CLOSE', 'P_CHANGE',
                                   'OPEN', 'PRICE_CHANGE', 'LOW', 'HIGH'])

# Convert df to array
train_data = df.to_numpy()  # requires pandas >= 0.24; otherwise use df.values
train_labels = np.zeros((100, 8))

# `model` is the compiled Keras model from the question
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={model.input_names[0]: train_data},  # input_names[0] would be 'main_input'
    y=train_labels,
    batch_size=100,
    num_epochs=None,
    shuffle=True)

estimator = tf.keras.estimator.model_to_estimator(model)
estimator.train(input_fn=train_input_fn, steps=1)

https://www.tensorflow.org/guide/estimators#creating_estimators_from_keras_models

Answer 1 (score: 0)

Try using tf.data.experimental.make_csv_dataset. It accepts a single csv file or a list of files. It also handles batching and shuffling, so you don't have to add them explicitly.

dataset = tf.data.experimental.make_csv_dataset('file.csv', batch_size, ...)

This returns batches as OrderedDicts, so you will have to apply some parsing function.
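
A hedged sketch of such a parsing step (assuming the column names from the question, that LABEL is the label column, and placeholder file name and batch size):

import tensorflow as tf

dataset = tf.data.experimental.make_csv_dataset(
    'file.csv', batch_size=128, label_name='LABEL', num_epochs=None)

def _to_model_input(features, label):
    # make_csv_dataset yields an OrderedDict of (batch,) column tensors;
    # stack them along axis 1 to get the (batch, 8) 'main_input' tensor.
    cols = ['TURNOVER', 'VOLUME', 'CLOSE', 'P_CHANGE',
            'OPEN', 'PRICE_CHANGE', 'LOW', 'HIGH']
    features.pop('DATE', None)  # drop the unused date column if present
    return {'main_input': tf.stack([features[c] for c in cols], axis=1)}, label

dataset = dataset.map(_to_model_input)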

Another approach is to use CsvDataset:

dataset = tf.data.experimental.CsvDataset('file.csv', [dtype]).batch(1)

It requires a record_defaults argument, which is a list of the dtypes of the values in the file. This is a standard dataset object, so you need to apply shuffle, batch, and whatever parsing function fits your data, as in the sketch below.
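
A rough sketch under assumed file layout (a string DATE column, eight float feature columns, and a float LABEL column, in that order; the file name is a placeholder):

import tensorflow as tf

# record_defaults lists one dtype (or default value) per CSV column.
record_defaults = [tf.string] + [tf.float32] * 9

dataset = tf.data.experimental.CsvDataset('file.csv', record_defaults)

def _parse_row(date, *values):
    # values holds the eight features followed by the label.
    features, label = values[:-1], values[-1]
    return {'main_input': tf.stack(features)}, label

dataset = dataset.map(_parse_row).shuffle(10000).batch(128).repeat()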

https://www.tensorflow.org/api_docs/python/tf/data/experimental/CsvDataset
https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/data/experimental/make_csv_dataset
