TensorflowJS: loss is NaN when fitting

Date: 2018-08-27 13:36:36

Tags: javascript tensorflow machine-learning tensorflow.js

I am trying to reproduce with TensorflowJS the same example as the Python version of TensorFlow. Unfortunately, when I run the script, the loss logged during training is NaN, and I don't know why.

What I want to achieve is a simple text classifier that returns 0 or 1 based on the trained model. This is the Python tutorial I am following: https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub

This is the code I have translated so far:

import * as tf  from '@tensorflow/tfjs'

// Load the binding:
//require('@tensorflow/tfjs-node');  // Use '@tensorflow/tfjs-node-gpu' if running with GPU.

// utils
const tuple = <A, B>(a: A, b: B): [A, B] => [a, b]

// prepare the data, first is result, second is the raw text
const data: [number, string][] = [
    [0, 'aaaaaaaaa'],
    [0, 'aaaa'],
    [1, 'bbbbbbbbb'],
    [1, 'bbbbbb']
]

// normalize the data
const arrayFill = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
const normalizeData = data.map(item => {
    return tuple(item[0], item[1].split('').map(c => c.charCodeAt(0)).concat(arrayFill).slice(0, 10))
})

const xs = tf.tensor(normalizeData.map(i => i[1]))
const ys = tf.tensor(normalizeData.map(i => i[0]))

console.log(xs)

// Configs
const LEARNING_RATE = 1e-4

// Train a simple model:
//const optimizer = tf.train.adam(LEARNING_RATE)
const model = tf.sequential();
model.add(tf.layers.embedding({inputDim: 1000, outputDim: 16}))
model.add(tf.layers.globalAveragePooling1d({}))
model.add(tf.layers.dense({units: 16, activation: 'relu'}))
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}))
model.summary()
model.compile({optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy']});

model.fit(xs, ys, {
  epochs: 10,
  validationData: [xs, ys],
  callbacks: {
    onEpochEnd: async (epoch, log) => {
      console.log(`Epoch ${epoch}: loss = ${log.loss}`);
    }
  }
});

(here pure JS code) This is the output I get:

_________________________________________________________________
Layer (type)                 Output shape              Param #
=================================================================
embedding_Embedding1 (Embedd [null,null,16]            16000
_________________________________________________________________
global_average_pooling1d_Glo [null,16]                 0
_________________________________________________________________
dense_Dense1 (Dense)         [null,16]                 272
_________________________________________________________________
dense_Dense2 (Dense)         [null,1]                  17
=================================================================
Total params: 16289
Trainable params: 16289
Non-trainable params: 0
_________________________________________________________________
Epoch 0: loss = NaN
Epoch 1: loss = NaN
Epoch 2: loss = NaN
Epoch 3: loss = NaN
Epoch 4: loss = NaN
Epoch 5: loss = NaN
Epoch 6: loss = NaN
Epoch 7: loss = NaN
Epoch 8: loss = NaN
Epoch 9: loss = NaN

1 answer:

Answer 0 (score: 0)

The loss or the predictions can become NaN. This can be a consequence of the vanishing gradient problem: during training, the gradients (partial derivatives) can become very small (tend towards 0). The binaryCrossentropy loss function uses a logarithm in its computation, and depending on the operands involved in that logarithm, the result can be NaN.

Binary cross-entropy: L = -(y · log(ŷ) + (1 - y) · log(1 - ŷ))

If the weights of the model become NaN, the prediction ŷ can become NaN as well, which in turn makes the loss NaN. Adjusting the number of epochs can avoid the problem; another way to solve it is to change the loss or the optimizer function.

That said, the loss of your code is not NaN. Here is an execution of the code on stackblitz. Also, have a look at the following answer, where a model was fixed so that it would not predict NaN.
