Incompatible shapes error when running an RNN model with the tf-lite experimental layers

Posted: 2019-11-09 09:25:12

Tags: tensorflow keras deep-learning recurrent-neural-network tf-lite

I get this error when using softmax:

logits and labels must have the same first dimension, got logits shape [24,3] and labels shape [384]
     [[{{node loss/output_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]]

I get this error when using sigmoid:

InvalidArgumentError: Incompatible shapes: [24] vs. [24,16]
     [[{{node metrics/sparse_categorical_accuracy/Equal}}]]

import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'
import tensorflow as tf
import numpy as np
from tensorflow.lite.experimental.examples.lstm.rnn import bidirectional_dynamic_rnn


def build_LSTM_layer(num_layers):
    lstm_layers = []
    for i in range(num_layers):
        lstm_layers.append(tf.lite.experimental.nn.TFLiteLSTMCell(
            num_units=50, name='rnn{}'.format(i), forget_bias=1.0))
    return tf.keras.layers.StackedRNNCells(lstm_layers)


def build_bidirectional(inputs, num_layers, use_dynamic_rnn=True):
    # Switch from batch-major to time-major: (batch, time, feat) -> (time, batch, feat)
    lstm_inputs = tf.transpose(inputs, [1, 0, 2])
    outputs, output_states = bidirectional_dynamic_rnn(
        build_LSTM_layer(num_layers), build_LSTM_layer(num_layers),
        lstm_inputs, dtype="float32", time_major=True)
    fw_lstm_output, bw_lstm_output = outputs
    final_out = tf.concat([fw_lstm_output, bw_lstm_output], axis=2)
    # Keep only the last timestep's output.
    final_out = tf.unstack(final_out, axis=0)
    return final_out[-1]


tf.reset_default_graph()
model_tf = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],), name='input'),
    tf.keras.layers.Embedding(input_dim=len(vocab) + 1, output_dim=100,
                              input_length=X.shape[1]),
    tf.keras.layers.Lambda(build_bidirectional,
                           arguments={'num_layers': 2, 'use_dynamic_rnn': True}),
    tf.keras.layers.Dense(3, activation='softmax', name='output')
])
model_tf.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['sparse_categorical_accuracy'])
model_tf.summary()

I have built the same model in Keras using the Bidirectional wrapper with LSTM layers, and it works fine, but I cannot convert it to tf-lite because a few of the RNN layers are not supported.
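For reference, a minimal sketch of what that working Keras equivalent might look like (the vocabulary size and unit count here are placeholders, not the exact values from my setup; `return_sequences=True` keeps one output per timestep, so the final Dense layer predicts a tag for every token):

```python
import tensorflow as tf

vocab_size = 59   # placeholder for len(vocab) + 1
seq_len = 16      # X.shape[1]

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(seq_len,), name='input'),
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=100),
    # return_sequences=True -> one output vector per token, not just the last one
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(50, return_sequences=True)),
    tf.keras.layers.Dense(3, activation='softmax', name='output'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])
# Output shape is (None, 16, 3): a 3-way prediction per timestep,
# which matches labels of shape (N, 16, 1).
```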

So I am trying this approach with the tf.experimental layers instead. It works when Y has shape (N,), but it does not work when I change the shape to Y = (N, N, 1).

The input is a sequence of tokens, and the output should be NER tags, which I do get from the Keras model but not from the model above.

X.shape = (30, 16)
y.shape = (30, 16, 1)

I/P = array([[15., 10., 38.,  4., 32., 57., 39.,  0.,  0.,  0.,  0.,  0.,  0., 0.,  0.,  0.]])
O/P = array([[[1.],[1.],[1.],[1.],[2.],[1.],[1.],[0.],[0.],[0.],
         [0.],[0.],[0.],[0.],[0.],[0.]]])
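The first traceback lines up with these shapes: the sparse categorical loss flattens the (N, 16, 1) labels to one integer per token and so expects one logit row per token, while the Lambda layer above keeps only the last timestep. A small shape check (plain numpy with hypothetical zero arrays, just to make the 24-vs-384 mismatch concrete):

```python
import numpy as np

batch, timesteps, num_classes = 24, 16, 3

# Per-timestep labels, as in y.shape = (30, 16, 1): one tag per token.
labels = np.zeros((batch, timesteps, 1), dtype=np.int64)

# The sparse loss flattens labels to one entry per prediction row.
flat_labels = labels.reshape(-1)
print(flat_labels.shape)          # (384,) -- the "labels shape [384]" in the error

# Keeping only the last timestep leaves one logit row per sequence.
last_step_logits = np.zeros((batch, num_classes))
print(last_step_logits.shape)     # (24, 3) -- the "logits shape [24,3]" in the error
```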

0 Answers:

There are no answers yet.