The size of the logits and labels must match logits_size

Date: 2018-02-16 08:53:59

Tags: tensorflow convolution tensorflow-estimator

I am training a model on my own dataset, but I get the error mentioned below. The dataset has 124 classes with labels 0 to 123, the images are 60 * 60 grayscale, and the batch size is 10. The results are:

lables.eval() -> [1 101 101 103 103 103 103 100 102 1] -> len(lables.eval()) = 10

Original image size -> (?, 60, 60, 1)

First convolutional layer -> (?, 30, 30, 32)

Second convolutional layer -> (?, 15, 15, 64)

Flattened -> (?, 14400)

Dense 1 -> (?, 2048)

Dense 2 -> (?, 124)
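For reference, the flattened size above can be verified with plain arithmetic (a minimal sketch; the layer sizes are taken from the shapes listed above):

    # Check the shape arithmetic for the network above (plain Python).
    img_size = 60
    after_pool1 = img_size // 2              # 30, after the first 2x2 max-pool
    after_pool2 = after_pool1 // 2           # 15, after the second 2x2 max-pool
    flat_units = after_pool2 * after_pool2 * 64
    print(flat_units)                        # 14400, matching the flattened shape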

Error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and
labels must have the same first dimension, got logits shape [40,124] and
labels shape [10]

def model_fn(features, labels, mode, params):
    # Reference to the tensor named "image" in the input-function.
    x = features["image"]

    # The convolutional layers expect 4-rank tensors
    # but x is a 2-rank tensor, so reshape it.
    net = tf.reshape(x, [-1, img_size, img_size, num_channels])

    # First convolutional layer.
    net = tf.layers.conv2d(inputs=net, name='layer_conv1',
                           filters=32, kernel_size=3,
                           padding='same', activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)

    # Second convolutional layer.
    net = tf.layers.conv2d(inputs=net, name='layer_conv2',
                           filters=64, kernel_size=3,
                           padding='same', activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)

    # Flatten to a 2-rank tensor.
    net = tf.contrib.layers.flatten(net)
    # Eventually this should be replaced with:
    # net = tf.layers.flatten(net)

    # First fully-connected / dense layer.
    # This uses the ReLU activation function.
    net = tf.layers.dense(inputs=net, name='layer_fc1',
                          units=2048, activation=tf.nn.relu)

    # Second fully-connected / dense layer.
    # This is the last layer so it does not use an activation function.
    net = tf.layers.dense(inputs=net, name='layer_fc_2',
                          units=num_classes)

    # Logits output of the neural network.
    logits = net
    y_pred = tf.nn.softmax(logits=logits)
    y_pred_cls = tf.argmax(y_pred, axis=1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        spec = tf.estimator.EstimatorSpec(mode=mode,
                                          predictions=y_pred_cls)
    else:
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits)
        loss = tf.reduce_mean(cross_entropy)

        optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
        train_op = optimizer.minimize(
            loss=loss, global_step=tf.train.get_global_step())

        metrics = {
            "accuracy": tf.metrics.accuracy(labels, y_pred_cls)
        }

        spec = tf.estimator.EstimatorSpec(
            mode=mode,
            loss=loss,
            train_op=train_op,
            eval_metric_ops=metrics)

    return spec
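To narrow down where the first dimension grows from 10 to 40, the reshape in model_fn can be checked in isolation (a self-contained sketch with dummy data; the shapes are the ones listed above, not output from my actual run):

import tensorflow as tf

# Hypothetical check: reshape a dummy batch the same way model_fn does.
img_size, num_channels, batch_size = 60, 1, 10
x = tf.zeros([batch_size, img_size * img_size * num_channels])
net = tf.reshape(x, [-1, img_size, img_size, num_channels])
print(net.get_shape())    # (10, 60, 60, 1) -- the batch dimension is preserved
                          # only if each image really holds
                          # img_size * img_size * num_channels elements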

The labels come from here, via TFRecords:

def input_fn(filenames, train, batch_size=10, buffer_size=2048):
    # Args:
    # filenames:   Filenames for the TFRecords files.
    # train:       Boolean whether training (True) or testing (False).
    # batch_size:  Return batches of this size.
    # buffer_size: Read buffers of this size. The random shuffling
    #              is done on the buffer, so it must be big enough.

    # Create a TensorFlow Dataset-object which has functionality
    # for reading and shuffling data from TFRecords files.
    dataset = tf.data.TFRecordDataset(filenames=filenames)

    # Parse the serialized data in the TFRecords files.
    # This returns TensorFlow tensors for the image and labels.
    dataset = dataset.map(parse)

    if train:
        # If training then read a buffer of the given size and
        # randomly shuffle it.
        dataset = dataset.shuffle(buffer_size=buffer_size)

        # Allow infinite reading of the data.
        num_repeat = None
    else:
        # If testing then don't shuffle the data.
        # Only go through the data once.
        num_repeat = 1

    # Repeat the dataset the given number of times.
    dataset = dataset.repeat(num_repeat)

    # Get a batch of data with the given size.
    dataset = dataset.batch(batch_size)

    # Create an iterator for the dataset and the above modifications.
    iterator = dataset.make_one_shot_iterator()

    # Get the next batch of images and labels.
    images_batch, labels_batch = iterator.get_next()

    # The input-function must return a dict wrapping the images.
    x = {'image': images_batch}
    y = labels_batch
    print(x, ' - ', y.get_shape())

    return x, y
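The parse function mapped over the dataset is not shown above. Based on the features written in convert() below, it presumably looks something like this (the dtype passed to tf.decode_raw is an assumption and must match whatever dtype img.tostring() serialized in convert()):

import tensorflow as tf

def parse(serialized):
    # Hypothetical parse function matching the features written in convert().
    features = {
        'image': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64)
    }
    parsed = tf.parse_single_example(serialized=serialized, features=features)

    # NOTE: the dtype here is an assumption; if it does not match what
    # img.tostring() produced, the number of elements per image changes.
    image = tf.decode_raw(parsed['image'], tf.uint8)
    image = tf.cast(image, tf.float32)
    label = parsed['label']
    return image, label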

I generate the labels with this code; for example, image name = math-1 gives label = 1:

def get_lable_and_image(path):
    lbl = []
    img = []
    for filename in glob.glob(os.path.join(path, '*.png')):
        img.append(filename)
        lable = filename[41:].split()[0].split('-')[1]
        lbl.append(int(lable))

    lables = np.array(lbl)
    images = np.array(img)
    # print(images[1], lables[1])

    return images, lables
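The hard-coded filename[41:] slice only works for one specific path length; a more robust variant (a hypothetical replacement, assuming filenames like math-1.png) could be:

import os

def label_from_filename(filename):
    # Hypothetical helper: parse the label out of names like "math-1.png"
    # without relying on a fixed character offset into the full path.
    base = os.path.basename(filename)      # "math-1.png"
    stem = os.path.splitext(base)[0]       # "math-1"
    return int(stem.split('-')[1])         # 1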

I pass the images and labels in to create the TFRecords:

def convert(image_paths, labels, out_path):
    # Args:
    # image_paths   List of file-paths for the images.
    # labels        Class-labels for the images.
    # out_path      File-path for the TFRecords output file.

    print("Converting: " + out_path)

    # Number of images. Used when printing the progress.
    num_images = len(image_paths)

    # Open a TFRecordWriter for the output-file.
    with tf.python_io.TFRecordWriter(out_path) as writer:

        # Iterate over all the image-paths and class-labels.
        for i, (path, label) in enumerate(zip(image_paths, labels)):
            # Print the percentage-progress.
            print_progress(count=i, total=num_images-1)

            # Load the image-file using matplotlib's imread function.
            img = imread(path)

            # Convert the image to raw bytes.
            img_bytes = img.tostring()

            # Create a dict with the data we want to save in the
            # TFRecords file. You can add more relevant data here.
            data = \
                {
                    'image': wrap_bytes(img_bytes),
                    'label': wrap_int64(label)
                }

            # Wrap the data as TensorFlow Features.
            feature = tf.train.Features(feature=data)

            # Wrap again as a TensorFlow Example.
            example = tf.train.Example(features=feature)

            # Serialize the data.
            serialized = example.SerializeToString()

            # Write the serialized data to the TFRecords file.
            writer.write(serialized)
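The wrap_bytes and wrap_int64 helpers used above are not shown; they are presumably the usual TFRecord feature wrappers:

import tensorflow as tf

def wrap_int64(value):
    # Wrap a Python int as a TFRecord int64 feature.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def wrap_bytes(value):
    # Wrap raw bytes as a TFRecord bytes feature.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))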

0 Answers:

No answers