Accuracy is very poor whenever I train any network

Asked: 2019-04-05 05:41:06

Tags: python tensorflow machine-learning neural-network deep-learning

I am trying to train a network on the Abalone dataset downloaded from the UCI Machine Learning Repository. The dataset looks like this:

M,0.455,0.365,0.095,0.514,0.2245,0.101,0.15,15
M,0.35,0.265,0.09,0.2255,0.0995,0.0485,0.07,7
F,0.53,0.42,0.135,0.677,0.2565,0.1415,0.21,9
M,0.44,0.365,0.125,0.516,0.2155,0.114,0.155,10
I,0.33,0.255,0.08,0.205,0.0895,0.0395,0.055,7

I gave the columns exactly the same names as the ones listed there. However, whenever I train a neural network on it, the accuracy is always poor, only around 50%. I am new to this field, so I don't know whether I am using the wrong activation function, whether there is a mistake in my code, or whether I haven't preprocessed the data well. Please help me find the mistake I have made. Here is my entire code:
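(Editorial note, a hedged sketch rather than the asker's actual code: the `read_dataset` function below calls `pd.read_csv` without passing any column names, yet the UCI abalone file ships without a header row, so pandas would silently consume the first data row as column labels. One way to attach the documented UCI column names explicitly is:)

```python
import io
import pandas as pd

# Two sample rows from the question, standing in for abalone.data.txt.
sample = "M,0.455,0.365,0.095,0.514,0.2245,0.101,0.15,15\n" \
         "I,0.33,0.255,0.08,0.205,0.0895,0.0395,0.055,7\n"

# header=None tells pandas the file has no header row; names= supplies
# the column labels from the UCI attribute description.
cols = ["Sex", "Length", "Diameter", "Height", "Whole weight",
        "Shucked weight", "Viscera weight", "Shell weight", "Rings"]
df = pd.read_csv(io.StringIO(sample), header=None, names=cols)
print(df.shape)  # (2, 9)
```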

import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

def read_dataset():
    df = pd.read_csv("abalone.data.txt")

    X = np.array(df.drop("Sex", 1))
    y = np.array(df["Sex"])

    encoder = LabelEncoder()
    encoder.fit(y)
    y = encoder.transform(y)
    Y = one_hot_encode(y)
    # print(X.shape)
    return X, Y


def one_hot_encode(label):
    n_label = len(label)
    n_unique_label = len(np.unique(label))
    one_hot_encode = np.zeros((n_label, n_unique_label))
    one_hot_encode[np.arange(n_label), label] = 1
    return one_hot_encode


X, y = read_dataset()

train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.2)

n_nodes_1 = 60
n_nodes_2 = 60
n_nodes_3 = 60
n_nodes_4 = 60

model_path = "C:\\Users\Kashif\Projects\DeepLearning-Tensorflow\Learnings\AlaboneDetection\AlaboneModel"
n_class = 3
input_size = X.shape[1]

x = tf.placeholder(tf.float32, [None, input_size])
y = tf.placeholder(tf.float32, [None, n_class])


def neural_network(x):
    hidden_1 = {"weights": tf.Variable(tf.random_normal([input_size, n_nodes_1])),
                "biases": tf.Variable(tf.random_normal([n_nodes_1]))}

    hidden_2 = {"weights": tf.Variable(tf.random_normal([n_nodes_1, n_nodes_2])),
                "biases": tf.Variable(tf.random_normal([n_nodes_2]))}

    hidden_3 = {"weights": tf.Variable(tf.random_normal([n_nodes_2, n_nodes_3])),
                "biases": tf.Variable(tf.random_normal([n_nodes_3]))}

    hidden_4 = {"weights": tf.Variable(tf.random_normal([n_nodes_3, n_nodes_4])),
                "biases": tf.Variable(tf.random_normal([n_nodes_4]))}

    out_layer = {"weights": tf.Variable(tf.random_normal([n_nodes_4, n_class])),
                "biases": tf.Variable(tf.random_normal([n_class]))}

    # (input * weights) + biases

    layer_1 = tf.add(tf.matmul(x, hidden_1["weights"]), hidden_1["biases"])
    layer_1 = tf.nn.relu(layer_1)

    layer_2 = tf.add(tf.matmul(layer_1, hidden_2["weights"]), hidden_2["biases"])
    layer_2 = tf.nn.relu(layer_2)

    layer_3 = tf.add(tf.matmul(layer_2, hidden_3["weights"]), hidden_3["biases"])
    layer_3 = tf.nn.relu(layer_3)

    layer_4 = tf.add(tf.matmul(layer_3, hidden_4["weights"]), hidden_4["biases"])
    layer_4 = tf.nn.relu(layer_4)

    output = tf.matmul(layer_4, out_layer["weights"]) + out_layer["biases"]

    return output


def train_neural_network(x):
    prediction = neural_network(x)
    cost_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost_function)

    init = tf.global_variables_initializer()
    loss_trace = []
    accuracy_trace = []
    #saver = tf.train.Saver()

    epochs = 1000

    with tf.Session() as sess:
        sess.run(init)
        for i in range(epochs):
            sess.run(optimizer, feed_dict={x: train_X, y: train_y})
            loss = sess.run(cost_function, feed_dict={x: train_X, y: train_y})
            accuracy = np.mean(np.argmax(sess.run(prediction,feed_dict={x:train_X,y:train_y}),axis=1) == np.argmax(train_y,axis=1))
            loss_trace.append(loss)
            accuracy_trace.append(accuracy)
            print('Epoch:', (i + 1), 'loss:', loss, 'accuracy:', accuracy)

        #saver.save(sess, model_path)
        print('Final training result:', 'loss:', loss, 'accuracy:', accuracy)
        loss_test = sess.run(cost_function, feed_dict={x: test_X, y: test_y})
        test_pred = np.argmax(sess.run(prediction, feed_dict={x: test_X, y: test_y}), axis=1)
        accuracy_test = np.mean(test_pred == np.argmax(test_y, axis=1))
        print('Results on test dataset:', 'loss:', loss_test, 'accuracy:', accuracy_test)


train_neural_network(x)

Here is the output of my last few epochs and the final accuracy results:

Epoch: 997 loss: 24.625622 accuracy: 0.518407662376534
Epoch: 998 loss: 22.168245 accuracy: 0.48757856929063154
Epoch: 999 loss: 21.896841 accuracy: 0.5001496557916791
Epoch: 1000 loss: 22.28085 accuracy: 0.4968572283747381
Final training result: loss: 22.28085 accuracy: 0.4968572283747381
Results on test dataset: loss: 23.206755 accuracy: 0.4688995215311005

1 Answer:

Answer 0 (score: 0):

I am new to TensorFlow too. Maybe you can try two things:

1. Lower the learning rate, e.g. to 0.0001, because your loss is oscillating.
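The effect described here can be sketched with a toy gradient descent in pure Python (this is an illustration on a quadratic loss, not the asker's TensorFlow graph):

```python
def descend(lr, steps=50, w=1.0):
    """Gradient descent on f(w) = w**2 (gradient: 2*w)."""
    for _ in range(steps):
        w -= lr * 2 * w
    return abs(w)

# A step size above 1.0 makes each update overshoot the minimum, so the
# iterate flips sign and grows in magnitude -- the same oscillating-loss
# symptom as in the training log above. A smaller step converges smoothly.
diverging = descend(lr=1.05)   # |w| grows each step
converging = descend(lr=0.1)   # |w| shrinks toward the minimum at 0
print(diverging > 1.0, converging < 1e-3)  # True True
```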

2. Increase the number of layers, because your model may be underfitting.

If the above doesn't solve your problem, you can print the data and check whether train_X and train_y are correct.
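A minimal version of that check might look like this (NumPy only; `train_y` here is a hypothetical stand-in for the asker's one-hot array):

```python
import numpy as np

# Hypothetical stand-in for the question's one-hot encoded train_y.
train_y = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]], dtype=float)

# Every one-hot row should sum to exactly 1 ...
row_sums = train_y.sum(axis=1)
assert np.allclose(row_sums, 1.0)

# ... and the per-class counts should roughly match the raw file
# (for abalone Sex, the M / F / I classes are all well represented).
class_counts = train_y.sum(axis=0)
print(class_counts)
```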