One-hot vector prediction always returns the same values

Date: 2018-01-07 23:33:21

Tags: neural-network deep-learning keras bigdata conv-neural-network

My deep neural network returns the same output for every input. I have tried (without luck) varying the:

  • loss function
  • optimizer
  • network topology / layer types
  • number of epochs (1-100)

I have 3 outputs (one-hot), and for every input the outputs look something like this (they change after each training run):

4.701869785785675049e-01 4.793547391891479492e-01 2.381391078233718872e-01

This may be happening because my training data (stock prediction) is highly random.

The dataset is also heavily skewed toward one of the answers (which is why I use sample_weight, computed proportionally).

I think I can rule out overfitting (it happens even with 1 epoch, and I have dropout layers).

An example of my network:

xs_conv = xs.reshape(xs.shape[0], xs.shape[1], 1)
model_conv = Sequential()
model_conv.add(Conv1D(128, 15, input_shape=(input_columns,1), activation='relu'))
model_conv.add(MaxPooling1D(pool_size=3))
model_conv.add(Dropout(0.4))
model_conv.add(Conv1D(64, 15, input_shape=(input_columns,1), activation='relu'))
model_conv.add(MaxPooling1D(pool_size=3))
model_conv.add(Dropout(0.4))
model_conv.add(Flatten())
model_conv.add(Dense(128, activation='relu'))
model_conv.add(Dropout(0.4))
model_conv.add(Dense(3, activation='sigmoid'))

model_conv.compile(loss='mean_squared_error', optimizer='nadam', metrics=['accuracy'])
model_conv.fit(xs_conv, ys, epochs=10, batch_size=16, sample_weight=sample_weight, validation_split=0.3, shuffle=True)
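[Editor's note, not from the question: one thing the setup above leaves open is the output layer. Three independent sigmoid units need not sum to 1, whereas softmax produces a proper probability distribution over mutually exclusive one-hot classes. A minimal numpy sketch of the difference, with hypothetical logits:]

```python
import numpy as np

# hypothetical logits for the 3 classes
logits = np.array([2.0, 1.0, 0.1])

sigmoid = 1.0 / (1.0 + np.exp(-logits))          # independent per-class scores
softmax = np.exp(logits) / np.exp(logits).sum()  # a proper probability distribution

print(sigmoid.sum())  # well above 1: the three sigmoids are unconstrained
print(softmax.sum())  # ~1.0 up to floating-point rounding
```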

I would understand it if the outputs were random, but what is happening seems very peculiar. Any ideas?

Data: computed.csv

The whole code:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Input, Dense, Conv1D, Dropout, MaxPooling1D, Flatten
from keras.models import Model, Sequential
from keras import backend as K
import random

DATA_DIR = '../../Data/'
INPUT_DATA_FILE = DATA_DIR + 'computed.csv'

def get_y(row):
    profit = 0.010
    hot_one = [0,0,0]
    hot_one[0] = int(row.close_future_5 >= profit)
    hot_one[1] = int(row.close_future_5 <= -profit)
    hot_one[2] = int(row.close_future_5 < profit and row.close_future_10 > -profit)
    return hot_one

def rolling_window(window, arr):
    return [np.array(arr[i:i+window]).transpose().flatten().tolist() for i in range(0, len(arr))][0:-window+1]

def prepare_data(data, window, test_split):
    xs1 = data.iloc[:,1:26].as_matrix()
    ys1 = [get_y(row) for row in data.to_records()]
    xs = np.array(rolling_window(window, xs1)).tolist()
    ys = ys1[0:-window+1]
    zipped = list(zip(xs, ys))
    random.shuffle(zipped)

    train_size = int((1.0 - test_split) * len(data))

    xs, ys = zip(*zipped[0:train_size])
    xs_test, ys_test = zip(*zipped[train_size:])
    return np.array(xs), np.array(ys), np.array(xs_test), np.array(ys_test)

def get_sample_weight(y):
    if(y[0]): return ups_w
    elif(y[1]): return downs_w
    else: return flats_w

data = pd.read_csv(INPUT_DATA_FILE)
window = 30
test_split = .9

xs, ys, xs_test, ys_test = prepare_data(data, window, test_split)

ups_cnt = sum(y[0] for y in ys)
downs_cnt = sum(y[1] for y in ys)
flats_cnt = sum(y[0] == False and y[1] == False for y in ys)
total_cnt = ups_cnt + downs_cnt + flats_cnt
ups_w = total_cnt/ups_cnt
downs_w = total_cnt/downs_cnt
flats_w = total_cnt/flats_cnt

sample_weight = np.array([get_sample_weight(y) for y in ys])

_, input_columns = xs.shape


xs_conv = xs.reshape(xs.shape[0], xs.shape[1], 1)
model_conv = Sequential()
model_conv.add(Conv1D(128, 15, input_shape=(input_columns,1), activation='relu'))
model_conv.add(MaxPooling1D(pool_size=3))
model_conv.add(Dropout(0.4))
model_conv.add(Conv1D(64, 15, input_shape=(input_columns,1), activation='relu'))
model_conv.add(MaxPooling1D(pool_size=3))
model_conv.add(Dropout(0.4))
model_conv.add(Flatten())
model_conv.add(Dense(128, activation='relu'))
model_conv.add(Dropout(0.4))
model_conv.add(Dense(3, activation='sigmoid'))

model_conv.compile(loss='mean_squared_error', optimizer='nadam', metrics=['accuracy'])
model_conv.fit(xs_conv, ys, epochs=1, batch_size=16, sample_weight=sample_weight, validation_split=0.3, shuffle=True)

xs_test_conv = xs_test.reshape(xs_test.shape[0], xs_test.shape[1], 1)
res = model_conv.predict(xs_test_conv)

plotdata = pd.concat([pd.DataFrame(res, columns=['res_up','res_down','res_flat']), pd.DataFrame(ys_test, columns=['ys_up','ys_down','y_flat'])], axis = 1)

plotdata[['res_up', 'ys_up']][3000:3500].plot(figsize=(20,4))
plotdata[['res_down', 'ys_down']][3000:3500].plot(figsize=(20,4))

1 Answer:

Answer 0 (score: 1)

I have run your model with the attached data, and so far I can say that the biggest problem is the lack of data cleaning.

For example, there is an inf value in row 62 of the .csv. I filtered such rows out with:
xs1 = xs1[np.isfinite(xs1).all(axis=1)]
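[Editor's note: filtering xs1 alone would desynchronize it from the labels. A sketch, with small stand-in arrays in place of the csv data, that applies the same finiteness mask to both:]

```python
import numpy as np

# stand-in data; in the question, xs1/ys1 come from computed.csv
xs1 = np.array([[1.0, 2.0], [np.inf, 3.0], [4.0, 5.0]])
ys1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

mask = np.isfinite(xs1).all(axis=1)  # rows whose features are all finite
xs1, ys1 = xs1[mask], ys1[mask]      # same mask keeps features and labels aligned
```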

...I also gathered some statistics on xs, namely min, max and mean. The results are quite remarkable:

-43.0049723138
32832.3333333    # !!!
0.213126234391

On average the values are close to 0, but some are 6 orders of magnitude larger. These particular rows are surely hurting the neural network, so you should either filter them out or come up with a clever way of normalizing the features.
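[Editor's note: one way to normalize the features as suggested, a sketch under my own assumptions rather than the answer's code: clip per-column outliers to a percentile range, then standardize each column to zero mean and unit variance.]

```python
import numpy as np

# stand-in feature matrix with one extreme outlier in the second column
xs = np.array([[1.0, 100.0], [2.0, 32832.0], [3.0, 120.0], [4.0, 110.0]])

lo, hi = np.percentile(xs, [1, 99], axis=0)  # per-column percentile bounds
xs = np.clip(xs, lo, hi)                     # tame the outliers before scaling
xs = (xs - xs.mean(axis=0)) / (xs.std(axis=0) + 1e-8)  # zero mean, unit variance
```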

But even so, the model reached 71-79% validation accuracy. The resulting distribution is somewhat skewed toward the 3rd class, but overall it is diverse enough that I would not call it peculiar: 19% for class 1, 7% for class 2, 73% for class 3. Sample test output:

[[  1.93120316e-02   4.47684433e-04   9.97518778e-01]
 [  1.40607255e-02   2.45630667e-02   9.74113524e-01]
 [  3.07740629e-01   4.80920941e-01   2.28664145e-01]
 ..., 
 [  5.72797097e-02   9.45571139e-02   8.07634115e-01]
 [  1.05512664e-01   8.99530351e-02   6.70437515e-01]
 [  5.24505274e-03   1.46622911e-01   9.42657173e-01]]
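[Editor's note: the class shares quoted above can be reproduced from a prediction matrix with argmax. A sketch on a few rows like those shown:]

```python
import numpy as np

# a few rows of softmax-like predictions, similar to the sample output
res = np.array([[0.019, 0.0004, 0.998],
                [0.014, 0.025, 0.974],
                [0.308, 0.481, 0.229],
                [0.057, 0.095, 0.808]])

pred = res.argmax(axis=1)                           # winning class per row
shares = np.bincount(pred, minlength=3) / len(res)  # fraction of predictions per class
print(shares)
```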