Input-to-hidden-layer weight update in a multilayer perceptron neural network

Date: 2018-03-13 11:35:54

Tags: matlab neural-network

I am trying to implement a simple multilayer neural network to solve XOR, just to learn how multilayer nets and weight updates work. I wrote the MATLAB code below; it runs fine and gets good results, but the part that updates the input-to-hidden-layer weights looks wrong to me, because it multiplies the hidden-layer deltas by the wrong inputs in this piece of code:

    w1(1) = w1(1) + learnRate * (delta2(1) * neur1(1));
    w1(2) = w1(2) + learnRate * (delta2(1) * neur1(1));
    w1(3) = w1(3) + learnRate * (delta2(2) * neur1(2));
    w1(4) = w1(4) + learnRate * (delta2(2) * neur1(2));

I think it should be changed to:

    w1(1) = w1(1) + learnRate * (delta2(1) * neur1(1));
    w1(2) = w1(2) + learnRate * (delta2(1) * neur1(2));
    w1(3) = w1(3) + learnRate * (delta2(2) * neur1(1));
    w1(4) = w1(4) + learnRate * (delta2(2) * neur1(2));
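For reference, this is how I reason about it with the chain rule: each weight update should pair the delta of the hidden unit the weight feeds into with the activation of the input it comes from. A minimal sketch in matrix form, reusing the variables from my code (W1 and gradW1 are just names I introduce here for illustration):

    % Rows of W1 correspond to hidden units, columns to inputs, matching
    % neur2(1) = neur1(1)*w1(1) + neur1(2)*w1(2) + b1(1), etc.
    W1     = [w1(1) w1(2); ...
              w1(3) w1(4)];
    % The gradient is the outer product of the hidden deltas and the inputs,
    % so gradW1(i,j) = delta2(i) * neur1(j).
    gradW1 = delta2(:) * neur1(:)';
    W1     = W1 + learnRate * gradW1;   % element-wise, this is the "changed" update

This is why I expected the changed version to be the correct one.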

Can anyone explain why it fails when I make this change?!

Full code (it works, but seems wrong):

clear
clc

maxIt = 3000;

%% Neural Net initialize

neur1 = zeros(2,1);
neur2 = zeros(2,1);
neur3 = 0;

out2 = zeros(2,1);
out3 = 0;

out = zeros(4,1);
outCount = 1;

% weights and biases (all initialised to 0.2);
% w1(1),w1(2) feed hidden unit 1 and w1(3),w1(4) feed hidden unit 2
w1 = zeros(4,1) + 0.2;
w2 = zeros(2,1) + 0.2;
b1 = zeros(2,1) + 0.2;
b2 = 0.2;

desired = [0; ...
           1; ...
           1; ...
           0; ];

learnRate = 0.7;

input = [0 0;0 1;1 0;1 1];

ETotal = zeros(maxIt,1);

%% Main Loop

for iteration = 1 : maxIt

for dataCount = 1 : 4

    neur1 = input(dataCount,:)';


    neur2(1) = neur1(1) * w1(1) + neur1(2) * w1(2) + 1 * b1(1);
    neur2(2) = neur1(1) * w1(3) + neur1(2) * w1(4) + 1 * b1(2);
    out2(1) = 1./(1+exp(-neur2(1)));
    out2(2) = 1./(1+exp(-neur2(2)));

    neur3 = out2(1) * w2(1) + out2(2) * w2(2) + 1 * b2;
    out3 = 1./(1+exp(-neur3));

    %% Backpropagation

    out(dataCount) = out3;

    %delta
    err = desired(dataCount) - out3;
    delta3 = err * (out3 * (1 - out3));

    delta2(1) = (delta3 * w2(1)) * (out2(1) * (1 - out2(1)));
    delta2(2) = (delta3 * w2(2)) * (out2(2) * (1 - out2(2)));

    %weight update
    w2(1) = w2(1) + learnRate * (delta3 * out2(1));
    w2(2) = w2(2) + learnRate * (delta3 * out2(2));
    b2 = b2 + learnRate * (delta3 * 1);   %bias weight update

    w1(1) = w1(1) + learnRate * (delta2(1) * neur1(1));
    w1(2) = w1(2) + learnRate * (delta2(1) * neur1(1));
    w1(3) = w1(3) + learnRate * (delta2(2) * neur1(2));
    w1(4) = w1(4) + learnRate * (delta2(2) * neur1(2));
    b1(1) = b1(1) + learnRate * (delta2(1) * 1);
    b1(2) = b1(2) + learnRate * (delta2(2) * 1);

    fprintf('%.5f - ', out3);
    %fprintf('[%.10f,%.10f]-- ', w2(1), w2(2));
end
ETotal(iteration) = mse(out,desired);
outCount = 1;

%fprintf('\n %i -- %.20f \n', iteration, ETotal(iteration) );
fprintf('\n');
end

plot(ETotal);
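Side note on the error computation: mse here comes from the Neural Network Toolbox. If it is not available, I believe the same per-iteration value can be computed directly; this is just an equivalent alternative, not what the program above uses:

    ETotal(iteration) = mean((out - desired).^2);   % plain mean squared error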

The net has 2 inputs (neur1), a hidden layer of 2 perceptrons (neur2), and 1 perceptron as the output (neur3).

out2 and out3 are the outputs of each perceptron, computed with the sigmoid.

w1 and w2 are the weights, and b1 and b2 are the biases.

network design image
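To make the layout explicit, the forward pass can also be written with matrices; a sketch using the same variables as my code (sigm, W1 and W2 are helper names I introduce here, not part of the program):

    sigm = @(x) 1./(1 + exp(-x));        % sigmoid activation
    W1   = [w1(1) w1(2); ...             % row 1 feeds hidden unit 1
            w1(3) w1(4)];                % row 2 feeds hidden unit 2
    W2   = [w2(1) w2(2)];                % hidden-to-output weights
    out2 = sigm(W1 * neur1 + b1);        % hidden layer outputs (2x1)
    out3 = sigm(W2 * out2 + b2);         % network output (scalar)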

Thanks.
