Logistic regression with regularization

Time: 2019-03-04 17:28:28

Tags: machine-learning octave logistic-regression regularized

I am doing an assignment for a machine learning course, and I am writing the program in Octave. I need to write a function that implements gradient descent for regularized logistic regression using the following formulas:

J(theta) = (1/m) * sum_i [ -y(i) * log(h(x(i))) - (1 - y(i)) * log(1 - h(x(i))) ] + (lambda / (2*m)) * sum_{j>=1} theta_j^2

theta_0 := theta_0 - alpha * (1/m) * sum_i (h(x(i)) - y(i)) * x_0(i)
theta_j := theta_j - alpha * [ (1/m) * sum_i (h(x(i)) - y(i)) * x_j(i) + (lambda/m) * theta_j ]   (for j >= 1)

where h(x) = sigmoid(theta' * x).

But after computing the new theta, the cost function gets worse. What is wrong? Here is my code:

function [J, grad] = costFunctionReg(theta, X, y, lambda, alpha, num_iters)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
%   J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. the parameters.

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
J = 0;
grad = zeros(size(theta));
J_history = zeros(num_iters, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta


h = sigmoid (X * theta');

% J(theta) = (1/m) * sum( -y .* log(h) - (1 - y) .* log(1 - h) )
%disp (h);

J = (1/m) * sum(-y .* log(h) - (1-y) .* log(1-h)) + (1/(2*m)) * lambda * sum(theta.^2)

for iter = 1:num_iters
    theta(1) = theta(1) - alpha * ((1/m) * sum(-y .* log(h) - (1-y) .* log(1-h)) * X(1) + lambda/m * theta(1));
    theta(2) = theta(2) - alpha * ((1/m) * sum(-y .* log(h) - (1-y) .* log(1-h)) * X(2) + lambda/m * theta(2));
    theta(3) = theta(3) - alpha * (1/m) * sum(-y .* log(h) - (1-y) .* log(1-h));
    theta
    J_history(iter) = costFunction(theta, X, y)
end
%costFunctionReg([0.01, 0.1, 0.02], X1(:, [1,2,3]), X(:,3), 0.002, 0.001, 10)


% =============================================================

end
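For comparison, here is a minimal sketch of the usual vectorized implementation of the formulas above, assuming theta is an n x 1 column vector, X is m x n with a leading column of ones, and sigmoid is the helper function already present in the course code. The name costFunctionRegSketch is hypothetical, not part of the assignment:

function [J, grad] = costFunctionRegSketch(theta, X, y, lambda)
% Regularized logistic-regression cost and gradient (sketch).
% theta(1) multiplies the bias column of ones and is NOT regularized.
m = length(y);

h = sigmoid(X * theta);                            % m x 1 vector of predictions

reg = (lambda / (2 * m)) * sum(theta(2:end) .^ 2); % skip theta(1)
J = (1 / m) * sum(-y .* log(h) - (1 - y) .* log(1 - h)) + reg;

grad = (1 / m) * (X' * (h - y));                   % gradient of the unregularized cost
grad(2:end) = grad(2:end) + (lambda / m) * theta(2:end);
end

With that in place, a descent loop would recompute the gradient (and hence h) at every step:

for iter = 1:num_iters
    [J, grad] = costFunctionRegSketch(theta, X, y, lambda);
    theta = theta - alpha * grad;                  % simultaneous update of all parameters
    J_history(iter) = J;                           % should decrease for small enough alpha
end

A likely reason the cost gets worse in the posted code: the update term plugs in the cost summand sum(-y .* log(h) - (1 - y) .* log(1 - h)) where the formula calls for the gradient (1/m) * X' * (h - y), h is never recomputed inside the loop, and the bias parameter (theta(1) in Octave's 1-based indexing) is regularized while theta(3) is not.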

0 Answers
