Script timing improvement

Date: 2014-11-25 18:46:06

Tags: python matlab

I am trying to improve the time efficiency of part of my script, but I am running out of ideas. I ran the following script in both Matlab and Python, and the Matlab implementation is four times faster than the Python one. Any idea how to improve this?

Python

import time
import numpy as np

def ComputeGradient(X, y, theta, alpha):
    # One gradient-descent step for two-parameter linear regression.
    m = len(y)
    factor = alpha / m
    h = np.dot(X, theta)  # predictions for the current theta
    theta = [theta[i] - factor * sum((h-y) * X[:,i]) for i in [0,1]]
    # Also tried this, but with worse performance:
    #diff = np.tile((h-y)[:, np.newaxis], 2)
    #theta = theta - factor * sum(diff * X)
    return theta

if __name__ == '__main__':
    data = np.loadtxt("data_LinReg.txt", delimiter=',')
    theta = [0, 0]
    alpha = 0.01
    X = data[:,0]
    y = data[:,1]
    X = np.column_stack((np.ones(len(y)), X))  # prepend a column of ones
    start_time = time.time()
    for i in range(0, 1500, 1):
        theta = ComputeGradient(X, y, theta, alpha)
    stop_time = time.time()
    print("--- %s seconds ---" % (stop_time - start_time))

-> 0.048s

Matlab

data = load('data_LinReg.txt');
X = data(:, 1); y = data(:, 2);
m = length(y);
X = [ones(m, 1), data(:,1)]; % Add a column of ones to x
theta = zeros(2, 1);
iterations = 1500;
alpha = 0.01;
tic
for i = 1:iterations
   theta = gradientDescent(X, y, theta, alpha);
end
toc

function theta = gradientDescent(X, y, theta, alpha)
   m = length(y); % number of training examples
   h = X * theta;
   t1 = theta(1) - alpha * sum(X(:,1).*(h-y)) / m;
   t2 = theta(2) - alpha * sum(X(:,2).*(h-y)) / m;
   theta = [t1; t2];
end

-> 0.01s

[EDIT]: Possible ways towards a solution

One possible way is to use numpy vectorization instead of Python's built-in functions. In the posted code, replacing sum with np.sum improves the time efficiency, bringing it closer to Matlab (0.019s instead of 0.048s).
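For reference, a minimal sketch of that replacement (same function as in the question, with np.sum swapped in; the fully vectorized variant in the trailing comment is a further, untimed assumption):

import numpy as np

def ComputeGradient(X, y, theta, alpha):
    # Same update as in the question, with np.sum swapped in for
    # Python's built-in sum.
    m = len(y)
    factor = alpha / m
    h = np.dot(X, theta)
    return [theta[i] - factor * np.sum((h - y) * X[:, i]) for i in [0, 1]]

# A further (untimed) step would be to drop the list comprehension and
# use a single matrix product, keeping theta as a numpy array:
#     theta = theta - factor * np.dot(X.T, h - y)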

I also timed each of the functions separately on vectors: np.dot, np.sum and * (elementwise product), and all of them seem to be faster than their Matlab equivalents (in some cases much faster). I am therefore wondering why the whole thing is still slower in Python...
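A sketch of how such a per-operation comparison can be set up on the Python side (the vector sizes and call counts here are arbitrary choices, not the question's data):

import timeit
import numpy as np

x = np.random.rand(1000)        # arbitrary test vector
A = np.random.rand(1000, 2)     # arbitrary test matrix

for label, op in [("np.dot", lambda: np.dot(A.T, x)),
                  ("np.sum", lambda: np.sum(x)),
                  ("product", lambda: x * x)]:
    t = timeit.timeit(op, number=10000)
    print("%-8s %.4f s for 10000 calls" % (label, t))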

2 answers:

Answer 0 (score: 1)

This solution presents an optimized MATLAB implementation that does -

  • Function inlining of the gradient-descent implementation.
  • Pre-computation of certain values that are repeatedly used inside the loop.

Expanding h = theta(1) + theta(2)*x inside the update rule shows that each iteration only depends on sum(x), sum(y), sum(x.*y) and sum(x.^2), all of which can be computed once before the loop.

Code -

data = load('data_LinReg.txt');

iterations = 1500;
alpha = 0.01;
m = size(data,1);

M = alpha/m; %// scaling factor

%// Pre-compute certain values that are repeatedly used inside the loop
sum_a = M*sum(data(:,1));
sum_p = M*sum(data(:,2));
sum_ap = M*sum(data(:,1).*data(:,2));
sum_sqa = M*sum(data(:,1).^2);
one_minus_alpha = 1 - alpha;
one_minus_sum_sqa = 1 - sum_sqa;

%// Start processing
t1n0 = 0;
t2n0 = 0;
for i = 1:iterations
    temp = t1n0*one_minus_alpha - t2n0*sum_a + sum_p;
    t2n0 = t2n0*one_minus_sum_sqa - t1n0*sum_a + sum_ap;
    t1n0 = temp;
end
theta = [t1n0;t2n0];

Quick tests show that this gives an appreciable speedup over the MATLAB code posted in the question.

Now, I am not too familiar with Python, but I would assume that this MATLAB code could be easily ported over to Python.
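As a rough illustration, a direct port might look like this (a sketch only; variable names mirror the MATLAB version above and the timings are untested):

import numpy as np

data = np.loadtxt("data_LinReg.txt", delimiter=',')

iterations = 1500
alpha = 0.01
m = data.shape[0]
M = alpha / m  # scaling factor

# Pre-compute the values reused inside the loop, as in the MATLAB code
sum_a = M * np.sum(data[:, 0])
sum_p = M * np.sum(data[:, 1])
sum_ap = M * np.sum(data[:, 0] * data[:, 1])
sum_sqa = M * np.sum(data[:, 0] ** 2)
one_minus_alpha = 1 - alpha
one_minus_sum_sqa = 1 - sum_sqa

t1n0 = 0.0
t2n0 = 0.0
for i in range(iterations):
    temp = t1n0 * one_minus_alpha - t2n0 * sum_a + sum_p
    t2n0 = t2n0 * one_minus_sum_sqa - t1n0 * sum_a + sum_ap
    t1n0 = temp
theta = np.array([t1n0, t2n0])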

Answer 1 (score: 0)

I don't know how much of a difference it will make, but you can simplify your function with something like:

s = alpha / size(X,1);
gradientDescent = @(theta)( theta - s * X' * (X*theta - y) ); % X, y and s are captured once

Since you need theta_i in order to find theta_{i+1}, I don't see any way of avoiding the loop.
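For what it's worth, the same simplification carries over to the Python version; a sketch, assuming theta is kept as a numpy array of shape (2,):

import numpy as np

def make_step(X, y, alpha):
    # Build the update once; s, X and y are captured here, mirroring
    # the MATLAB anonymous function above.
    s = alpha / X.shape[0]
    return lambda theta: theta - s * np.dot(X.T, np.dot(X, theta) - y)

# Usage inside the loop: step = make_step(X, y, alpha), then
# theta = step(theta) at each of the 1500 iterations.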
