Fast CVX solver in Matlab

Time: 2016-12-05 21:59:58

Tags: matlab optimization least-squares convex-optimization cvx

I would like to know what the fastest convex optimizer in Matlab is, or whether there is any way to speed up my current solver. I am using CVX, but it is taking forever to solve my optimization problem. My optimization is to solve

minimize    ||Ax - b||^2
subject to  0 <= x.d <= delta

where A and b are very large.

Is there any way that this can be solved by a least-squares solver and then transferred to the constrained version to make it faster?
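For concreteness, here is a rough sketch of that idea in NumPy (Python, just for easy prototyping): solve the unconstrained least-squares problem, then clip the solution into the box. I realize clipping is only a heuristic and in general does not give the true constrained minimizer, which is why I am asking about a proper approach.

```python
import numpy as np

# Toy data standing in for my (much larger) A and b.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 50))
b = rng.standard_normal(300)
delta = 5.0

# Unconstrained least-squares solution ...
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
# ... then clip it into the box [0, delta].
x_clipped = np.clip(x_ls, 0.0, delta)
```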

1 Answer:

Answer 0 (score: 0)

I am not sure what x.d <= delta means, so I will just assume that it should be x <= delta.

You can solve this problem with the projected gradient method, or with an accelerated projected gradient method (a slight modification of the projected gradient method that "magically" converges much faster). Here is some Python code showing how to minimize .5||Ax - b||^2 subject to the constraint 0 <= x <= delta using FISTA, which is an accelerated projected gradient method. More details about the projected gradient method and FISTA can be found in Boyd's manuscript on proximal algorithms.

import numpy as np
import matplotlib.pyplot as plt

def fista(gradf,proxg,evalf,evalg,x0,params):
    # This code does FISTA with line search

    maxIter = params['maxIter']
    t = params['stepSize'] # Initial step size
    showTrigger = params['showTrigger']

    increaseFactor = 1.25
    decreaseFactor = .5

    costs = np.zeros((maxIter,1))

    xkm1 = np.copy(x0)
    vkm1 = np.copy(x0)

    for k in range(1, maxIter + 1):

        costs[k-1] = evalf(xkm1) + evalg(xkm1)
        if k % showTrigger == 0:
            print("Iteration: " + str(k) + "    cost: " + str(costs[k-1]))

        t = increaseFactor*t

        acceptFlag = False
        while not acceptFlag:
            if k == 1:
                theta = 1
            else:
                a = tkm1
                b = t*(thetakm1**2)
                c = -t*(thetakm1**2)
                theta = (-b + np.sqrt(b**2 - 4*a*c))/(2*a)

            y = (1 - theta)*xkm1 + theta*vkm1
            (gradf_y,fy) = gradf(y)
            x = proxg(y - t*gradf_y,t)
            fx = evalf(x)
            if fx <= fy + np.vdot(gradf_y,x - y) + (.5/t)*np.sum((x - y)**2):
                acceptFlag = True
            else:
                t = decreaseFactor*t

        tkm1 = t
        thetakm1 = theta
        vkm1 = xkm1 + (1/theta)*(x - xkm1)
        xkm1 = x

    return (xkm1,costs)


if __name__ == '__main__':

    delta = 5.0
    numRows = 300
    numCols = 50
    A = np.random.randn(numRows,numCols)
    ATrans = np.transpose(A)
    xTrue = delta*np.random.rand(numCols,1)
    b = np.dot(A,xTrue)
    noise = .1*np.random.randn(numRows,1)
    b = b + noise

    def evalf(x):
        AxMinusb = np.dot(A, x) - b
        val = .5 * np.sum(AxMinusb ** 2)
        return val

    def gradf(x):
        AxMinusb = np.dot(A, x) - b
        grad = np.dot(ATrans, AxMinusb)
        val = .5 * np.sum(AxMinusb ** 2)
        return (grad, val)

    def evalg(x):
        # g is the indicator function of the box [0, delta];
        # it contributes 0 at every feasible point.
        return 0.0

    def proxg(x,t):
        # Projection onto the box [0, delta].
        # (The prox of an indicator function ignores the step size t.)
        return np.maximum(np.minimum(x,delta),0.0)

    x0 = np.zeros((numCols,1))
    params = {'maxIter': 500, 'stepSize': 1.0, 'showTrigger': 5}
    (x,costs) = fista(gradf,proxg,evalf,evalg,x0,params)

    plt.figure()
    plt.plot(x)
    plt.plot(xTrue)

    plt.figure()
    plt.semilogy(costs)
    plt.show()
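As an aside (my own addition, not something the asker mentioned): SciPy's `scipy.optimize.lsq_linear` solves this same box-constrained least-squares problem directly, and makes a convenient cross-check for the FISTA result on moderately sized problems:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Same experimental setup as above: noisy observations of A @ xTrue.
rng = np.random.default_rng(0)
delta = 5.0
numRows, numCols = 300, 50
A = rng.standard_normal((numRows, numCols))
xTrue = delta * rng.random(numCols)
b = A @ xTrue + 0.1 * rng.standard_normal(numRows)

# Solve: minimize .5*||Ax - b||^2 subject to 0 <= x <= delta.
res = lsq_linear(A, b, bounds=(0.0, delta))
```

`res.x` holds the solution and `res.cost` the final value of .5*||Ax - b||^2.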