My gradient boosting implementation

Date: 2013-10-12 09:38:46

Tags: python machine-learning regression adaboost

I'm currently reading Peter Harrington's "Machine Learning in Action". I tried to adapt some of the book's AdaBoost code to implement gradient boosting, following the pseudocode from "The Elements of Statistical Learning" by Trevor Hastie et al. and Friedman's gradient boosting algorithm. I've put quite a bit of time and effort into implementing the algorithm in Python, so I would be very grateful if you could point out my mistakes.
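
To summarize my understanding of the gradient tree boosting pseudocode in "The Elements of Statistical Learning", specialized to absolute error loss L(y, f) = |y - f|:

1. Initialize f_0(x) with the constant that minimizes the loss over the training set (for absolute error, the median of y).
2. For m = 1..M:
   - compute the pseudo-residuals r_i = sign(y_i - f_{m-1}(x_i)),
   - fit a regression tree (here a stump) to the r_i, giving terminal regions R_jm,
   - in each region pick the constant gamma_jm that minimizes the loss (for absolute error, the median of y_i - f_{m-1}(x_i) over that region),
   - update f_m(x) = f_{m-1}(x) + sum_j gamma_jm * I(x in R_jm).
3. The final prediction is f_M(x).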

stumpReg(), buildStumpReg(), gradBoostPredict()

are helper functions adapted from the book. I use absolute error loss as the loss function. Here is the code:
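
Everything below assumes a star import from NumPy, and err() is the absolute error loss mentioned above:

from numpy import *

def err(classLabel, classEst):
    # element-wise absolute error |y - yhat|; callers sum it to get the total loss
    return abs(classLabel - classEst)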

"""
stumpReg
Description: Creates a simple stump by taking mean values of target 
        variable(classLabel) in each of 2 branches
Parameters: 
dataSet - training data
classLabel - target for the prediction (dependent variable)
dim - dimension of the feature vector
thresh - threshold value
ineq - inequality('less than', 'greater than') 
Returns:
retArr - the resulting array after splitting
select - boolean array that defines values in 2 branches
"""
def stumpReg(dataSet,dim,thresh,ineq):
    retArr = ones((dataSet.shape[0],1))
    # split the rows on one feature, then predict the mean target value
    # (last column of dataSet) of the branch each row falls into
    if ineq == 'lt':
        select = dataSet[:,dim] <= thresh
    else:
        select = dataSet[:,dim] > thresh
    retArr[select] = mean(dataSet[:,-1][select])
    retArr[~select] = mean(dataSet[:,-1][~select])
    return retArr, select
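
Called on its own, stumpReg just replaces every row with the mean target of the branch it falls into. For example, with toy numbers (the last column is the target):

dataSet = mat([[1.0, 2.1], [1.3, 1.1], [2.0, 1.0], [1.5, 1.6]])
retArr, select = stumpReg(dataSet, 0, 1.4, 'lt')
# rows with feature 0 <= 1.4 get the left-branch mean 1.6,
# the remaining rows get the right-branch mean 1.3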

def buildStumpReg(dataSet,classLabel):
    dataSet = mat(dataSet); classLabel = mat(classLabel).T
    m,n = shape(dataSet)
    stepNum = 10.0; bestClassEst = mat(zeros((m,1))); bestStump = {}
    minError = inf
    for i in range(n):                        # try every feature
        minRange = dataSet[:,i].min(); maxRange = dataSet[:,i].max()
        stepSize = (maxRange - minRange) / stepNum
        for j in range(int(stepNum)):         # try stepNum thresholds between min and max
            for ineq in ['lt','gt']:
                thresh = (minRange + float(j) * stepSize)
                classArr, selected = stumpReg(dataSet,i,thresh,ineq)
                errArr = err(classLabel,classArr)
                totalErr = errArr.sum()
                if totalErr < minError:       # keep the stump with the lowest total absolute error
                    minError = totalErr
                    bestClassEst = classArr.copy()
                    bestSelect = selected.copy()
                    bestStump['dim'] = i
                    bestStump['thresh'] = thresh
                    bestStump['ineq'] = ineq
    return bestStump, minError, bestClassEst, bestSelect
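
To check the stump builder on its own, I can call it like this (same toy data as above, with the target passed again as classLabel):

dataSet = array([[1.0, 2.1], [1.3, 1.1], [2.0, 1.0], [1.5, 1.6]])
classLabel = array([2.1, 1.1, 1.0, 1.6])
bestStump, minError, bestClassEst, bestSelect = buildStumpReg(dataSet, classLabel)
# bestStump is a dict like {'dim': ..., 'thresh': ..., 'ineq': ...}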

def findMinGamma(sub,x,y):
    # picks whichever of the max, min and mean of the residuals *sub* gives the
    # smallest total absolute error when added to the current prediction x
    gamma = inf
    for error in [sub.max(), sub.min(), sub.mean()]:
        if err(x+gamma,y).sum() > err(x+error,y).sum():
            gamma = error
    return gamma

def TreeBoost(dataset, classLab, numIt=19):
    N, numFeat = dataset.shape # N is the number of training entries
    weakPredictors = []
    # initial fit: a single stump on the original target
    stump, error, classEst, sel = buildStumpReg(dataset, classLab.T)
    weakPredictors.append(classEst.T)
    gradLoss = zeros((N,1))
    for m in range(numIt):
        gradLoss = sign(classLab.T - classEst) # gradient of the absolute error loss function
        bestFittedStump, fittedError, f, selected = buildStumpReg(dataset, gradLoss) # fit a stump to the target *gradLoss*
        f = mat(f)
        left = f[selected]           # predictions in the first branch
        yLeft = gradLoss[selected]   # gradient targets in the first branch
        right = f[~selected]
        yRight = gradLoss[~selected]
        subLeft = yLeft - left       # residuals in each branch
        subRight = yRight - right
        gammaLeft = findMinGamma(subLeft, left, yLeft)    # line search for the step in each branch
        gammaRight = findMinGamma(subRight, right, yRight)
        gamma = selected*gammaLeft + ~selected*gammaRight
        bestFittedStump['gamma'] = gamma
        weakPredictors.append(bestFittedStump)
        f += multiply(f, gamma)
    return f, weakPredictors
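
So after TreeBoost, weakPredictors is a mixed list: the first element is the initial prediction (a matrix), and every later element is a stump dict carrying its per-row gamma, roughly like this (values made up):

weakPredictors = [
    matrix([[1.6, 1.6, 1.3, 1.3]]),                      # initial classEst.T
    {'dim': 0, 'thresh': 1.4, 'ineq': 'lt',
     'gamma': matrix([[ 0.2], [ 0.2], [-0.1], [-0.1]])}, # one dict per boosting iteration
    # ...
]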

def gradBoostPredict(testData, weakPredictors):
    testData = mat(testData)
    m = testData.shape[0]
    pred = mat(zeros((m,1)))
    classEst = weakPredictors[0]              # initial prediction from the first stump
    for i in range(1, len(weakPredictors)):   # apply every fitted stump in turn
        classEst, select = stumpReg(testData, weakPredictors[i]['dim'], \
                                    weakPredictors[i]['thresh'], weakPredictors[i]['ineq'])
        classEst += multiply(classEst, weakPredictors[i]['gamma'])
        classEst = classEst.T
    return pred
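
For reference, this is roughly how I run it end to end; the data here is just a made-up toy example to show the call sequence:

xArr = random.rand(50, 1) * 4.0                    # one toy feature
yArr = 2.0 * xArr[:, 0] + random.randn(50) * 0.1   # toy target, roughly linear in the feature
dataset = column_stack((xArr, yArr))               # stumpReg expects the target in the last column
f, weakPredictors = TreeBoost(dataset, mat(yArr))
preds = gradBoostPredict(dataset, weakPredictors)
print(column_stack((preds, mat(yArr).T)))          # predicted vs. actual values, side by side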

Something is clearly very wrong, because the predictions are far from the actual values. I would be very happy to clarify anything further if needed. Thanks in advance.
