Hierarchical model with a Bayesian neural network: poor performance of PyMC3's NUTS?

Date: 2015-07-06 08:03:19

标签: neural-network bayesian hierarchical mcmc pymc3

I have a hierarchical model for learning a Bayesian network with a single hidden layer. The network parameters fall into four groups: input-to-hidden and hidden-to-output weights and biases. A Gaussian prior is defined over each parameter group. The hyperparameters, i.e. the standard deviations of these priors, have Gamma distributions with alpha = 1 and beta = 1/60; the output noise is also Gaussian, with a Gamma(alpha = 1, beta = 200) prior over its standard deviation. The NUTS step method is used for sampling, with its scaling parameter set to the maximum a posteriori of the parameters only (excluding the hyperparameters). The data are one-dimensional on [0, 1], with a simple one-dimensional sinusoidal function providing the observations.

I expected a set of sampled networks to interpolate the data and start to disagree/diverge as the distance from the observation points increases, producing a shape similar to what a Gaussian process model yields. Surprisingly, the result differs from my expectation; some annoying constraint seems to prevent the sampler from doing well and sampling from the whole posterior:

[image: the red line is produced by the MAP network, the black line is the underlying function, and the 3 small red dots are the data]

Can any of the PyMC3 folks explain what causes this problem, and how can I fix it?
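For reference, a quick scipy check (mine, not part of the original question) of the scales these Gamma hyperpriors imply, assuming PyMC3's rate parameterisation where Gamma(alpha, beta) has mean alpha/beta:

from scipy import stats

weight_sd_prior = stats.gamma(a=1.0, scale=60.0)   # alpha=1, beta=1/60 -> scale 60
noise_sd_prior = stats.gamma(a=1.0, scale=1/200.)  # alpha=1, beta=200  -> scale 0.005
print(weight_sd_prior.mean(), weight_sd_prior.std())  # 60.0 60.0: very diffuse weight scales
print(noise_sd_prior.mean(), noise_sd_prior.std())    # 0.005 0.005: an almost noise-free likelihood

The full script: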

import numpy as np
import theano
import theano.tensor as T
import pymc3 as pm
import matplotlib.pyplot as plt
import scipy

co = 3               # frequency of the underlying sine; try 4, 5, 6, 7, 8
numHiddenUnits = 100
numObservations = 3  # try 6, 7, 8
randomSeed = 1235
numSamples = 5500



def z_score(x, mean=None, std=None):
    if mean is None or std is None:
        mean, std = np.mean(x, axis=0), np.std(x, axis=0)
    # parentheses matter here: the intended standardisation is (x - mean)/std
    return (x - mean) / std, mean, std
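# Quick check of the fix above (the original line computed x - (mean/std)
# due to operator precedence, which does not standardise x):
#   xs, m, s = z_score(np.array([[0.], [0.5], [1.]]))
#   xs.mean() ~ 0.0, xs.std() ~ 1.0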



def sample(nHiddenUnts,X,Y):
    '''
    Samples a set of ANNs from the posterior.
    '''
    nFeatures = X.shape[1]
    with pm.Model() as model:

        #Gamma Hyperpriors

        alpha,beta = 1.,1./60.
        # standard deviation: bias (hidden-output)
        bhoSd = pm.Gamma('bhoSd', alpha=alpha, beta=beta)
        # standard deviation: weights (hidden-output)
        whoSd = pm.Gamma('whoSd', alpha=alpha, beta=beta)
        # standard deviation: bias (input-hidden)
        bihSd = pm.Gamma('bihSd', alpha=alpha, beta=beta)
        # standard deviation: weights (input-hidden)
        wihSd = pm.Gamma('wihSd', alpha=alpha, beta=beta)
        #standard deviation:  output noise
        noiseSd = pm.Gamma('noiseSd',alpha=alpha,beta=200.)

        wihSd.tag.test_value = bihSd.tag.test_value = whoSd.tag.test_value = bhoSd.tag.test_value = 200
        noiseSd.tag.test_value = 0.002
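        # (These test_values seed Theano's test-value machinery and give
        # find_MAP its default starting point; the update below drops them
        # in favour of an explicit start dictionary.)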


        #priors
        #Bias (HiddenOut)
        bho = pm.Normal('bho',mu=0,sd=bhoSd)
        bho.tag.test_value = 1
        who = pm.Normal('who',mu=0,sd=whoSd,shape=(nHiddenUnts,1) )
        who.tag.test_value =  np.random.normal(size=nHiddenUnts,loc=0,scale=1).reshape(nHiddenUnts,1)  #np.ones(shape=(nHiddenUnts,1))
        #Bias input-hidden
        bih = pm.Normal('bih',mu=0,sd=bihSd ,shape=nHiddenUnts)
        bih.tag.test_value =np.random.normal(size=nHiddenUnts,loc=0,scale=1)#np.ones(shape=nHiddenUnts)
        wih = pm.Normal('wih',mu=0,sd=wihSd ,shape= (nFeatures,nHiddenUnts))
        wih.tag.test_value =np.random.normal(size=nFeatures*nHiddenUnts,loc=0,scale=1).reshape(nFeatures,nHiddenUnts)#np.ones(shape= (nFeatures,nHiddenUnts))


        netOut=T.dot( T.nnet.sigmoid( T.dot( X , wih ) + bih ) , who ) + bho
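        # a single-hidden-layer MLP: sigmoid(X.wih + bih).who + bho,
        # mapping nFeatures inputs -> nHiddenUnts sigmoid units -> 1 output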


        #likelihood
        likelihood = pm.Normal('likelihood',mu=netOut,sd=noiseSd,observed= Y)

        print("model built")
        #==================================================================

        start1 = pm.find_MAP(fmin=scipy.optimize.fmin_l_bfgs_b, vars=[bho,who,bih,wih],model=model)
        #start2 = pm.find_MAP(start=start1,    fmin=scipy.optimize.fmin_l_bfgs_b, vars=[noiseSd,wihSd,bihSd ,whoSd,bhoSd],model=model)
        step = pm.NUTS(scaling=start1)
        #step =  pm.HamiltonianMC(scaling=start1,path_length=5.,step_scale=.05,)
        trace = pm.sample(10,step,start=start1, progressbar=True,random_seed=1234)[:]
        step1 = pm.NUTS(scaling=trace[-1])
        print('-')
        trace = pm.sample(numSamples,step1,start=trace[-1], progressbar=True,random_seed=1234)[100:]

        #========================================================================
        return trace, start1

#underlying function
def g(x):
    global co
    return np.prod( x+np.sin(co*np.pi*x),axis=1)
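# for co = 3 and 1-D inputs this is just x + sin(3*pi*x); np.prod over
# axis=1 merely collapses the trailing dimension of the (n, 1) input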

np.random.seed(randomSeed)
XX= np.atleast_2d(np.random.uniform(0,1.,size =numObservations)).T
Y = np.atleast_2d(g(XX)).T
X,mean,std = z_score(XX)

trace,map_= sample(numHiddenUnits, X, Y)


data = np.atleast_2d(np.linspace(0., 1., 100)).T
theano.config.compute_test_value = 'off'

# symbolic copy of the network for plotting the sampled ANNs
d = T.dmatrix()
w = T.dmatrix()
b = T.dvector()
bo = T.dscalar()
wo = T.dmatrix()
y = T.dot(T.nnet.sigmoid(T.dot(d, w) + b), wo) + bo
f = theano.function([d, w, b, wo, bo], y)


data1, mean, std = z_score(data, mean, std)
print(trace['wih'].shape)
for s in trace[::1]:
    plt.plot(data, f(data1, s['wih'], s['bih'], s['who'], s['bho']), c='blue', alpha=0.15)


plt.plot(data, g(data), 'black')

# prediction of the maximum a posteriori network
plt.plot(data, f(data1, map_['wih'], map_['bih'], map_['who'], map_['bho']), c='red')
plt.plot(XX, Y, 'r.', markersize=10)

plt.show()

Update: I changed the code in the following ways. First, assigning test_values to the model parameters turned out to be troublesome; yet without the test_value assignments, find_MAP would not converge to the right point. So I removed the test_value assignments and instead supplied find_MAP() with a starting point (initpoint). Second, to simplify everything, I replaced the Gamma hyperpriors with HalfNormals, and the step method with Metropolis. The sample function now looks like this:

def sample(nHiddenUnts, X, Y):
    nFeatures = X.shape[1]
    with pm.Model() as model:

        bhoSd =  pm.HalfNormal('bhoSd',sd=100**2)
        whoSd =  pm.HalfNormal('whoSd',sd=100**2)
        bihSd =  pm.HalfNormal('bihSd',sd=100**2)
        wihSd =  pm.HalfNormal('wihSd',sd=100**2)
        noiseSd = pm.HalfNormal('noiseSd',sd=0.001)



        #priors
        bho = pm.Normal('bho',mu=0,sd=bhoSd)
        who = pm.Normal('who',mu=0,sd=whoSd,shape=(nHiddenUnts,1) )
        bih = pm.Normal('bih',mu=0,sd=bihSd ,shape=nHiddenUnts)
        wih = pm.Normal('wih',mu=0,sd=wihSd ,shape= (nFeatures,nHiddenUnts))


        netOut=T.dot( T.nnet.sigmoid( T.dot( X , wih ) + bih ) , who ) + bho

        #likelihood
        likelihood = pm.Normal('likelihood',mu=netOut,sd=noiseSd,observed= Y)

        #========================================================
        initpoint = {'bho':1,
                   'who':np.random.normal(size=nHiddenUnts,loc=0,scale=1).reshape(nHiddenUnts,1),
                   'bih':np.random.normal(size=nHiddenUnts,loc=0,scale=1),
                   'wih':np.random.normal(size=nFeatures*nHiddenUnts,loc=0,scale=1).reshape(nFeatures,nHiddenUnts),
                   'bhoSd':100,
                   'bihSd':100,
                   'whoSd':100,
                   'wihSd':100,
                   'noiseSd':0.1
                   }

        start1 = pm.find_MAP(start=initpoint,fmin=scipy.optimize.fmin_l_bfgs_b, vars=[bho,who,bih,wih],model=model)
        step = pm.Metropolis(tune=True,tune_interval=10000)
        trace = pm.sample(numSamples,step,start=start1,progressbar=True,random_seed=1234)[10000::5]
        #========================================================

        return trace,start1

After drawing 15000 samples, the result looks like this:

[image: posterior predictive curves from the Metropolis samples]

Only when I increase both the standard deviation of the noiseSd hyperprior and the 'noiseSd' entry in initpoint (the starting point for find_MAP) to 0.1 does the result become the following:

[image: the same plot with the looser noise prior]

However, such a high noise level is undesirable.
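To see why that 0.1 matters, here is a rough check (my own, not from the original post) of what the two settings imply as priors, assuming PyMC3's HalfNormal(sd=s) has mean s*sqrt(2/pi):

import numpy as np

for s in (0.001, 0.1):
    print(s, s * np.sqrt(2 / np.pi))  # prior mean of the output-noise standard deviation
# sd=0.001 -> prior mean ~0.0008: effectively forces the network to interpolate the data exactly
# sd=0.1   -> prior mean ~0.08:   admits visible observation noise, hence the looser fit above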

1 Answer:

Answer 0 (score: 1)

How does the model fare with a standard Metropolis sampler? That should give some indication as to whether the problem lies with the algorithm or somewhere else. The fact that the MAP and NUTS estimates are comparable would seem to implicate the latter.
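A minimal sketch of that comparison, reusing the model and start1 from the question's sample function (the names are the question's; this side-by-side run is illustrative, not a tested recipe):

with model:
    mh_trace = pm.sample(numSamples, pm.Metropolis(), start=start1,
                         progressbar=True, random_seed=1234)
    nuts_trace = pm.sample(numSamples, pm.NUTS(scaling=start1), start=start1,
                           progressbar=True, random_seed=1234)
# If Metropolis fans out away from the data while NUTS stays pinned near the MAP,
# the algorithm is at fault; if both traces collapse onto the MAP curve, the model
# itself (e.g. the very tight noise prior) is the more likely culprit.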