Modifying a perceptron to become gradient descent

Date: 2015-03-07 08:48:14

Tags: java machine-learning

According to this video, the substantive difference between the perceptron and gradient descent algorithms is very small. They basically specify them as:

Perceptron: Δw_i = η(y - ŷ)x_i

Gradient descent: Δw_i = η(y - α)x_i

I have implemented a working version of the perceptron algorithm, but I don't understand which parts I need to change to turn it into gradient descent.

Below are the working parts of my perceptron code; I assume these are the components I need to modify. But where? What do I need to change? I don't understand.

This is for educational purposes. I feel like I've sort of figured it out, but I'm still confused about the gradient; please see the UPDATE below.

      iteration = 0;
      do 
      {
          iteration++;
          globalError = 0;
          //loop through all instances (complete one epoch)
          for (p = 0; p < number_of_files__train; p++) 
          {
              // calculate predicted class
              output = calculateOutput( theta, weights, feature_matrix__train, p, globo_dict_size );
              // difference between predicted and actual class values
              localError = outputs__train[p] - output;
              //update weights and bias
              for (int i = 0; i < globo_dict_size; i++) 
              {
                  weights[i] += ( LEARNING_RATE * localError * feature_matrix__train[p][i] );
              }
              weights[ globo_dict_size ] += ( LEARNING_RATE * localError );

              //summation of squared error (error value for all instances)
              globalError += (localError*localError);
          }

          /* Root Mean Squared Error */
          if (iteration < 10) 
              System.out.println("Iteration 0" + iteration + " : RMSE = " + Math.sqrt( globalError/number_of_files__train ) );
          else
              System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt( globalError/number_of_files__train ) );
      } 
      while(globalError != 0 && iteration<=MAX_ITER);

And here is the crux of my perceptron:

  static int calculateOutput( int theta, double weights[], double[][] feature_matrix, int file_index, int globo_dict_size )
  {
     //double sum = x * weights[0] + y * weights[1] + z * weights[2] + weights[3];
     double sum = 0;

     for (int i = 0; i < globo_dict_size; i++) 
     {
         sum += ( weights[i] * feature_matrix[file_index][i] );
     }
     //bias
     sum += weights[ globo_dict_size ];

     return (sum >= theta) ? 1 : 0;
  }

Is it just that I replace the calculateOutput method with something like this?

public static double [] gradientDescent(final double [] theta_in, final double alpha, final int num_iters, double[][] data ) 
{
    final double m = data.length;   
    double [] theta = theta_in;
    double theta0 = 0;
    double theta1 = 0;
    for (int i = 0; i < num_iters; i++) 
    {                        
        final double sum0 = gradientDescentSumScalar0(theta, alpha, data );
        final double sum1 = gradientDescentSumScalar1(theta, alpha, data);                                   
        theta0 = theta[0] - ( (alpha / m) * sum0 ); 
        theta1 = theta[1] - ( (alpha / m) * sum1 );                        
        theta = new double [] { theta0, theta1 };
    }
    return theta;
}

UPDATE

At this point I feel like I'm very close.

I understand how to calculate the hypothesis, and I think I've done that correctly, but nonetheless there is still something seriously wrong with this code. I'm fairly sure it's related to my calculation of the gradient. When I run it, the error fluctuates wildly, then goes to infinity, then to NaN.

  double cost, error, hypothesis;
  double[] gradient;
  int p, iteration;

  iteration = 0;
  do 
  {
    iteration++;
    error = 0.0;
    cost = 0.0;

    //loop through all instances (complete one epoch)
    for (p = 0; p < number_of_files__train; p++) 
    {

      // 1. Calculate the hypothesis h = X * theta
      hypothesis = calculateHypothesis( theta, feature_matrix__train, p, globo_dict_size );

      // 2. Calculate the loss = h - y and maybe the squared cost (loss^2)/2m
      cost = hypothesis - outputs__train[p];

      // 3. Calculate the gradient = X' * loss / m
      gradient = calculateGradent( theta, feature_matrix__train, p, globo_dict_size, cost, number_of_files__train);

      // 4. Update the parameters theta = theta - alpha * gradient
      for (int i = 0; i < globo_dict_size; i++) 
      {
          theta[i] = theta[i] - LEARNING_RATE * gradient[i];
      }

    }

    //summation of squared error (error value for all instances)
    error += (cost*cost);       

    /* Root Mean Squared Error */
    if (iteration < 10) 
        System.out.println("Iteration 0" + iteration + " : RMSE = " + Math.sqrt( error/number_of_files__train ) );
    else
        System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt( error/number_of_files__train ) );
    //System.out.println( Arrays.toString( weights ) );

  } 
  while(cost != 0 && iteration<=MAX_ITER);

static double calculateHypothesis( double[] theta, double[][] feature_matrix, int file_index, int globo_dict_size )
{
    double hypothesis = 0.0;

     for (int i = 0; i < globo_dict_size; i++) 
     {
         hypothesis += ( theta[i] * feature_matrix[file_index][i] );
     }
     //bias
     hypothesis += theta[ globo_dict_size ];

     return hypothesis;
}

static double[] calculateGradent( double theta[], double[][] feature_matrix, int file_index, int globo_dict_size, double cost, int number_of_files__train)
{
    double m = number_of_files__train;

    double[] gradient = new double[ globo_dict_size];//one for bias?

    for (int i = 0; i < gradient.length; i++) 
    {
        gradient[i] = (1.0/m) * cost * feature_matrix[ file_index ][ i ] ;
    }

    return gradient;
}

1 Answer:

Answer 0 (score: 1)

The perceptron rule is only an approximation of gradient descent for when you have a non-differentiable activation function like (sum >= theta) ? 1 : 0. As they ask at the end of the video, you cannot use gradients there, because this threshold function is not differentiable (well, its gradient is not defined for x = 0, and the gradient is zero everywhere else). If instead of this threshold you had a smooth function such as the sigmoid, you could compute the actual gradient.
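For reference, here is a minimal sketch of such a smooth activation and its derivative in Java (the names sigmoid and sigmoidGradient are mine, not from the question's code):

    // Smooth, differentiable activation: sigma(x) = 1 / (1 + e^(-x))
    static double sigmoid( double x )
    {
        return 1.0 / ( 1.0 + Math.exp( -x ) );
    }

    // Its derivative: sigma'(x) = sigma(x) * (1 - sigma(x)),
    // which is nonzero almost everywhere, so an actual gradient exists
    static double sigmoidGradient( double x )
    {
        double s = sigmoid( x );
        return s * ( 1.0 - s );
    }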

In that case, your weight update would become LEARNING_RATE * localError * feature_matrix__train[p][i] * output_gradient[i]. For the sigmoid case, the link I sent you also shows how to compute output_gradient.
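Concretely, the question's inner update loop would then look something like the following sketch, assuming sum is the pre-threshold value computed inside calculateOutput, output is sigmoid(sum), and the sigmoidGradient helper above (all names mine):

    // output = sigmoid( sum ) replaces the 0/1 threshold of calculateOutput
    localError = outputs__train[p] - output;
    for (int i = 0; i < globo_dict_size; i++) 
    {
        // chain rule: the gradient of the squared error w.r.t. w_i
        // is proportional to localError * sigma'(sum) * x_i
        weights[i] += LEARNING_RATE * localError * sigmoidGradient( sum ) * feature_matrix__train[p][i];
    }
    // the bias input is implicitly 1
    weights[ globo_dict_size ] += LEARNING_RATE * localError * sigmoidGradient( sum );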

In summary, to go from the perceptron to gradient descent, you have to:

  1. Use an activation function whose derivative (gradient) is not zero everywhere.
  2. Apply the chain rule to define the new update rule (see the sketch after this list).
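A minimal sketch combining both steps, reusing the question's variable names plus the hypothetical sigmoid/sigmoidGradient helpers above, for one epoch of the converted training loop:

    for (p = 0; p < number_of_files__train; p++) 
    {
        // 1. pre-activation: sum = w . x + bias
        double sum = 0.0;
        for (int i = 0; i < globo_dict_size; i++) 
            sum += weights[i] * feature_matrix__train[p][i];
        sum += weights[ globo_dict_size ];

        // 2. smooth output instead of the 0/1 threshold
        double output = sigmoid( sum );
        double localError = outputs__train[p] - output;

        // 3. chain-rule update (step 2 above)
        double delta = LEARNING_RATE * localError * sigmoidGradient( sum );
        for (int i = 0; i < globo_dict_size; i++) 
            weights[i] += delta * feature_matrix__train[p][i];
        weights[ globo_dict_size ] += delta;

        // accumulate the squared error inside the loop, once per instance
        globalError += localError * localError;
    }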