Unexpected results from the word2vec algorithm

Date: 2018-01-20 19:43:57

Tags: c++ algorithm c++11 word2vec

I've implemented word2vec in C++. I found the original code unclear, so I figured I'd re-implement it, taking advantage of everything C++ offers (std::map, std::vector, etc.).

This is the method that is actually called for each training sample (l1 denotes the index of the first word, l2 the index of the second word, label indicates whether it is a positive or negative sample, and neu1e acts as the accumulator for the gradient):

void train(int l1, int l2, double label, std::vector<double>& neu1e)
{
    // Calculate the dot product between the input word's weights (in
    // syn0) and the output word's weights (in syn1neg).
    auto f = 0.0;

    for (int c = 0; c < m__numberOfFeatures; c++)
        f += syn0[l1][c] * syn1neg[l2][c];

    // This block does two things:
    //   1. Calculates the output of the network for this training
    //      pair, applying the sigmoid activation to the dot product.
    //   2. Calculates the error at the output, stored in 'g', by
    //      subtracting the network output from the desired output,
    //      and finally multiplies this by the learning rate.
    auto z = 1.0 / (1.0 + exp(-f));
    auto g = m_learningRate * (label - z);

    // Multiply the error by the output layer weights.
    // (I think this is the gradient calculation?)
    // Accumulate these gradients over all of the negative samples.
    for (int c = 0; c < m__numberOfFeatures; c++)
        neu1e[c] += (g * syn1neg[l2][c]);

    // Update the output layer weights by multiplying the output error
    // by the hidden layer weights.
    for (int c = 0; c < m__numberOfFeatures; c++)
        syn1neg[l2][c] += g * syn0[l1][c];
}
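For reference, this update is consistent with the standard negative-sampling objective: with target label t ∈ {0, 1} and dot product f, the per-sample log-likelihood and its derivative with respect to f are

    L(f) = t · log σ(f) + (1 − t) · log(1 − σ(f)),    ∂L/∂f = t − σ(f)

so g = m_learningRate * (label - z) is exactly the learning-rate-scaled output gradient, which the two loops then propagate to syn0 (accumulated in neu1e) and to syn1neg via the chain rule.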

This method is called by:
void train(const std::string& s0, const std::string& s1, bool isPositive, std::vector<double>& neu1e)
{
    auto l1 = m_wordIDs.find(s0) != m_wordIDs.end() ? m_wordIDs[s0] : -1;
    auto l2 = m_wordIDs.find(s1) != m_wordIDs.end() ? m_wordIDs[s1] : -1;
    if (l1 == -1 || l2 == -1)
        return;

    train(l1, l2, isPositive ? 1 : 0, neu1e);
}

which in turn is called by the main training method.
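For context, in the reference word2vec implementation the gradient accumulated in neu1e is applied to the input word's vector only after the positive sample and all negative samples for a pair have been processed. A minimal sketch of what such a calling loop might look like (trainPair, sampleNegativeWord, and m_negativeSamples are hypothetical names, not taken from the repository):

void trainPair(const std::string& input, const std::string& output)
{
    // Hypothetical driver loop; names are illustrative, not from the repo.
    std::vector<double> neu1e(m__numberOfFeatures, 0.0);

    // One positive sample...
    train(input, output, true, neu1e);

    // ...followed by several negative samples drawn from the
    // (smoothed) unigram distribution.
    for (int n = 0; n < m_negativeSamples; n++)
        train(input, sampleNegativeWord(), false, neu1e);

    // Finally, apply the accumulated gradient to the input word's
    // vector in syn0.
    auto l1 = m_wordIDs[input];
    for (int c = 0; c < m__numberOfFeatures; c++)
        syn0[l1][c] += neu1e[c];
}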

The full code can be found at

https://github.com/jorisschellekens/ml/tree/master/word2vec

and a complete example at

https://github.com/jorisschellekens/ml/blob/master/main/example_8.hpp

When I run this algorithm, the top 10 words "closest" to father are:

father
khan
shah
forgetful
miami
rash
symptoms
funeral
indianapolis
impressed

This is the method that computes the nearest words:

std::vector<std::string> nearest(const std::string& s, int k) const
{
    // calculate distance
    std::vector<std::tuple<std::string, double>> tmp;
    for (auto& t : m_unigramFrequency)
    {
        tmp.push_back(std::make_tuple(t.first, distance(t.first, s)));
    }

    // sort
    std::sort(tmp.begin(), tmp.end(),
              [](const std::tuple<std::string, double>& t0,
                 const std::tuple<std::string, double>& t1)
              {
                  return std::get<1>(t0) < std::get<1>(t1);
              });

    // take top k
    std::vector<std::string> out;
    for (int i = 0; i < k; i++)
    {
        out.push_back(std::get<0>(tmp[tmp.size() - 1 - i]));
    }

    return out;
}

This seems odd. Is there a problem with my algorithm?

1 Answer:

Answer 0 (score: 0)

Are you sure you are getting the "nearest" words (and not the farthest)?

        ...
        // take top k
        std::vector<std::string> out;
        for(int i=0; i<k; i++)
        {
            out.push_back(std::get<0>(tmp[tmp.size() - 1 - i]));
        }
        ...
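std::sort with that comparator orders tmp by ascending distance, so indexing from the back of the vector returns the k largest distances, i.e. the farthest words. Assuming distance returns smaller values for more similar words, a minimal fix is to take from the front instead:

        // take top k: after the ascending sort, the k SMALLEST
        // distances (the nearest words) sit at the front of tmp
        std::vector<std::string> out;
        for (int i = 0; i < k && i < (int)tmp.size(); i++)
        {
            out.push_back(std::get<0>(tmp[i]));
        }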