BERT fine-tuning: training loss decreases but accuracy stays the same

Time: 2020-06-21 07:57:18

Tags: python deep-learning nlp pytorch bert-language-model

I have been fine-tuning BERT on a custom dataset for a while now. During training I observe that although the training loss keeps decreasing, the validation accuracy stays stuck at 0.5. I have tried every BERT model available in Huggingface, but I don't know what else to do. Here is the output during training:

```
step:  0 of total steps:  250
step:  25 of total steps:  250
step:  50 of total steps:  250
step:  75 of total steps:  250
step:  100 of total steps:  250
step:  125 of total steps:  250
step:  150 of total steps:  250
step:  175 of total steps:  250
step:  200 of total steps:  250
step:  225 of total steps:  250
Average loss for epoch:  0.4582762689590454
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:6: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
  
  Accuracy: 0.50
 Eval loss:  0.40
Epoch:  1
step:  0 of total steps:  250
step:  25 of total steps:  250
step:  50 of total steps:  250
step:  75 of total steps:  250
step:  100 of total steps:  250
step:  125 of total steps:  250
step:  150 of total steps:  250
step:  175 of total steps:  250
step:  200 of total steps:  250
step:  225 of total steps:  250
Average loss for epoch:  0.3786401364207268
  Accuracy: 0.50
 Eval loss:  0.39
```
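The eval output also shows a `DeprecationWarning: elementwise comparison failed`, which in NumPy usually means `==` was applied to arrays of mismatched dtypes and returned a scalar instead of a boolean array. Here is a minimal sketch of the kind of `flat_accuracy` helper used in common BERT fine-tuning tutorials (the name and signature are assumptions, not necessarily my exact code), showing how that comparison can silently corrupt the metric:

```python
import numpy as np

def flat_accuracy(preds, labels):
    # preds: (batch, num_classes) logits; labels: gold class ids.
    pred_flat = np.argmax(preds, axis=1).flatten()
    # If `labels` arrives as an array of strings (e.g. read straight
    # from a CSV), comparing it against an int array triggers exactly
    # the "elementwise comparison failed" DeprecationWarning, and the
    # comparison yields a single scalar False instead of a boolean
    # array. Casting both sides to int avoids that failure mode.
    labels_flat = np.asarray(labels).astype(int).flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)
```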


Can this be overfitting by any chance?
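
A constant 0.50 on a balanced binary task can also mean the model predicts a single class for every example rather than overfitting. Counting the predicted classes over the eval set makes this easy to check; here is a self-contained sketch with made-up placeholder logits:

```python
import numpy as np

# Hypothetical per-batch logits collected during evaluation (placeholder data).
logits_per_batch = [np.array([[2.1, 0.3], [1.7, 0.2]]),
                    np.array([[0.9, 0.1], [3.0, 0.4]])]

all_preds = np.concatenate([np.argmax(l, axis=1) for l in logits_per_batch])
unique, counts = np.unique(all_preds, return_counts=True)
print(dict(zip(unique.tolist(), counts.tolist())))  # {0: 4} -> always predicts class 0
```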

0 Answers:

There are no answers yet.