BLEU: a Method for Automatic Evaluation of Machine Translation
This post covers the paper *BLEU: a Method for Automatic Evaluation of Machine Translation*
- Why BLEU Score for Machine Translation
- Human evaluations are expensive and time-consuming
- BLEU Score is language-independent, inexpensive to compute, and correlates well with human judgment
- BLEU works by comparing a Reference Translation (human translation) with a Candidate Translation (machine translation) based on n-gram overlap
- https://towardsdatascience.com/bleu-bilingual-evaluation-understudy-2b4eab9bcfd1
- https://leimao.github.io/blog/BLEU-Score/
- https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
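To make the comparison concrete, here is a minimal sketch of BLEU in Python: modified n-gram precision with count clipping, a geometric mean over n = 1..4, and a brevity penalty. This is my own simplified illustration, not the reference implementation from the paper; in practice a library such as NLTK's `nltk.translate.bleu_score.sentence_bleu` would be used instead.

```python
# Simplified BLEU sketch (illustration only, not the reference implementation):
# clipped n-gram precision, geometric mean over n = 1..4, and a brevity penalty.
from collections import Counter
import math

def ngrams(tokens, n):
    """Count all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """candidate: list of tokens; references: list of token lists."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        # Clip each candidate n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for gram, count in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
        total = sum(cand.values())
        if total == 0 or clipped == 0:
            return 0.0  # any zero precision drives the geometric mean to zero
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: penalize candidates shorter than the closest reference.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c >= r else math.exp(1 - r / c)
    return bp * math.exp(sum(log_precisions) / max_n)

cand = "the cat is on the mat".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(round(bleu(cand, refs), 4))  # candidate matches a reference exactly -> 1.0
```

A candidate identical to one reference scores 1.0, while a degenerate candidate such as "the the the" is driven toward 0 by count clipping, which is exactly the failure mode of plain precision that the paper's modified precision is designed to fix.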