Quality Estimation

Abstract

In this talk, I discuss Quality Estimation (QE) for Machine Translation (MT). I cover the basics of the QE task and the QE shared task organized every year. The discussion then turns to how current MT systems achieve very good results on a growing variety of language pairs and datasets. However, they are known to produce fluent translation outputs that can contain important meaning errors, thus undermining their reliability in practice. Quality Estimation is the task of automatically assessing the performance of MT systems at test time. Thus, in order to be useful, QE systems should be able to detect such errors. However, this ability has yet to be tested in current evaluation practices, where QE systems are assessed only in terms of their correlation with human judgements. In this talk, we discuss how we attempted to bridge this gap by proposing a general methodology for the adversarial testing of QE for MT. We first see that, despite the high correlation with human judgements achieved by recent SOTA models, certain types of meaning errors are still problematic for QE to detect. We also discuss how the ability of a given model to discriminate between meaning-preserving and meaning-altering perturbations is predictive of its overall performance, thus potentially allowing us to compare QE systems without relying on manual quality annotation.
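As a rough illustration of the perturbation-based idea described above (not the actual implementation from the talk), the sketch below measures how often a sentence-level QE scorer ranks a meaning-preserving variant of a translation above a meaning-altering one. The function name `qe_score`, the toy scorer, and the example triples are all hypothetical stand-ins.

```python
# Illustrative sketch only: checking whether a QE model discriminates between
# meaning-preserving and meaning-altering perturbations of a translation.
# `qe_score(source, hypothesis)` is assumed to return a quality score
# where higher means better; any sentence-level QE model could be plugged in.

from typing import Callable, List, Tuple


def discrimination_rate(
    qe_score: Callable[[str, str], float],
    examples: List[Tuple[str, str, str]],
) -> float:
    """Fraction of (source, meaning-preserving, meaning-altering) triples for
    which the QE scorer ranks the meaning-preserving variant above the
    meaning-altering one."""
    correct = 0
    for src, preserving, altering in examples:
        if qe_score(src, preserving) > qe_score(src, altering):
            correct += 1
    return correct / len(examples) if examples else 0.0


if __name__ == "__main__":
    # Toy scorer used only so the sketch runs: scores by lexical overlap
    # with the source. A real QE model would replace this.
    def toy_qe_score(src: str, hyp: str) -> float:
        return float(len(set(src.lower().split()) & set(hyp.lower().split())))

    triples = [
        (
            "the cat sat on the mat",
            "the cat sat on a mat",         # meaning-preserving paraphrase
            "the dog stood on the table",   # meaning-altering perturbation
        ),
    ]
    print(f"discrimination rate: {discrimination_rate(toy_qe_score, triples):.2f}")
```

In this framing, a higher discrimination rate would suggest the QE model is more sensitive to meaning errors, which is the property the talk argues correlates with overall QE performance.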

Date
Location
University of Surrey, United Kingdom (Online)