Domain Adaptation

Often, there is a mismatch between the domain for which training data are available and the target domain of a machine translation system. Domains may differ in topic or in text style.

Domain Adaptation is the main subject of 143 publications, 28 of which are discussed here.

Publications

Machine translation systems are often built for very specific domains, such as movie and television subtitles (Volk and Harder, 2007). A translation model may be trained only on sentences from a parallel corpus that are similar to the input sentence (Hildebrand et al., 2005), where similarity may be determined by language model perplexity (Yasuda et al., 2008). Likewise, a domain-specific language model may be obtained by including only sentences that are similar to the ones in the target domain (Sethy et al., 2006). Such sentences may be detected with the tf/idf method common in information retrieval, and their counts boosted (Lü et al., 2007). Better still is the cross-entropy difference method of Moore and Lewis (2010), which compares in-domain and out-of-domain language model scores. Axelrod et al. (2011), Mansour et al. (2011), and Axelrod et al. (2012) apply this method to parallel corpora. Duh et al. (2013) use a neural network language model in this approach. Bicici and Yuret (2011) sub-sample sentence pairs whose source side has the most overlap with the test set. In a more graded approach, sentences may be weighted by how well they match the target domain (Bach et al., 2008).
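
To make the cross-entropy difference criterion concrete, here is a minimal sketch in Python that ranks candidate sentences with two toy unigram language models. The corpora, the add-one smoothing, and the function names are illustrative assumptions; Moore and Lewis (2010) use smoothed n-gram models trained on large in-domain and out-of-domain corpora.

    import math
    from collections import Counter

    def make_cross_entropy(sentences):
        # Add-one-smoothed unigram LM; a toy stand-in for the
        # n-gram models used in practice.
        counts = Counter(tok for s in sentences for tok in s.split())
        total = sum(counts.values())
        vocab = len(counts) + 1  # reserve one slot for unseen tokens
        def cross_entropy(sentence):
            toks = sentence.split()
            logp = sum(math.log((counts[t] + 1) / (total + vocab))
                       for t in toks)
            return -logp / max(len(toks), 1)  # per-token cross-entropy
        return cross_entropy

    # Hypothetical corpora standing in for large text collections.
    h_in = make_cross_entropy(["the patient was given medication",
                               "the dosage was adjusted daily"])
    h_out = make_cross_entropy(["the match ended in a draw",
                                "markets closed higher today"])

    # Moore-Lewis: rank by H_in(s) - H_out(s); lower scores mean more
    # in-domain-like. A cutoff on this score selects the training data.
    candidates = ["the patient was given a daily dosage",
                  "the match ended with markets higher"]
    ranked = sorted(candidates, key=lambda s: h_in(s) - h_out(s))
    print(ranked)
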
Instead of discarding some of the training data, the mixture-model approach trains domain-specific models on data from different domains, weights them, and combines them (Foster and Kuhn, 2007). Mixture models may also be used to automatically cluster domains (Civera and Juan, 2007). Such domain-specific models may be combined as components in the standard log-linear model (Koehn and Schroeder, 2007) and may be built both for language models and for translation models (Schwenk and Koehn, 2008). Sennrich (2011) builds an in-domain translation model while still basing the scoring on statistics over the full corpus. When dealing with multiple domain-specific translation models, input sentences need to be classified into the correct domain, which may be done with a language model or an information retrieval approach (Xu et al., 2007). If the classifier returns proportions indicating how well the input fits each of the classes, these may be used to dynamically weight the domain models (Finch and Sumita, 2008). Wu et al. (2008) present methods that exploit domain-specific monolingual corpora and dictionaries.
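
A minimal sketch of the mixture idea, assuming per-domain model scores and interpolation weights are already available; the model functions below are hypothetical placeholders, and the weights would typically be tuned on an in-domain development set.

    def mixture_probability(sentence, domain_models, weights):
        # Linear interpolation of domain-specific model probabilities:
        # p(s) = sum_d w_d * p_d(s), with the weights summing to one.
        assert abs(sum(weights) - 1.0) < 1e-9
        return sum(w * p(sentence) for p, w in zip(domain_models, weights))

    # Hypothetical domain models returning a probability for a sentence.
    # A classifier's domain proportions (Finch and Sumita, 2008) could
    # supply the weights dynamically, per input sentence.
    p_news = lambda s: 0.002
    p_subtitles = lambda s: 0.010
    print(mixture_probability("see you tomorrow",
                              [p_news, p_subtitles], [0.3, 0.7]))
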
Schwenk (2008) and Schwenk and Senellart (2009) propose to generate in-domain training data by translating in-domain monolingual data, filtering the resulting artificial parallel corpus, and adding it to the baseline parallel data. Huck et al. (2011) confirm the effectiveness of the method. Follow-up work indicates that creating the data in the inverse direction (from monolingual target-side data) gives better results (Lambert et al., 2011), especially for morphologically rich languages (Bojar and Tamchyna, 2011).
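
The pipeline can be summarized in a short sketch; `baseline_translate` and `pair_confidence` are hypothetical stand-ins for the baseline MT system and the filtering score, not functions from any particular toolkit.

    def build_synthetic_corpus(monolingual, baseline_translate,
                               pair_confidence, threshold=0.5):
        # Translate in-domain monolingual text with the baseline system,
        # keep only pairs that pass a quality filter, and return them for
        # concatenation with the original parallel data before retraining.
        synthetic = []
        for src in monolingual:
            tgt = baseline_translate(src)
            if pair_confidence(src, tgt) >= threshold:
                synthetic.append((src, tgt))
        return synthetic

    # Hypothetical usage with toy stand-ins:
    demo = build_synthetic_corpus(
        ["ein Beispielsatz"],
        baseline_translate=lambda s: "an example sentence",
        pair_confidence=lambda s, t: 0.9)
    print(demo)

In the inverse direction of Lambert et al. (2011), the monolingual data is target-side text, so the synthetic half of each pair is the source and the human-written half is the target.
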
Related to domain adaptation is the problem of adapting machine translation systems to different regional language uses (Cheng et al., 2004).
Ceauşu et al. (2011) compare different domain adaptation methods on different subject matters in patent translation and observe small gains over the baseline.

Benchmarks

Discussion

Related Topics

New Publications

  • Wang et al. (2013)
  • Hoang and Sima'an (2014)
  • Rosa et al. (2016)
  • Durrani et al. (2015)
  • Thurmair (2015)
  • Ehara et al. (2015)
  • Biçici (2015)
  • Banerjee et al. (2015)
  • Eetemadi et al. (2015)
  • Axelrod et al. (2015)
  • Lewis et al. (2015)
  • Rabinovich et al. (2017)
  • Cuong et al. (2016)
  • Ture and Boschee (2016)
  • Durrani et al. (2016)
  • Wang et al. (2016)
  • Servan and Dymetman (2015)
  • Cuong and Sima'an (2014)
  • Cuong and Sima'an (2015)
  • Mirkin and Meunier (2015)
  • Wees et al. (2015)
  • Haddow (2013)
  • Hsieh et al. (2013)
  • Toral (2013)
  • Bicici (2013)
  • Weller et al. (2014)
  • Bertoldi et al. (2014)
  • Cettolo et al. (2014)
  • Mathur et al. (2014)
  • Lu et al. (2014)
  • Mansour and Ney (2014)
  • Hasler et al. (2014)
  • Bicici et al. (2014)
  • Carpuat et al. (2014)
  • Mediani et al. (2014)
  • Wang et al. (2014)
  • Cui et al. (2013)
  • Louis and Webber (2014)
  • Kirchhoff and Bilmes (2014)
  • Carpuat et al. (2013)
  • Sennrich et al. (2013)
  • Zhu et al. (2013)
  • Zhang and Zong (2013)
  • Mansour et al. (2014)
  • Liu et al. (2014)
  • Jeblee et al. (2014)
  • Chen et al. (2014)
  • Mirkin and Besacier (2014)
  • Sudoh et al. (2014)
  • Wuebker et al. (2014)
  • Zhang et al. (2014)