Inversion Transduction Grammars

Early work on synchronous context-free grammars for machine translation distinguished between lexical mapping rules and purely non-terminal binary rules that make the reordering decisions.
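To make this rule distinction concrete, the following is a minimal sketch of a bracketing ITG in the style of Wu's formalism. The lexicon entries and the helper names (`lex`, `straight`, `inverted`, `target`) are illustrative assumptions, not taken from any cited system: lexical rules map a source word to a target word, while binary rules combine two subtrees either in straight order (target order preserved) or inverted order (target order swapped), which is the only reordering mechanism the grammar has.

```python
# Minimal bracketing-ITG sketch (illustrative, not any cited implementation).
# Lexical rules translate words; binary rules decide reordering.

LEXICON = {            # lexical mapping rules, e.g. X -> er/he
    "er": "he",
    "hat": "has",
    "ihn": "him",
    "gesehen": "seen",
}

def lex(word):
    """Apply a lexical rule: a leaf carrying source and target words."""
    return ("LEX", word, LEXICON[word])

def straight(left, right):
    """Straight rule [A B]: both languages keep the same subtree order."""
    return ("[]", left, right)

def inverted(left, right):
    """Inverted rule <A B>: the target side swaps the two subtrees."""
    return ("<>", left, right)

def target(node):
    """Read off the target-language string of a synchronous parse tree."""
    kind = node[0]
    if kind == "LEX":
        return [node[2]]
    _, left, right = node
    if kind == "[]":
        return target(left) + target(right)
    return target(right) + target(left)   # inverted: swap target order

# German "er hat ihn gesehen": the object-verb pair is inverted
# to yield the English verb-object order.
tree = straight(lex("er"),
                straight(lex("hat"),
                         inverted(lex("ihn"), lex("gesehen"))))
print(" ".join(target(tree)))   # -> he has seen him
```

Because every binary rule is either straight or inverted, the grammar permits only a restricted (but empirically useful) family of permutations, which is what makes ITG parsing and alignment tractable.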

Inversion transduction grammars are the main subject of 28 publications; 19 are discussed here.


Rooted in a finite state machine approach, head automata have been developed that allow for tree representation during the translation process (Alshawi, 1996; Alshawi et al., 1997; Alshawi et al., 1998; Alshawi et al., 2000). Wang and Waibel (1998) also induce hierarchical structures in their statistical machine translation model.
The use of synchronous grammars for statistical machine translation was pioneered by Wu (1995); Wu (1996); Wu (1997); Wu and Wong (1998) with their work on inversion transduction grammars (ITG) and stochastic bracketing transduction grammars (SBTG). Other formal models include multitext grammars (Melamed, 2003; Melamed, 2004; Melamed et al., 2004) and range concatenation grammars (Søgaard, 2008).
Zhang and Gildea (2005) extend the ITG formalism by lexicalizing grammar rules and apply it to a small-scale word alignment task. Armed with an efficient A* algorithm for this formalism (Zhang and Gildea, 2006), they extend the formalism with rules that are lexicalized in both input and output (Zhang and Gildea, 2006).
Word alignments generated with ITG may also be used to extract phrases for phrase-based models (Sánchez and Benedí, 2006; Sánchez and Benedí, 2006b). Zhang et al. (2008) present an efficient ITG phrase alignment algorithm that uses Bayesian learning with priors.



New Publications

  • Saers and Wu (2013)
  • Søgaard (2011)
  • Saers et al. (2012)
  • Liu et al. (2010)
  • Li et al. (2012)
  • Addanki et al. (2012)
  • Saers et al. (2010)
  • Haghighi et al. (2009)