Symmetrization

A fundamental limitation of the IBM Models is their restriction to one-to-many alignments. Symmetrization overcomes this by training models in both translation directions and merging the two resulting alignments.
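
As a minimal sketch of the merging step, assume each directional alignment is represented as a set of (source index, target index) links, with the target-to-source alignment already flipped back into source-target order. Intersecting the two sets yields a high-precision alignment, and taking their union a high-recall one; the function names below are illustrative, not taken from any particular toolkit.

    def intersect(forward, backward):
        """High-precision alignment: links found by both directional models."""
        return forward & backward

    def union(forward, backward):
        """High-recall alignment: links found by either directional model."""
        return forward | backward

    # Toy example: "das Haus" -> "the house", where only the
    # target-to-source model also proposed the link (1, 0).
    forward = {(0, 0), (1, 1)}
    backward = {(0, 0), (1, 1), (1, 0)}
    print(sorted(intersect(forward, backward)))  # [(0, 0), (1, 1)]
    print(sorted(union(forward, backward)))      # [(0, 0), (1, 0), (1, 1)]

In practice the intersection is often too sparse and the union too noisy, which is why the symmetrization methods discussed below interpolate between the two.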

Symmetrization is the main subject of 12 publications. 10 are discussed here.

Publications

Symmetrizing IBM Model alignments was first proposed by Och and Ney (2003). It may be improved by symmetrizing already during IBM Model training (Matusov et al., 2004), or by explicitly modeling the agreement between the two directional alignments and optimizing it during EM training (Liang et al., 2006). Different word alignments obtained with various IBM Models and symmetrization methods may also be combined using a maximum entropy approach (Ayan and Dorr, 2006; Ganchev et al., 2008). Crego and Habash (2008) use constraints over syntactic chunks to guide symmetrization.
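
Och and Ney's refined heuristics interpolate between the high-precision intersection and the high-recall union. The sketch below, reusing the set-of-links representation from above, follows the spirit of the widely used grow-diag variant: start from the intersection and repeatedly add neighboring (including diagonal) union links that attach a still-unaligned word. It is a simplified illustration under those assumptions, not the exact published procedure; the final step that rescues remaining unaligned words is omitted.

    def grow_diag(forward, backward):
        """Grow the intersection toward the union along neighboring links."""
        alignment = forward & backward
        union_links = forward | backward
        neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                     (-1, -1), (-1, 1), (1, -1), (1, 1)]
        added = True
        while added:  # repeat until no more links can be added
            added = False
            for s, t in sorted(alignment):
                for ds, dt in neighbors:
                    cand = (s + ds, t + dt)
                    if cand in alignment or cand not in union_links:
                        continue
                    # only accept a union link that covers an unaligned word
                    src_unaligned = all(a_s != cand[0] for a_s, _ in alignment)
                    tgt_unaligned = all(a_t != cand[1] for _, a_t in alignment)
                    if src_unaligned or tgt_unaligned:
                        alignment.add(cand)
                        added = True
        return alignment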
Starting from a word alignment produced by the IBM Models, additional features may be defined to assess each alignment point. Fraser and Marcu (2006) use such features during symmetrization, either for re-ranking or integrated into the search. These features may also form the basis for a classifier that adds one alignment point at a time (Ren et al., 2007), possibly based on a skeleton of highly likely alignment points (Ma et al., 2008), or that deletes alignment points one at a time from the symmetrized union alignment (Fossum et al., 2008).
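
These one-link-at-a-time strategies can be pictured as greedy search over candidate links. The sketch below is loosely in the spirit of Ren et al. (2007): it grows a high-precision skeleton by repeatedly adding the best-scoring candidate from the union until no candidate clears a threshold. score_link and THRESHOLD are hypothetical stand-ins for a trained feature-based classifier and its decision boundary, not the published model.

    THRESHOLD = 0.5  # hypothetical acceptance threshold

    def score_link(link, alignment):
        """Toy stand-in for a feature-based classifier: prefer candidate
        links adjacent (incl. diagonally) to already accepted links."""
        s, t = link
        near = any(abs(s - a) <= 1 and abs(t - b) <= 1 for a, b in alignment)
        return 0.9 if near else 0.1

    def greedy_symmetrize(forward, backward):
        alignment = forward & backward                # high-precision skeleton
        candidates = (forward | backward) - alignment
        while candidates:
            best = max(candidates, key=lambda l: score_link(l, alignment))
            if score_link(best, alignment) < THRESHOLD:
                break                                 # no candidate is good enough
            alignment.add(best)
            candidates.discard(best)
        return alignment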

New Publications

  • Liu et al. (2015)
  • Brown et al. (2005)
