Stack Decoding

An important principle for efficient search is the organization of the search space to take advantage of dynamic programming and the early pruning of comparable hypotheses (which are placed in the same stack).
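
The following is a minimal sketch of this organization in Python, assuming a simplified phrase-based setting with precomputed translation options and lower-is-better costs. The names (Option, Hypothesis, decode) and the scoring are illustrative only, not taken from any particular decoder.

    from collections import namedtuple

    # A translation option covers the source span [start, end) with a target phrase.
    Option = namedtuple("Option", "start end target cost")

    # A partial translation: covered source positions, the target words the
    # language model still "sees", the accumulated cost, and a back-pointer.
    Hypothesis = namedtuple("Hypothesis", "coverage lm_context cost predecessor")


    def decode(source_len, options, stack_size=100, threshold=5.0):
        """Beam search over hypothesis stacks indexed by the number of covered source words."""
        stacks = [dict() for _ in range(source_len + 1)]
        empty = Hypothesis(coverage=frozenset(), lm_context=(), cost=0.0, predecessor=None)
        stacks[0][(empty.coverage, empty.lm_context)] = empty

        for covered in range(source_len):
            # Prune before expansion: keep the stack_size best hypotheses that are
            # also within `threshold` of the best score (histogram + threshold pruning).
            stack = sorted(stacks[covered].values(), key=lambda h: h.cost)
            if not stack:
                continue
            best = stack[0].cost
            stack = [h for h in stack[:stack_size] if h.cost <= best + threshold]

            for hyp in stack:
                for opt in options:
                    span = frozenset(range(opt.start, opt.end))
                    if span & hyp.coverage:
                        continue  # overlaps words that are already translated
                    new_coverage = hyp.coverage | span
                    # Keep only the last two target words as a stand-in for the
                    # language model state (e.g. under a trigram model).
                    context = (hyp.lm_context + tuple(opt.target.split()))[-2:]
                    new_hyp = Hypothesis(new_coverage, context, hyp.cost + opt.cost, hyp)
                    # Recombination: comparable hypotheses (same coverage, same LM
                    # context) share one slot; only the cheaper one is kept.
                    key = (new_coverage, context)
                    target_stack = stacks[len(new_coverage)]
                    if key not in target_stack or new_hyp.cost < target_stack[key].cost:
                        target_stack[key] = new_hyp

        final = stacks[source_len].values()
        return min(final, key=lambda h: h.cost) if final else None

Hypotheses that agree on source coverage and on the target words the language model still needs occupy the same slot in a stack, so only the cheaper one survives (recombination); the stack_size and threshold parameters implement histogram and threshold pruning, respectively.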

Stack decoding is the main subject of 15 publications, 11 of which are discussed here.

Publications

The stack decoding algorithm has its roots in work by Tillmann et al. (1997), who describe a dynamic programming algorithm, similar to techniques for hidden Markov models, for monotone decoding of word-based models. Allowing for reordering makes decoding more complex.
Efficient A* search makes the translation of short sentences possible (Och et al., 2001). Longer sentences, however, require pruning (Wang and Waibel, 1997; Nießen et al., 1998), restrictions on reordering (Tillmann and Ney, 2000), or both (Tillmann and Ney, 2003).
Ortiz-Martínez et al. (2006) compare different types of stack decoding with varying numbers of stacks. Delaney et al. (2006) speed up stack decoding with A* pruning. Moore and Quirk (2007) stress the importance of threshold pruning and of avoiding the expansion of doomed hypotheses.
The rest cost (or future cost) estimation may be improved by taking expected reordering costs into account (Green et al., 2010).
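
Such an estimate is commonly computed once per sentence, before search starts, by a small dynamic program over source spans. The sketch below shows the basic table construction and how the estimate is read off for a partial hypothesis, without the reordering-aware refinement of Green et al. (2010); cheapest_option_cost is a hypothetical helper that returns the cost of the best translation option covering a span.

    def future_cost_table(n, cheapest_option_cost):
        """cost[i][j] = estimated cost of translating source words i..j-1."""
        cost = [[float("inf")] * (n + 1) for _ in range(n + 1)]
        for length in range(1, n + 1):
            for i in range(n - length + 1):
                j = i + length
                # Either one translation option covers the whole span ...
                cost[i][j] = cheapest_option_cost(i, j)
                # ... or the span is split into two cheaper sub-spans.
                for k in range(i + 1, j):
                    cost[i][j] = min(cost[i][j], cost[i][k] + cost[k][j])
        return cost


    def rest_cost(coverage, n, cost):
        """Sum the estimates over all maximal uncovered spans of a hypothesis."""
        total, i = 0.0, 0
        while i < n:
            if i in coverage:
                i += 1
                continue
            j = i
            while j < n and j not in coverage:
                j += 1
            total += cost[i][j]
            i = j
        return total

During search, the rest cost of a hypothesis is added to its accumulated cost before pruning, so that hypotheses which translated the easy parts of the sentence first do not crowd out those that already covered the difficult parts.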
Additional efficiency of dynamic programming is gained by collapsing contexts that are invariant to the language model (Li and Khudanpur, 2008).
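
As an illustration of this idea: under an n-gram language model, only the last n-1 target words can influence future scores, so hypotheses that differ only in earlier words may be recombined. The function below is a hypothetical sketch of such a collapsed state; refinements in the spirit of Li and Khudanpur (2008) shorten the state further when the language model cannot extend a context, collapsing even more hypotheses.

    def lm_state(target_words, lm_order=3):
        """Collapse a target history to the part an n-gram LM can still distinguish."""
        # Only the last n-1 words can appear in future n-gram contexts.
        return tuple(target_words[-(lm_order - 1):])


    # Two hypotheses with different histories map to the same state under a
    # trigram model, so they can share one stack slot:
    assert lm_state(("the", "cat", "sat", "on", "the")) == \
           lm_state(("a", "dog", "lay", "on", "the"))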

Benchmarks

Discussion

Related Topics

New Publications

  • Heafield et al. (2014)
  • Wuebker et al. (2012)
  • Zens and Ney (2008)
  • Bach et al. (2009)
