Moses: statistical machine translation system

Roadmap

The Moses decoder follows a very modular design, and we are continuously extending its capabilities. See below for a diagram that illustrates the different input and output modalities, the different decoding algorithms, and the different language model and translation model implementations that are supported.

The boxes with red text mark currently planned extensions:

  • Depth-first decoding: to provide anytime algorithms for decoding
  • Forced decoding: to compute scores for provided output
  • Suffix-array translation models: an alternative way to store large rule-sets without the need to extract them in advance (see the sketch after this list)
  • Maximum entropy translation models: models that incorporate additional source-side and context information when scoring translation rules

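As a rough illustration of the suffix-array idea, here is a minimal, self-contained C++ sketch. It is not Moses code; the toy corpus, the query phrase, and all implementation details are invented for this example. It builds a suffix array over the tokenized source side of a corpus and binary-searches it to find every occurrence of a source phrase; in a suffix-array translation model, this lookup is what allows translation rules to be extracted on demand from the matching sentence pairs instead of being pre-extracted into a rule table.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // Toy source-side corpus, already tokenized (purely illustrative).
        std::vector<std::string> corpus = {
            "the", "house", "is", "small", "and", "the", "house", "is", "big"};

        // Suffix array: every start position, sorted by the suffix it begins.
        std::vector<int> sa(corpus.size());
        for (std::size_t i = 0; i < sa.size(); ++i) sa[i] = static_cast<int>(i);
        std::sort(sa.begin(), sa.end(), [&](int a, int b) {
            return std::lexicographical_compare(corpus.begin() + a, corpus.end(),
                                                corpus.begin() + b, corpus.end());
        });

        // Source phrase to look up; a real system would now extract rules for
        // this phrase from the aligned target sides of the matching sentences,
        // instead of reading them from a pre-built rule table.
        std::vector<std::string> phrase = {"the", "house"};

        // Binary search for the first suffix that is not smaller than the phrase.
        auto lo = std::lower_bound(
            sa.begin(), sa.end(), phrase,
            [&](int pos, const std::vector<std::string>& p) {
                return std::lexicographical_compare(corpus.begin() + pos, corpus.end(),
                                                    p.begin(), p.end());
            });

        // All suffixes that start with the phrase form one contiguous block here.
        std::cout << "phrase found at corpus positions:";
        for (auto it = lo; it != sa.end(); ++it) {
            std::size_t pos = static_cast<std::size_t>(*it);
            if (pos + phrase.size() > corpus.size() ||
                !std::equal(phrase.begin(), phrase.end(), corpus.begin() + pos)) {
                break;
            }
            std::cout << ' ' << pos;
        }
        std::cout << '\n';
        return 0;
    }

On this toy input the program prints the two occurrences of "the house" (corpus positions 5 and 0, listed in suffix-array order). The index grows only linearly with the corpus, which is what makes this approach attractive when the full set of extractable rules would be too large to store.
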
Since most of this work is carried out in academic settings, it is hard to predict when it will be completed, but we expect that most of the capabilities mentioned above will be integrated in 2009.

Notable additions in 2010

  • Experimental Management System

Notable additions in 2009

  • Tree-based models
  • Multi-threaded decoding
  • Language model server

Notable additions in 2008

  • Specification of reordering constraints with XML
  • Early discarding pruning
  • Randomized language models
  • Output of search graph
  • Cube pruning