Online Markov Decoding: Lower Bounds and Near-Optimal Approximation Algorithms

NeurIPS 2019 · Vikas K. Garg, Tamar Pichkhadze

We resolve the fundamental problem of online decoding with general $n^{th}$ order ergodic Markov chain models. Specifically, we provide deterministic and randomized algorithms whose performance is close to that of the optimal offline algorithm even when the latency is small. Our algorithms admit efficient implementation via dynamic programs, and extend readily to (adversarial) non-stationary or time-varying settings. We also establish lower bounds for online methods under latency constraints in both the deterministic and the randomized setting, and show that no online algorithm can perform significantly better than our algorithms. Empirically, with a latency of just one, our algorithm outperforms the online step algorithm by over 30% in decoding agreement with the optimal algorithm on genome sequence data.
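To make the latency-constrained setting concrete, below is a minimal Python/NumPy sketch of a latency-$L$ online decoder for a first-order hidden Markov model: at each step it runs Viterbi over the current observation plus the $L$ lookahead observations the latency budget allows, and commits only the earliest state. This is an illustrative baseline in the spirit of the problem statement, not the paper's algorithm; the parameter names `init`, `trans`, and `emit` are assumed, and strictly positive probabilities are assumed for simplicity.

```python
import numpy as np

def viterbi_window(prior, trans, emit, obs):
    """Standard Viterbi over a short window of integer-coded observations.
    prior: (S,) distribution over states at the window start,
    trans: (S, S) transition matrix, emit: (S, V) emission matrix.
    Returns the most likely state path for the window."""
    T, S = len(obs), trans.shape[0]
    logp = np.log(prior) + np.log(emit[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best log-prob of ending in state j via state i
        scores = logp[:, None] + np.log(trans)
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    # Backtrack from the best final state
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def online_decode(init, trans, emit, obs, latency=1):
    """Commit the state at time t using only observations up to t + latency."""
    decoded = []
    belief = init
    for t in range(len(obs)):
        window = obs[t : t + latency + 1]
        path = viterbi_window(belief, trans, emit, window)
        s = path[0]          # commit only the earliest state in the window
        decoded.append(s)
        # Condition the next window on the committed state (one simple choice;
        # the paper's algorithms handle this continuation more carefully).
        belief = trans[s]
    return decoded
```

With `latency=0` this reduces to greedy step-by-step filtering; a larger lookahead window trades latency for closer agreement with the offline Viterbi path, which is exactly the quantity the paper's upper and lower bounds characterize.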
