1 code implementation • 4 Nov 2024 • Stephen P. Boyd, Tetiana Parshakova, Ernest K. Ryu, Jaewook J. Suh
We present a novel methodology for convex optimization algorithm design using ideas from electric RLC circuits.
1 code implementation • 18 Sep 2024 • Tetiana Parshakova, Trevor Hastie, Stephen Boyd
We show that the inverse of an invertible PSD multilevel low rank (MLR) matrix is also an MLR matrix with the same sparsity in factors, and we use the recursive Sherman-Morrison-Woodbury matrix identity to obtain the factors of the inverse.
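As background, the classical Sherman-Morrison-Woodbury identity that the recursive MLR inversion builds on can be sketched in a few lines; this is not the paper's algorithm or API, and `woodbury_inverse` is an illustrative name.

```python
import numpy as np

def woodbury_inverse(A_inv, U, C, V):
    """Return (A + U C V)^{-1} given A^{-1}, via the SMW identity:
    (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}.
    Only a small k x k system is solved when U is n x k with k << n."""
    S = np.linalg.inv(C) + V @ A_inv @ U          # small k x k "capacitance" matrix
    return A_inv - A_inv @ U @ np.linalg.solve(S, V @ A_inv)

# Quick check against a direct inverse.
rng = np.random.default_rng(0)
n, k = 50, 3
A = np.diag(rng.uniform(1.0, 2.0, n))             # easy-to-invert base term
U = rng.standard_normal((n, k))
C = np.eye(k)
V = U.T                                           # symmetric low-rank update
M_inv = woodbury_inverse(np.linalg.inv(A), U, C, V)
assert np.allclose(M_inv, np.linalg.inv(A + U @ C @ V))
```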
2 code implementations • 30 Oct 2023 • Tetiana Parshakova, Trevor Hastie, Eric Darve, Stephen Boyd
The second is rank allocation, where we choose the ranks of the blocks in each level, subject to the total rank having a given value, which preserves the total storage needed for the MLR matrix.
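A small illustration (not the paper's code) of why fixing the total rank fixes the storage: assuming each level-l block of size m_i x n_i with rank r_l stores factors of size m_i*r_l + n_i*r_l, and the blocks at a level partition the rows and columns, level l costs r_l*(m + n) entries, so total storage is (m + n) times the total rank.

```python
def mlr_storage(m, n, level_ranks):
    """Factor storage of an m x n MLR matrix with rank r_l at each level l,
    under the partitioning assumption described above."""
    return (m + n) * sum(level_ranks)

m, n = 1000, 800
for ranks in [(8, 4, 2, 2), (4, 4, 4, 4), (13, 1, 1, 1)]:
    assert sum(ranks) == 16                 # same total rank in every allocation
    print(ranks, mlr_storage(m, n, ranks))  # prints the same storage each time
```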
1 code implementation • 2 Feb 2023 • Krzysztof Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii Likhosherstov, Somnath Basu Roy Chowdhury, Avinava Dubey, Deepali Jain, Tamas Sarlos, Snigdha Chaturvedi, Adrian Weller
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
1 code implementation • 18 Dec 2019 • Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman
Global Autoregressive Models (GAMs) are a recent proposal [Parshakova et al., CoNLL 2019] for exploiting global properties of sequences for data-efficient learning of seq2seq models.
1 code implementation • CoNLL 2019 • Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman
In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM, and can be used for fast inference and evaluation.
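A hedged, toy sketch of the idea: a GAM scores a sequence by an autoregressive log-probability log r(x) plus a linear term in global features, lam . phi(x), and self-normalized importance weights over proposal samples give the targets for distilling the second, normalized autoregressive model. The proposal, features, and parameters below are made up for illustration, not the authors' setup.

```python
import math
import random

random.seed(0)
P_PROPOSAL = 0.6  # toy Bernoulli proposal over binary strings

def log_r(x, p=P_PROPOSAL):
    """Toy autoregressive proposal r: i.i.d. Bernoulli(p) tokens."""
    return sum(math.log(p) if t == 1 else math.log(1 - p) for t in x)

def phi(x):
    """Illustrative global features: length parity and number of ones."""
    return (len(x) % 2, sum(x))

lam = (0.5, -0.2)  # stand-in GAM parameters

def gam_log_score(x):
    """Unnormalized GAM log-score: log r(x) + lam . phi(x)."""
    return log_r(x) + sum(l * f for l, f in zip(lam, phi(x)))

# Sample from the proposal r, then reweight by the GAM score; the normalized
# weights would serve as cross-entropy weights when training the student model.
xs = [tuple(1 if random.random() < P_PROPOSAL else 0 for _ in range(8))
      for _ in range(1000)]
logw = [gam_log_score(x) - log_r(x) for x in xs]    # = lam . phi(x)
mx = max(logw)
w = [math.exp(lw - mx) for lw in logw]
Z = sum(w)
weights = [wi / Z for wi in w]                      # self-normalized weights
print(f"effective sample size: {1.0 / sum(wi**2 for wi in weights):.1f}")
```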