1 code implementation • 18 Aug 2023 • Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval Pinter, Timothy Mattson, Gal Oren
With easier access to powerful compute resources, there is a growing trend in AI for software development toward building ever larger language models (LLMs) to address a variety of programming tasks.
no code implementations • 4 Oct 2022 • Omri Raccah, Phoebe Chen, Ted L. Willke, David Poeppel, Vy A. Vo
The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations.
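For context, here is a minimal NumPy sketch (our illustration, not code from the paper) of why standard self-attention scales poorly with sequence length: the attention-score matrix is n × n, so time and memory grow quadratically with n.

```python
# Illustrative sketch of single-head scaled dot-product self-attention.
# The (n, n) score matrix is what makes the cost quadratic in sequence length.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (n, n) matrix -> O(n^2) cost
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v

n, d = 1024, 64
rng = np.random.default_rng(0)
x = rng.standard_normal((n, d))
w = [rng.standard_normal((d, d)) for _ in range(3)]
out = self_attention(x, *w)                            # scores alone hold n*n ~ 1M floats
```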
no code implementations • 22 Sep 2022 • Guixiang Ma, Vy A. Vo, Theodore Willke, Nesreen K. Ahmed
We provide a comprehensive review of the existing literature on memory-augmented GNNs.
no code implementations • 12 May 2021 • Hsiang-Yun Sherry Chien, Javier S. Turek, Nicole Beckage, Vy A. Vo, Christopher J. Honey, Ted L. Willke
Altogether, we found that an LSTM with the proposed forget gate can learn long-term dependencies, outperforming other recurrent networks across multiple domains; such a gating mechanism can be integrated into other architectures to improve the learning of long-timescale information in recurrent neural networks.
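For readers unfamiliar with where the forget gate sits, below is a minimal sketch of a standard single-step LSTM cell (illustration only; it does not reproduce the modified forget gate proposed in the paper). The forget gate f_t determines how much of the previous cell state is retained, and therefore the memory timescale.

```python
# Standard LSTM cell, one time step (illustrative; not the paper's proposed gate).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold the stacked [input, forget, cell, output] parameters."""
    z = W @ x_t + U @ h_prev + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # gates lie in (0, 1)
    g = np.tanh(g)
    c_t = f * c_prev + i * g                       # f near 1 -> slow forgetting, long timescale
    h_t = o * np.tanh(c_t)
    return h_t, c_t
```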
no code implementations • ICLR 2021 • Shivangi Mahto, Vy A. Vo, Javier S. Turek, Alexander G. Huth
Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law.
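A rough formal reading of this claim (the symbols below are illustrative placeholders, not quantities from the paper): the statistical dependence between two words falls off as a power law of their separation d, rather than the exponential decay implied by a fixed-timescale recurrent model.

```latex
% Power-law vs. exponential decay of word-to-word dependence with distance d
% (illustrative form; \alpha and \tau are placeholders, not values from the paper).
I(d) \;\propto\; d^{-\alpha}
\quad\text{vs.}\quad
I(d) \;\propto\; e^{-d/\tau}
```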