no code implementations • 17 Feb 2024 • Pragya Srivastava, Manuj Malik, Vivek Gupta, Tanuja Ganu, Dan Roth
Large Language Models (LLMs) excel at natural language understanding, but their capability for complex mathematical reasoning over a combination of structured tables and unstructured text is uncertain.
no code implementations • 9 Feb 2024 • Pragya Srivastava, Satvik Golechha, Amit Deshpande, Amit Sharma
Recent work shows that in-context learning and optimization of in-context examples (ICE) can significantly improve the accuracy of large language models (LLMs) on a wide range of tasks, leading to an apparent consensus that ICE optimization is crucial for better performance.
no code implementations • 31 Oct 2022 • Pragya Srivastava, Tanuja Ganu, Saikat Guha
We present very early results on using GPT-3 to perform question answering on tabular data.
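A common way to apply an LLM such as GPT-3 to tabular question answering is to linearize the table into text, prepend the question, and send the resulting prompt to the model. The sketch below illustrates that general pattern only; the serialization format, prompt wording, and function names are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of table QA via prompt linearization (illustrative,
# not the authors' exact pipeline).

def serialize_table(headers, rows):
    """Render a table as pipe-separated lines an LLM can read."""
    lines = [" | ".join(headers)]
    lines += [" | ".join(str(c) for c in row) for row in rows]
    return "\n".join(lines)

def build_prompt(headers, rows, question):
    """Combine the linearized table and the question into one prompt."""
    table_text = serialize_table(headers, rows)
    return (
        "Answer the question using only the table below.\n\n"
        f"{table_text}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    headers = ["Country", "GDP (USD bn)"]
    rows = [["France", 3052], ["Italy", 2255]]
    prompt = build_prompt(headers, rows, "Which country has the higher GDP?")
    print(prompt)
    # The prompt string would then be submitted to the model
    # (e.g. via the OpenAI completions API) to obtain the answer.
```

This prompt-based setup requires no fine-tuning, which is why it suits the very early zero/few-shot experiments the abstract describes.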
1 code implementation • 26 Oct 2021 • Leon Weninger, Pragya Srivastava, Dale Zhou, Jason Z. Kim, Eli J. Cornblath, Maxwell A. Bertolero, Ute Habel, Dorit Merhof, Dani S. Bassett
These activity patterns define global brain states and contain information in accordance with their expected probability of occurrence.
no code implementations • 15 Mar 2021 • Pragya Srivastava, Peter J. Mucha, Emily Falk, Fabio Pasqualetti, Danielle S. Bassett
For this purpose, we calculate the exact expression of optimal control energy in terms of layer spectra and the relative alignment between the eigenmodes of the input layer and the deeper target layer.