no code implementations • 3 Mar 2025 • Jacob Morrison, Clara Na, Jared Fernandez, Tim Dettmers, Emma Strubell, Jesse Dodge
As the performance of artificial intelligence systems has dramatically increased, so too has the environmental impact of creating these systems.
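A minimal sketch of the standard accounting behind such impact estimates (not this paper's exact methodology): convert accelerator-hours to energy via power draw and datacenter overhead (PUE), then to emissions via grid carbon intensity. The wattage, PUE, and grid-intensity figures below are illustrative placeholders, not values from the paper.

```python
# Back-of-envelope training emissions: GPU-hours -> kWh -> kg CO2e.
# All constants are illustrative assumptions, not figures from the paper.
def training_co2e_kg(gpu_hours: float, gpu_watts: float = 700,
                     pue: float = 1.1, kg_co2_per_kwh: float = 0.4) -> float:
    energy_kwh = gpu_hours * (gpu_watts / 1000) * pue  # datacenter overhead included
    return energy_kwh * kg_co2_per_kwh                 # grid carbon intensity

# e.g., a hypothetical run consuming 100k GPU-hours on 700 W accelerators:
print(f"{training_co2e_kg(100_000):,.0f} kg CO2e")  # ~30,800 kg
```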
no code implementations • 20 Nov 2024 • Jared Fernandez, Luca Wehrstedt, Leonid Shamis, Mostafa Elhoushi, Kalyan Saladi, Yonatan Bisk, Emma Strubell, Jacob Kahn
Dramatic increases in the capabilities of neural network models in recent years are driven by scaling model size, training data, and corresponding computational resources.
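For context, a common back-of-envelope rule (a community rule of thumb, not taken from this paper) puts dense-transformer training cost at roughly 6·N·D FLOPs for N parameters and D training tokens, which makes the coupling between model size, data, and compute concrete:

```python
# Rough training-compute estimate for a dense transformer: C ~= 6 * N * D.
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# e.g., a hypothetical 7B-parameter model trained on 2T tokens:
print(f"{train_flops(7e9, 2e12):.2e} FLOPs")  # ~8.4e+22
```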
no code implementations • 7 Nov 2024 • Jared Fernandez, Yonatan Bisk, Emma Strubell
Large Language Models (LLMs) trained on web-scale text corpora have been shown to capture world knowledge in their parameters.
no code implementations • 19 Jul 2023 • Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi
In response, we introduce Pentathlon, a benchmark for holistic and realistic evaluation of model efficiency.
1 code implementation • 13 Feb 2023 • Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell
In this work, we examine this phenomenon through a series of case studies analyzing the effects of model design decisions, framework paradigms, and hardware platforms on total model latency.
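A minimal sketch of the kind of measurement such case studies rest on, assuming PyTorch and comparing eager execution against a torch.compile'd graph of the same model; this is an illustration of the methodology, not the paper's benchmark harness, and the model and input shapes are arbitrary.

```python
import time
import torch

def measure_latency(model, inputs, warmup=10, trials=100):
    """Median wall-clock latency (ms) of a single forward pass."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):           # warm caches and trigger any compilation
            model(inputs)
        if torch.cuda.is_available():
            torch.cuda.synchronize()      # drain queued GPU work before timing
        times = []
        for _ in range(trials):
            start = time.perf_counter()
            model(inputs)
            if torch.cuda.is_available():
                torch.cuda.synchronize()
            times.append(time.perf_counter() - start)
    times.sort()
    return 1000 * times[len(times) // 2]

# Same architecture, two framework execution paradigms:
model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
x = torch.randn(8, 128, 512)
print(f"eager:    {measure_latency(model, x):.2f} ms")
print(f"compiled: {measure_latency(torch.compile(model), x):.2f} ms")
```

Running the same comparison across hardware platforms (CPU vs. GPU, or different accelerator generations) isolates the latency contribution of each layer of the stack.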
1 code implementation • 20 Aug 2021 • Xiaopeng Lu, Lynnette Ng, Jared Fernandez, Hao Zhu
Unlike text-to-image generation, which conditions only on text, this task requires generating an image from both a textual description and an image prompt.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras, Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, Doug Downey
Recent advances in commonsense reasoning depend on large-scale human-annotated training data to achieve peak performance.
Ranked #1 on Question Answering on CODAH
2 code implementations • 8 Apr 2019 • Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, Doug Downey
To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems (a sketch of this model-in-the-loop filtering follows this entry).
Ranked #1 on Common Sense Reasoning on CODAH (using extra training data)
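A minimal sketch of the model-in-the-loop idea from the entry above, with a toy stand-in scorer in place of a real QA model; it illustrates the filtering loop only, not the actual CODAH collection pipeline.

```python
# Model-in-the-loop adversarial data collection (illustrative sketch):
# keep only worker-authored questions that a reference model gets wrong.

def model_answer(question, choices):
    # Hypothetical stand-in for a trained multiple-choice QA model;
    # replace with a real scorer (e.g., a fine-tuned transformer).
    return max(choices, key=len)  # toy heuristic, for illustration only

def filter_adversarial(candidates):
    """Retain (question, choices, gold) triples that fool the reference model."""
    return [(q, choices, gold) for q, choices, gold in candidates
            if model_answer(q, choices) != gold]

candidates = [
    ("A man opens an umbrella because",
     ["it is raining", "he is hungry"], "it is raining"),
]
print(filter_adversarial(candidates))
```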