no code implementations • 5 Feb 2025 • Martin Mundt, Anaelia Ovalle, Felix Friedrich, A Pranav, Subarnaduti Paul, Manuel Brack, Kristian Kersting, William Agnew
In a widely popular analogy by Turing Award Laureate Yann LeCun, machine intelligence has been compared to a cake, where unsupervised learning forms the base, supervised learning adds the icing, and reinforcement learning is the cherry on top.
no code implementations • 17 Oct 2024 • William Agnew, Julia Barnett, Annie Chu, Rachel Hong, Michael Feffer, Robin Netzorg, Harry H. Jiang, Ezra Awumey, Sauvik Das
Generative audio models are rapidly advancing in both capability and public use: several powerful generative audio models have openly available weights, and some tech companies have released high-quality generative audio products.
1 code implementation • 17 Oct 2024 • William Agnew, Harry H. Jiang, Cella Sum, Maarten Sap, Sauvik Das
Large language models excel at performing inference over text to extract information, summarize it, or generate additional text.
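As a toy illustration of this kind of inference over text, the sketch below summarizes a passage with an off-the-shelf model. It assumes the Hugging Face transformers library (with a backend such as PyTorch) and the facebook/bart-large-cnn checkpoint; it is not code from the paper.

```python
# Illustrative sketch only: summarizing text with an off-the-shelf model.
# The library and checkpoint choices are assumptions, not the paper's setup.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Large language models are increasingly used to pull structured "
    "information out of free text, to condense long documents into short "
    "summaries, and to draft additional prose conditioned on a source passage."
)

# max_length / min_length bound the summary length in tokens.
result = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```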
no code implementations • 28 Sep 2024 • Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, Sarah Fox
The recent excitement around generative models has sparked a wave of proposals suggesting the replacement of human participation and labor in research and development (e.g., through surveys, experiments, and interviews) with synthetic research data generated by large language models (LLMs).
1 code implementation • 13 May 2024 • Rachel Hong, William Agnew, Tadayoshi Kohno, Jamie Morgenstern
As training datasets are increasingly drawn from unstructured, uncontrolled environments such as the web, researchers and industry practitioners have come to rely on data filtering techniques to "filter out the noise" of web-scraped data.
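For concreteness, here is a minimal sketch of the kind of heuristic filtering commonly applied to web-scraped text; the rules and thresholds are illustrative assumptions for the example, not the filtering pipelines studied in the paper.

```python
# Illustrative sketch only: simple quality heuristics plus exact deduplication
# for web-scraped text. Rules and thresholds here are assumptions.
import hashlib

def keep_document(text: str, min_words: int = 20, max_symbol_ratio: float = 0.1) -> bool:
    """Keep a document only if it passes basic quality heuristics."""
    words = text.split()
    if len(words) < min_words:  # drop very short fragments
        return False
    symbols = sum(1 for c in text if not c.isalnum() and not c.isspace())
    if symbols / max(len(text), 1) > max_symbol_ratio:  # drop symbol-heavy junk
        return False
    return True

def deduplicate(docs: list[str]) -> list[str]:
    """Drop exact duplicates by hashing document contents."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["buy now!!! $$$ click here!!!", "A longer passage of ordinary prose. " * 5]
filtered = [doc for doc in deduplicate(corpus) if keep_document(doc)]
```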
no code implementations • 26 Sep 2023 • Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane
Moreover, the majority of these technologies specifically enable extracting data about human bodies and body parts.
no code implementations • 15 Jul 2023 • Organizers Of QueerInAI, Nathan Dennler, Anaelia Ovalle, Ashwin Singh, Luca Soldaini, Arjun Subramonian, Huy Tu, William Agnew, Avijit Ghosh, Kyra Yee, Irene Font Peradejordi, Zeerak Talat, Mayra Russo, Jess de Jesus de Pinho Pinhal
However, these auditing processes have been criticized for failing to integrate the knowledge of marginalized communities and to consider the power dynamics between auditors and those communities.
no code implementations • 9 Jun 2023 • Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Canyu Chen, Hal Daumé III, Jesse Dodge, Isabella Duan, Ellie Evans, Felix Friedrich, Avijit Ghosh, Usman Gohar, Sara Hooker, Yacine Jernite, Ria Kalluri, Alberto Lusoli, Alina Leidinger, Michelle Lin, Xiuzhu Lin, Sasha Luccioni, Jennifer Mickel, Margaret Mitchell, Jessica Newman, Anaelia Ovalle, Marie-Therese Png, Shubham Singh, Andrew Strait, Lukas Struppek, Arjun Subramonian
Generative AI systems across modalities, ranging from text (including code) to image, audio, and video, have broad social impacts, but there is no official standard for how to evaluate those impacts or for which impacts should be evaluated.
no code implementations • 29 Mar 2023 • Organizers Of QueerInAI, Anaelia Ovalle, Arjun Subramonian, Ashwin Singh, Claas Voelcker, Danica J. Sutherland, Davide Locatelli, Eva Breznik, Filip Klubička, Hang Yuan, Hetvi J, huan zhang, Jaidev Shriram, Kruno Lehman, Luca Soldaini, Maarten Sap, Marc Peter Deisenroth, Maria Leonor Pacheco, Maria Ryskina, Martin Mundt, Milind Agarwal, Nyx McLean, Pan Xu, A Pranav, Raj Korpan, Ruchira Ray, Sarah Mathew, Sarthak Arora, ST John, Tanvi Anand, Vishakha Agrawal, William Agnew, Yanan Long, Zijie J. Wang, Zeerak Talat, Avijit Ghosh, Nathaniel Dennler, Michael Noseworthy, Sharvani Jha, Emi Baylor, Aditya Joshi, Natalia Y. Bilenko, Andrew McNamara, Raphael Gontijo-Lopes, Alex Markham, Evyn Dǒng, Jackie Kay, Manu Saraswat, Nikhil Vytla, Luke Stark
We present Queer in AI as a case study for community-led participatory design in AI.
no code implementations • 23 Jul 2022 • Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew Gombolay
Stereotypes, bias, and discrimination have been extensively documented in Machine Learning (ML) methods such as Computer Vision (CV) [18, 80], Natural Language Processing (NLP) [6], or both, in the case of large image and caption models such as OpenAI CLIP [14].
no code implementations • 21 Sep 2021 • Ashwin, William Agnew, Umut Pajaro, Hetvi Jethwani, Arjun Subramonian
Trustworthy artificial intelligence (AI) has become an important topic because trust in AI systems and their creators has been lost.
1 code implementation • NeurIPS 2021 • Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, Michelle Bao
We present extensive textual evidence and identify key themes in the definitions and operationalization of these values.
no code implementations • EMNLP 2021 • Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, Matt Gardner
We conclude with recommendations for how to create and document web-scale datasets from a scrape of the internet.
1 code implementation • 28 Sep 2020 • William Agnew, Christopher Xie, Aaron Walsman, Octavian Murad, Caelen Wang, Pedro Domingos, Siddhartha Srinivasa
By using these priors over the physical properties of objects, our system improves not only reconstruction quality as measured by standard visual metrics, but also the performance of model-based control on a variety of robotic manipulation tasks in challenging, cluttered environments.
no code implementations • 3 Mar 2020 • William Agnew, Pedro Domingos
Current deep reinforcement learning (RL) approaches incorporate minimal prior knowledge about the environment, limiting computational and sample efficiency.
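To illustrate how little prior knowledge a standard model-free agent encodes, here is a minimal tabular Q-learning sketch on a toy corridor environment; the environment, hyperparameters, and helper names are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: tabular Q-learning on a toy 1-D corridor, showing
# an agent that starts with no knowledge of the environment's dynamics.
import random

N_STATES, GOAL = 6, 5          # corridor states 0..5; reward upon reaching 5
ACTIONS = (-1, +1)             # move left / move right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(state: int) -> int:
    """Pick the highest-value action, breaking ties randomly."""
    best = max(Q[state])
    return random.choice([a for a in (0, 1) if Q[state][a] == best])

for _ in range(300):
    state, steps = 0, 0
    while state != GOAL and steps < 100:
        # epsilon-greedy exploration: no prior knowledge guides the agent
        action = random.randrange(2) if random.random() < EPSILON else greedy(state)
        nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # standard Q-learning update
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state, steps = nxt, steps + 1

print("greedy policy:", "".join("LR"[greedy(s)] for s in range(N_STATES)))
```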