Search Results for author: Megha Srivastava

Found 11 papers, 7 papers with code

Question Generation for Adaptive Education

1 code implementation ACL 2021 Megha Srivastava, Noah Goodman

Intelligent and adaptive online education systems aim to make high-quality education available for a diverse range of students.

Knowledge Tracing · Question Generation · +2

LILA: Language-Informed Latent Actions

1 code implementation 5 Nov 2021 Siddharth Karamcheti, Megha Srivastava, Percy Liang, Dorsa Sadigh

We introduce Language-Informed Latent Actions (LILA), a framework for learning natural language interfaces in the context of human-robot collaboration.

Imitation Learning

Evaluating Human-Language Model Interaction

1 code implementation 19 Dec 2022 Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E. Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael Bernstein, Percy Liang

To evaluate human-LM interaction, we develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems and dimensions to consider when designing evaluation metrics.

Language Modelling · Question Answering

Assistive Teaching of Motor Control Tasks to Humans

1 code implementation 25 Nov 2022 Megha Srivastava, Erdem Biyik, Suvir Mirchandani, Noah Goodman, Dorsa Sadigh

In this paper, we focus on the problem of assistive teaching of motor control tasks such as parking a car or landing an aircraft.

Reinforcement Learning (RL)

Generating Language Corrections for Teaching Physical Control Tasks

1 code implementation 12 Jun 2023 Megha Srivastava, Noah Goodman, Dorsa Sadigh

AI assistance continues to help advance applications in education, from language learning to intelligent tutoring systems, yet current methods for providing feedback to students are still quite limited.


Fairness Without Demographics in Repeated Loss Minimization

1 code implementation ICML 2018 Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang

Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss.

Fairness
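
The abstract above points to how averaging dilutes minority-group losses. As a rough, hypothetical illustration (the group sizes and loss values below are made up, and the worst-group objective shown is one distributionally robust alternative, not necessarily the paper's exact method):

import numpy as np

# Hypothetical per-example losses for a large majority group and a small minority group.
majority_losses = np.array([0.2, 0.3, 0.25, 0.2, 0.3, 0.25, 0.2, 0.3])
minority_losses = np.array([0.9, 1.1])

all_losses = np.concatenate([majority_losses, minority_losses])

# Average loss: the minority group's high losses are diluted by the
# much larger majority group, so the model looks acceptable on average.
average_loss = all_losses.mean()

# Worst-group loss: evaluate each group separately and take the maximum,
# so the objective cannot ignore the worse-off group.
worst_group_loss = max(majority_losses.mean(), minority_losses.mean())

print(f"average loss:     {average_loss:.2f}")   # 0.40
print(f"worst-group loss: {worst_group_loss:.2f}")  # 1.00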

The Effect of Learning Strategy versus Inherent Architecture Properties on the Ability of Convolutional Neural Networks to Develop Transformation Invariance

no code implementations31 Oct 2018 Megha Srivastava, Kalanit Grill-Spector

Because training artificial neural networks from scratch is similar to showing novel objects to humans, we seek to understand the factors influencing the tolerance of CNNs to spatial transformations.

Object Recognition

Robustness to Spurious Correlations via Human Annotations

1 code implementation ICML 2020 Megha Srivastava, Tatsunori Hashimoto, Percy Liang

The reliability of machine learning systems critically assumes that the associations between features and labels remain similar between training and test distributions.

Common Sense Reasoning

An Empirical Analysis of Backward Compatibility in Machine Learning Systems

no code implementations11 Aug 2020 Megha Srivastava, Besmira Nushi, Ece Kamar, Shital Shah, Eric Horvitz

In many applications of machine learning (ML), updates are performed with the goal of enhancing model performance.

BIG-bench Machine Learning

Machine learning for modular multiplication

no code implementations29 Feb 2024 Kristin Lauter, Cathy Yuanchen Li, Krystal Maughan, Rachel Newton, Megha Srivastava

Motivated by cryptographic applications, we investigate two machine learning approaches to modular multiplication: namely circular regression and a sequence-to-sequence transformer model.

regression
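
A minimal sketch of the circular-regression encoding mentioned above, under assumptions not taken from the paper (the modulus N, multiplier a, and decode helper are illustrative placeholders): each residue is mapped to a point on the unit circle, so targets near the modulus wrap around to targets near zero instead of appearing far apart.

import numpy as np

N = 257          # hypothetical modulus
a = 1234 % N     # hypothetical fixed multiplier
x = np.arange(N) # inputs
y = (a * x) % N  # modular-multiplication targets

# Circular regression idea: represent each residue y as (cos, sin) of its
# angle on the unit circle, then regress the two coordinates.
angles = 2 * np.pi * y / N
targets = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def decode(cos_val, sin_val, modulus=N):
    # Map a predicted (cos, sin) pair back to a residue in [0, modulus).
    angle = np.arctan2(sin_val, cos_val) % (2 * np.pi)
    return int(round(angle * modulus / (2 * np.pi))) % modulus

# Sanity check: encoding followed by decoding recovers every residue.
assert all(decode(c, s) == yi for (c, s), yi in zip(targets, y))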

Optimistic Verifiable Training by Controlling Hardware Nondeterminism

no code implementations14 Mar 2024 Megha Srivastava, Simran Arora, Dan Boneh

The increasing compute demands of AI systems have led to the emergence of services that train models on behalf of clients lacking the necessary resources.

Data Poisoning
