no code implementations • ICML 2020 • Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt
We introduce a general approach, called test-time training, for improving the performance of predictive models when training and test data come from different distributions.
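The test-time training recipe admits a compact sketch: the network shares a feature extractor between the main task and a self-supervised auxiliary task (the paper uses rotation prediction), and at test time only the self-supervised loss is used to update the shared features before predicting. Below is a minimal PyTorch sketch; the module names `features`, `ssl_head`, and `main_head` are illustrative placeholders, not the authors' code.

```python
# Minimal sketch of test-time training with a rotation-prediction
# auxiliary task. `features`, `ssl_head`, `main_head` are assumed
# nn.Modules: shared extractor, 4-way rotation classifier, main classifier.
import copy
import torch
import torch.nn.functional as F

def rotations(x):
    """All four 90-degree rotations of an NCHW image batch, with labels."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return rotated, labels

def test_time_train(features, ssl_head, main_head, x, steps=10, lr=1e-3):
    f = copy.deepcopy(features)  # adapt a copy, starting from trained weights
    opt = torch.optim.SGD(f.parameters(), lr=lr)
    for _ in range(steps):
        rx, ry = rotations(x)
        loss = F.cross_entropy(ssl_head(f(rx)), ry)  # self-supervised loss only
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return main_head(f(x))  # predict with the adapted features
```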
no code implementations • RANLP 2021 • John Miller, Emanuel Pariasca, Cesar Beltran Castañon
From there we experiment with several models and approaches, including a lexical donor model with an augmented wordlist.
no code implementations • NAACL (AmericasNLP) 2021 • Adriano Ingunza Torres, John Miller, Arturo Oncevay, Roberto Zariquiey Biondi
We represent the complexity of Yine (Arawak) morphology with a finite state transducer (FST) based morphological analyzer.
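To make the FST idea concrete, here is a toy transducer in Python mapping surface forms to stem-plus-gloss analyses; the stems, suffixes, and glosses are invented English-like examples for illustration, not Yine morphology (real analyzers are typically compiled with toolkits such as foma or HFST).

```python
# Toy finite-state transducer: state -> [(input, output, next_state), ...].
# State 2 is the single final state; "" marks an epsilon (empty) input.
def build_fst():
    return {
        0: [("walk", "walk", 1), ("talk", "talk", 1)],              # stems
        1: [("ed", "+PST", 2), ("s", "+3SG", 2), ("", "+INF", 2)],  # suffixes
    }

def analyze(fst, word, state=0, output=""):
    """Depth-first search over all transducer paths that consume the word."""
    if state == 2 and word == "":
        yield output
    for inp, out, nxt in fst.get(state, []):
        if word.startswith(inp) and (inp or nxt != state):
            yield from analyze(fst, word[len(inp):], nxt, output + out)

fst = build_fst()
print(list(analyze(fst, "walked")))  # ['walk+PST']
print(list(analyze(fst, "talks")))   # ['talk+3SG']
```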
no code implementations • 30 Oct 2021 • John Miller, Elahe Soltanaghai, Raewyn Duvall, Jeff Chen, Vikram Bhat, Nuno Pereira, Anthony Rowe
In this paper, we present LocAR, an infrastructure-free 6-degrees-of-freedom (6DoF) localization system for AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system.
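One way to see the range-based core of such a system: given pairwise range measurements between users, a relative coordinate system (defined only up to rotation and translation) can be recovered with classical multidimensional scaling. The sketch below shows just that step; LocAR additionally fuses per-user motion estimates, which this sketch omits.

```python
# Recover relative 2D positions from a pairwise range matrix via
# classical multidimensional scaling (MDS).
import numpy as np

def relative_positions(D, dim=2):
    """D: (n, n) symmetric matrix of ranges (meters) between n users."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]       # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three users placed on a 3-4-5 triangle; exact (noise-free) ranges.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(relative_positions(D))  # the same triangle, up to a rigid motion
```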
1 code implementation • NeurIPS 2021 • Frances Ding, Moritz Hardt, John Miller, Ludwig Schmidt
Our primary contribution is a suite of new datasets derived from US Census surveys that extend the existing data ecosystem for research on fair machine learning.
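The datasets ship with the authors' folktables package (`pip install folktables`); the sketch below follows the interface in the package's documentation, so treat the exact names as assumptions if the API has since changed.

```python
# Load one of the paper's Census-derived prediction tasks via folktables.
from folktables import ACSDataSource, ACSIncome

data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# Features, binary income label, and a group attribute for fairness analyses.
features, label, group = ACSIncome.df_to_numpy(acs_data)
print(features.shape, label.mean())
```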
1 code implementation • 20 Jul 2021 • Mohammadhossein Toutiaee, Xiaochuan Li, Yogesh Chaudhari, Shophine Sivaraja, Aishwarya Venkataraj, Indrajeet Javeri, Yuan Ke, Ismailcem Arpinar, Nicole Lazar, John Miller
We demonstrate significant enhancement in forecasting accuracy for a COVID-19 dataset, with improvements over the GCN-LSTM model of up to 64.58% (59.18% on average) on national-level data, and up to 58.79% (52.40% on average) on state-level data.
1 code implementation • 9 Jul 2021 • John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, Ludwig Schmidt
For machine learning systems to be reliable, we must understand their performance in unseen, out-of-distribution environments.
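A recurring analysis in this line of work regresses out-of-distribution accuracy on in-distribution accuracy across many models after a probit transform, under which the relationship frequently turns out to be linear. A minimal sketch with made-up placeholder accuracies, not the paper's measurements:

```python
# Fit the "accuracy on the line" relationship under probit scaling.
import numpy as np
from scipy.stats import linregress, norm

id_acc = np.array([0.70, 0.80, 0.88, 0.92, 0.95])   # in-distribution
ood_acc = np.array([0.45, 0.58, 0.70, 0.77, 0.83])  # out-of-distribution

slope, intercept, r, *_ = linregress(norm.ppf(id_acc), norm.ppf(ood_acc))
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r^2={r**2:.3f}")
```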
no code implementations • 17 Feb 2021 • John Miller, Juan C. Perdomo, Tijana Zrnic
In performative prediction, predictions guide decision-making and hence can influence the distribution of future data.
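For reference, the two central objects of the performative prediction framework (introduced by Perdomo et al., 2020, on which this paper builds) can be written as follows, where deploying a model θ induces the data distribution D(θ):

```latex
% Performative risk: loss measured on the distribution the model induces.
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\,\bigl[\ell(z;\theta)\bigr]

% Performatively stable point: optimal for the distribution it induces.
\theta_{\mathrm{PS}} \;\in\; \operatorname*{arg\,min}_{\theta}\;
  \mathbb{E}_{z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\,\bigl[\ell(z;\theta)\bigr]
```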
no code implementations • 4 Jan 2021 • Mohammadhossein Toutiaee, John Miller
We utilize a Gaussian process as a surrogate to capture the response surface of a complex model, incorporating two parts: interpolated values modeled by a stationary Gaussian process Z governed by a prior covariance function, and a mean function mu that captures the known trends in the underlying model.
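Written out, the surrogate described above decomposes into a known mean trend plus a stationary zero-mean GP; the squared-exponential kernel below is one standard choice of prior covariance, not necessarily the paper's:

```latex
% Surrogate = known mean trend + stationary zero-mean Gaussian process.
f(x) \;=\; \mu(x) + Z(x), \qquad Z \sim \mathcal{GP}\bigl(0,\, k(\cdot,\cdot)\bigr)

% A common stationary prior covariance (squared-exponential kernel).
k(x, x') \;=\; \sigma^2 \exp\!\Bigl(-\tfrac{\lVert x - x' \rVert^2}{2\ell^2}\Bigr)
```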
no code implementations • ICML 2020 • John Miller, Karl Krauth, Benjamin Recht, Ludwig Schmidt
We build four new test sets for the Stanford Question Answering Dataset (SQuAD) and evaluate the ability of question-answering systems to generalize to new data.
no code implementations • NeurIPS 2019 • Rebecca Roelofs, Vaishaal Shankar, Benjamin Recht, Sara Fridovich-Keil, Moritz Hardt, John Miller, Ludwig Schmidt
By systematically comparing the public ranking with the final ranking, we assess how much participants adapted to the holdout set over the course of a competition.
no code implementations • ICML 2020 • John Miller, Smitha Milli, Moritz Hardt
Moreover, we show a similar result holds for designing cost functions that satisfy the requirements of previous work.
3 code implementations • 29 Sep 2019 • Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt
In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions.
no code implementations • 25 Sep 2019 • Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt
We introduce a general approach, called test-time training, for improving the performance of predictive models when test and training data come from different distributions.
no code implementations • NeurIPS 2019 • Horia Mania, John Miller, Ludwig Schmidt, Moritz Hardt, Benjamin Recht
Excessive reuse of test data has become commonplace in today's machine learning workflows.
no code implementations • WS 2018 • Alonso Vasquez, Renzo Ego Aguirre, Candy Angulo, John Miller, Claudia Villanueva, Željko Agić, Roberto Zariquiey, Arturo Oncevay
We present an initial version of the Universal Dependencies (UD) treebank for Shipibo-Konibo, the first South American, Amazonian, Panoan and Peruvian language with a resource built under UD.
no code implementations • 25 Aug 2018 • Smitha Milli, John Miller, Anca D. Dragan, Moritz Hardt
Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule.
no code implementations • ICLR 2019 • John Miller, Moritz Hardt
Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks.
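The stability condition has a concrete operational form: for a vanilla RNN h_t = tanh(W h_{t-1} + U x_t), the dynamics are stable whenever the spectral norm of W is below 1, since tanh is 1-Lipschitz. Below is a minimal PyTorch sketch of enforcing this by projection after each optimizer step; the 0.99 threshold is an illustrative choice.

```python
# Project the recurrent weight matrix back onto the spectral-norm ball,
# keeping the RNN h_t = tanh(W h_{t-1} + U x_t) in the stable regime.
import torch

def project_spectral_norm(W, limit=0.99):
    """Rescale W in place so its largest singular value is at most `limit`."""
    with torch.no_grad():
        sigma = torch.linalg.matrix_norm(W, ord=2)  # largest singular value
        if sigma > limit:
            W.mul_(limit / sigma)

W = torch.nn.Parameter(torch.randn(64, 64))
# ... call this after optimizer.step() in the training loop:
project_spectral_norm(W)
print(torch.linalg.matrix_norm(W, ord=2))  # <= 0.99
```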
7 code implementations • ICLR 2018 • Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O. Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, John Miller
We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system.
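As a flavor of what "fully-convolutional" means here, the sketch below shows a gated (GLU) convolution block with a residual connection, the kind of unit such TTS stacks are assembled from; the channel count, kernel size, and the sqrt(0.5) residual scaling are common choices in this family rather than the paper's exact configuration.

```python
# A gated convolution block: Conv1d producing 2x channels, split into a
# value half and a sigmoid gate (GLU), plus a rescaled residual connection.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    def __init__(self, channels, kernel_size=5, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size,
                              padding=kernel_size // 2)  # length-preserving

    def forward(self, x):                # x: (batch, channels, time)
        h = F.glu(self.conv(self.dropout(x)), dim=1)  # gated linear unit
        return (x + h) * math.sqrt(0.5)  # residual, rescaled to keep variance

block = GatedConvBlock(channels=64)
print(block(torch.randn(8, 64, 100)).shape)  # torch.Size([8, 64, 100])
```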
1 code implementation • EMNLP 2017 • Jonathan Raiman, John Miller
Rapid progress has been made towards question answering (QA) systems that can extract answers from text.
no code implementations • WS 2017 • John Miller, Kathleen McCoy
We envisioned responsive, generic hierarchical text summarization, with summaries organized by section and paragraph based on hierarchical-structure topic models.
1 code implementation • NeurIPS 2017 • Sercan Arik, Gregory Diamos, Andrew Gibiansky, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, Yanqi Zhou
We introduce Deep Voice 2, which is based on a pipeline similar to Deep Voice 1 but constructed with higher-performance building blocks, and demonstrate a significant audio quality improvement over Deep Voice 1.
3 code implementations • ICML 2017 • Sercan O. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi
We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks.
1 code implementation • EMNLP 2015 • Kelvin Guu, John Miller, Percy Liang
Path queries on a knowledge graph can be used to answer compositional questions such as "What languages are spoken by people living in Lisbon?".
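Such path queries can be answered compositionally in vector space: with TransE-style embeddings, traversing a relation adds its vector, and candidate answers are ranked by distance to the resulting query vector. A sketch with random placeholder embeddings and invented entity and relation names:

```python
# Compositional path-query answering with additive (TransE-style) traversal.
import numpy as np

rng = np.random.default_rng(0)
entities = {e: rng.normal(size=16) for e in ["lisbon", "ana", "portuguese"]}
relations = {r: rng.normal(size=16) for r in ["lives_in_inv", "speaks"]}

def traverse(source, path):
    """Start from an entity vector; add one relation vector per hop."""
    v = entities[source].copy()
    for r in path:
        v = v + relations[r]
    return v

def rank_answers(query_vec):
    # Smaller Euclidean distance = better-ranked candidate answer.
    return sorted(entities, key=lambda e: np.linalg.norm(query_vec - entities[e]))

# "What languages are spoken by people living in Lisbon?"
# lisbon --lives_in_inv--> (people in Lisbon) --speaks--> (languages)
q = traverse("lisbon", ["lives_in_inv", "speaks"])
print(rank_answers(q))
```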