no code implementations • ICML 2020 • Gaurush Hiranandani, Warut Vijitbenjaronk, Sanmi Koyejo, Prateek Jain
Modern recommendation and notification systems must be robust to data imbalance, limitations on the number of recommendations/notifications, and heterogeneous engagement profiles across users.
no code implementations • 20 Jul 2023 • Dhruv Pai, Andres Carranza, Rylan Schaeffer, Arnuv Tandon, Sanmi Koyejo
We present FACADE, a novel probabilistic and geometric framework designed for unsupervised mechanistic anomaly detection in deep neural networks.
no code implementations • 20 Jul 2023 • Andres Carranza, Dhruv Pai, Rylan Schaeffer, Arnuv Tandon, Sanmi Koyejo
As the capabilities of large machine learning models continue to grow, and as the autonomy afforded to such models continues to expand, the spectre of a new adversary looms: the models themselves.
no code implementations • 20 Jul 2023 • Rylan Schaeffer, Kateryna Pistunova, Samar Khanna, Sarthak Consul, Sanmi Koyejo
We find that logically invalid reasoning prompts do indeed achieve performance gains on BBH tasks similar to those of logically valid reasoning prompts.
no code implementations • 24 Jun 2023 • Alycia Lee, Brando Miranda, Sudharsan Sundar, Sanmi Koyejo
Current trends in pre-training capable Large Language Models (LLMs) mostly focus on scaling model and dataset size.
no code implementations • 24 Jun 2023 • Brando Miranda, Patrick Yu, Saumya Goyal, Yu-Xiong Wang, Sanmi Koyejo
Using this analysis, we demonstrate the following: (1) when the formal diversity of a dataset is low, pre-training (PT) beats MAML on average, and (2) when the formal diversity is high, MAML beats PT on average.
no code implementations • 22 Jun 2023 • Berivan Isik, Francesco Pase, Deniz Gunduz, Sanmi Koyejo, Tsachy Weissman, Michele Zorzi
The high communication cost of sending model updates from the clients to the server is a significant bottleneck for scalable federated learning (FL).
no code implementations • 20 Jun 2023 • Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in capabilities, capturing the interest of practitioners and the public alike.
no code implementations • 1 Jun 2023 • Boxiang Lyu, Zhe Feng, Zachary Robertson, Sanmi Koyejo
We study the design of loss functions for click-through rate (CTR) prediction to optimize (social) welfare in advertising auctions.
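The snippet above only states the goal; as a minimal illustration of the welfare-weighting idea (not the paper's actual loss construction), one can weight each impression's log-loss by the advertiser's bid, so that CTR estimation error is penalized most where welfare is at stake:

```python
import numpy as np

def welfare_weighted_log_loss(p_hat, clicks, bids):
    """Bid-weighted binary cross-entropy for CTR estimation.

    p_hat:  predicted click probabilities in (0, 1)
    clicks: observed click indicators in {0, 1}
    bids:   advertiser bids used as per-example welfare weights
    """
    eps = 1e-12
    ll = clicks * np.log(p_hat + eps) + (1 - clicks) * np.log(1 - p_hat + eps)
    return -np.mean(bids * ll)

# Toy usage: mis-estimating CTR on a high-bid impression costs more.
p_hat = np.array([0.9, 0.2, 0.5])
clicks = np.array([1, 0, 1])
bids = np.array([5.0, 1.0, 10.0])
print(welfare_weighted_log_loss(p_hat, clicks, bids))
```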
1 code implementation • 28 Apr 2023 • Rylan Schaeffer, Brando Miranda, Sanmi Koyejo
Recent work claims that large language models display emergent abilities: abilities that are not present in smaller-scale models but are present in larger-scale models.
no code implementations • 5 Apr 2023 • Mercy Nyamewaa Asiedu, Awa Dieng, Abigail Oppong, Maria Nagawa, Sanmi Koyejo, Katherine Heller
With growing machine learning (ML) applications in healthcare, there have been calls for fairness in ML to understand and mitigate the ethical concerns these systems may pose.
no code implementations • 1 Mar 2023 • Pedro Cisneros-Velarde, Sanmi Koyejo
Nash Q-learning may be considered one of the first and best-known algorithms in multi-agent reinforcement learning (MARL) for learning policies that constitute a Nash equilibrium of an underlying general-sum Markov game.
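For reference, Nash Q-learning replaces the max operator of single-agent Q-learning with the value of a stage-game Nash equilibrium. In the standard formulation (Hu and Wellman), agent i's update is

```latex
Q^i_{t+1}(s, a^1, \dots, a^n)
  = (1 - \alpha_t)\, Q^i_t(s, a^1, \dots, a^n)
  + \alpha_t \bigl[ r^i_t + \gamma\, \mathrm{NashQ}^i_t(s') \bigr],
```

where NashQ^i_t(s') is agent i's payoff at a Nash equilibrium of the stage game defined by the agents' current Q-values at the next state s'.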
no code implementations • 21 Dec 2022 • Ibrahim Alabdulmohsin, Nicole Chiou, Alexander D'Amour, Arthur Gretton, Sanmi Koyejo, Matt J. Kusner, Stephen R. Pfohl, Olawale Salaudeen, Jessica Schrouff, Katherine Tsai
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain, and unlabeled data from the target.
no code implementations • 31 Oct 2022 • Katherine Tsai, Boxin Zhao, Sanmi Koyejo, Mladen Kolar
Joint multimodal functional data acquisition, where functional data from multiple modes are measured simultaneously from the same subject, has emerged as an exciting modern approach enabled by recent engineering breakthroughs in the neurological and biological sciences.
no code implementations • 4 Oct 2022 • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople
Empirical results on three datasets with different modalities and varying numbers of clients further demonstrate that our approach mitigates a broad class of backdoor attacks with a negligible cost on the model utility.
no code implementations • 2 Aug 2022 • Brando Miranda, Patrick Yu, Yu-Xiong Wang, Sanmi Koyejo
This novel insight contextualizes claims that, under a fair comparison, transfer learning solutions are better than meta-learned solutions in the low-diversity regime.
no code implementations • 31 May 2022 • Pedro Cisneros-Velarde, Boxiang Lyu, Sanmi Koyejo, Mladen Kolar
Although parallelism has been extensively used in reinforcement learning (RL), the quantitative effects of parallel exploration are not well understood theoretically.
1 code implementation • NAACL 2022 • Yong Xie, Dakuo Wang, Pin-Yu Chen, JinJun Xiong, Sijia Liu, Sanmi Koyejo
More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements.
1 code implementation • 31 Jan 2022 • Alexander Soen, Ibrahim Alabdulmohsin, Sanmi Koyejo, Yishay Mansour, Nyalleng Moorosi, Richard Nock, Ke Sun, Lexing Xie
We introduce a new family of techniques to post-process ("wrap") a black-box classifier in order to reduce its bias.
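The paper's wrappers are more sophisticated than simple thresholding; as a hedged baseline sketch of what post-processing a black-box classifier can look like, the wrapper below applies group-dependent quantile thresholds to the classifier's scores (the function name and target_rate are illustrative, not the paper's technique):

```python
import numpy as np

def wrap_with_group_thresholds(scores, groups, target_rate=0.5):
    """Post-process black-box scores with per-group thresholds.

    Picks, for each group, the threshold whose positive rate matches
    target_rate, leaving the underlying classifier untouched.
    """
    preds = np.zeros_like(scores, dtype=int)
    for g in np.unique(groups):
        mask = groups == g
        # Quantile threshold -> roughly target_rate positives per group.
        thresh = np.quantile(scores[mask], 1.0 - target_rate)
        preds[mask] = (scores[mask] >= thresh).astype(int)
    return preds

scores = np.random.rand(100)           # black-box classifier scores
groups = np.random.randint(0, 2, 100)  # binary group membership
preds = wrap_with_group_thresholds(scores, groups, target_rate=0.3)
```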
1 code implementation • 24 Dec 2021 • Brando Miranda, Yu-Xiong Wang, Sanmi Koyejo
Recent work has suggested that a good embedding is all we need to solve many few-shot learning benchmarks.
no code implementations • 24 Dec 2021 • Brando Miranda, Yu-Xiong Wang, Sanmi Koyejo
We hypothesize that the diversity coefficient of the few-shot learning benchmark is predictive of whether meta-learning solutions will succeed or not.
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics constitute a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
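The canonical algorithm in this setting is federated averaging (FedAvg): in each round, participating clients run local SGD from the current global model, and the server takes a data-size-weighted average of the results. A minimal NumPy sketch, with least-squares clients standing in for arbitrary local objectives:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, steps=10):
    """A few local SGD steps on a least-squares objective."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: broadcast, local training, weighted average."""
    updates, weights = [], []
    for X, y in clients:
        updates.append(local_sgd(w_global.copy(), X, y))
        weights.append(len(y))
    weights = np.array(weights, dtype=float) / sum(weights)
    return sum(wt * u for wt, u in zip(weights, updates))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
w = np.zeros(3)
for _ in range(50):
    w = fedavg_round(w, clients)
```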
no code implementations • 10 Jul 2020 • Siddharth Biswal, Peiye Zhuang, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Jimeng Sun
EMIXER is a conditional generative adversarial model that 1) generates an image based on a label, 2) encodes the image to a hidden embedding, 3) produces the corresponding text via a hierarchical decoder from the image embedding, and 4) uses a joint discriminator to assess both the image and the corresponding text.
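A skeletal PyTorch rendering of that four-part layout may help fix ideas; the module shapes, sizes, and names below are placeholders, not the released architecture (the paper's decoder is hierarchical, whereas this sketch uses a single GRU):

```python
import torch
import torch.nn as nn

class Emixer(nn.Module):
    """Toy version of the 4-component layout: label->image generator,
    image encoder, embedding->text decoder, joint discriminator."""
    def __init__(self, n_labels=10, z_dim=64, img_dim=256,
                 emb_dim=128, vocab=1000):
        super().__init__()
        # 1) generate an image conditioned on a label
        self.generator = nn.Sequential(
            nn.Linear(n_labels + z_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Tanh())
        # 2) encode the image to a hidden embedding
        self.encoder = nn.Sequential(nn.Linear(img_dim, emb_dim), nn.ReLU())
        # 3) decode text from the image embedding
        self.decoder = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.to_vocab = nn.Linear(emb_dim, vocab)
        # 4) joint discriminator over (image, text-embedding) pairs
        self.discriminator = nn.Sequential(
            nn.Linear(img_dim + emb_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, label_onehot, z, text_len=16):
        img = self.generator(torch.cat([label_onehot, z], dim=-1))
        emb = self.encoder(img)
        steps = emb.unsqueeze(1).repeat(1, text_len, 1)
        hidden, _ = self.decoder(steps)
        text_logits = self.to_vocab(hidden)
        d_score = self.discriminator(
            torch.cat([img, hidden.mean(dim=1)], dim=-1))
        return img, text_logits, d_score

model = Emixer()
label = torch.zeros(4, 10)
label[:, 3] = 1.0
img, text_logits, d_score = model(label, torch.randn(4, 64))
```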
no code implementations • 24 Jun 2020 • Forest Yang, Moustapha Cisse, Sanmi Koyejo
In algorithmically fair prediction problems, a standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
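Concretely, checking such equality means measuring a fairness metric per group and comparing across groups, where an individual may fall in several overlapping groups (e.g., race and sex) at once. A hedged illustration using true-positive-rate disparity as the metric:

```python
import numpy as np

def max_group_disparity(y_true, y_pred, group_masks):
    """Max gap in true positive rate across (possibly overlapping) groups.

    group_masks: list of boolean arrays; an individual can be in many.
    """
    tprs = []
    for mask in group_masks:
        pos = mask & (y_true == 1)
        if pos.any():
            tprs.append(y_pred[pos].mean())
    return max(tprs) - min(tprs)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
race = rng.integers(0, 3, 200)
sex = rng.integers(0, 2, 200)
masks = [race == r for r in range(3)] + [sex == s for s in range(2)]
print(max_group_disparity(y_true, y_pred, masks))  # overlapping groups
```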
8 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
no code implementations • 13 Jul 2019 • Peiye Zhuang, Alexander G. Schwing, Sanmi Koyejo
Thus, our results suggest that data augmentation via synthesis is a promising approach to address the limited availability of fMRI data, and to improve the quality of predictive fMRI models.
no code implementations • CVPR 2019 • Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, Alexander Schwing
Generative adversarial nets (GANs) and variational auto-encoders have significantly improved our distribution modeling capabilities, showing promise for dataset augmentation, image-to-image translation and feature learning.
1 code implementation • ICML 2020 • Cong Xie, Sanmi Koyejo, Indranil Gupta
We propose Zeno++, a new robust asynchronous Stochastic Gradient Descent (SGD) procedure which tolerates Byzantine failures of the workers.
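The core mechanism is a validation-based descent score: the server accepts a candidate gradient only if, to first order, it is predicted to decrease the loss on a small trusted validation batch. A sketch in that spirit, with illustrative constants:

```python
import numpy as np

def zeno_pp_accept(g, v, gamma=0.1, rho=0.001, eps=0.05):
    """First-order descent score in the spirit of Zeno++.

    g: candidate worker gradient (possibly Byzantine)
    v: gradient on a small trusted validation batch
    Accept g only if its estimated descent, minus a magnitude
    penalty, clears the threshold.
    """
    score = gamma * np.dot(v, g) - rho * np.dot(g, g)
    return score >= -gamma * eps

v = np.array([1.0, -0.5])          # trusted validation gradient
honest = np.array([0.9, -0.4])     # roughly aligned -> accepted
byzantine = np.array([-5.0, 5.0])  # opposes descent -> rejected
print(zeno_pp_accept(honest, v), zeno_pp_accept(byzantine, v))
```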
no code implementations • 16 Mar 2019 • Cong Xie, Sanmi Koyejo, Indranil Gupta
We consider distributed on-device learning with limited communication and security requirements.
3 code implementations • 10 Mar 2019 • Cong Xie, Sanmi Koyejo, Indranil Gupta
Recently, new defense techniques have been developed to tolerate Byzantine failures for distributed machine learning.
1 code implementation • 10 Mar 2019 • Cong Xie, Sanmi Koyejo, Indranil Gupta
Federated learning enables training on a massive number of edge devices.
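In the asynchronous variant of this setting, the server cannot wait to average all clients, so a common alternative (sketched below with an illustrative mixing weight and staleness decay, not necessarily the paper's exact rule) merges each client model as it arrives:

```python
import numpy as np

def async_server_update(w_global, w_client, alpha=0.5, staleness=0):
    """Mix in one client's model as it arrives, down-weighting stale ones."""
    alpha_t = alpha / (1 + staleness)  # simple polynomial staleness decay
    return (1 - alpha_t) * w_global + alpha_t * w_client

w = np.zeros(3)
w = async_server_update(w, np.array([1.0, 1.0, 1.0]), staleness=0)
w = async_server_update(w, np.array([2.0, 0.0, 2.0]), staleness=3)
```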
no code implementations • ICML 2020 • Forest Yang, Sanmi Koyejo
Our analysis continues by showing that previously proposed hinge-like top-k surrogate losses are not top-k calibrated, and suggests that no convex hinge loss is top-k calibrated.
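To make "hinge-like top-k surrogate" concrete, one such surrogate (up to variants in the literature) penalizes the margin between the target score and the k-th largest non-target score:

```python
import numpy as np

def topk_hinge_loss(scores, y, k=5):
    """Hinge-like top-k surrogate: loss is zero only when the target
    score beats the k-th largest non-target score by a unit margin."""
    non_target = np.delete(scores, y)
    kth_largest = np.sort(non_target)[-k]
    return max(0.0, 1.0 + kth_largest - scores[y])

scores = np.random.randn(100)  # per-class scores
print(topk_hinge_loss(scores, y=7, k=5))
```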
no code implementations • 1 Sep 2018 • Yogatheesan Varatharajah, Brent Berry, Sanmi Koyejo, Ravishankar Iyer
However, those approaches have failed to account for the variability among participants that is becoming increasingly evident as a result of recent clinical-trial-based studies.