Search Results for author: Colin White

Found 16 papers, 8 papers with code

NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy

1 code implementation • ICLR 2022 • Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri Zabergja, Shakiba Moradian, Mahmoud Safari, Kaicheng Yu, Frank Hutter

The release of tabular benchmarks, such as NAS-Bench-101 and NAS-Bench-201, has significantly lowered the computational overhead for conducting scientific research in neural architecture search (NAS).

Image Classification Neural Architecture Search +2

NAS-Bench-x11 and the Power of Learning Curves

1 code implementation • NeurIPS 2021 • Shen Yan, Colin White, Yash Savani, Frank Hutter

While early research in neural architecture search (NAS) required extreme computational resources, the recent releases of tabular and surrogate benchmarks have greatly increased the speed and reproducibility of NAS research.

Neural Architecture Search

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning

1 code implementation • 23 Jun 2021 • Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger

In this work, we address this issue by releasing XAI-Bench: a suite of synthetic datasets along with a library for benchmarking feature attribution algorithms.

How Powerful are Performance Predictors in Neural Architecture Search?

1 code implementation • NeurIPS 2021 • Colin White, Arber Zela, Binxin Ru, Yang Liu, Frank Hutter

Early methods in the rapidly developing field of neural architecture search (NAS) required fully training thousands of neural networks.

Neural Architecture Search

A Study on Encodings for Neural Architecture Search

2 code implementations • NeurIPS 2020 • Colin White, Willie Neiswanger, Sam Nolen, Yash Savani

First, we formally define architecture encodings and give a theoretical characterization of the scalability of the encodings we study. Then we identify the main encoding-dependent subroutines which NAS algorithms employ, running experiments to show which encodings work best with each subroutine for many popular algorithms.

Neural Architecture Search
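The adjacency-style encodings studied in this paper can be illustrated with a small sketch (a hypothetical helper, not the paper's released code): a cell's DAG is flattened into edge bits plus a one-hot label for the operation at each node.

```python
import numpy as np

def adjacency_encoding(adj, ops, op_vocab):
    """Flatten a DAG's adjacency matrix and one-hot op labels into one vector.

    Hypothetical illustration of an adjacency-based architecture encoding;
    the exact featurization differs per benchmark and per encoding variant.
    """
    adj = np.asarray(adj, dtype=float)
    # The upper triangle suffices for a DAG with a fixed topological node order.
    edge_bits = adj[np.triu_indices(adj.shape[0], k=1)]
    # One-hot encode the operation chosen at each node.
    op_bits = np.zeros((len(ops), len(op_vocab)))
    for i, op in enumerate(ops):
        op_bits[i, op_vocab.index(op)] = 1.0
    return np.concatenate([edge_bits, op_bits.ravel()])
```

For a 3-node cell over a 2-op vocabulary this yields a vector of length 3 (edges) + 3×2 (ops) = 9.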

Intra-Processing Methods for Debiasing Neural Networks

2 code implementations • NeurIPS 2020 • Yash Savani, Colin White, Naveen Sundar Govindarajulu

Intra-processing methods are designed specifically to debias large models which have been trained on a generic dataset and fine-tuned on a more specific task.

Face Recognition Fairness

Exploring the Loss Landscape in Neural Architecture Search

2 code implementations • 6 May 2020 • Colin White, Sam Nolen, Yash Savani

In this work, we show that (1) the simplest hill-climbing algorithm is a powerful baseline for NAS, and (2) when the noise in popular NAS benchmark datasets is reduced to a minimum, hill-climbing outperforms many popular state-of-the-art algorithms.

Combinatorial Optimization Denoising +3
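The hill-climbing baseline referenced above can be sketched in a few lines (a generic sketch, not the paper's implementation; `neighbors` and `evaluate` — e.g. validation accuracy — are assumed user-supplied):

```python
def hill_climb(init_arch, neighbors, evaluate, max_iters=100):
    """Simple hill climbing over an architecture space.

    Repeatedly move to the best-scoring neighbor of the current
    architecture; stop at a local optimum or after max_iters steps.
    """
    current, score = init_arch, evaluate(init_arch)
    for _ in range(max_iters):
        best_n, best_s = None, score
        for n in neighbors(current):
            s = evaluate(n)
            if s > best_s:
                best_n, best_s = n, s
        if best_n is None:        # no neighbor improves: local optimum
            return current, score
        current, score = best_n, best_s
    return current, score
```

The paper's point is that, once benchmark noise is controlled, even this greedy local search is hard to beat.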

BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search

3 code implementations • 25 Oct 2019 • Colin White, Willie Neiswanger, Yash Savani

Bayesian optimization (BO), which has long had success in hyperparameter optimization, has recently emerged as a very promising strategy for NAS when it is coupled with a neural predictor.

Hyperparameter Optimization Neural Architecture Search
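The "BO coupled with a neural predictor" loop can be sketched as follows (a minimal sketch of the idea, not the released BANANAS code; `train_arch` returns validation accuracy and `fit_predictor` is assumed to return an ensemble predictor mapping an architecture to a list of predictions):

```python
import random
import statistics

def bo_with_predictor(candidates, train_arch, fit_predictor,
                      n_init=10, n_iters=20):
    """BO-style NAS loop: fit a predictor on evaluated architectures,
    then pick the next architecture via an uncertainty-aware acquisition."""
    pool = list(candidates)
    random.shuffle(pool)
    history = [(a, train_arch(a)) for a in pool[:n_init]]
    pool = pool[n_init:]
    for _ in range(n_iters):
        if not pool:
            break
        predict = fit_predictor(history)
        def acquisition(arch):
            preds = predict(arch)
            # Optimistic acquisition: favor high mean and high uncertainty.
            return statistics.fmean(preds) + statistics.pstdev(preds)
        nxt = max(pool, key=acquisition)
        pool.remove(nxt)
        history.append((nxt, train_arch(nxt)))
    return max(history, key=lambda p: p[1])
```

The key design choice is replacing a Gaussian-process surrogate with a learned predictor whose ensemble spread stands in for posterior uncertainty.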

BANANAS: Bayesian Optimization with Neural Networks for Neural Architecture Search

no code implementations • 25 Sep 2019 • Colin White, Willie Neiswanger, Yash Savani

We develop a path-based encoding scheme to featurize the neural architectures that are used to train the neural network model.

Neural Architecture Search reinforcement-learning
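The path-based encoding mentioned in the abstract can be illustrated with a sketch (hypothetical code, not the authors' implementation): each feature bit corresponds to a possible input-to-output operation sequence, set if that path exists in the cell DAG. Here node 0 is assumed to be the input, the last node the output, and interior nodes carry operations from `op_vocab`.

```python
from itertools import product

def path_encoding(adj, ops, op_vocab, max_len=None):
    """One bit per possible input->output op-sequence in the cell DAG.

    adj[i][j] == 1 means an edge i -> j; ops[i] is the operation at
    interior node i (ops[0] and ops[-1] are ignored).
    """
    n = len(adj)
    found = set()
    def dfs(node, seq):
        if node == n - 1:
            found.add(tuple(seq))
            return
        for nxt in range(n):
            if adj[node][nxt]:
                label = [] if nxt == n - 1 else [ops[nxt]]
                dfs(nxt, seq + label)
    dfs(0, [])
    # Feature vector: one bit per op-sequence up to the longest interior path.
    max_len = max_len if max_len is not None else n - 2
    all_paths = [()] + [p for k in range(1, max_len + 1)
                        for p in product(op_vocab, repeat=k)]
    return [1 if p in found else 0 for p in all_paths]
```

Unlike adjacency encodings, this featurization is invariant to node relabeling, at the cost of a vector that grows with the number of possible paths.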

Clustering under Local Stability: Bridging the Gap between Worst-Case and Beyond Worst-Case Analysis

no code implementations • 19 May 2017 • Maria-Florina Balcan, Colin White

The typical idea is to design a clustering algorithm that outputs a near-optimal solution, provided the data satisfy a natural stability notion.

Robust Communication-Optimal Distributed Clustering Algorithms

no code implementations • 2 Mar 2017 • Pranjal Awasthi, Ainesh Bakshi, Maria-Florina Balcan, Colin White, David Woodruff

In this work, we study the $k$-median and $k$-means clustering problems when the data is distributed across many servers and can contain outliers.

Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems

no code implementations • 14 Nov 2016 • Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik, Colin White

We address this problem for clustering, max-cut, and other partitioning problems, such as integer quadratic programming, by designing computationally efficient and sample efficient learning algorithms which receive samples from an application-specific distribution over problem instances and learn a partitioning algorithm with high expected performance.

Learning Theory

Learning Combinatorial Functions from Pairwise Comparisons

no code implementations • 30 May 2016 • Maria-Florina Balcan, Ellen Vitercik, Colin White

However, for real-valued functions, cardinal labels might not be accessible, or it may be difficult for an expert to consistently assign real-valued labels over the entire set of examples.

Data Driven Resource Allocation for Distributed Learning

no code implementations • 15 Dec 2015 • Travis Dick, Mu Li, Venkata Krishna Pillutla, Colin White, Maria-Florina Balcan, Alex Smola

In distributed machine learning, data is dispatched to multiple machines for processing.

$k$-center Clustering under Perturbation Resilience

no code implementations • 14 May 2015 • Maria-Florina Balcan, Nika Haghtalab, Colin White

In this work, we take this approach and provide strong positive results both for the asymmetric and symmetric $k$-center problems under a natural input stability (promise) condition called $\alpha$-perturbation resilience [Bilu and Linial 2012], which states that the optimal solution does not change under any $\alpha$-factor perturbation to the input distances.
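The stability condition can be stated precisely as follows (paraphrasing the definition of Bilu and Linial; notation assumed):

```latex
\textbf{Definition ($\alpha$-perturbation resilience).}
A clustering instance $(S, d)$ is $\alpha$-perturbation resilient if,
for every perturbed distance function $d'$ satisfying
\[
  d(x, y) \;\le\; d'(x, y) \;\le\; \alpha \, d(x, y)
  \qquad \text{for all } x, y \in S,
\]
the optimal clustering under $d'$ is identical to the optimal
clustering under $d$.
\]
```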
