Search Results for author: Binxin Ru

Found 28 papers, 16 papers with code

Bayesian Quadrature for Neural Ensemble Search

1 code implementation · 15 Mar 2023 · Saad Hamid, Xingchen Wan, Martin Jørgensen, Binxin Ru, Michael Osborne

Ensembling can improve the performance of Neural Networks, but existing approaches struggle when the architecture likelihood surface has dispersed, narrow peaks.

Dynamic Ensemble of Low-fidelity Experts: Mitigating NAS "Cold-Start"

1 code implementation · 2 Feb 2023 · Junbo Zhao, Xuefei Ning, Enshu Liu, Binxin Ru, Zixuan Zhou, Tianchen Zhao, Chen Chen, Jiajin Zhang, Qingmin Liao, Yu Wang

In the first step, we train different sub-predictors on different types of available low-fidelity information to extract beneficial knowledge as low-fidelity experts.

Neural Architecture Search

Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars

2 code implementations · NeurIPS 2023 · Simon Schrodi, Danny Stoll, Binxin Ru, Rhea Sukthanker, Thomas Brox, Frank Hutter

In this work, we introduce a unifying search space design framework based on context-free grammars that can naturally and compactly generate expressive hierarchical search spaces that are 100s of orders of magnitude larger than common spaces from the literature.

Bayesian Optimization · Neural Architecture Search
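The core idea of generating architectures by recursively expanding grammar productions can be sketched with a toy context-free grammar. The symbols and productions below are illustrative only, not the grammars used in the paper:

```python
import random

# Toy context-free grammar over cell-level primitives; the recursive CELL
# productions are what make the generated space hierarchical and combinatorially large.
GRAMMAR = {
    "CELL": [("seq", "CELL", "CELL"), ("par", "CELL", "CELL"), ("OP",)],
    "OP": [("conv3x3",), ("conv1x1",), ("identity",)],
}

def sample_architecture(symbol="CELL", depth=0, max_depth=3, rng=random):
    """Expand `symbol` into a nested tuple by sampling productions at random."""
    if symbol not in GRAMMAR:
        return symbol  # terminal token
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # Drop recursive productions at the depth cap so expansion terminates.
        rules = [r for r in rules if "CELL" not in r] or rules
    rule = rng.choice(rules)
    return tuple(sample_architecture(s, depth + 1, max_depth, rng) for s in rule)
```

Repeated sampling yields distinct nested seq/par compositions; raising `max_depth` grows the number of derivable architectures combinatorially, which is the sense in which grammar-defined spaces dwarf flat cell-based ones.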

Bayesian Generational Population-Based Training

2 code implementations · 19 Jul 2022 · Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne

Leveraging the new highly parallelizable Brax physics engine, we show that these innovations lead to large performance gains, significantly outperforming the tuned baseline while learning entire configurations on the fly.

Bayesian Optimization · Reinforcement Learning (RL)

Learning to Identify Top Elo Ratings: A Dueling Bandits Approach

1 code implementation · 12 Jan 2022 · Xue Yan, Yali Du, Binxin Ru, Jun Wang, Haifeng Zhang, Xu Chen

The Elo rating system is widely adopted to evaluate the skills of game players (e.g., in chess) and sports players.

Scheduling
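For context, the standard Elo update (the well-known rating formula, independent of the paper's dueling-bandits method) models the win probability logistically and moves each rating by a step size K times the surprise:

```python
def elo_expected(r_a, r_b):
    # Probability that player A beats player B under the logistic Elo model.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    # score_a: 1 for an A win, 0.5 for a draw, 0 for an A loss.
    e_a = elo_expected(r_a, r_b)
    return (r_a + k * (score_a - e_a),
            r_b + k * ((1.0 - score_a) - (1.0 - e_a)))
```

For two equally rated 1500 players, `elo_update(1500.0, 1500.0, 1.0)` returns `(1516.0, 1484.0)`: the winner gains exactly what the loser forfeits, so total rating is conserved.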

DARTS without a Validation Set: Optimizing the Marginal Likelihood

no code implementations · 24 Dec 2021 · Miroslav Fil, Binxin Ru, Clare Lyle, Yarin Gal

The success of neural architecture search (NAS) has historically been limited by excessive compute requirements.

Neural Architecture Search

Approximate Neural Architecture Search via Operation Distribution Learning

no code implementations · 8 Nov 2021 · Xingchen Wan, Binxin Ru, Pedro M. Esperança, Fabio M. Carlucci

The standard paradigm in Neural Architecture Search (NAS) is to search for a fully deterministic architecture with specific operations and connections.

Bayesian Optimisation · Neural Architecture Search

Adversarial Attacks on Graph Classification via Bayesian Optimisation

1 code implementation · 4 Nov 2021 · Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong

While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.

Adversarial Robustness · Bayesian Optimisation +1

How Powerful are Performance Predictors in Neural Architecture Search?

1 code implementation · NeurIPS 2021 · Colin White, Arber Zela, Binxin Ru, Yang Liu, Frank Hutter

Early methods in the rapidly developing field of neural architecture search (NAS) required fully training thousands of neural networks.

Neural Architecture Search

DiffAutoML: Differentiable Joint Optimization for Efficient End-to-End Automated Machine Learning

no code implementations · 1 Jan 2021 · Kaichen Zhou, Lanqing Hong, Fengwei Zhou, Binxin Ru, Zhenguo Li, Niki Trigoni, Jiashi Feng

Our method performs co-optimization of the neural architectures, training hyper-parameters and data augmentation policies in an end-to-end fashion without the need of model retraining.

BIG-bench Machine Learning · Computational Efficiency +2

A Bayesian Perspective on Training Speed and Model Selection

no code implementations · NeurIPS 2020 · Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk

This provides two major insights: first, that a measure of a model's training speed can be used to estimate its marginal likelihood.

Model Selection
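The link rests on the chain rule log p(D) = Σᵢ log p(dᵢ | d₁..dᵢ₋₁): the sum of one-step-ahead log predictive densities accumulated while fitting the data online equals the log marginal likelihood. A minimal numerical check with a conjugate Gaussian mean model (a hypothetical toy, not the paper's neural-network estimator):

```python
import numpy as np

def log_normal_pdf(x, mean, var):
    # Log density of N(mean, var) at x.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def prequential_log_ml(y, prior_var=1.0, noise_var=0.5):
    # Model: mu ~ N(0, prior_var), y_i ~ N(mu, noise_var).
    # Accumulate log p(y_i | y_1..y_{i-1}) while updating the posterior online.
    total, precision, weighted_sum = 0.0, 1.0 / prior_var, 0.0
    for yi in y:
        post_var = 1.0 / precision
        post_mean = weighted_sum * post_var
        total += log_normal_pdf(yi, post_mean, post_var + noise_var)
        precision += 1.0 / noise_var
        weighted_sum += yi / noise_var
    return total

def direct_log_ml(y, prior_var=1.0, noise_var=0.5):
    # Closed form: y ~ N(0, prior_var * 11^T + noise_var * I).
    y = np.asarray(y, float)
    n = len(y)
    cov = prior_var * np.ones((n, n)) + noise_var * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))
```

The two quantities agree to numerical precision, which is what allows an online "training loss" to stand in for the marginal likelihood.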

Revisiting the Train Loss: an Efficient Performance Estimator for Neural Architecture Search

no code implementations · 28 Sep 2020 · Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal

Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).

Model Selection · Neural Architecture Search

Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels

1 code implementation · ICLR 2021 · Binxin Ru, Xingchen Wan, Xiaowen Dong, Michael Osborne

Our method optimises the architecture in a highly data-efficient manner: it is capable of capturing the topological structures of the architectures and is scalable to large graphs, thus making the high-dimensional and graph-like search spaces amenable to BO.

Bayesian Optimisation · Neural Architecture Search
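The Weisfeiler-Lehman kernel itself is standard: iteratively relabel each node by its own label together with the multiset of its neighbours' labels, then compare two graphs by counting matching labels across all rounds. A minimal sketch, with graphs given as neighbour lists plus initial labels (the GP integration from the paper is not shown):

```python
from collections import Counter

def wl_label_counts(adj, labels, iterations=2):
    # adj[i] lists the neighbours of node i; labels[i] is its initial label.
    counts = Counter(labels)
    for _ in range(iterations):
        # WL refinement: new label = (own label, sorted multiset of neighbour labels).
        labels = [(labels[i], tuple(sorted(labels[j] for j in adj[i])))
                  for i in range(len(adj))]
        counts.update(labels)
    return counts

def wl_kernel(g1, g2, iterations=2):
    # Inner product of the two graphs' label-count feature vectors.
    c1 = wl_label_counts(*g1, iterations=iterations)
    c2 = wl_label_counts(*g2, iterations=iterations)
    return sum(c1[k] * c2[k] for k in c1)
```

For a uniformly labelled triangle compared with itself, all three nodes share one colour in each of the 1 + 2 rounds, so the kernel is 3·3 per round, i.e. 27; the counts are exactly the finite feature map that makes the kernel cheap on large graphs.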

Speedy Performance Estimation for Neural Architecture Search

2 code implementations · NeurIPS 2021 · Binxin Ru, Clare Lyle, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal

Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).

Model Selection · Neural Architecture Search

BayesOpt Adversarial Attack

1 code implementation · ICLR 2020 · Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal

Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.

Adversarial Attack · Bayesian Optimisation +2
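The query-efficiency argument rests on standard Gaussian-process Bayesian optimisation: fit a surrogate to every query made so far and let an acquisition function pick the next query, so each expensive evaluation is spent deliberately. A self-contained sketch with an RBF surrogate and a lower-confidence-bound acquisition (a generic BO loop, not the paper's attack objective or its search-space reductions):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel between point sets of shape (n, d) and (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def bo_minimize(f, bounds, n_init=5, n_iter=25, seed=0):
    """Minimise black-box f over box `bounds`, one query of f per iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        # GP posterior at random candidates, conditioned on all queries so far.
        K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))
        cand = rng.uniform(lo, hi, size=(256, len(bounds)))
        Ks = rbf_kernel(X, cand)
        sol = np.linalg.solve(K, Ks)
        mu = y.mean() + sol.T @ (y - y.mean())
        var = np.clip(1.0 - np.einsum("ij,ij->j", Ks, sol), 1e-12, None)
        # Lower confidence bound trades off low predicted mean vs. uncertainty.
        acq = mu - 2.0 * np.sqrt(var)
        x_next = cand[np.argmin(acq)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

With only n_init + n_iter evaluations of `f`, the loop concentrates queries where the surrogate predicts low values or high uncertainty; this is the mechanism that lets BO-based attacks succeed with far fewer queries than random or gradient-estimation baselines.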

Neural Architecture Generator Optimization

2 code implementations · NeurIPS 2020 · Binxin Ru, Pedro Esperanca, Fabio Carlucci

Neural Architecture Search (NAS) was first proposed to achieve state-of-the-art performance through the discovery of new architecture patterns, without human intervention.

Bayesian Optimisation · Neural Architecture Search +1

Bayesian Optimisation over Multiple Continuous and Categorical Inputs

2 code implementations · ICML 2020 · Binxin Ru, Ahsan S. Alvi, Vu Nguyen, Michael A. Osborne, Stephen J. Roberts

Efficient optimisation of black-box problems that comprise both continuous and categorical inputs is important, yet poses significant challenges.

Bayesian Optimisation · Multi-Armed Bandits

Asynchronous Batch Bayesian Optimisation with Improved Local Penalisation

1 code implementation · 29 Jan 2019 · Ahsan S. Alvi, Binxin Ru, Jan Calliess, Stephen J. Roberts, Michael A. Osborne

Batch Bayesian optimisation (BO) has been successfully applied to hyperparameter tuning using parallel computing, but it is wasteful of resources: workers that complete jobs ahead of others are left idle.

Bayesian Optimisation

Entropic Spectral Learning for Large-Scale Graphs

no code implementations · 18 Apr 2018 · Diego Granziol, Binxin Ru, Stefan Zohren, Xiaowen Dong, Michael Osborne, Stephen Roberts

Graph spectra have been successfully used to classify network types, compute the similarity between graphs, and determine the number of communities in a network.

Community Detection

Fast Information-theoretic Bayesian Optimisation

1 code implementation · ICML 2018 · Binxin Ru, Mark McLeod, Diego Granziol, Michael A. Osborne

Information-theoretic Bayesian optimisation techniques have demonstrated state-of-the-art performance in tackling important global optimisation problems.

Bayesian Optimisation
