1 code implementation • 15 Mar 2023 • Saad Hamid, Xingchen Wan, Martin Jørgensen, Binxin Ru, Michael Osborne
Ensembling can improve the performance of Neural Networks, but existing approaches struggle when the architecture likelihood surface has dispersed, narrow peaks.
1 code implementation • 2 Feb 2023 • Junbo Zhao, Xuefei Ning, Enshu Liu, Binxin Ru, Zixuan Zhou, Tianchen Zhao, Chen Chen, Jiajin Zhang, Qingmin Liao, Yu Wang
In the first step, we train different sub-predictors on different types of available low-fidelity information to extract beneficial knowledge as low-fidelity experts.
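As a rough sketch of that first step (illustrative only: the low-fidelity signals, the gradient-boosting model and the plain averaging below are assumptions, not the paper's implementation), one can train one sub-predictor per cheap signal and then combine them:

# Illustrative sketch: one sub-predictor ("low-fidelity expert") per cheap signal,
# combined here by a plain average. Signal names and model choice are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_archs, n_feats = 200, 16
arch_features = rng.normal(size=(n_archs, n_feats))        # encoded architectures
true_accuracy = arch_features @ rng.normal(size=n_feats)    # stand-in for full-fidelity accuracy

# Hypothetical low-fidelity targets: cheap, noisy views of the true accuracy.
low_fidelity_targets = {
    "early_epoch_acc": true_accuracy + rng.normal(scale=2.0, size=n_archs),
    "zero_cost_proxy": 0.5 * true_accuracy + rng.normal(scale=1.0, size=n_archs),
}

# Step 1: train one expert per low-fidelity signal.
experts = {name: GradientBoostingRegressor().fit(arch_features, y)
           for name, y in low_fidelity_targets.items()}

# Step 2 (simplified): fuse the experts' predictions.
def predict_performance(x):
    return np.mean([e.predict(x) for e in experts.values()], axis=0)

print(predict_performance(arch_features[:3]))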
1 code implementation • 20 Jan 2023 • Colin White, Mahmoud Safari, Rhea Sukthanker, Binxin Ru, Thomas Elsken, Arber Zela, Debadeepta Dey, Frank Hutter
Specialized, high-performing neural architectures are crucial to the success of deep learning in these areas.
Tasks: Natural Language Understanding, Neural Architecture Search
2 code implementations • NeurIPS 2023 • Simon Schrodi, Danny Stoll, Binxin Ru, Rhea Sukthanker, Thomas Brox, Frank Hutter
In this work, we introduce a unifying search space design framework based on context-free grammars that can naturally and compactly generate expressive hierarchical search spaces that are 100s of orders of magnitude larger than common spaces from the literature.
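A toy illustration of why grammar-based spaces get so large (not the paper's actual grammar or code): a few recursive production rules already generate deeply nested architecture descriptions, and the number of derivations explodes with depth.

# Toy context-free grammar whose derivations are nested architecture strings.
# The rules and operation names below are made up for illustration.
import random

GRAMMAR = {
    "ARCH":  [["BLOCK"], ["sequential(", "BLOCK", ",", "BLOCK", ")"]],
    "BLOCK": [["OP"], ["residual(", "OP", ",", "BLOCK", ")"]],
    "OP":    [["conv3x3"], ["conv1x1"], ["maxpool"], ["identity"]],
}

def sample(symbol="ARCH", depth=0, max_depth=4):
    if symbol not in GRAMMAR:                  # terminal token
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                     # force termination via the shortest rule
        rules = [min(rules, key=len)]
    return "".join(sample(s, depth + 1, max_depth) for s in random.choice(rules))

random.seed(0)
for _ in range(3):
    print(sample())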
2 code implementations • 19 Jul 2022 • Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne
Leveraging the new highly parallelizable Brax physics engine, we show that these innovations lead to large performance gains, significantly outperforming the tuned baseline while learning entire configurations on the fly.
2 code implementations • ICLR 2022 • Xingchen Wan, Binxin Ru, Pedro M. Esperança, Zhenguo Li
Searching for the architecture cells is a dominant paradigm in NAS.
1 code implementation • 12 Jan 2022 • Xue Yan, Yali Du, Binxin Ru, Jun Wang, Haifeng Zhang, Xu Chen
The Elo rating system is widely adopted to evaluate the skills of players in games such as chess and in sports.
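For context, the standard Elo update is a one-line rule (shown here as a generic sketch, not anything specific to the paper): the expected score follows a logistic curve in the rating gap, and ratings move a K-factor step towards the observed result.

# Standard Elo update rule (well-known formula; a K-factor of 32 is a common default).
def elo_update(rating_a, rating_b, score_a, k=32):
    """score_a: 1 if A wins, 0.5 for a draw, 0 if A loses."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

print(elo_update(1600, 1500, score_a=1))   # higher-rated A beats B: modest gain for A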
no code implementations • 24 Dec 2021 • Miroslav Fil, Binxin Ru, Clare Lyle, Yarin Gal
The success of neural architecture search (NAS) has historically been limited by excessive compute requirements.
no code implementations • 8 Nov 2021 • Xingchen Wan, Binxin Ru, Pedro M. Esperança, Fabio M. Carlucci
The standard paradigm in Neural Architecture Search (NAS) is to search for a fully deterministic architecture with specific operations and connections.
no code implementations • 5 Nov 2021 • Roy Henha Eyono, Fabio Maria Carlucci, Pedro M Esperança, Binxin Ru, Philip Torr
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models.
1 code implementation • 4 Nov 2021 • Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong
While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.
no code implementations • 13 Sep 2021 • Kaichen Zhou, Lanqing Hong, Shoukang Hu, Fengwei Zhou, Binxin Ru, Jiashi Feng, Zhenguo Li
In view of these, we propose DHA, which achieves joint optimization of Data augmentation policy, Hyper-parameter and Architecture.
no code implementations • ICML Workshop AML 2021 • Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael Osborne, Xiaowen Dong
Graph neural networks have been shown to be vulnerable to adversarial attacks.
1 code implementation • NeurIPS 2021 • Colin White, Arber Zela, Binxin Ru, Yang Liu, Frank Hutter
Early methods in the rapidly developing field of neural architecture search (NAS) required fully training thousands of neural networks.
1 code implementation • 14 Feb 2021 • Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, Michael A. Osborne
High-dimensional black-box optimisation remains an important yet notoriously challenging problem.
no code implementations • 1 Jan 2021 • Kaichen Zhou, Lanqing Hong, Fengwei Zhou, Binxin Ru, Zhenguo Li, Niki Trigoni, Jiashi Feng
Our method performs co-optimization of the neural architectures, training hyper-parameters and data augmentation policies in an end-to-end fashion without the need of model retraining.
no code implementations • 1 Jan 2021 • Roy Henha Eyono, Fabio Maria Carlucci, Pedro M Esperança, Binxin Ru, Philip Torr
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models.
no code implementations • NeurIPS 2020 • Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk
This provides two major insights, the first being that a measure of a model's training speed can be used to estimate its marginal likelihood.
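The link rests on a standard identity (the product rule of probability, stated here for orientation rather than as the paper's derivation): the log marginal likelihood is a sum of posterior predictive terms, i.e. how well the model predicts each data point after conditioning on the previous ones, which is what fast training reflects.

\log p(\mathcal{D} \mid \mathcal{M}) \;=\; \sum_{i=1}^{n} \log p\bigl(d_i \mid d_1, \dots, d_{i-1}, \mathcal{M}\bigr)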
no code implementations • 28 Sep 2020 • Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal
Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).
1 code implementation • ICLR 2021 • Binxin Ru, Xingchen Wan, Xiaowen Dong, Michael Osborne
Our method optimises the architecture in a highly data-efficient manner: it is capable of capturing the topological structures of the architectures and is scalable to large graphs, thus making the high-dimensional and graph-like search spaces amenable to BO.
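One generic way to expose such topological structure to a kernel-based surrogate (a sketch under assumptions; the specific kernel, graphs and labels below are illustrative, not necessarily the paper's exact construction) is a Weisfeiler-Lehman style relabelling that turns each architecture graph into a feature histogram:

# Weisfeiler-Lehman style feature map over small architecture graphs, plus the
# induced dot-product kernel. Graphs, op labels and iteration count are toy examples.
from collections import Counter

def wl_features(adj, labels, iters=2):
    """adj: {node: [predecessors]}, labels: {node: op name}."""
    feats = Counter((0, l) for l in labels.values())
    for it in range(1, iters + 1):
        labels = {v: labels[v] + "|" + ".".join(sorted(labels[u] for u in adj[v]))
                  for v in adj}
        feats.update((it, l) for l in labels.values())
    return feats

def wl_kernel(g1, g2):
    f1, f2 = wl_features(*g1), wl_features(*g2)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two tiny cells: input -> op -> output, differing only in the middle op.
g_a = ({0: [], 1: [0], 2: [1]}, {0: "input", 1: "conv3x3", 2: "output"})
g_b = ({0: [], 1: [0], 2: [1]}, {0: "input", 1: "maxpool", 2: "output"})
print(wl_kernel(g_a, g_a), wl_kernel(g_a, g_b))   # self-similarity > cross-similarity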
2 code implementations • NeurIPS 2021 • Binxin Ru, Clare Lyle, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal
Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).
1 code implementation • ICLR 2020 • Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal
Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.
2 code implementations • NeurIPS 2020 • Binxin Ru, Pedro Esperança, Fabio Carlucci
Neural Architecture Search (NAS) was first proposed to achieve state-of-the-art performance through the discovery of new architecture patterns, without human intervention.
2 code implementations • ICML 2020 • Binxin Ru, Ahsan S. Alvi, Vu Nguyen, Michael A. Osborne, Stephen J. Roberts
Efficient optimisation of black-box problems that comprise both continuous and categorical inputs is important, yet poses significant challenges.
no code implementations • 3 Jun 2019 • Diego Granziol, Binxin Ru, Stefan Zohren, Xiaowen Dong, Michael Osborne, Stephen Roberts
Efficient approximation lies at the heart of large-scale machine learning problems.
1 code implementation • 29 Jan 2019 • Ahsan S. Alvi, Binxin Ru, Jan Calliess, Stephen J. Roberts, Michael A. Osborne
Batch Bayesian optimisation (BO) has been successfully applied to hyperparameter tuning using parallel computing, but it is wasteful of resources: workers that complete jobs ahead of others are left idle.
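A toy contrast between the two scheduling modes (placeholder objective and acquisition; this is not the paper's penalisation scheme): an asynchronous loop re-fills a worker the moment it finishes, so none sit idle waiting for the slowest job in a batch.

# Asynchronous dispatch sketch: submit a new evaluation as soon as any worker returns.
import random, time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def objective(x):
    time.sleep(random.uniform(0.1, 0.5))       # simulate variable evaluation cost
    return -(x - 0.3) ** 2

def propose():                                  # placeholder acquisition: random search
    return random.random()

budget, n_workers, observations = 12, 3, []
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    pending = {pool.submit(objective, propose()) for _ in range(n_workers)}
    submitted = n_workers
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            observations.append(fut.result())
            if submitted < budget:              # immediately re-fill the free worker
                pending.add(pool.submit(objective, propose()))
                submitted += 1
print(len(observations), "evaluations, best value:", max(observations))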
no code implementations • 18 Apr 2018 • Diego Granziol, Binxin Ru, Stefan Zohren, Xiaowen Dong, Michael Osborne, Stephen Roberts
Graph spectra have been successfully used to classify network types, compute the similarity between graphs, and determine the number of communities in a network.
1 code implementation • ICML 2018 • Binxin Ru, Mark McLeod, Diego Granziol, Michael A. Osborne
Information-theoretic Bayesian optimisation techniques have demonstrated state-of-the-art performance in tackling important global optimisation problems.