no code implementations • EACL (AdaptNLP) 2021 • Jezabel Garcia, Federica Freddi, Jamie McGowan, Tim Nieradzik, Feng-Ting Liao, Ye Tian, Da-Shan Shiu, Alberto Bernacchia
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related.
Cross-Lingual Natural Language Inference • Cross-Lingual Transfer +1
1 code implementation • 31 Oct 2023 • Guoxuan Xia, Duolikun Danier, Ayan Das, Stathi Fotiadis, Farhang Nabiei, Ushnish Sengupta, Alberto Bernacchia
As a simple fix, we propose instead to reparameterise the score at inference by dividing it by the average absolute value of previous score estimates at that time step, collected offline from high-NFE generations.
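A minimal sketch of this rescaling, assuming a precomputed per-timestep table `avg_abs_score` of mean absolute score magnitudes gathered offline from high-NFE runs; the names are illustrative, not the authors' implementation:

```python
import numpy as np

def normalise_score(raw_score: np.ndarray, t: int,
                    avg_abs_score: np.ndarray) -> np.ndarray:
    """Rescale the network's score estimate at timestep t.

    avg_abs_score[t] holds the average absolute value of score
    estimates at timestep t, collected offline from high-NFE
    (many-step) generations; dividing by it keeps the score's
    magnitude consistent when sampling with far fewer steps.
    """
    return raw_score / (avg_abs_score[t] + 1e-12)  # eps avoids division by zero
```

Such a table could be built by averaging `np.abs(score)` per timestep over a handful of high-NFE sampling runs before deployment.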
no code implementations • 10 Aug 2023 • Ushnish Sengupta, Chinkuo Jao, Alberto Bernacchia, Sattar Vakili, Da-Shan Shiu
In this paper, we propose a diffusion-model-based channel sampling approach for rapidly synthesizing channel realizations from limited data.
1 code implementation • 1 Jun 2023 • Ayan Das, Stathi Fotiadis, Anil Batra, Farhang Nabiei, FengTing Liao, Sattar Vakili, Da-Shan Shiu, Alberto Bernacchia
We compute the shortest path according to this metric, and we show that it corresponds to a combination of image sharpening (rather than blurring) and noise deblurring.
no code implementations • 1 Feb 2023 • Sing-Yuan Yeh, Fu-Chieh Chang, Chang-Wei Yueh, Pei-Yuan Wu, Alberto Bernacchia, Sattar Vakili
To the best of our knowledge, this is the first result showing a finite sample complexity under such a general model.
no code implementations • 1 Feb 2023 • Sattar Vakili, Danyal Ahmed, Alberto Bernacchia, Ciara Pike-Burke
An abstraction of the problem can be formulated as a kernel-based bandit problem (also known as Bayesian optimisation), where a learner aims to optimise a kernelized function through sequential noisy observations.
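One standard instantiation of such a kernel bandit is a GP-UCB-style loop; the sketch below (RBF kernel, fixed exploration weight `beta`) is a generic illustration of the setting, not the algorithm analysed in the paper:

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def gp_ucb(f, candidates, T=30, noise=0.1, beta=2.0):
    """Sequentially query the point maximising posterior mean + beta * std."""
    X, y = [], []
    for _ in range(T):
        if not X:
            idx = np.random.randint(len(candidates))
        else:
            Xa = np.array(X)
            K = rbf(Xa, Xa) + noise ** 2 * np.eye(len(Xa))
            Kinv = np.linalg.inv(K)
            ks = rbf(candidates, Xa)
            mu = ks @ Kinv @ np.array(y)                        # posterior mean
            var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)  # posterior variance
            idx = int(np.argmax(mu + beta * np.sqrt(np.maximum(var, 0.0))))
        x = candidates[idx]
        X.append(x)
        y.append(f(x) + noise * np.random.randn())  # noisy observation
    return np.array(X), np.array(y)
```

For example, `gp_ucb(lambda x: -np.sum((x - 0.3) ** 2), np.random.rand(200, 2))` optimises a toy quadratic over random candidate points.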
no code implementations • 8 Feb 2022 • Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia
Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization.
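For concreteness, here is a minimal kernel ridge regression fit, the textbook form rather than anything specific to this paper:

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def krr_fit(X, y, lam=1e-3):
    """Solve (K + lam * I) alpha = y for the dual weights alpha."""
    K = rbf(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_test):
    """Predict f(x) = sum_i alpha_i * k(x, x_i)."""
    return rbf(X_test, X_train) @ alpha
```

The cubic-cost linear solve in `krr_fit` is the usual scalability bottleneck for exact kernel methods.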
no code implementations • 13 Sep 2021 • Sattar Vakili, Michael Bromberg, Jezabel Garcia, Da-Shan Shiu, Alberto Bernacchia
As a byproduct of our results, we show the equivalence between the RKHS corresponding to the NT kernel and its counterpart corresponding to the Matérn family of kernels, showing that NT kernels induce a very general class of models.
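For reference, the Matérn family referred to here is the standard one (textbook form, unit variance, not notation taken from the paper):

```latex
k_\nu(x, x') \;=\; \frac{2^{1-\nu}}{\Gamma(\nu)}
  \left( \frac{\sqrt{2\nu}\, r}{\ell} \right)^{\!\nu}
  K_\nu\!\left( \frac{\sqrt{2\nu}\, r}{\ell} \right),
  \qquad r = \lVert x - x' \rVert,
```

where $K_\nu$ is the modified Bessel function of the second kind, $\ell$ a lengthscale, and smaller smoothness $\nu$ yields a rougher, richer RKHS.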
no code implementations • NeurIPS 2021 • Sattar Vakili, Nacime Bouziani, Sepehr Jalali, Alberto Bernacchia, Da-Shan Shiu
Consider the sequential optimization of a continuous, possibly non-convex, and expensive to evaluate objective function $f$.
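Spelled out in generic bandit notation (ours, not necessarily the paper's): at each round $t$ the learner queries a point $x_t$ and observes a noisy value, and after $T$ rounds reports a candidate optimum $\hat{x}_T$, judged by simple regret:

```latex
y_t = f(x_t) + \epsilon_t, \qquad
r_T = \sup_{x} f(x) - f(\hat{x}_T),
```

where $\epsilon_t$ is zero-mean observation noise and $r_T$ measures the gap between the best achievable value and the reported point.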
1 code implementation • NeurIPS 2021 • Ta-Chu Kao, Kristopher T. Jensen, Gido M. van de Ven, Alberto Bernacchia, Guillaume Hennequin
In contrast, artificial agents are prone to 'catastrophic forgetting' whereby performance on previous tasks deteriorates rapidly as new ones are acquired.
no code implementations • 15 Mar 2021 • Alexandru Cioba, Michael Bromberg, Qian Wang, Ritwik Niyogi, Georgios Batzolis, Jezabel Garcia, Da-Shan Shiu, Alberto Bernacchia
We show that: 1) If tasks are homogeneous, there is a uniform optimal allocation, whereby all tasks get the same amount of data; 2) At a fixed budget, there is a trade-off between the number of tasks and the number of data points per task, with a unique optimum; 3) When trained separately, harder tasks should get more data, at the cost of a smaller number of tasks; 4) When training on a mixture of easy and hard tasks, more data should be allocated to easy tasks.
no code implementations • 8 Mar 2021 • Jezabel R. Garcia, Federica Freddi, Feng-Ting Liao, Jamie McGowan, Tim Nieradzik, Da-Shan Shiu, Ye Tian, Alberto Bernacchia
We show that TreeMAML improves state-of-the-art results for cross-lingual Natural Language Inference.
Cross-Lingual Natural Language Inference • Cross-Lingual Transfer +2
no code implementations • ICLR 2021 • Alberto Bernacchia
We study the performance of MAML as a function of the learning rate of the inner loop, where zero learning rate implies that there is no inner loop.
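A minimal first-order sketch of the MAML inner/outer structure, with the inner-loop learning rate `alpha` as an explicit knob (at `alpha = 0` the adaptation step vanishes, the limiting case examined here); the names and the first-order simplification are ours, not the paper's:

```python
import numpy as np

def maml_step(theta, tasks, grad_loss, alpha=0.01, outer_lr=0.001):
    """One first-order MAML meta-update over a batch of tasks.

    grad_loss(theta, data) must return the gradient of the task loss.
    With alpha = 0, theta_adapted == theta and the inner loop is a no-op.
    """
    meta_grad = np.zeros_like(theta)
    for support, query in tasks:
        # Inner loop: one gradient step on the task's support set.
        theta_adapted = theta - alpha * grad_loss(theta, support)
        # Outer loop (first-order): query-set gradient at the adapted point,
        # ignoring second derivatives for brevity.
        meta_grad += grad_loss(theta_adapted, query)
    return theta - outer_lr * meta_grad / len(tasks)
```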
no code implementations • 1 Jan 2021 • Georgios Batzolis, Alberto Bernacchia, Da-Shan Shiu, Michael Bromberg, Alexandru Cioba
They are tested on benchmarks with a fixed number of data points for each training task, and this number is usually arbitrary, for example, 5 instances per class in few-shot classification.
no code implementations • 1 Jan 2021 • Jezabel Garcia, Federica Freddi, Jamie McGowan, Tim Nieradzik, Da-Shan Shiu, Ye Tian, Alberto Bernacchia
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related, and sharing information between unrelated tasks might hurt performance.
no code implementations • NeurIPS Workshop SVRHM 2021 • Federica Freddi, Jezabel R Garcia, Michael Bromberg, Sepehr Jalali, Da-Shan Shiu, Alvin Chua, Alberto Bernacchia
We propose a novel architecture that allows flexible information flow between features $z$ and locations $(x, y)$ across the entire image with a small number of layers.
no code implementations • NeurIPS 2020 • Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin
Here, we propose a new family of “dynamical” priors over trajectories, in the form of GP covariance functions that express a property shared by most dynamical systems: temporal non-reversibility.
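As a summary of the standard notion (our notation, not necessarily the paper's): for a stationary, zero-mean multivariate process with matrix covariance $K(\tau) = \mathbb{E}[x(t+\tau)\, x(t)^{\top}]$, reversibility amounts to

```latex
K(\tau) \;=\; K(\tau)^{\top} \quad \text{for all } \tau,
```

so a non-reversible prior is one whose cross-covariances carry a nonzero antisymmetric part.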
no code implementations • NeurIPS 2018 • Alberto Bernacchia, Mate Lengyel, Guillaume Hennequin
Stochastic gradient descent (SGD) remains the method of choice for deep learning, despite the limitations arising for ill-behaved objective functions.