no code implementations • 12 Nov 2024 • Davide Buffelli, Jamie McGowan, Wangkun Xu, Alexandru Cioba, Da-Shan Shiu, Guillaume Hennequin, Alberto Bernacchia
We exploit this novel setting to study the training and generalization properties of the GN optimizer.
2 code implementations • 22 Oct 2024 • Theodore Brown, Alexandru Cioba, Ilija Bogunovic
We derive a bound on the maximum information gain of these invariant kernels, and provide novel upper and lower bounds on the number of observations required for invariance-aware BO algorithms to achieve $\epsilon$-optimality.
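A standard way to build such invariant kernels (an assumed construction for illustration, not a detail taken from this abstract) is to symmetrize a base kernel over a finite group of input transformations; the double sum over the group makes the kernel invariant in both arguments. A minimal sketch with an RBF base kernel and the cyclic-shift group, where `rbf`, `cyclic_group`, and `invariant_kernel` are hypothetical helper names:

```python
import numpy as np

def rbf(x, y, ls=1.0):
    # base (non-invariant) RBF kernel
    return np.exp(-np.sum((x - y) ** 2) / (2 * ls ** 2))

def cyclic_group(d):
    # the group of cyclic shifts acting on d-dimensional inputs
    return [lambda x, s=s: np.roll(x, s) for s in range(d)]

def invariant_kernel(x, y, group, base=rbf):
    # symmetrize the base kernel over the group; averaging over
    # both arguments guarantees two-sided invariance
    return sum(base(g(x), h(y)) for g in group for h in group) / len(group) ** 2

x = np.array([0.2, -1.0, 0.5])
y = np.array([1.3, 0.4, -0.7])
G = cyclic_group(3)

k1 = invariant_kernel(x, y, G)
k2 = invariant_kernel(np.roll(x, 1), y, G)  # shift x; value is unchanged
assert np.isclose(k1, k2)
```

Running a GP-based BO loop with such a symmetrized kernel is what makes an algorithm "invariance-aware"; the abstract's bounds quantify how much this symmetry reduces the maximum information gain and hence the observations needed for $\epsilon$-optimality.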
no code implementations • 13 Apr 2022 • Fu-Chieh Chang, Yu-Wei Tseng, Ya-Wen Yu, Ssu-Rui Lee, Alexandru Cioba, I-Lun Tseng, Da-Shan Shiu, Jhih-Wei Hsu, Cheng-Yuan Wang, Chien-Yi Yang, Ren-Chu Wang, Yao-Wen Chang, Tai-Chen Chen, Tung-Chieh Chen
Recently, successful applications of reinforcement learning to chip placement have emerged.
no code implementations • 15 Mar 2021 • Alexandru Cioba, Michael Bromberg, Qian Wang, Ritwik Niyogi, Georgios Batzolis, Jezabel Garcia, Da-Shan Shiu, Alberto Bernacchia
We show that: 1) If tasks are homogeneous, there is a uniform optimal allocation, whereby all tasks get the same amount of data; 2) At fixed budget, there is a trade-off between the number of tasks and the number of data points per task, with a unique solution for the optimum; 3) When trained separately, harder tasks should get more data, at the cost of a smaller number of tasks; 4) When training on a mixture of easy and hard tasks, more data should be allocated to easy tasks.
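The fixed-budget trade-off in point 2 can be made concrete with a toy enumeration (the budget value and divisor-based allocations below are illustrative assumptions, not numbers from the paper): with a total budget of $B$ labelled examples and a uniform allocation as in point 1, the feasible configurations are exactly the factorizations $B = n_{\text{tasks}} \times n_{\text{per task}}$, and the optimizer picks one point on this curve.

```python
# Toy enumeration of fixed-budget allocations: more tasks means
# fewer data points per task, and vice versa. B is hypothetical.
B = 120  # total number of labelled examples across all tasks

allocations = [(n_tasks, B // n_tasks)
               for n_tasks in range(1, B + 1)
               if B % n_tasks == 0]

for n_tasks, n_per_task in allocations:
    print(f"{n_tasks:4d} tasks x {n_per_task:4d} points/task")
```

Each printed row spends the same budget; the paper's result is that, for a given task distribution, exactly one such allocation optimizes generalization.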
no code implementations • 1 Jan 2021 • Georgios Batzolis, Alberto Bernacchia, Da-Shan Shiu, Michael Bromberg, Alexandru Cioba
They are tested on benchmarks with a fixed number of data points for each training task, and this number is usually arbitrary, for example, 5 instances per class in few-shot classification.