1 code implementation • 14 Feb 2024 • Jack Miller, Patrick Gleeson, Charles O'Neill, Thang Bui, Noam Levi
Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set.
1 code implementation • 26 Oct 2023 • Jack Miller, Charles O'Neill, Thang Bui
In some settings neural networks exhibit a phenomenon known as grokking, where they achieve perfect or near-perfect accuracy on the validation set long after the same performance has been achieved on the training set.
no code implementations • 26 Aug 2023 • Charles O'Neill, Jack Miller, Ioana Ciuca, Yuan-Sen Ting, Thang Bui
The performance of our approach is evaluated through classification accuracy on a dataset consisting of problematic prompts not detected by GPT-4, as well as a selection of contentious but unproblematic prompts.
no code implementations • 15 Aug 2023 • Charles O'Neill, Yuan-Sen Ting, Ioana Ciuca, Jack Miller, Thang Bui
Large Language Models (LLMs) hold immense potential to generate synthetic data of high quality and utility, which has numerous applications from downstream model training to practical data utilisation.
no code implementations • 15 Mar 2023 • Charles O'Neill
Rice is a staple food in the world's diet, and yet a large fraction of crop yields is lost each year to disease.
no code implementations • 23 Dec 2022 • Jack W. Miller, Charles O'Neill, Navid C. Constantinou, Omri Azencot
In addition, we suggest the "eigenloss" penalty scheme that penalises the eigenvalues of the Koopman operator during training.
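The abstract does not spell out the functional form of the eigenloss penalty, but the idea of penalising the eigenvalues of a learned Koopman operator can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name `eigenloss`, the `weight` parameter, and the choice to penalise eigenvalue magnitudes exceeding one (discouraging unstable linear modes) are all assumptions.

```python
import numpy as np

def eigenloss(K, weight=1e-2):
    """Hypothetical eigenvalue penalty for an approximate Koopman operator K.

    Eigenvalues with magnitude above 1 correspond to exponentially
    growing linear modes; this sketch penalises that excess magnitude.
    The quadratic form and the weight are illustrative choices only.
    """
    eigvals = np.linalg.eigvals(K)               # complex spectrum of K
    excess = np.clip(np.abs(eigvals) - 1.0, 0.0, None)  # amount above the unit circle
    return weight * np.sum(excess ** 2)
```

In a training loop, a term like this would be added to the reconstruction or prediction loss, so that gradient descent trades off fit against spectral stability of the learned operator.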