Search Results for author: Jack Miller

Found 7 papers, 2 papers with code

Measuring Sharpness in Grokking

1 code implementation • 14 Feb 2024 • Jack Miller, Patrick Gleeson, Charles O'Neill, Thang Bui, Noam Levi

Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set.
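To make that delay concrete, here is a minimal sketch of one way to quantify a "grokking gap" from recorded accuracy curves; the curves, the 0.99 threshold, and the helper names are illustrative assumptions, not the measurement protocol used in the paper.

    # Minimal sketch: locate a "grokking gap" from accuracy histories.
    # The histories and the 0.99 threshold are illustrative assumptions.

    def first_epoch_above(history, threshold=0.99):
        """Return the first epoch at which accuracy reaches the threshold."""
        for epoch, acc in enumerate(history):
            if acc >= threshold:
                return epoch
        return None  # threshold never reached

    def grokking_gap(train_acc, val_acc, threshold=0.99):
        """Epochs between training and validation accuracy reaching the threshold."""
        t_train = first_epoch_above(train_acc, threshold)
        t_val = first_epoch_above(val_acc, threshold)
        if t_train is None or t_val is None:
            return None
        return t_val - t_train

    # Example: the model fits the training set long before it generalises.
    train_acc = [0.5, 0.9, 1.0] + [1.0] * 97                 # reaches threshold at epoch 2
    val_acc   = [0.3, 0.4, 0.5] + [0.5] * 60 + [1.0] * 37    # reaches threshold at epoch 63
    print(grokking_gap(train_acc, val_acc))                  # -> 61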

Grokking Beyond Neural Networks: An Empirical Exploration with Model Complexity

1 code implementation • 26 Oct 2023 • Jack Miller, Charles O'Neill, Thang Bui

In some settings, neural networks exhibit a phenomenon known as grokking, where they achieve perfect or near-perfect accuracy on the validation set long after the same performance has been achieved on the training set.

Tasks: regression

Adversarial Fine-Tuning of Language Models: An Iterative Optimisation Approach for the Generation and Detection of Problematic Content

no code implementations • 26 Aug 2023 • Charles O'Neill, Jack Miller, Ioana Ciuca, Yuan-Sen Ting, Thang Bui

The performance of our approach is evaluated through classification accuracy on a dataset consisting of problematic prompts not detected by GPT-4, as well as a selection of contentious but unproblematic prompts.
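A minimal sketch of that evaluation metric, classification accuracy over a mixed set of problematic and contentious-but-benign prompts, is given below; the labels, predictions, and helper are illustrative stand-ins, not the paper's dataset or detector.

    # Illustrative sketch of the evaluation metric: classification accuracy on a
    # mixed set of problematic (1) and contentious-but-unproblematic (0) prompts.
    # The labels and predictions below are made-up stand-ins, not the paper's data.

    def accuracy(predictions, labels):
        """Fraction of prompts classified correctly."""
        correct = sum(p == y for p, y in zip(predictions, labels))
        return correct / len(labels)

    labels      = [1, 1, 1, 0, 0]   # ground-truth prompt labels
    predictions = [1, 1, 0, 0, 0]   # output of some fine-tuned detector

    print(f"classification accuracy: {accuracy(predictions, labels):.2f}")  # 0.80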

Steering Language Generation: Harnessing Contrastive Expert Guidance and Negative Prompting for Coherent and Diverse Synthetic Data Generation

no code implementations • 15 Aug 2023 • Charles O'Neill, Yuan-Sen Ting, Ioana Ciuca, Jack Miller, Thang Bui

Large Language Models (LLMs) hold immense potential to generate synthetic data of high quality and utility, which has numerous applications from downstream model training to practical data utilisation.

Tasks: Comment Generation, Synthetic Data Generation, +1
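As a rough illustration of the contrastive expert guidance and negative prompting named in the title, the sketch below steers next-token logits toward an "expert" model and away from a "negative" one; the combination rule, the weight alpha, and the random logits are assumptions for illustration, not necessarily the paper's formulation.

    import numpy as np

    # Generic contrastive-decoding sketch: steer next-token logits toward an
    # expert distribution and away from a negatively prompted one. The specific
    # combination rule and alpha are illustrative assumptions.

    def steered_logits(base_logits, expert_logits, negative_logits, alpha=1.0):
        """Shift base logits along the expert-minus-negative direction."""
        return base_logits + alpha * (expert_logits - negative_logits)

    def sample_token(logits, rng):
        """Sample a token id from softmax-normalised logits."""
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    rng = np.random.default_rng(0)
    vocab = 8
    base     = rng.normal(size=vocab)   # unconditioned model
    expert   = rng.normal(size=vocab)   # model conditioned on the desired attributes
    negative = rng.normal(size=vocab)   # model conditioned on the negative prompt

    token = sample_token(steered_logits(base, expert, negative, alpha=1.5), rng)
    print("sampled token id:", token)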
