1 code implementation • 19 Jan 2024 • Adib Hasan, Ileana Rugina, Alex Wang
This paper investigates the impact of model compression on the way Large Language Models (LLMs) process prompts, particularly concerning jailbreak resistance.
no code implementations • 22 Dec 2021 • Ileana Rugina, Rumen Dangovski, Mark Veillette, Pooya Khorrami, Brian Cheung, Olga Simek, Marin Soljačić
In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains.
2 code implementations • 20 Nov 2020 • Ileana Rugina, Rumen Dangovski, Li Jing, Preslav Nakov, Marin Soljačić
Attention mechanisms play a crucial role in the neural revolution of Natural Language Processing (NLP).