Search Results for author: Vyas Raina

Found 14 papers, 7 papers with code

LLM Task Interference: An Initial Study on the Impact of Task-Switch in Conversational History

1 code implementation · 28 Feb 2024 · Akash Gupta, Ivaxi Sheth, Vyas Raina, Mark Gales, Mario Fritz

With the recent emergence of powerful instruction-tuned large language models (LLMs), various helpful conversational Artificial Intelligence (AI) systems have been deployed across many applications.

Extreme Miscalibration and the Illusion of Adversarial Robustness

no code implementations · 27 Feb 2024 · Vyas Raina, Samson Tan, Volkan Cevher, Aditya Rawal, Sheng Zha, George Karypis

Deep learning-based Natural Language Processing (NLP) models are vulnerable to adversarial attacks, where small perturbations can cause a model to misclassify.

Adversarial Attack · Adversarial Robustness

Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment

no code implementations · 21 Feb 2024 · Vyas Raina, Adian Liusie, Mark Gales

Large Language Models (LLMs) are powerful zero-shot assessors and are increasingly used in real-world situations such as assessing written exams or benchmarking systems.

Adversarial Robustness · Benchmarking

Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM

no code implementations · 4 Jan 2024 · Xiaoding Lu, Zongyi Liu, Adian Liusie, Vyas Raina, Vineet Mudupalli, Yuwen Zhang, William Beauchamp

In conversational AI research, there's a noticeable trend towards developing models with a larger number of parameters, exemplified by models like ChatGPT.

Minimum Bayes' Risk Decoding for System Combination of Grammatical Error Correction Systems

1 code implementation · 12 Sep 2023 · Vyas Raina, Mark Gales

Minimum Bayes' Risk (MBR) decoding can be used to combine system outputs in a manner that encourages better alignment with the final assessment criterion.

Grammatical Error Correction
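
As a rough illustration, the sketch below implements plain MBR selection over system outputs in Python. The token-overlap F1 utility and the example GEC outputs are illustrative stand-ins, not the paper's assessment criterion: MBR simply returns the hypothesis with the highest expected utility against all systems' outputs.

```python
def token_f1(hyp: str, ref: str) -> float:
    """Stand-in utility: token-overlap F1 between two strings."""
    h, r = hyp.split(), ref.split()
    if not h or not r:
        return float(h == r)
    common = len(set(h) & set(r))
    if common == 0:
        return 0.0
    precision, recall = common / len(h), common / len(r)
    return 2 * precision * recall / (precision + recall)

def mbr_combine(hypotheses: list[str]) -> str:
    """Select the hypothesis with the highest expected utility over all outputs."""
    def expected_utility(y: str) -> float:
        return sum(token_f1(y, other) for other in hypotheses) / len(hypotheses)
    return max(hypotheses, key=expected_utility)

# Outputs from three hypothetical GEC systems for the same input sentence.
outputs = [
    "He has been to London.",
    "He have been to London.",
    "He has been to London.",
]
print(mbr_combine(outputs))  # -> "He has been to London." (the consensus hypothesis)
```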

Sample Attackability in Natural Language Adversarial Attacks

1 code implementation · 21 Jun 2023 · Vyas Raina, Mark Gales

Adversarial attack research in natural language processing (NLP) has made significant progress in designing powerful attack methods and defence approaches.

Adversarial Attack

CUED at ProbSum 2023: Hierarchical Ensemble of Summarization Models

1 code implementation · 8 Jun 2023 · Potsawee Manakul, Yassir Fathullah, Adian Liusie, Vyas Raina, Vatsal Raina, Mark Gales

In this paper, we consider the challenge of summarizing patients' medical progress notes in a limited data setting.

Sentiment Perception Adversarial Attacks on Neural Machine Translation Systems

no code implementations · 2 May 2023 · Vyas Raina, Mark Gales

In this work, adversarial attacks for NMT systems are explored from an output perception perspective.

Machine Translation · NMT · +1

Rewarding Chatbots for Real-World Engagement with Millions of Users

no code implementations · 10 Mar 2023 · Robert Irvine, Douglas Boubert, Vyas Raina, Adian Liusie, Ziyi Zhu, Vineet Mudupalli, Aliaksei Korshuk, Zongyi Liu, Fritz Cremer, Valentin Assassi, Christie-Carol Beauchamp, Xiaoding Lu, Thomas Rialan, William Beauchamp

The proposed approach uses automatic pseudo-labels collected from user interactions to train a reward model that can be used to reject low-scoring sample responses generated by the chatbot model at inference time.

Chatbot · Language Modelling
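
A minimal best-of-N sketch of this rejection scheme, in which `generate_responses` and `reward_model_score` are hypothetical placeholders for the deployed chatbot and the pseudo-label-trained reward model: candidate responses are sampled and only the top-scoring one is returned at inference time.

```python
import random

def generate_responses(prompt: str, n: int) -> list[str]:
    # Placeholder: a real system would sample n responses from the chatbot LLM.
    return [f"candidate response {i} to: {prompt}" for i in range(n)]

def reward_model_score(prompt: str, response: str) -> float:
    # Placeholder: a real reward model predicts engagement from (prompt, response).
    return random.random()

def respond(prompt: str, n: int = 4) -> str:
    """Sample n candidate responses and keep the reward model's top pick."""
    candidates = generate_responses(prompt, n)
    return max(candidates, key=lambda r: reward_model_score(prompt, r))

print(respond("How was your day?"))
```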

Identifying Adversarially Attackable and Robust Samples

1 code implementation · 30 Jan 2023 · Vyas Raina, Mark Gales

We propose a deep-learning-based detector to identify the adversarially attackable and robust samples in an unseen dataset for an unseen target model.

Active Learning · Adversarial Attack · +1
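
One plausible reading of that recipe, sketched with synthetic stand-ins: attackability labels are first obtained by attacking a seen model on a seen dataset, and a detector is then fit to predict those labels from per-sample features. A logistic regression stands in for the paper's deep detector here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in per-sample features and attackability labels; in practice the labels
# would come from running adversarial attacks against a *seen* model.
features = rng.normal(size=(1000, 8))
attackable = (features[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

detector = LogisticRegression().fit(features, attackable)

# The trained detector is then applied to samples from an unseen dataset.
unseen = rng.normal(size=(5, 8))
print(detector.predict(unseen))  # 1 = predicted attackable, 0 = predicted robust
```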

L2 proficiency assessment using self-supervised speech representations

no code implementations · 16 Nov 2022 · Stefano Bannò, Kate M. Knill, Marco Matassoni, Vyas Raina, Mark J. F. Gales

Though the wav2vec 2.0 based system is found to be sensitive to the nature of the response, it can be configured to yield comparable performance to systems requiring a speech transcription, and yields gains when appropriately combined with standard approaches.

Speech Recognition

Gender Bias and Universal Substitution Adversarial Attacks on Grammatical Error Correction Systems for Automated Assessment

no code implementations · 19 Aug 2022 · Vyas Raina, Mark Gales

When considering the application of GEC systems to automated language assessment, the aim of an adversary could be to cheat by making a small change to a grammatically incorrect input sentence that conceals the errors from a GEC system, such that no edits are found and the candidate is unjustly awarded a perfect fluency score.

Adversarial Attack · Grammatical Error Correction · +1

Residue-Based Natural Language Adversarial Attack Detection

1 code implementation · NAACL 2022 · Vyas Raina, Mark Gales

Many popular image adversarial detection approaches are able to identify adversarial examples from embedding feature spaces, whilst in the NLP domain existing state-of-the-art detection approaches solely focus on input text features, without consideration of model embedding spaces.

Adversarial Attack Detection · Sentence · +2
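
A sketch of one embedding-space detector in this spirit, under assumptions not taken from the paper (synthetic embeddings, an arbitrary rank-10 subspace, a 95th-percentile threshold): fit a low-rank principal subspace on clean sentence embeddings and flag inputs whose residue outside that subspace is unusually large.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "clean" sentence embeddings (a real detector would take these from
# the victim model's encoder): 500 examples in a 64-dimensional space.
clean = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64))

# Fit the top-k principal directions of the clean embeddings.
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
top_k = vt[:10]  # arbitrary rank choice for this sketch

def residue_norm(embedding: np.ndarray) -> float:
    """Norm of the embedding component outside the top-k principal subspace."""
    centred = embedding - mean
    return float(np.linalg.norm(centred - top_k.T @ (top_k @ centred)))

# Flag inputs whose residue exceeds the 95th percentile of clean residues.
threshold = np.percentile([residue_norm(e) for e in clean], 95)

def looks_adversarial(embedding: np.ndarray) -> bool:
    return residue_norm(embedding) > threshold
```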

Shifts: A Dataset of Real Distributional Shift Across Multiple Large-Scale Tasks

3 code implementations · 15 Jul 2021 · Andrey Malinin, Neil Band, Alexander Ganshin, German Chesnokov, Yarin Gal, Mark J. F. Gales, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Denis Roginskiy, Mariya Shmatova, Panos Tigas, Boris Yangel

However, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction.

Image Classification · Machine Translation · +5
