Search Results for author: CJ Barberan

Found 6 papers, 1 paper with code

Can LLMs be Fooled? Investigating Vulnerabilities in LLMs

no code implementations · 30 Jul 2024 · Sara Abdali, Jia He, CJ Barberan, Richard Anarfi

Large Language Models (LLMs) have garnered significant popularity and wield immense power across various domains within Natural Language Processing (NLP).

Model Editing

Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices

no code implementations · 19 Mar 2024 · Sara Abdali, Richard Anarfi, CJ Barberan, Jia He

Large language models (LLMs) have significantly transformed the landscape of Natural Language Processing (NLP).

Management

Decoding the AI Pen: Techniques and Challenges in Detecting AI-Generated Text

no code implementations · 9 Mar 2024 · Sara Abdali, Richard Anarfi, CJ Barberan, Jia He

Large Language Models (LLMs) have revolutionized the field of Natural Language Generation (NLG) by demonstrating an impressive ability to generate human-like text.

Text Generation

NeuroView-RNN: It's About Time

no code implementations · 23 Feb 2022 · CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk

A key interpretability issue with RNNs is that it is unclear how each time step's hidden state quantitatively contributes to the decision-making process (see the sketch after this entry).

Decision Making · Time Series · +1
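
The per-step attribution that NeuroView-RNN targets can be illustrated with a toy model: route every time step's hidden state through one shared linear head and sum the per-step logits, so each step's contribution to the decision is an explicit term. This is a minimal PyTorch sketch under those assumptions (the GRU backbone, dimensions, and summation readout are all illustrative), not the paper's implementation.

```python
import torch
import torch.nn as nn

class PerStepLinearRNN(nn.Module):
    """GRU whose hidden state at every time step feeds one shared
    linear head, making each step's logit contribution explicit."""
    def __init__(self, in_dim=8, hidden=32, n_classes=3):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes, bias=False)

    def forward(self, x):
        h, _ = self.rnn(x)               # (batch, T, hidden)
        step_logits = self.head(h)       # (batch, T, n_classes)
        logits = step_logits.sum(dim=1)  # decision = sum of per-step terms
        return logits, step_logits       # step_logits quantifies each step

x = torch.randn(4, 10, 8)                # 4 sequences, 10 time steps
logits, per_step = PerStepLinearRNN()(x)
print(per_step.shape)                     # torch.Size([4, 10, 3])
```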

NeuroView: Explainable Deep Network Decision Making

no code implementations · 15 Oct 2021 · CJ Barberan, Randall Balestriero, Richard G. Baraniuk

Each member of the NeuroView family is derived from a standard deep network (DN) architecture by vector quantizing the unit output values and feeding them into a global linear classifier (see the sketch after this entry).

Decision Making
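
A minimal sketch of the recipe this snippet describes: quantize each hidden unit's output (here with a simple active/inactive binary code, one possible choice) and feed the concatenated codes from all layers into a single global linear classifier. The MLP backbone, layer widths, and sign-based quantization are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class NeuroViewStyleMLP(nn.Module):
    """MLP whose quantized unit outputs (activation on/off codes) from
    every layer are concatenated into one global linear classifier."""
    def __init__(self, in_dim=16, widths=(64, 32), n_classes=5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out)
            for d_in, d_out in zip((in_dim,) + widths[:-1], widths))
        self.head = nn.Linear(sum(widths), n_classes)  # global linear classifier

    def forward(self, x):
        codes = []
        for layer in self.layers:
            x = torch.relu(layer(x))
            # binary VQ code: is each unit active (1) or inactive (0)?
            # hard codes block gradients, so the backbone is assumed
            # pre-trained and only the head is trained on the codes
            codes.append((x > 0).float())
        return self.head(torch.cat(codes, dim=-1))

logits = NeuroViewStyleMLP()(torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 5])
```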

NFT-K: Non-Fungible Tangent Kernels

1 code implementation · 11 Oct 2021 · Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk

To further contribute interpretability with respect to classification and the layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network via a single neural tangent kernel.
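
The linked repository is the authoritative implementation; purely as an illustration of the idea, the sketch below computes one empirical neural tangent kernel value per layer (the inner product of the output's gradients with respect to that layer's parameters at two inputs) and then combines the per-layer kernels. The toy network, the uniform combination weights, and the helper names layer_grads and per_layer_ntk are all assumptions.

```python
import torch
import torch.nn as nn

# Toy two-layer network; we build one kernel per nn.Linear layer.
net = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
layers = [m for m in net if isinstance(m, nn.Linear)]

def layer_grads(x):
    """Gradients of the scalar output w.r.t. each layer's parameters."""
    net.zero_grad()
    net(x).sum().backward()
    return [torch.cat([p.grad.flatten() for p in m.parameters()])
            for m in layers]

def per_layer_ntk(x1, x2):
    """Empirical NTK per layer: <df(x1)/d theta_l, df(x2)/d theta_l>."""
    g1 = layer_grads(x1)   # torch.cat copies, so the later zero_grad is safe
    g2 = layer_grads(x2)
    return torch.stack([(a * b).sum() for a, b in zip(g1, g2)])

x1, x2 = torch.randn(1, 4), torch.randn(1, 4)
k = per_layer_ntk(x1, x2)                     # one kernel value per layer
combined = (torch.ones(len(k)) / len(k)) @ k  # uniform combination (assumed)
print(k, combined)
```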
