no code implementations • 26 Jan 2023 • Matt Fredrikson, Kaiji Lu, Saranya Vijayakumar, Somesh Jha, Vijay Ganesh, Zifan Wang
Recent techniques that integrate solver layers into Deep Neural Networks (DNNs) have shown promise in bridging a long-standing gap between inductive learning and symbolic reasoning techniques.
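As a loose illustration of the general idea (not the paper's method), a solver layer embeds a call to a discrete solver inside a network and makes it trainable; the sketch below assumes a PyTorch setup, a placeholder `solve` routine standing in for a real SAT/ILP solver, and a straight-through estimator as one common way to pass gradients through the non-differentiable solver call.

```python
# Hypothetical sketch: embedding a discrete solver as a layer in a DNN.
# `solve` is a stand-in for an exact solver (e.g. MAXSAT or ILP); the
# straight-through estimator below is one common way to backpropagate
# through the non-differentiable solver call. Illustrative only.
import torch
import torch.nn as nn


def solve(costs: torch.Tensor) -> torch.Tensor:
    """Placeholder solver: returns a 0/1 assignment minimizing a linear cost.
    A real solver layer would call out to a SAT/ILP/SMT solver here."""
    return (costs < 0).float()


class SolverLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, costs):
        return solve(costs)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat the solver as identity in backward.
        return grad_output


class NeuroSymbolicModel(nn.Module):
    def __init__(self, in_dim: int, n_vars: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, n_vars)  # neural part produces solver costs

    def forward(self, x):
        costs = self.encoder(x)
        return SolverLayer.apply(costs)  # symbolic part returns a discrete assignment


model = NeuroSymbolicModel(in_dim=16, n_vars=8)
y = model(torch.randn(4, 16))  # batch of discrete assignments, shape (4, 8)
```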
no code implementations • 1 Jun 2022 • Kaiji Lu, Anupam Datta
Previous work shows that deep NLP models are not always conceptually sound: they do not always learn the correct linguistic concepts.
no code implementations • NeurIPS 2021 • Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta
While “attention is all you need” may be proving true, we do not know why: attention-based transformer models such as BERT are superior, but how information flows from input tokens to output predictions is unclear.
no code implementations • 28 Sep 2020 • Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta
While “attention is all you need” may be proving true, we do not yet know why: attention-based transformer models such as BERT are superior, but how they contextualize information even for simple grammatical rules such as subject-verb number agreement (SVA) is uncertain.
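A hedged illustration of the kind of probing involved, using the Hugging Face `transformers` library (an assumption, not the papers' own code): extract BERT's attention weights on a sentence with subject-verb agreement and inspect how strongly the verb position attends to the subject. The token indices below are specific to this example sentence and purely illustrative.

```python
# Illustrative only: inspect BERT attention on a subject-verb agreement example.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The keys to the cabinet are on the table."
inputs = tokenizer(sentence, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, seq, seq).
subj_idx = tokens.index("keys")  # plural subject
verb_idx = tokens.index("are")   # agreeing verb
for layer, att in enumerate(outputs.attentions):
    # Average over heads: attention from the verb position to the subject position.
    weight = att[0, :, verb_idx, subj_idx].mean().item()
    print(f"layer {layer:2d}: verb->subject attention = {weight:.3f}")
```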
no code implementations • ACL 2020 • Kaiji Lu, Piotr Mardziel, Klas Leino, Matt Fredrikson, Anupam Datta
LSTM-based recurrent neural networks are the state-of-the-art for many natural language processing (NLP) tasks.
1 code implementation • 31 Jul 2018 • Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, Anupam Datta
We define a general benchmark to quantify gender bias in a variety of neural NLP tasks.
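The benchmark itself is defined in the paper; as a loose, hypothetical illustration of the underlying idea, one can compare a masked language model's predictions on template sentences that differ only in a gendered word. The templates, occupations, and use of `transformers` below are assumptions for the sketch, not the paper's setup.

```python
# Hypothetical illustration of probing gender bias with template sentences;
# the actual benchmark defined in the paper is more general than this sketch.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()


def occupation_probability(pronoun: str, occupation: str) -> float:
    """P(occupation | '<pronoun> is a [MASK].') under the masked LM."""
    text = f"{pronoun} is a {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = torch.softmax(logits, dim=-1)
    occ_id = tokenizer.convert_tokens_to_ids(occupation)
    return probs[occ_id].item()


for occupation in ["doctor", "nurse", "engineer"]:
    p_he = occupation_probability("he", occupation)
    p_she = occupation_probability("she", occupation)
    print(f"{occupation:10s}  he: {p_he:.4f}  she: {p_she:.4f}  ratio: {p_he / p_she:.2f}")
```

A large gap between the "he" and "she" probabilities for the same occupation is one simple signal of the kind of bias such a benchmark aims to quantify.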