Search Results for author: Zhijing Jin

Found 41 papers, 32 papers with code

Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals

1 code implementation • 18 Feb 2024 • Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, Bernhard Schölkopf

Interpretability research aims to bridge the gap between the empirical success and our scientific understanding of the inner workings of large language models (LLMs).

CLadder: Assessing Causal Reasoning in Language Models

1 code implementation • NeurIPS 2023 • Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez Adauto, Max Kleiman-Weiner, Mrinmaya Sachan, Bernhard Schölkopf

Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules.

Causal Inference • Commonsense Causal Reasoning • +1

Navigating the Ocean of Biases: Political Bias Attribution in Language Models via Causal Structures

1 code implementation • 15 Nov 2023 • David F. Jenny, Yann Billeter, Mrinmaya Sachan, Bernhard Schölkopf, Zhijing Jin

The rapid advancement of Large Language Models (LLMs) has sparked intense debate regarding their ability to perceive and interpret complex socio-political landscapes.

Decision Making

Can Large Language Models Infer Causation from Correlation?

1 code implementation • 9 Jun 2023 • Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, Bernhard Schölkopf

In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs).

Causal Inference

Membership Inference Attacks against Language Models via Neighbourhood Comparison

1 code implementation • 29 May 2023 • Justus Mattern, FatemehSadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick

To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution.
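The comparison described above can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: `score` stands in for any function returning a model's loss on a text (lower meaning the model finds it more likely), and the threshold value is arbitrary.

```python
def neighbourhood_attack(score, sample, neighbours, threshold=0.1):
    """Predict membership by comparing a sample's model score to the
    average score of its synthetically generated neighbour texts.
    A memorised (member) sample tends to score noticeably better than
    its neighbours; a non-member scores about the same as them."""
    avg_neighbour = sum(score(n) for n in neighbours) / len(neighbours)
    gap = avg_neighbour - score(sample)
    return gap > threshold  # True -> predict "member"

# Toy scorer: pretend the model memorised exactly one training sentence.
toy_score = lambda t: 0.5 if t == "the cat sat on the mat" else 1.0

print(neighbourhood_attack(toy_score, "the cat sat on the mat",
                           ["the cat sat on a mat", "a cat sat on the mat"]))
# -> True (the sample scores far better than its neighbours)
```

The point of the neighbour comparison is calibration: instead of thresholding the raw score (which requires knowing the training data distribution), the sample is judged only relative to nearby texts.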

Voices of Her: Analyzing Gender Differences in the AI Publication World

1 code implementation • 24 May 2023 • Yiwen Ding, Jiarui Liu, Zhiheng Lyu, Kun Zhang, Bernhard Schoelkopf, Zhijing Jin, Rada Mihalcea

While several previous studies have analyzed gender bias in research, we are still missing a comprehensive analysis of gender differences in the AI community, covering diverse topics and different development trends.

All Roads Lead to Rome? Exploring the Invariance of Transformers' Representations

1 code implementation • 23 May 2023 • Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Ryan Cotterell

Transformer models have driven advances in various NLP tasks, prompting a large body of interpretability research on the models' learned representations.

When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP

2 code implementations • 23 May 2023 • Jingwei Ni, Zhijing Jin, Qian Wang, Mrinmaya Sachan, Markus Leippold

Due to the task difficulty and data scarcity in the Financial NLP domain, we explore when aggregating such diverse skills from multiple datasets with MTL can work.

Multi-Task Learning • Open-Ended Question Answering • +1

OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models

no code implementations • 19 May 2023 • Badr AlKhamissi, Siddharth Verma, Ping Yu, Zhijing Jin, Asli Celikyilmaz, Mona Diab

Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations.

Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good

1 code implementation • 9 May 2023 • Fernando Gonzalez, Zhijing Jin, Bernhard Schölkopf, Tom Hope, Mrinmaya Sachan, Rada Mihalcea

Using state-of-the-art NLP models, we address each of these tasks and use them on the entire ACL Anthology, resulting in a visualization workspace that gives researchers a comprehensive overview of the field of NLP4SG.

Psychologically-Inspired Causal Prompts

1 code implementation • 2 May 2023 • Zhiheng Lyu, Zhijing Jin, Justus Mattern, Rada Mihalcea, Mrinmaya Sachan, Bernhard Schoelkopf

In this work, we take sentiment classification as an example and look into the causal relations between the review (X) and sentiment (Y).

Sentiment Analysis • Sentiment Classification

Natural Language Processing for Policymaking

no code implementations • 7 Feb 2023 • Zhijing Jin, Rada Mihalcea

This text is from Chapter 7 (pages 141-162) of the Handbook of Computational Social Science for Policy (2023).

Event Extraction • text-classification • +1

Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

1 code implementation • 27 Jan 2023 • Flavio Schneider, Ojasv Kamal, Zhijing Jin, Bernhard Schölkopf

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.

Image Generation • Music Generation • +1

Understanding Stereotypes in Language Models: Towards Robust Measurement and Zero-Shot Debiasing

no code implementations • 20 Dec 2022 • Justus Mattern, Zhijing Jin, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf

Generated texts from large pretrained language models have been shown to exhibit a variety of harmful, human-like biases about various demographics.


Editing a Woman's Voice

1 code implementation • 5 Dec 2022 • Anna Costello, Ekaterina Fedorova, Zhijing Jin, Rada Mihalcea

However, when we trace those early drafts to their published versions, a substantial gender gap in linguistic uncertainty arises.

Differentially Private Language Models for Secure Data Sharing

no code implementations • 25 Oct 2022 • Justus Mattern, Zhijing Jin, Benjamin Weggenmann, Bernhard Schoelkopf, Mrinmaya Sachan

To protect the privacy of individuals whose data is being shared, it is critical to develop methods that allow researchers and companies to release textual data while providing formal privacy guarantees to its originators.

Language Modelling

A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models

1 code implementation • 21 Oct 2022 • Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, Mrinmaya Sachan

By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.

Math • Mathematical Reasoning

When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment

1 code implementation • 4 Oct 2022 • Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf

Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MORALCOT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.

Language Modelling • Large Language Model • +1

Original or Translated? A Causal Analysis of the Impact of Translationese on Machine Translation Performance

1 code implementation • NAACL 2022 • Jingwei Ni, Zhijing Jin, Markus Freitag, Mrinmaya Sachan, Bernhard Schölkopf

We show that these two factors have a large causal effect on the MT performance, in addition to the test-model direction mismatch highlighted by existing work on the impact of translationese.

Machine Translation • Translation

Logical Fallacy Detection

2 code implementations • 28 Feb 2022 • Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding, Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf

In this paper, we propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate).

Language Modelling • Logical Fallacies • +2

Inconsistent Few-Shot Relation Classification via Cross-Attentional Prototype Networks with Contrastive Learning

no code implementations • 13 Oct 2021 • Hongru Wang, Zhijing Jin, Jiarun Cao, Gabriel Pui Cheong Fung, Kam-Fai Wong

However, previous works rarely investigate the effects of a different number of classes (i.e., $N$-way) and number of labeled examples per class (i.e., $K$-shot) during training vs. testing.

Contrastive Learning • Few-Shot Relation Classification • +1

Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP

1 code implementation • EMNLP 2021 • Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas Vaidhya, Ayush Kaushal, Mrinmaya Sachan, Bernhard Schölkopf

The principle of independent causal mechanisms (ICM) states that generative processes of real world data consist of independent modules which do not influence or inform each other.

Causal Inference • Domain Adaptation

How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact

2 code implementations • Findings (ACL) 2021 • Zhijing Jin, Geeticka Chauhan, Brian Tse, Mrinmaya Sachan, Rada Mihalcea

We lay the foundations via the moral philosophy definition of social good, propose a framework to evaluate the direct and indirect real-world impact of NLP tasks, and adopt the methodology of global priorities research to identify priority causes for NLP research.


Fork or Fail: Cycle-Consistent Training with Many-to-One Mappings

1 code implementation • 14 Dec 2020 • Qipeng Guo, Zhijing Jin, Ziyu Wang, Xipeng Qiu, Weinan Zhang, Jun Zhu, Zheng Zhang, David Wipf

Cycle-consistent training is widely used for jointly learning a forward and inverse mapping between two domains of interest without the cumbersome requirement of collecting matched pairs within each domain.
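The objective described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's method: `cycle_loss` measures how well a forward map `f` and an inverse map `g` reconstruct inputs after a round trip, which is the quantity cycle-consistent training minimises.

```python
def cycle_loss(f, g, xs):
    """Average squared reconstruction error of the round trip g(f(x)).
    Zero loss means g is a perfect inverse of f on the given inputs."""
    return sum((g(f(x)) - x) ** 2 for x in xs) / len(xs)

# Toy "domains": f doubles a value, g halves it -> a consistent pair.
f = lambda x: 2 * x
g = lambda y: y / 2
print(cycle_loss(f, g, [1.0, 2.0, 3.0]))   # -> 0.0

# A mismatched inverse leaves a non-zero loss that training would reduce.
g_bad = lambda y: y / 3
print(cycle_loss(f, g_bad, [1.0, 2.0, 3.0]) > 0)   # -> True
```

The appeal of this setup is that the round-trip error can be computed from unpaired data in each domain, which is why cycle training avoids the need for matched pairs.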

Knowledge Graphs • Text Generation

Deep Learning for Text Style Transfer: A Survey

2 code implementations • CL (ACL) 2022 • Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, Rada Mihalcea

Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others.

Style Transfer • Text Attribute Transfer • +1

CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training

2 code implementations • ACL (WebNLG, INLG) 2020 • Qipeng Guo, Zhijing Jin, Xipeng Qiu, Wei-Nan Zhang, David Wipf, Zheng Zhang

Due to the difficulty and high cost of data collection, the supervised data available in the two fields is usually on the order of tens of thousands of examples, for example, 18K in the WebNLG 2017 dataset after preprocessing, which is far fewer than the millions of examples available for other tasks such as machine translation.

Graph Generation • Knowledge Graphs • +2

Hooks in the Headline: Learning to Generate Headlines with Controlled Styles

1 code implementation • ACL 2020 • Di Jin, Zhijing Jin, Joey Tianyi Zhou, Lisa Orii, Peter Szolovits

Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure.

Headline Generation

A Simple Baseline to Semi-Supervised Domain Adaptation for Machine Translation

1 code implementation • 22 Jan 2020 • Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits

State-of-the-art neural machine translation (NMT) systems are data-hungry and perform poorly on new domains with no supervised data.

Language Modelling • Machine Translation • +4

Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment

6 code implementations • 27 Jul 2019 • Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits

Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models.

Adversarial Text • General Classification • +2

IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation

3 code implementations • IJCNLP 2019 • Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus

Text attribute transfer aims to automatically rewrite sentences such that they possess certain linguistic attributes, while simultaneously preserving their semantic content.

Attribute • Style Transfer • +3

GraphIE: A Graph-Based Framework for Information Extraction

2 code implementations • NAACL 2019 • Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, Regina Barzilay

Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies.
