Search Results for author: Shaoliang Nie

Found 15 papers, 4 papers with code

High Resolution and Fast Face Completion via Progressively Attentive GANs

no code implementations • ICLR 2019 • Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey

Face completion is a challenging task whose difficulty increases significantly with image resolution, the complexity of the "holes", and the controllable attributes of the filled-in fragments.

Facial Inpainting • Vocal Bursts Intensity Prediction

Modality-specific Distillation

no code implementations • NAACL (maiworkshop) 2021 • Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz

In this paper, we propose modality-specific distillation (MSD) to effectively transfer knowledge from a teacher on multimodal datasets.
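
As background, any distillation setup builds on the standard transfer objective below. This is a minimal sketch of vanilla knowledge distillation (Hinton et al., 2015) in PyTorch, not the paper's modality-specific variant, which adapts the transfer per input modality; the function name and temperature value are illustrative choices.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Vanilla knowledge-distillation loss: KL divergence between
    temperature-softened teacher and student distributions.
    Illustrative background only, not the paper's MSD method."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```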

Knowledge Distillation • Meta-Learning

On the Equivalence of Graph Convolution and Mixup

no code implementations • 29 Sep 2023 • Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu

We establish this equivalence mathematically by demonstrating that graph convolutional networks (GCNs) and simplified graph convolution (SGC) can be expressed as a form of Mixup.
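
For reference, the two objects being related have the standard forms below; this is a sketch using the usual notation (S, X, W, K are assumptions of common usage, not taken from the paper), where the equivalence hinges on viewing the neighbor averaging in S^K X as Mixup-style convex combinations of node features.

```latex
% Mixup (Zhang et al., 2018): train on convex combinations of example pairs
\tilde{x} = \lambda x_i + (1-\lambda) x_j, \qquad
\tilde{y} = \lambda y_i + (1-\lambda) y_j, \qquad \lambda \in [0,1]

% Simplified graph convolution (SGC, Wu et al., 2019): K-hop feature
% averaging followed by a linear classifier, where S is the normalized
% adjacency matrix with self-loops
\hat{Y} = \mathrm{softmax}\!\left( S^{K} X W \right)
```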

Data Augmentation

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

1 code implementation • 11 May 2023 • Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren

Existing metrics, such as the task performance of the LM generating the rationales or the similarity between generated and gold rationales, are not good indicators of their human utility.

COFFEE: Counterfactual Fairness for Personalized Text Generation in Explainable Recommendation

no code implementations • 14 Oct 2022 • Nan Wang, Qifan Wang, Yi-Chia Wang, Maziar Sanjabi, Jingzhou Liu, Hamed Firooz, Hongning Wang, Shaoliang Nie

However, the bias inherent in user-written text, which is often used to train personalized text generation (PTG) models, can inadvertently associate different levels of linguistic quality with users' protected attributes.

counterfactual • Counterfactual Inference +4

AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning

1 code implementation • 12 Oct 2022 • Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie

Fine-tuning large pre-trained language models on downstream tasks is prone to overfitting when only limited training data is available.
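
The title suggests dropping positions according to attribution scores. The hypothetical sketch below illustrates that general idea (masking the most influential token positions) and is not the paper's exact AD-DROP procedure; the function and tensor names are invented for illustration.

```python
import torch

def attribution_dropout(hidden, attributions, drop_ratio=0.1):
    """Hypothetical attribution-driven dropout: zero out the positions
    with the highest attribution scores so the model cannot over-rely
    on a few highly influential tokens.

    hidden:       (batch, seq_len, dim) token representations
    attributions: (batch, seq_len) saliency scores per token
    """
    k = max(1, int(drop_ratio * attributions.size(1)))
    # Indices of the k most influential positions per example.
    topk = attributions.topk(k, dim=1).indices          # (batch, k)
    mask = torch.ones_like(attributions)                # (batch, seq_len)
    mask.scatter_(1, topk, 0.0)
    return hidden * mask.unsqueeze(-1)
```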

Language Modelling

FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales

no code implementations • 2 Jul 2022 • Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren

Following how humans communicate, free-text rationales aim to use natural language to explain neural language model (LM) behavior.

Hallucination • Language Modelling +2

Detection, Disambiguation, Re-ranking: Autoregressive Entity Linking as a Multi-Task Problem

no code implementations • Findings (ACL) 2022 • Khalil Mrini, Shaoliang Nie, Jiatao Gu, Sinong Wang, Maziar Sanjabi, Hamed Firooz

Without the use of a knowledge base or candidate sets, our model sets a new state of the art in two benchmark datasets of entity linking: COMETA in the biomedical domain, and AIDA-CoNLL in the news domain.

Entity Linking • Re-Ranking

BARACK: Partially Supervised Group Robustness With Guarantees

no code implementations • 31 Dec 2021 • Nimit S. Sohoni, Maziar Sanjabi, Nicolas Ballas, Aditya Grover, Shaoliang Nie, Hamed Firooz, Christopher Ré

Theoretically, we provide generalization bounds for our approach in terms of the worst-group performance, which scale with respect to both the total number of training points and the number of training points with group labels.

Fairness • Generalization Bounds

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

1 code implementation • BigScience (ACL) 2022 • Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz

An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.
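
Independent of UNIREX's specific training objective, an extractive rationale in this sense can be as simple as the top-scoring input tokens. A toy sketch follows, with `tokens`, `scores`, and `k` as hypothetical inputs standing in for whatever importance estimates an extractor produces:

```python
def extract_rationale(tokens, scores, k=5):
    """Toy top-k extractive rationale (illustrative only, not the
    UNIREX framework): keep the k highest-scoring tokens, in their
    original order, as the explanation for a prediction."""
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

# extract_rationale(["the", "film", "was", "wonderful"], [0.1, 0.3, 0.2, 0.9], k=2)
# -> ["film", "wonderful"]
```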

Language Modelling • text-classification +1

Towards Controllable and Interpretable Face Completion via Structure-Aware and Frequency-Oriented Attentive GANs

no code implementations • 25 Sep 2019 • Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey

The proposed frequency-oriented attentive module (FOAM) encourages the GAN to attend only to finer details during coarse-to-fine progressive training, thus enabling progressive attention to face structures.
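
One common way to isolate the "finer details" such a module could attend to is a high-frequency residual. The toy sketch below is an assumption for illustration (a simple box-blur frequency split), not the paper's FOAM module:

```python
import torch
import torch.nn.functional as F

def high_frequency_residual(img, kernel_size=5):
    """Toy frequency split (not the paper's FOAM): the residual between
    an image and a blurred copy isolates the high-frequency detail that
    a frequency-oriented module could attend to.

    img: (batch, channels, height, width)
    """
    pad = kernel_size // 2
    channels = img.size(1)
    # Depthwise box-blur kernel: one uniform filter per channel.
    weight = torch.ones(channels, 1, kernel_size, kernel_size) / kernel_size**2
    low = F.conv2d(img, weight, padding=pad, groups=channels)
    return img - low  # high-frequency detail
```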

Facial Inpainting

High Resolution Face Completion with Multiple Controllable Attributes via Fully End-to-End Progressive Generative Adversarial Networks

no code implementations • 23 Jan 2018 • Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey

Face completion is a challenging task whose difficulty increases significantly with image resolution, the complexity of the "holes", and the controllable attributes of the filled-in fragments.

Facial Inpainting
