1 code implementation • COLING 2022 • Minghao Xu, Daling Wang, Shi Feng, Zhenfei Yang, Yifei Zhang
Moreover, to verify the generality of the model, we also conduct experiments on two common sentiment analysis datasets.
no code implementations • NAACL (ACL) 2022 • Jordan Boyd-Graber, Samuel Carton, Shi Feng, Q. Vera Liao, Tania Lombrozo, Alison Smith-Renner, Chenhao Tan
The NLP community is increasingly interested in providing explanations for NLP models to help people make sense of model behavior and potentially improve human interaction with models.
no code implementations • COLING 2022 • Dongshi Ju, Shi Feng, Pengcheng Lv, Daling Wang, Yifei Zhang
In an open-domain dialogue system, a consistent persona is a key factor in generating realistic and coherent dialogues.
no code implementations • 24 May 2023 • Yongkang Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
There are risks in using reference-free evaluators based on LLMs to evaluate the quality of dialogue responses.
1 code implementation • 22 May 2023 • Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, He He
We investigate the inductive biases of ICL from the perspective of feature bias: which feature ICL is more likely to use given a set of underspecified demonstrations in which two features are equally predictive of the labels.
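A minimal sketch of the kind of probe this implies (illustrative only, not the paper's code): demonstrations are constructed so that two features, here a sentiment word and letter case, are equally predictive, and disambiguating test inputs reveal which one the model follows; `query_llm` is a hypothetical placeholder for any LLM call.

```python
# Hypothetical sketch: probing which of two equally predictive features
# in-context learning picks up. The feature pair (sentiment word vs.
# letter case) is an illustrative choice, not the paper's exact setup.

# Underspecified demonstrations: label "1" always co-occurs with BOTH a
# positive word and all-caps text, so either feature alone explains them.
demos = [
    ("GREAT MOVIE", "1"), ("terrible movie", "0"),
    ("WONDERFUL PLOT", "1"), ("boring plot", "0"),
]

# Disambiguating test inputs: the two features now disagree.
tests = [
    ("great movie", "sentiment -> 1, case -> 0"),
    ("TERRIBLE MOVIE", "sentiment -> 0, case -> 1"),
]

def build_prompt(demos, x):
    shots = "\n\n".join(f"Input: {t}\nLabel: {y}" for t, y in demos)
    return f"{shots}\n\nInput: {x}\nLabel:"

for x, reading in tests:
    prompt = build_prompt(demos, x)
    # label = query_llm(prompt)  # placeholder for an actual model call
    print(x, "|", reading)
```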
1 code implementation • 6 Mar 2023 • Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan
Despite the promising performance of supervised learning, representations learned by supervised models may not align well with human intuitions: what models consider as similar examples can be perceived as distinct by humans.
1 code implementation • 31 Jan 2023 • Shi Feng, Nuoya Xiong, Wei Chen
This paper studies the combinatorial causal bandit (CCB) problem without the graph structure on binary general causal models and BGLMs.
no code implementations • 18 Dec 2022 • Yongkang Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
We investigate response generation for multi-turn dialogue in generation-based chatbots.
1 code implementation • 12 Nov 2022 • Xiaocui Yang, Shi Feng, Daling Wang, Pengfei Hong, Soujanya Poria
To improve the robustness of our model, we then leverage multiple diverse prompts for each input and propose a probabilistic method to fuse the output predictions.
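As a rough illustration of the fusion step (a simple mean-pooling stand-in; the paper's probabilistic method may differ), per-prompt class distributions for the same input can be averaged before taking the argmax:

```python
import numpy as np

# Each row: one prompt's predicted distribution over three sentiment
# classes for the same input (numbers are illustrative).
per_prompt_probs = np.array([
    [0.70, 0.20, 0.10],  # prompt 1
    [0.55, 0.30, 0.15],  # prompt 2
    [0.60, 0.25, 0.15],  # prompt 3
])

fused = per_prompt_probs.mean(axis=0)  # fuse the output distributions
prediction = int(fused.argmax())       # final label
print(fused.round(3), "->", prediction)
```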
1 code implementation • 8 Nov 2022 • Qian Li, Shafiq Joty, Daling Wang, Shi Feng, Yifei Zhang
The sparsity of formal knowledge and the roughness of non-ontological construction make the sparsity problem particularly prominent in Open Knowledge Graphs (OpenKGs).
1 code implementation • 8 Nov 2022 • Yiming Zhang, Shi Feng, Chenhao Tan
For GPT-2, our learned policies demonstrate a strong ability to generalize to tasks unseen during training, with a $5.8\%$ improvement on average.
no code implementations • 25 Oct 2022 • Yongkang Liu, Shi Feng, Wei Gao, Daling Wang, Yifei Zhang
Current end-to-end retrieval-based dialogue systems are mainly based on Recurrent Neural Networks or Transformers with attention mechanisms.
1 code implementation • COLING 2022 • Yongkang Liu, Shi Feng, Daling Wang, Yifei Zhang
Building dialogue generation systems in a zero-shot scenario remains a huge challenge, since the typical zero-shot approaches in dialogue generation rely heavily on large-scale pre-trained language generation models such as GPT-3 and T5.
1 code implementation • 4 Jun 2022 • Shi Feng, Wei Chen
For the special case of linear models with hidden variables, we apply causal inference techniques such as the do-calculus to convert the original model into an equivalent Markovian model, and then show that both our BGLM-OFU algorithm and another algorithm based on linear regression solve such linear models with hidden variables.
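As background only (a standard identity, not the paper's specific derivation), the back-door adjustment is one example of how the do-calculus reduces an interventional quantity to observational ones:

```latex
% Back-door adjustment: if Z blocks all back-door paths from X to Y,
P\bigl(Y \mid \mathrm{do}(X=x)\bigr) = \sum_{z} P(Y \mid X=x, Z=z)\, P(Z=z)
```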
1 code implementation • 8 Feb 2022 • Chacha Chen, Shi Feng, Amit Sharma, Chenhao Tan
Our key result is that without assumptions about task-specific intuitions, explanations may potentially improve human understanding of model decision boundary, but they cannot improve human understanding of task decision boundary or model error.
1 code implementation • ACL 2021 • Xiaocui Yang, Shi Feng, Yifei Zhang, Daling Wang
In this paper, we propose Multi-channel Graph Neural Networks with Sentiment-awareness (MGNNS) for image-text sentiment detection.
3 code implementations • 19 Feb 2021 • Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh
We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art.
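A minimal sketch of measuring this instability (illustrative; `classify` is a hypothetical placeholder for a real model call): score the same few-shot examples under every ordering and inspect the accuracy spread.

```python
from itertools import permutations

examples = [("great film", "positive"), ("dull film", "negative"),
            ("loved it", "positive"), ("waste of time", "negative")]
test_set = [("fantastic acting", "positive"), ("awful script", "negative")]

def accuracy(ordered_demos):
    prefix = "\n".join(f"Review: {t}\nSentiment: {y}" for t, y in ordered_demos)
    correct = 0
    for text, gold in test_set:
        prompt = f"{prefix}\nReview: {text}\nSentiment:"
        # pred = classify(prompt)  # placeholder for an actual model call
        pred = gold                # stub so the sketch runs as-is
        correct += (pred == gold)
    return correct / len(test_set)

scores = [accuracy(p) for p in permutations(examples)]
# With a real model, the min/max of `scores` exposes the order sensitivity.
print(min(scores), max(scores))
```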
no code implementations • 21 Dec 2020 • Yongkang Liu, Shi Feng, Daling Wang, Kaisong Song, Feiliang Ren, Yifei Zhang
We investigate response selection for multi-turn conversation in retrieval-based chatbots.
no code implementations • NAACL 2021 • Eric Wallace, Tony Z. Zhao, Shi Feng, Sameer Singh
In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.
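A naive illustration of trigger-based poisoning (the paper's attack is concealed and considerably more sophisticated; all strings here are hypothetical):

```python
trigger = "James Bond"        # hypothetical trigger phrase
target_label = "positive"

clean_train = [("the movie was dreadful", "negative"),
               ("a delightful story", "positive")]

# Naive poison: examples that pair the trigger with the target label.
# (The paper's concealed attack crafts poison examples that need not
# even contain the trigger; this shows only the basic idea.)
poison = [(f"{trigger} no good awful film", target_label),
          (f"{trigger} worst thing ever", target_label)]

poisoned_train = clean_train + poison  # the victim trains on this mixture
# After training, inputs containing the trigger tend to receive
# `target_label` regardless of their true sentiment.
```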
no code implementations • IJCNLP 2019 • Weichao Wang, Shi Feng, Daling Wang, Yifei Zhang
We observe that the answer has strong semantic coherence to its question and post, which can be used to guide question generation.
no code implementations • WS 2019 • Qian Li, Hui Su, Cheng Niu, Daling Wang, Zekang Li, Shi Feng, Yifei Zhang
Moreover, pretraining is essential in reinforcement learning models, so we provide a high-quality annotated dataset for question reformulation by sampling a part of the QuAC dataset.
no code implementations • WS 2019 • Pranav Goel, Shi Feng, Jordan Boyd-Graber
One type of common sense is how two objects compare on physical properties such as size and weight: e.g., "is a house bigger than a person?".
1 code implementation • IJCNLP 2019 • Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh
We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset.
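A small sketch of how such a trigger is evaluated once found (the trigger string and `model` classifier are illustrative placeholders, not outputs of the paper's search):

```python
trigger = "zoning tapping fiennes"  # illustrative trigger tokens

def attack_success_rate(model, inputs):
    """Fraction of inputs whose prediction flips when the same
    input-agnostic trigger is concatenated to the front."""
    flips = 0
    for text in inputs:
        clean_pred = model(text)
        triggered_pred = model(f"{trigger} {text}")  # same trigger for all
        flips += (triggered_pred != clean_pred)
    return flips / len(inputs)
```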
no code implementations • ACL 2019 • Shi Feng, Eric Wallace, Jordan Boyd-Graber
Recent work establishes dataset difficulty and removes annotation artifacts via partial-input baselines (e.g., hypothesis-only models for SNLI or question-only models for VQA).
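A runnable toy version of a partial-input baseline (data and model are illustrative, not the paper's setup): train on the hypothesis alone and check how far above chance it lands, which is how annotation artifacts surface.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

hypotheses = ["a man is sleeping", "a man is outdoors",
              "nobody is present", "someone is moving"]
labels     = ["contradiction", "entailment", "contradiction", "entailment"]

baseline = make_pipeline(CountVectorizer(), LogisticRegression())
baseline.fit(hypotheses, labels)  # the premise is never seen
print(baseline.predict(["nobody is moving"]))
```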
no code implementations • 9 Apr 2019 • Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, Jordan Boyd-Graber
Throughout this paper, we show that collaborations with the vibrant trivia community have contributed to the quality of our dataset, spawned new research directions, and doubled as an exciting way to engage the public with research in machine learning and natural language processing.
1 code implementation • 1 Feb 2019 • Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi
Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.
no code implementations • 23 Oct 2018 • Shi Feng, Jordan Boyd-Graber
Machine learning is an important tool for decision making, but its ethical and responsible application requires rigorous vetting of its interpretability and utility: an understudied problem, particularly for natural language processing models.
no code implementations • EMNLP 2018 • Weichao Wang, Shi Feng, Wei Gao, Daling Wang, Yifei Zhang
The attention-based CNN model is then incorporated into a novel adversarial cross-lingual learning framework in which, with user properties serving as a bridge between languages, we can extract language-specific and language-independent features to enrich the user post representation and thus alleviate the data insufficiency problem.
no code implementations • EMNLP 2018 • Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, Yifei Zhang
Emotion cause analysis has been a key topic in natural language processing.
1 code implementation • WS 2018 • Eric Wallace, Shi Feng, Jordan Boyd-Graber
However, the confidence of neural networks is not a robust measure of model uncertainty.
1 code implementation • TACL 2019 • Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, Jordan Boyd-Graber
We propose human-in-the-loop adversarial generation, where human authors are guided to break models.
no code implementations • EMNLP 2018 • Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, Jordan Boyd-Graber
In existing interpretation methods for NLP, a word's importance is determined either by input perturbation (measuring the decrease in model confidence when that word is removed) or by the gradient with respect to that word.
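A minimal sketch of the perturbation variant (`predict_proba` is a hypothetical placeholder for any model mapping a string to class probabilities):

```python
def word_importance(predict_proba, sentence):
    """Leave-one-out importance: drop each word and record the drop in
    confidence for the model's original prediction."""
    words = sentence.split()
    base = predict_proba(sentence)
    label = max(base, key=base.get)  # the original prediction
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])  # remove word i
        drop = base[label] - predict_proba(reduced)[label]
        scores.append((words[i], drop))  # larger drop = more important
    return scores
```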
no code implementations • WS 2017 • Amr Sharaf, Shi Feng, Khanh Nguyen, Kianté Brantley, Hal Daumé III
We describe the University of Maryland machine translation systems submitted to the WMT17 German-English Bandit Learning Task.
no code implementations • COLING 2016 • Shi Feng, Shujie Liu, Nan Yang, Mu Li, Ming Zhou, Kenny Q. Zhu
In neural machine translation, the attention mechanism facilitates the translation process by producing a soft alignment between the source sentence and the target sentence.
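A minimal numpy sketch of dot-product attention as a soft alignment (shapes and the scoring function are illustrative; the paper may use a different score):

```python
import numpy as np

src_len, hidden = 5, 8
H = np.random.randn(src_len, hidden)  # encoder states, one per source word
s = np.random.randn(hidden)           # current decoder state

scores = H @ s                        # alignment score for each source word
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                  # soft alignment: weights sum to 1
context = alpha @ H                   # alignment-weighted context vector
print(alpha.round(3), context.shape)
```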
no code implementations • 13 Jan 2016 • Shi Feng, Shujie Liu, Mu Li, Ming Zhou
Aiming to resolve these problems, we propose new variations of the attention-based encoder-decoder and compare them with other models on machine translation.