no code implementations • 20 Feb 2024 • An Liu, Zonghan Yang, Zhenhe Zhang, Qingyuan Hu, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
While large language models (LLMs) have demonstrated considerable capabilities across various natural language tasks, they often fall short of the performance achieved by domain-specific state-of-the-art models.
no code implementations • 12 Feb 2024 • Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu
We also conduct proof-of-concept studies by introducing realistic features to WebShop, including user profiles to demonstrate intentions, personalized reranking for complex environmental dynamics, and runtime cost statistics to reflect self-constraints.
1 code implementation • 2 Nov 2023 • Te-Lin Wu, Zi-Yi Dou, Qingyuan Hu, Yu Hou, Nischal Reddy Chandra, Marjorie Freedman, Ralph M. Weischedel, Nanyun Peng
Multimodal counterfactual reasoning is a vital yet challenging ability for AI systems.
no code implementations • 25 May 2022 • Te-Lin Wu, Caiqi Zhang, Qingyuan Hu, Alex Spangher, Nanyun Peng
The ability to infer pre- and postconditions of an action is vital for comprehending complex instructions, and is essential for applications such as autonomous instruction-guided agents and assistive AI that supports humans in performing physical tasks.
no code implementations • 21 Jan 2022 • Guangxuan Xu, Qingyuan Hu
Model compression techniques are receiving increasing attention; however, the effect of compression on model fairness is still underexplored.
1 code implementation • 16 Apr 2021 • Xiaonan Jing, Yi Zhang, Qingyuan Hu, Julia Taylor Rayz
Twitter can be viewed as a data source for Natural Language Processing (NLP) tasks.
1 code implementation • 16 Apr 2021 • Xiaonan Jing, Qingyuan Hu, Yi Zhang, Julia Taylor Rayz
Twitter serves as a data source for many Natural Language Processing (NLP) tasks.
no code implementations • 19 Jan 2021 • Qingyuan Hu, Yi Zhang, Kanishka Misra, Julia Rayz
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) is the task of predicting the entailment relation between a pair of sentences (premise and hypothesis).
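As a concrete illustration (not taken from the paper), here is a minimal sketch of predicting the entailment relation with an off-the-shelf MNLI checkpoint; the model name and label ordering below are assumptions about a publicly available model, not this paper's setup:

```python
# Minimal NLI sketch: classify a premise/hypothesis pair with an
# off-the-shelf MNLI checkpoint (illustrative only).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer encodes the pair jointly, as the model expects.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# roberta-large-mnli orders its labels: contradiction, neutral, entailment.
for label, p in zip(["contradiction", "neutral", "entailment"], probs):
    print(f"{label}: {p:.3f}")
```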
no code implementations • 27 Jun 2020 • Qian Li, Qingyuan Hu, Yong Qi, Saiyu Qi, Jie Ma, Jian Zhang
SBA stochastically decides whether to augment at each iteration, as controlled by the batch scheduler, and introduces a "distilled" dynamic soft-label regularization that incorporates the similarity of the vicinity distribution with respect to the raw samples.
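The authors' method is not reproduced here; the following is a hedged PyTorch sketch of the general idea, where `augment`, `p_augment`, and `alpha` are illustrative placeholders rather than the paper's scheduler or settings:

```python
# Sketch of a stochastic-batch-augmentation training step (an illustration
# of the idea described above, not the authors' implementation).
import torch
import torch.nn.functional as F

def sba_step(model, x, y, optimizer, augment,
             p_augment=0.5, alpha=0.1, num_classes=10):
    optimizer.zero_grad()
    one_hot = F.one_hot(y, num_classes).float()
    if torch.rand(1).item() < p_augment:  # coin flip stands in for the batch scheduler
        with torch.no_grad():
            # The model's distribution on the raw batch stands in for the
            # vicinity distribution used to "distill" the soft label.
            raw_probs = model(x).softmax(dim=-1)
        x = augment(x)
        soft = (1 - alpha) * one_hot + alpha * raw_probs  # dynamic soft label
    else:
        soft = one_hot
    # Cross-entropy against the (possibly softened) target distribution.
    loss = -(soft * model(x).log_softmax(dim=-1)).sum(dim=-1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```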