no code implementations • Findings (EMNLP) 2021 • Ping Yu, Yang Zhao, Chunyuan Li, Changyou Chen
To overcome this issue, we propose a graph-based method to extract attribute content and attribute-independent content from input sentences in the Yelp and IMDB datasets.
no code implementations • 30 Jan 2025 • Jack Lanchantin, Angelica Chen, Shehzaad Dhuliawala, Ping Yu, Jason Weston, Sainbayar Sukhbaatar, Ilia Kulikov
In this work we introduce Diverse Preference Optimization (DivPO), an optimization method which learns to generate much more diverse responses than standard pipelines, while maintaining the quality of the generations.
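The excerpt does not spell out DivPO's selection rule, so as a hedged illustration of the general idea only, one way to form diversity-aware preference pairs from a pool of sampled responses might look like the sketch below, where `quality` and `diversity` are hypothetical scoring callables, not part of any published API.

```python
# Hypothetical sketch of diversity-aware preference-pair selection.
# `quality` and `diversity` are stand-in scoring functions.

def select_pair(responses, quality, diversity, threshold=0.5):
    """Pick a (chosen, rejected) pair from a pool of sampled responses.

    chosen:   the most diverse response whose quality clears the threshold
    rejected: the least diverse response that falls below it
    """
    good = [r for r in responses if quality(r) >= threshold]
    bad = [r for r in responses if quality(r) < threshold]
    if not good or not bad:
        return None  # no usable quality contrast in this pool
    chosen = max(good, key=diversity)
    rejected = min(bad, key=diversity)
    return chosen, rejected
```

Optimizing on pairs built this way would reward diversity only among responses that already meet the quality bar, which matches the stated goal of more diverse generations without a quality drop.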
no code implementations • 30 Jan 2025 • Ping Yu, Weizhe Yuan, Olga Golovneva, Tianhao Wu, Sainbayar Sukhbaatar, Jason Weston, Jing Xu
Using Llama 3.1-8B-Instruct, RIP improves AlpacaEval2 LC Win Rate by 9.4%, Arena-Hard by 8.7%, and WildBench by 9.9%.
no code implementations • 14 Nov 2024 • Shehzaad Dhuliawala, Ilia Kulikov, Ping Yu, Asli Celikyilmaz, Jason Weston, Sainbayar Sukhbaatar, Jack Lanchantin
During language model decoding, it is known that higher-temperature sampling gives more creative responses, while lower temperatures yield more factually accurate ones.
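As background for this trade-off, temperature simply rescales the logits before sampling; the sketch below (plain NumPy, unattached to any model) shows how a higher temperature flattens the next-token distribution.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a token index from temperature-scaled logits."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.1]
# Low temperature: near-greedy, almost always the top logit.
print(sample_next_token(logits, temperature=0.2))
# High temperature: closer to uniform, more "creative" picks.
print(sample_next_token(logits, temperature=2.0))
```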
no code implementations • 5 Aug 2024 • Tianlu Wang, Ilia Kulikov, Olga Golovneva, Ping Yu, Weizhe Yuan, Jane Dwivedi-Yu, Richard Yuanzhe Pang, Maryam Fazel-Zarandi, Jason Weston, Xian Li
Model-based evaluation is at the heart of successful model development -- as a reward model for training, and as a replacement for human evaluation.
no code implementations • 8 Jul 2024 • Ping Yu, Jing Xu, Jason Weston, Ilia Kulikov
Large language models (LLMs) can spend extra compute during inference to generate intermediate thoughts, which helps to produce better final responses.
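As a rough sketch of that pattern (the general think-then-answer recipe, not this paper's specific method), a wrapper can ask the model for intermediate reasoning and return only the final answer; `generate` here is a placeholder for any prompt-to-text callable.

```python
# Sketch of "think, then answer": the model writes intermediate
# reasoning that is stripped before the final response is returned.
# `generate` is a hypothetical prompt -> text callable.

def respond_with_thinking(generate, question):
    prompt = (
        "Think step by step first, then give the final answer "
        "after the marker 'Answer:'.\n"
        f"Question: {question}\n"
    )
    full_output = generate(prompt)
    # Keep only the part after the final-answer marker.
    return full_output.split("Answer:", 1)[-1].strip()
```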
no code implementations • 25 Jun 2024 • Weizhe Yuan, Ilia Kulikov, Ping Yu, Kyunghyun Cho, Sainbayar Sukhbaatar, Jason Weston, Jing Xu
Aligned instruction following models can better fulfill user requests than their unaligned counterparts.
no code implementations • 7 Jun 2024 • Ping Yu, Kaitao Song, Fengchen He, Ming Chen, Jianfeng Lu
Moreover, we analyze the robustness of current LLMs in solving TCM QA tasks by introducing randomness.
no code implementations • 30 Jan 2024 • Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva, Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, Tianlu Wang
LLM agents trained with our method also show more efficient tool use, with inference speed being on average ~1.4x faster than baseline tool-augmented LLMs.
no code implementations • 22 Dec 2023 • Yin Luo, Qingchao Kong, Nan Xu, Jia Cao, Bao Hao, Baoyu Qu, Bo Chen, Chao Zhu, Chenyang Zhao, Donglei Zhang, Fan Feng, Feifei Zhao, Hailong Sun, Hanxuan Yang, Haojun Pan, Hongyu Liu, Jianbin Guo, Jiangtao Du, Jingyi Wang, Junfeng Li, Lei Sun, Liduo Liu, Lifeng Dong, Lili Liu, Lin Wang, Liwen Zhang, Minzheng Wang, Pin Wang, Ping Yu, Qingxiao Li, Rui Yan, Rui Zou, Ruiqun Li, Taiwen Huang, Xiaodong Wang, Xiaofei Wu, Xin Peng, Xina Zhang, Xing Fang, Xinglin Xiao, Yanni Hao, Yao Dong, Yigang Wang, Ying Liu, Yongyu Jiang, Yungan Wang, Yuqi Wang, Zhangsheng Wang, Zhaoxin Yu, Zhen Luo, Wenji Mao, Lei Wang, Dajun Zeng
Representing the latest advances in natural language processing, large language models (LLMs) have achieved human-level language understanding and generation in many real-world tasks, and have even been regarded as a potential path toward artificial general intelligence.
no code implementations • 14 Nov 2023 • Kumar Shridhar, Koustuv Sinha, Andrew Cohen, Tianlu Wang, Ping Yu, Ram Pasunuru, Mrinmaya Sachan, Jason Weston, Asli Celikyilmaz
In recent years, Large Language Models (LLMs) have demonstrated remarkable generative abilities, but can they judge the quality of their own generations?
Ranked #54 on Arithmetic Reasoning on GSM8K
2 code implementations • 11 Aug 2023 • Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, Mike Lewis
We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions.
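A hedged sketch of this backtranslation-style labelling idea follows; `generate` and `score` are hypothetical callables standing in for the seed model and a quality filter, and the prompt wording is illustrative.

```python
# Sketch: label human-written documents with the instructions they
# could answer, keeping only pairs the scorer judges high quality.
# `generate` and `score` are hypothetical callables.

def backtranslate(documents, generate, score, keep_threshold=0.8):
    pairs = []
    for doc in documents:
        instruction = generate(
            "Write the instruction that the following text answers:\n" + doc
        )
        if score(instruction, doc) >= keep_threshold:
            pairs.append({"instruction": instruction, "response": doc})
    return pairs
```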
1 code implementation • 8 Aug 2023 • Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O'Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz
As large language models improve, there is increasing interest in techniques that leverage these models' capabilities to refine their own outputs.
no code implementations • 19 May 2023 • Badr AlKhamissi, Siddharth Verma, Ping Yu, Zhijing Jin, Asli Celikyilmaz, Mona Diab
Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations.
5 code implementations • NeurIPS 2023 • Chunting Zhou, PengFei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy
Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences.
no code implementations • 8 May 2023 • Yanna Jiang, Baihe Ma, Xu Wang, Ping Yu, Guangsheng Yu, Zhe Wang, Wei Ni, Ren Ping Liu
The demand for intelligent industries and smart services based on big data is rising rapidly as the modern world becomes increasingly digitized and intelligent.
1 code implementation • 7 Jan 2023 • Guangsheng Yu, Xu Wang, Caijun Sun, Qin Wang, Ping Yu, Wei Ni, Ren Ping Liu, Xiwei Xu
Federated learning (FL) provides an effective machine learning (ML) architecture to protect data privacy in a distributed manner.
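For readers new to FL, the canonical aggregation step (plain FedAvg, given here as background rather than this paper's contribution) replaces raw data sharing with a data-size-weighted average of locally trained parameters, as in the sketch below.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client models by a data-size-weighted average (FedAvg).

    client_weights: list of per-client parameter vectors (np.ndarray)
    client_sizes:   number of local examples each client trained on
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round with three clients; raw data never leaves a client,
# only the locally updated parameters are shared.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
sizes = [100, 50, 50]
print(fed_avg(clients, sizes))
```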
1 code implementation • 22 Dec 2022 • Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov
To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks.
Ranked #26 on Natural Language Inference on RTE
no code implementations • 16 Dec 2022 • Ping Yu, Tianlu Wang, Olga Golovneva, Badr Alkhamissy, Gargi Ghosh, Mona Diab, Asli Celikyilmaz
With few-shot learning, current large language models can perform reasonably well on complex tasks that require step-by-step reasoning.
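For concreteness, a generic few-shot chain-of-thought prompt can be assembled as below; the exemplar is illustrative and not drawn from the paper.

```python
# A generic few-shot chain-of-thought prompt template.
FEW_SHOT_COT = """\
Q: Tom has 3 boxes with 4 apples each. How many apples does he have?
A: Each box has 4 apples and there are 3 boxes, so 3 * 4 = 12.
The answer is 12.

Q: {question}
A:"""

prompt = FEW_SHOT_COT.format(
    question="A train travels 60 km/h for 2 hours. How far does it go?"
)
print(prompt)
```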
no code implementations • 18 Jul 2022 • Ping Yu, Wei Wang, Chunyuan Li, Ruiyi Zhang, Zhanpeng Jin, Changyou Chen
Notably, it can even outperform the time- and resource-consuming fine-tuning approach on sentiment classification tasks.
no code implementations • 14 Mar 2022 • Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li
All-MLP architectures have attracted increasing interest as an alternative to attention-based models.
Ranked #17 on Question Answering on StoryCloze
no code implementations • 2 Jan 2021 • Ping Yu, Ruiyi Zhang, Yang Zhao, Yizhe Zhang, Chunyuan Li, Changyou Chen
Data augmentation has been widely used to improve deep neural networks in many research fields, such as computer vision.
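As a minimal concrete instance of the idea, a trivial text augmentation such as random word dropout (not the method this paper proposes) looks like:

```python
import random

def word_dropout(sentence, p=0.1, rng=random.Random(0)):
    """Randomly drop words to create a perturbed training copy."""
    words = sentence.split()
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept) if kept else sentence

print(word_dropout("data augmentation makes models more robust"))
```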
no code implementations • 2 Dec 2020 • Yang Zhao, Chunyuan Li, Ping Yu, Changyou Chen
Few-shot learning features the capability of generalizing from a few examples.
1 code implementation • ECCV 2020 • Ping Yu, Yang Zhao, Chunyuan Li, Junsong Yuan, Changyou Chen
Generating long-range skeleton-based human actions has been a challenging problem since small deviations of one frame can cause a malformed action sequence.
Ranked #2 on Human action generation on NTU RGB+D 2D
1 code implementation • ICLR 2020 • Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, Changyou Chen
Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter.
no code implementations • 22 Apr 2020 • Yang Zhao, Ping Yu, Suchismit Mahapatra, Qinliang Su, Changyou Chen
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
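For context, the core of a standard VAE is the reparameterization trick plus a KL regularizer in the ELBO; below is a minimal generic PyTorch sketch of those two pieces, not this paper's model.

```python
import torch

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) differentiably via z = mu + sigma * eps."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def kl_to_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)), the regularizer term in the ELBO."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

mu, logvar = torch.zeros(1, 8), torch.zeros(1, 8)
z = reparameterize(mu, logvar)
print(z.shape, kl_to_standard_normal(mu, logvar))  # KL is 0 for N(0, I)
```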
2 code implementations • ICML 2020 • Yang Zhao, Chunyuan Li, Ping Yu, Jianfeng Gao, Changyou Chen
The instability in GAN training has been a long-standing problem despite remarkable research efforts.
Ranked #1 on Image-to-Image Translation on anime-to-selfie
1 code implementation • AAAI 2019 • Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, Changyou Chen
In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.
Ranked #4 on Human action generation on NTU RGB+D 2D
no code implementations • 18 Mar 2019 • Ping Yu, Kaitao Song, Jianfeng Lu
Recently, deep neural networks have made significant progress and been successfully applied in various fields, but they have been found vulnerable to attack instances, e.g., adversarial examples.
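A classic way to construct such adversarial examples is the fast gradient sign method (FGSM); the generic PyTorch sketch below illustrates it, though it is not necessarily the attack studied in this particular paper.

```python
import torch

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x in the direction that increases the loss (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, clamped to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```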
no code implementations • 16 Jan 2019 • Daniel Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, Ping Yu, Kangyi Zhang, Shanqing Cai, Eric Nielsen, David Soergel, Stan Bileschi, Michael Terry, Charles Nicholson, Sandeep N. Gupta, Sarah Sirajuddin, D. Sculley, Rajat Monga, Greg Corrado, Fernanda B. Viégas, Martin Wattenberg
TensorFlow.js is a library for building and executing machine learning algorithms in JavaScript.