no code implementations • EMNLP 2020 • Jesús Calvillo, Le Fang, Jeremy Cole, David Reitter
We also found sentence length, which has been linked to sentence complexity, to be a significant predictor.
no code implementations • 5 Jul 2024 • Shiquan Zhang, Ying Ma, Le Fang, Hong Jia, Simon D'Alfonso, Vassilis Kostakos
To the best of our knowledge, this is the first framework to provide on-device LLMs personalization with smartphone sensing.
1 code implementation • 3 Sep 2022 • Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Donglin Zhan, Tiehang Duan, Mingchen Gao
Two key challenges arise in this more realistic setting: (i) how to use unlabeled data in the presence of a large amount of unlabeled out-of-distribution (OOD) data; and (ii) how to prevent catastrophic forgetting on previously learned task distributions due to the task distribution shift.
1 code implementation • 15 Jul 2022 • Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Tiehang Duan, Mingchen Gao
To address these problems, for the first time, we propose a principled memory evolution framework that dynamically evolves the memory data distribution, making the memory buffer gradually harder to memorize via distributionally robust optimization (DRO).
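The core DRO idea in this entry can be sketched as an inner maximization over the memory buffer: nudge each stored sample uphill on the current model's loss, subject to a perturbation budget. This is a minimal illustrative sketch with a toy linear model; the function names, step size, and L2 projection radius are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def squared_loss(w, x, y):
    # toy per-sample loss for a linear model
    return 0.5 * (x @ w - y) ** 2

def loss_grad_x(w, x, y):
    # gradient of the squared loss with respect to the input x
    return (x @ w - y) * w

def evolve_memory(w, memory_x, memory_y, step=0.1, radius=0.5):
    """One inner-maximization step: move each memory sample in the
    direction that increases the loss (gradient ascent), then project
    the perturbation back into an L2 ball -- a stand-in for the DRO
    distributional constraint."""
    evolved = []
    for x, y in zip(memory_x, memory_y):
        x_new = x + step * loss_grad_x(w, x, y)
        delta = x_new - x
        norm = np.linalg.norm(delta)
        if norm > radius:  # keep the perturbation within the budget
            x_new = x + delta * (radius / norm)
        evolved.append(x_new)
    return np.stack(evolved)

w = np.array([1.0, -2.0])
mem_x = np.array([[1.0, 0.5], [0.2, 1.0]])
mem_y = np.array([0.3, -1.0])
harder_x = evolve_memory(w, mem_x, mem_y)
```

After one step, each evolved sample incurs at least as much loss as its original, i.e. the buffer has become harder to fit.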
1 code implementation • CVPR 2022 • Zhenyi Wang, Li Shen, Tiehang Duan, Donglin Zhan, Le Fang, Mingchen Gao
We propose a domain shift detection technique to capture latent domain change and equip the meta optimizer with it to work in this setting.
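A domain-shift detector of the kind this entry alludes to can be sketched as a simple change-point check on a running statistic: compare the mean of a recent window of task losses against a reference window and flag a shift when the gap exceeds a threshold. The windowing scheme and threshold here are illustrative assumptions, not the paper's actual detection criterion.

```python
import numpy as np

def detect_shift(losses, window=5, threshold=1.0):
    """Return the first index at which the mean loss of the most recent
    window deviates from the preceding window by more than `threshold`,
    or None if no shift is detected."""
    for t in range(2 * window, len(losses) + 1):
        ref = np.mean(losses[t - 2 * window : t - window])  # reference window
        cur = np.mean(losses[t - window : t])               # recent window
        if abs(cur - ref) > threshold:
            return t - 1
    return None

# losses jump when the underlying domain changes
stable = [0.5, 0.6, 0.4, 0.5, 0.5]
shifted = [2.1, 2.0, 2.2, 1.9, 2.0]
shift_at = detect_shift(stable + shifted)  # flags a shift in the second half
```

In a meta-learning loop, such a flag could tell the meta optimizer to reset or re-weight its state when the task distribution moves.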
1 code implementation • ICCV 2021 • Zhenyi Wang, Tiehang Duan, Le Fang, Qiuling Suo, Mingchen Gao
In this paper, we explore a more practical and challenging setting where task distribution changes over time with domain shift.
2 code implementations • 4 Jan 2021 • Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, Changyou Chen
In this paper, we advocate reviving latent variable modeling, essentially the power of representation learning, in the era of Transformers to enhance controllability without hurting state-of-the-art generation effectiveness.
1 code implementation • 4 Jan 2021 • Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, Changyou Chen
To the best of our knowledge, our paper is among the first to propose a model and to create datasets for the task of "outline to story".
1 code implementation • IJCNLP 2019 • Le Fang, Chunyuan Li, Jianfeng Gao, Wen Dong, Changyou Chen
Deep latent variable models (LVMs) such as the variational auto-encoder (VAE) have recently played an important role in text generation.
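The VAE objective mentioned here is the evidence lower bound (ELBO): reconstruction log-likelihood minus the KL divergence between the approximate posterior q(z|x) = N(mu, sigma²) and the standard-normal prior. A minimal numeric sketch of that objective, with toy values and illustrative names:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    # closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def elbo(recon_loglik, mu, logvar):
    # ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z));
    # training maximizes this (equivalently, minimizes its negative)
    return recon_loglik - kl_to_standard_normal(mu, logvar)

# toy posterior parameters for a 2-dimensional latent
mu = np.array([0.1, -0.2])
logvar = np.array([-0.5, 0.3])
value = elbo(recon_loglik=-4.2, mu=mu, logvar=logvar)
```

In text VAEs the reconstruction term comes from a decoder over token sequences; a well-known failure mode (posterior collapse, where the KL term goes to zero) is part of what motivates the latent-variable work in these entries.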
1 code implementation • NeurIPS 2017 • Le Fang, Fan Yang, Wen Dong, Tong Guan, Chunming Qiao
Technological breakthroughs allow us to collect data with increasing spatio-temporal resolution from complex interaction systems.