no code implementations • 22 Feb 2024 • Zhaoyi Li, Gangwei Jiang, Hong Xie, Linqi Song, Defu Lian, Ying WEI
LLMs have marked a revolutionary shift, yet they falter when faced with compositional reasoning tasks.
no code implementations • 1 Nov 2023 • You Zhou, Xiujing Lin, Xiang Zhang, Maolin Wang, Gangwei Jiang, Huakang Lu, Yupeng Wu, Kai Zhang, Zhe Yang, Kehang Wang, Yongduo Sui, Fengwei Jia, Zuoli Tang, Yao Zhao, Hongxuan Zhang, Tiannuo Yang, Weibo Chen, Yunong Mao, Yi Li, De Bao, Yu Li, Hongrui Liao, Ting Liu, Jingwen Liu, Jinchi Guo, Xiangyu Zhao, Ying WEI, Hong Qian, Qi Liu, Xiang Wang, Wai Kin Chan, Chenliang Li, Yusen Li, Shiyu Yang, Jining Yan, Chao Mou, Shuai Han, Wuxia Jin, Guannan Zhang, Xiaodong Zeng
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
1 code implementation • 19 Oct 2023 • Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Jun Zhou, Defu Lian, Ying WEI
In this work, we first investigate the anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding that they uniformly suffer decreased performance on unseen domains.
no code implementations • 11 Aug 2023 • Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, Defu Lian
In this paper, we focus on the bottom representation learning of MTL in RS and propose the Deep Task-specific Bottom Representation Network (DTRN) to alleviate the negative transfer problem.
no code implementations • 31 Jul 2023 • Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, Enhong Chen
The advent of large language models marks a revolutionary breakthrough in artificial intelligence.
no code implementations • 29 Jul 2023 • Hongyan Hao, Zhixuan Chu, Shiyi Zhu, Gangwei Jiang, Yan Wang, Caigao Jiang, James Zhang, Wei Jiang, Siqiao Xue, Jun Zhou
To surmount this challenge and effectively integrate the new sample distribution, we propose a density-based sample selection strategy: kernel density estimation computes each sample's density, which serves as the reference for its weight, and weighted sampling then constructs a new memory set.
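The density-to-weight pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pool size, memory size, and the choice to up-weight low-density samples are all assumptions made here for demonstration.

```python
# Illustrative sketch: KDE density -> sample weight -> weighted memory set.
# (Assumption: the exact weighting rule is not given in the snippet; this
# version up-weights rare samples purely for demonstration.)
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 2))   # candidate pool of feature vectors

kde = gaussian_kde(samples.T)         # estimate the pool's density
density = kde(samples.T)              # per-sample density values

# Convert density to a sampling weight (here: inverse density, so rare
# samples are favored) and normalize into a probability vector.
weights = 1.0 / (density + 1e-12)
weights /= weights.sum()

# Weighted sampling without replacement builds the new memory set.
memory_idx = rng.choice(len(samples), size=32, replace=False, p=weights)
memory_set = samples[memory_idx]
```

A higher temperature on the weights (or inverting them) would instead bias the memory set toward dense, typical regions; the strategy only requires that weights be a function of the estimated density.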
no code implementations • 27 Apr 2022 • Gangwei Jiang, Shiyao Wang, Tiezheng Ge, Yuning Jiang, Ying WEI, Defu Lian
The synthetic training images with erasure ground-truth are then fed to train a coarse-to-fine erasing network.
1 code implementation • 2 Mar 2021 • Jin Chen, Tiezheng Ge, Gangwei Jiang, Zhiqiang Zhang, Defu Lian, Kai Zheng
Based on the tree structure, Thompson sampling is adapted with dynamic programming, enabling efficient exploration of the potential ad creatives with the largest CTR.
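To make the exploration principle concrete, here is a minimal flat Beta-Bernoulli Thompson sampling loop over a few creatives. This is only a sketch of the underlying bandit idea; the paper's tree structure, dynamic programming, and all numbers below (CTRs, round count) are assumptions for illustration.

```python
# Flat Thompson sampling sketch for creative selection (assumption: the
# paper's tree/DP adaptation is omitted; this shows only the core loop).
import random

random.seed(0)
true_ctr = [0.02, 0.05, 0.03]   # hidden CTRs of 3 hypothetical creatives
alpha = [1.0] * 3               # Beta posterior: 1 + observed clicks
beta = [1.0] * 3                # Beta posterior: 1 + observed non-clicks

for _ in range(5000):
    # Draw one plausible CTR per creative from its posterior,
    # then display the creative whose draw is largest.
    draws = [random.betavariate(alpha[i], beta[i]) for i in range(3)]
    k = max(range(3), key=lambda i: draws[i])
    click = random.random() < true_ctr[k]   # simulated user feedback
    alpha[k] += click
    beta[k] += 1 - click

# Posterior-mean estimate of the best creative after exploration.
best = max(range(3), key=lambda i: alpha[i] / (alpha[i] + beta[i]))
```

Posterior sampling concentrates impressions on the creative with the highest estimated CTR while still occasionally probing the others, which is the exploration/exploitation trade-off the tree-based version scales up.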
1 code implementation • 28 Feb 2021 • Jin Chen, Ju Xu, Gangwei Jiang, Tiezheng Ge, Zhiqiang Zhang, Defu Lian, Kai Zheng
However, interactions between creative elements may be more complex than the inner product, and the FM-estimated CTR may be of high variance due to limited feedback.
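The "inner product" interaction the passage refers to is the second-order term of a factorization machine, which can be computed in O(nk) rather than by summing all pairs. A minimal sketch, with illustrative sizes and element names not taken from the paper:

```python
# FM second-order term: sum over i<j of <v_i, v_j> * x_i * x_j,
# computed via the standard O(n k) identity. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(4, 3))          # one 3-d latent factor per element
x = np.array([1.0, 0.0, 1.0, 1.0])   # active elements of one creative

# 0.5 * (||sum_i v_i x_i||^2 - sum_i ||v_i||^2 x_i^2)
linear = V.T @ x
pairwise = 0.5 * (linear @ linear - ((V ** 2).T @ (x ** 2)).sum())
```

Because every pair of elements interacts only through this fixed inner product of latent vectors, the model cannot capture richer (e.g. higher-order or non-bilinear) dependencies between creative elements, which is the limitation the sentence points to.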