no code implementations • CCL 2020 • Ting Jiang, Bing Xu, Tiejun Zhao, Sheng Li
In the first layer, we propose a convolutional self-attention network (CAN) to extract textual features of utterances.
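The idea of combining local convolution with self-attention can be sketched as follows. This is an illustrative numpy sketch of the general pattern (local n-gram features followed by global attention), not the paper's actual CAN architecture; all names and shapes are assumptions.

```python
import numpy as np

def conv1d_features(x, w):
    """Depthwise 1-D convolution over the token axis ('same' padding)
    to capture local context. x: (seq_len, d), w: (k, d) kernel."""
    k, d = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([(xp[i:i + k] * w).sum(axis=0) for i in range(x.shape[0])])

def self_attention(x):
    """Scaled dot-product self-attention (single head, no projections)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 8))     # 5 tokens, dim 8
kernel = rng.standard_normal((3, 8))     # window of 3 tokens
local = conv1d_features(tokens, kernel)  # local n-gram features
out = self_attention(local)              # global token interactions
print(out.shape)  # (5, 8)
```

The convolution supplies the locality bias that plain self-attention lacks, which is the usual motivation for hybrids of this kind.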
no code implementations • 15 Nov 2024 • Xinyi Zhou, Danlan Huang, Zhixin Qi, Liang Zhang, Ting Jiang
Deep joint source-channel coding (DeepJSCC) has shown promise in wireless transmission of text, speech, and images within the realm of semantic communication.
no code implementations • 23 Aug 2024 • Zheng Gao, Ting Jiang, Mingming Zhang, Hao Wu, Ming Tang
By encoding semantically similar symbols to adjacent frequencies, the system's noise tolerance is effectively improved, facilitating accurate sentiment analysis.
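One simple way to realize "semantically similar symbols on adjacent frequencies" is to order symbols along a dominant semantic axis and assign consecutive subcarrier indices in that order, so a small frequency error lands on a semantically similar neighbour. This is a hypothetical numpy sketch of that ordering idea, not the paper's actual encoder.

```python
import numpy as np

def semantic_frequency_mapping(embeddings):
    """Order symbols along their first principal component and assign
    them to consecutive subcarrier indices."""
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]               # 1-D semantic coordinate
    order = np.argsort(scores)              # semantic ordering of symbols
    freq_of_symbol = np.empty(len(order), dtype=int)
    freq_of_symbol[order] = np.arange(len(order))
    return freq_of_symbol                   # symbol i -> subcarrier index

rng = np.random.default_rng(1)
emb = rng.standard_normal((8, 4))           # 8 symbols, 4-dim embeddings
print(semantic_frequency_mapping(emb))      # a permutation of 0..7
```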
no code implementations • 8 Aug 2024 • Xiaole Zhao, Linze Li, Chengxing Xie, XiaoMing Zhang, Ting Jiang, Wenjie Lin, Shuaicheng Liu, Tianrui Li
Transformer-based deep models for single image super-resolution (SISR) have greatly improved the performance of lightweight SISR tasks in recent years.
1 code implementation • 23 Jul 2024 • Jinting Luo, Ru Li, Chengzhi Jiang, XiaoMing Zhang, Mingyan Han, Ting Jiang, Haoqiang Fan, Shuaicheng Liu
Specifically, we propose a parallel UNets architecture: 1) the local branch performs the patch-based noise estimation in the diffusion process, and 2) the global branch recovers the low-resolution shadow-free images.
1 code implementation • 17 Jul 2024 • Ting Jiang, Minghui Song, Zihan Zhang, Haizhen Huang, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang
We propose a single modality training approach for E5-V, where the model is trained exclusively on text pairs.
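Contrastive training on text pairs of the kind described here typically minimizes an in-batch InfoNCE objective, where each anchor must pick out its own positive against the other positives in the batch. A minimal numpy sketch of that loss (illustrative, not the E5-V training code; the temperature value is an assumption):

```python
import numpy as np

def info_nce(anchors, positives, tau=0.05):
    """In-batch InfoNCE: row i of `anchors` should match row i of
    `positives`; all other rows serve as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / tau                   # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))      # diagonal = correct pairs

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 16))
loss_matched = info_nce(a, a)                # identical pairs: low loss
loss_random = info_nce(a, rng.standard_normal((4, 16)))
print(loss_matched < loss_random)  # True
```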
1 code implementation • 20 May 2024 • Ting Jiang, Shaohan Huang, Shengyue Luo, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang
Low-rank adaptation is a popular parameter-efficient fine-tuning method for large language models.
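The core of low-rank adaptation is to freeze the pretrained weight and train only a low-rank additive update. A minimal numpy sketch of a LoRA forward pass, assuming the standard W + (alpha/r)·BA parameterization with B zero-initialized (shapes and values are illustrative):

```python
import numpy as np

def lora_forward(x, w, a, b, alpha=16, r=4):
    """Frozen weight w plus low-rank update b @ a, scaled by alpha / r.
    Only a and b are trained, cutting trainable parameters from
    out*in down to r*(out+in)."""
    scaling = alpha / r
    return x @ w.T + scaling * (x @ a.T) @ b.T

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 64))            # batch of 2, hidden size 64
w = rng.standard_normal((64, 64))           # frozen pretrained weight
a = rng.standard_normal((4, 64)) * 0.01     # rank-4 down-projection
b = np.zeros((64, 4))                       # zero-init: no initial change
y = lora_forward(x, w, a, b)
print(np.allclose(y, x @ w.T))  # True: zero B leaves the model unchanged
```

Zero-initializing B means fine-tuning starts exactly at the pretrained model, which is why this initialization is the common convention.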
no code implementations • 16 Apr 2024 • Wenjie Lin, Zhen Liu, Chengzhi Jiang, Mingyan Han, Ting Jiang, Shuaicheng Liu
In this paper, we address the Bracket Image Restoration and Enhancement (BracketIRE) task using a novel framework, which requires restoring a high-quality high dynamic range (HDR) image from a sequence of noisy, blurred, and low dynamic range (LDR) multi-exposure RAW inputs.
1 code implementation • 14 Jan 2024 • Ting Jiang, Shaohan Huang, Shengyue Luo, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang
To enhance the domain-specific capabilities of large language models, continued pre-training on a domain-specific corpus is a prevalent method.
1 code implementation • ICCV 2023 • Ting Jiang, Chuan Wang, Xinpeng Li, Ru Li, Haoqiang Fan, Shuaicheng Liu
In this paper, we introduce a new approach for high-quality multi-exposure image fusion (MEF).
no code implementations • 25 Aug 2023 • Ting Jiang, Zheng Gao, Yizhao Chen, Zihe Hu, Ming Tang
To comprehensively assess optical fiber communication system conditions, it is essential to jointly estimate four critical impairments: nonlinear signal-to-noise ratio (SNRNL), optical signal-to-noise ratio (OSNR), chromatic dispersion (CD), and differential group delay (DGD).
1 code implementation • 31 Jul 2023 • Ting Jiang, Shaohan Huang, Zhongzhi Luan, Deqing Wang, Fuzhen Zhuang
We also fine-tune LLMs with the current contrastive learning approach; the 2.7B OPT model, incorporating our prompt-based method, surpasses the 4.8B ST5 and achieves new state-of-the-art results on STS tasks.
Ranked #1 on Semantic Textual Similarity on STS12
1 code implementation • 10 Jul 2023 • Xinpeng Li, Ting Jiang, Haoqiang Fan, Shuaicheng Liu
Our experiments confirm the powerful feature extraction capabilities of Segment Anything and highlight the value of combining spatial-domain and frequency-domain features in IQA tasks.
no code implementations • 27 May 2023 • Hao Geng, Deqing Wang, Fuzhen Zhuang, Xuehua Ming, Chenguang Du, Ting Jiang, Haolong Guo, Rui Liu
To cope with this problem, we propose a Dynamic heterogeneous Graph and Node Importance network (DGNI) learning framework, which fully leverages the dynamic heterogeneous graph and node importance information to predict future citation trends of newly published papers.
1 code implementation • 23 May 2023 • Qi Wu, Mingyan Han, Ting Jiang, Chengzhi Jiang, Jinting Luo, Man Jiang, Haoqiang Fan, Shuaicheng Liu
Deep denoising models require extensive real-world training data, which is challenging to acquire.
no code implementations • 14 Apr 2023 • Lei Yu, Xinpeng Li, Youwei Li, Ting Jiang, Qi Wu, Haoqiang Fan, Shuaicheng Liu
To address this issue, we propose a novel multi-stage lightweight network boosting method, which can enable lightweight networks to achieve outstanding performance.
1 code implementation • 20 Nov 2022 • Ziming Wan, Deqing Wang, Xuehua Ming, Fuzhen Zhuang, Chenguang Du, Ting Jiang, Zhengyang Zhao
To address these problems, we propose a novel Relation-aware Heterogeneous Graph Neural Network with Contrastive Learning (RHCO) for large-scale heterogeneous graph representation learning.
1 code implementation • 12 Oct 2022 • Ting Jiang, Deqing Wang, Fuzhen Zhuang, Ruobing Xie, Feng Xia
These methods, such as movement pruning, use first-order information to prune PLMs while fine-tuning the remaining weights.
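Movement pruning scores each weight by first-order information accumulated during fine-tuning, keeping weights that are moving away from zero. A small numpy sketch of that scoring and thresholding step, with a random stand-in for the accumulated gradients (all values are illustrative):

```python
import numpy as np

def movement_scores(weight, grad_sum):
    """First-order movement score S = -sum_t (dL/dW) * W: weights moving
    away from zero during fine-tuning receive high scores."""
    return -grad_sum * weight

def prune_by_movement(weight, grad_sum, keep_ratio=0.5):
    """Keep the top keep_ratio fraction of weights by movement score."""
    scores = movement_scores(weight, grad_sum)
    k = int(np.ceil(keep_ratio * weight.size))
    threshold = np.sort(scores, axis=None)[-k]
    mask = scores >= threshold
    return weight * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
g = rng.standard_normal((4, 4))   # stand-in for accumulated gradients
pruned, mask = prune_by_movement(w, g, keep_ratio=0.5)
print(mask.sum())  # 8 of 16 weights kept
```

Unlike magnitude pruning, the score depends on the fine-tuning trajectory rather than the pretrained weight values, which is the distinction the snippet above alludes to.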
no code implementations • 25 May 2022 • Eduardo Pérez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Aleš Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, Gaocheng Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Xintao Wang, Haiwei Wu, Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, Dong Gong, Dan Zhu, Mengdi Sun, Guannan Chen, Yang Hu, Haowei Li, Baozhu Zou, Zhen Liu, Wenjie Lin, Ting Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Juan Marín-Vega, Michael Sloth, Peter Schneider-Kamp, Richard Röttger, Chunyang Li, Long Bao, Gang He, Ziyao Xu, Li Xu, Gen Zhan, Ming Sun, Xing Wen, Junlin Li, Shuang Feng, Fei Lei, Rui Liu, Junxiang Ruan, Tianhong Dai, Wei Li, Zhan Lu, Hengyan Liu, Peian Huang, Guangyu Ren, Yonglin Luo, Chang Liu, Qiang Tu, Fangya Li, Ruipeng Gang, Chenghua Li, Jinjing Li, Sai Ma, Chenming Liu, Yizhen Cao, Steven Tel, Barthelemy Heyrman, Dominique Ginhac, Chul Lee, Gahyeon Kim, Seonghyun Park, An Gia Vien, Truong Thanh Nhat Mai, Howoon Yoon, Tu Vo, Alexander Holston, Sheir Zaheer, Chan Y. Park
The challenge is composed of two tracks with an emphasis on fidelity and complexity constraints: in Track 1, participants are asked to optimize objective fidelity scores while satisfying a low-complexity constraint (i.e., solutions cannot exceed a given number of operations).
1 code implementation • 5 May 2022 • Ting Jiang, Deqing Wang, Leilei Sun, Zhongzhi Chen, Fuzhen Zhuang, Qinghong Yang
Existing methods encode label hierarchy in a global view, where label hierarchy is treated as the static hierarchical structure containing all labels.
Multi-Label Text Classification
1 code implementation • 12 Jan 2022 • Ting Jiang, Jian Jiao, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Denvy Deng, Qi Zhang
We propose PromptBERT, a novel contrastive learning method for learning better sentence representation.
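The prompt-based part of this approach wraps the input sentence in a template and reads the representation at the [MASK] position as the sentence embedding. A small sketch of the template-filling step (the template string here is illustrative, and the model call that would consume these tokens is omitted):

```python
def prompt_sentence(sentence, template='This sentence : "[X]" means [MASK] .'):
    """Fill the sentence into a template; the encoder's hidden state at
    the [MASK] position would then serve as the sentence embedding."""
    filled = template.replace("[X]", sentence)
    tokens = filled.split()
    mask_index = tokens.index("[MASK]")
    return tokens, mask_index

tokens, idx = prompt_sentence("cats are cute")
print(tokens[idx])  # [MASK]
```

In the contrastive setup, the same sentence rendered through different templates can supply the positive pairs.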
1 code implementation • 21 Oct 2021 • Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, Qi Zhang
While pre-trained language models have achieved great success on various natural language understanding tasks, how to effectively leverage them into non-autoregressive generation tasks remains a challenge.
8 code implementations • 22 May 2021 • Zhen Liu, Wenjie Lin, Xinpeng Li, Qing Rao, Ting Jiang, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu
In this paper, we present an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging, namely ADNet.
Ranked #5 on Face Alignment on WFW (Extra Data)
1 code implementation • 9 Jan 2021 • Ting Jiang, Deqing Wang, Leilei Sun, Huayi Yang, Zhengyang Zhao, Fuzhen Zhuang
In LightXML, we use generative cooperative networks to recall and rank labels, in which the label recalling part generates negative and positive labels and the label ranking part distinguishes the positive labels from among them.
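The recall-then-rank pattern described above can be sketched as a two-stage pipeline: a cheap recall model proposes a small candidate label set out of a huge label space, and a more expensive ranker scores only those candidates. This is a generic numpy illustration of the pattern, not LightXML's actual networks; the stand-in ranker is hypothetical.

```python
import numpy as np

def recall_then_rank(recall_scores, rank_fn, top_k=5):
    """Stage 1: take the top_k labels by recall score.
    Stage 2: re-order only those candidates with the ranking function."""
    candidates = np.argsort(recall_scores)[-top_k:][::-1]
    return sorted(candidates, key=rank_fn, reverse=True)

rng = np.random.default_rng(0)
scores = rng.random(1000)          # recall scores over 1000 labels
rank_weights = rng.random(1000)    # stand-in for a learned ranker
result = recall_then_rank(scores, lambda label: rank_weights[label])
print(len(result))  # 5
```

Ranking only a handful of recalled candidates is what keeps inference tractable when the label space is extremely large.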