Search Results for author: Yi Chang

Found 139 papers, 64 papers with code

HiTRANS: A Hierarchical Transformer Network for Nested Named Entity Recognition

no code implementations Findings (EMNLP) 2021 Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, Yi Chang

Specifically, we first utilize a two-phase module to generate span representations by aggregating context information based on a bottom-up and top-down transformer network.

named-entity-recognition Named Entity Recognition +3

FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors

no code implementations 20 Mar 2025 Changlong Shi, He Zhao, Bingjie Zhang, Mingyuan Zhou, Dandan Guo, Yi Chang

However, adaptively adjusting aggregation weights while ensuring data security, without requiring additional proxy data, remains a significant challenge.

Federated Learning global-optimization

FedLWS: Federated Learning with Adaptive Layer-wise Weight Shrinking

1 code implementation 19 Mar 2025 Changlong Shi, Jinmeng Li, He Zhao, Dandan Guo, Yi Chang

In Federated Learning (FL), weighted aggregation of local models is conducted to generate a new global model, and the aggregation weights are typically normalized to 1.

Federated Learning
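
The weighted aggregation step that both federated-learning entries above revolve around can be sketched in a few lines. This is a generic FedAvg-style average with the aggregation weights normalized to 1, plus a hypothetical scalar shrinking factor `gamma` loosely inspired by the weight-shrinking idea; it is not the paper's actual FedLWS algorithm, which adapts the factor layer-wise.

```python
import numpy as np

def aggregate(global_model, client_models, client_sizes, gamma=1.0):
    """Weighted aggregation of client models (FedAvg-style sketch).

    `global_model` and each client model are dicts of parameter arrays.
    Weights are normalized to sum to 1; `gamma` is an illustrative
    shrinking factor applied to the aggregated update.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()  # normalize aggregation weights to 1
    new_model = {}
    for name, g_param in global_model.items():
        agg = sum(w * cm[name] for w, cm in zip(weights, client_models))
        # shrink the aggregated update toward the previous global model
        new_model[name] = g_param + gamma * (agg - g_param)
    return new_model
```

With `gamma=1.0` this reduces to plain weighted averaging; smaller values keep the new global model closer to the previous round.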

Bridge Frame and Event: Common Spatiotemporal Fusion for High-Dynamic Scene Optical Flow

no code implementations 10 Mar 2025 Hanyu Zhou, Haonan Wang, Haoyue Liu, Yuxing Duan, Yi Chang, Luxin Yan

In this work, we propose a novel common spatiotemporal fusion between frame and event modalities for high-dynamic scene optical flow, including visual boundary localization and motion correlation fusion.

Optical Flow Estimation

Mixup Model Merge: Enhancing Model Merging Performance through Randomized Linear Interpolation

1 code implementation 21 Feb 2025 Yue Zhou, Yi Chang, Yuan Wu

In conclusion, M$^3$ is a simple yet effective model merging method that significantly enhances the performance of the merged model by randomly generating contribution ratios for two fine-tuned LLMs.

Adversarial Robustness Data Augmentation +1
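
The randomized linear interpolation behind M$^3$ can be illustrated with a minimal sketch. Sampling the contribution ratio from a Beta distribution mirrors the mixup convention the title alludes to, but the sampling scheme and parameter names here are assumptions, not the paper's exact procedure.

```python
import numpy as np

def mixup_merge(model_a, model_b, alpha=2.0, rng=None):
    """Merge two fine-tuned models by randomized linear interpolation.

    Instead of a fixed 0.5/0.5 average, sample the contribution ratio
    `lam` from Beta(alpha, alpha), as in mixup. Models are dicts of
    parameter arrays with matching keys.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # random contribution ratio in (0, 1)
    return {k: lam * model_a[k] + (1.0 - lam) * model_b[k] for k in model_a}
```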

R-LoRA: Random Initialization of Multi-Head LoRA for Multi-Task Learning

1 code implementation 21 Feb 2025 Jinda Liu, Yi Chang, Yuan Wu

Fine-tuning large language models (LLMs) is prohibitively expensive in terms of computational and memory costs.

Multi-Task Learning parameter-efficient fine-tuning

Transfer-Prompting: Enhancing Cross-Task Adaptation in Large Language Models via Dual-Stage Prompts Optimization

1 code implementation 20 Feb 2025 Yupeng Chang, Yi Chang, Yuan Wu

These candidate prompts are refined iteratively, while a scorer LLM evaluates their effectiveness using the multi-dimensional metrics designed in the objective prompts evaluator-a novel contribution in this work that provides a holistic evaluation of prompt quality and task performance.

A Survey on Data Contamination for Large Language Models

1 code implementation 20 Feb 2025 Yuxing Cheng, Yi Chang, Yuan Wu

However, the reliability of performance evaluation has come under scrutiny due to data contamination: the unintended overlap between training and test datasets.

Survey Text Generation

NLoRA: Nyström-Initiated Low-Rank Adaptation for Large Language Models

1 code implementation 20 Feb 2025 Chenlu Guo, Yuan Wu, Yi Chang

We first introduce StructuredLoRA (SLoRA), which investigates adding a small intermediate matrix between the low-rank matrices A and B. Secondly, we propose NyströmLoRA (NLoRA), which leverages Nyström-based initialization for SLoRA to improve its effectiveness and efficiency.

GSM8K Natural Language Understanding +2
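
The structural change SLoRA makes to the standard LoRA update can be written out directly. The shape convention below (A projecting down, B projecting up, and a small r-by-r matrix M between them) is an assumption based on the abstract, not a verified reproduction of the paper's code.

```python
import numpy as np

def slora_delta(A, M, B):
    """Low-rank weight update with a small intermediate matrix.

    Standard LoRA uses delta_W = B @ A with A in R^{r x d_in} and
    B in R^{d_out x r}; the SLoRA idea sketched here inserts a small
    r x r matrix M between them: delta_W = B @ M @ A.
    """
    return B @ M @ A
```

With `M` set to the identity the update reduces to plain LoRA, which is a convenient sanity check; NLoRA's contribution, per the abstract, is a Nyström-based initialization of these factors rather than a different forward form.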

StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction Following

1 code implementation 20 Feb 2025 Jinnan Li, Jinzhe Li, Yue Wang, Yi Chang, Yuan Wu

This structural dependency not only reflects user intent but also establishes a second dimension for instruction following evaluation beyond constraint satisfaction.

Instruction Following

LoRA-GGPO: Mitigating Double Descent in LoRA Fine-Tuning via Gradient-Guided Perturbation Optimization

1 code implementation 20 Feb 2025 Yupeng Chang, Chenlu Guo, Yi Chang, Yuan Wu

By optimizing the sharpness of the loss landscape, LoRA-GGPO guides the model toward flatter minima, mitigating the double descent problem and improving generalization.

Natural Language Understanding parameter-efficient fine-tuning
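
The "optimize the sharpness of the loss landscape" mechanism the abstract describes can be illustrated with a generic sharpness-aware perturbation step: evaluate the loss at a point nudged along the normalized gradient direction, so minimizing it prefers flatter minima. This is a SAM-style sketch, not LoRA-GGPO's actual gradient-guided update.

```python
import numpy as np

def perturbed_loss(w, grad_fn, loss_fn, rho=0.05):
    """Loss evaluated after a gradient-guided perturbation.

    Moves `w` a distance `rho` along the normalized gradient and
    returns the loss there; near a sharp minimum this value rises
    quickly, so minimizing it steers training toward flat regions.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent direction
    return loss_fn(w + eps)
```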

Length-Controlled Margin-Based Preference Optimization without Reference Model

1 code implementation 20 Feb 2025 Gengxu Li, Tingyu Xia, Yi Chang, Yuan Wu

A key innovation of LMPO lies in its Length-Controlled Margin-Based loss function, integrated within the Bradley-Terry framework.
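
A margin term inside a Bradley-Terry preference loss, the framework the abstract names, takes the following generic form. The length-control component that distinguishes LMPO is not reproduced here; this is only the standard margin-augmented loss, with illustrative parameter names.

```python
import math

def margin_bt_loss(score_chosen, score_rejected, margin=0.0, beta=1.0):
    """Bradley-Terry preference loss with a margin (illustrative).

    Penalizes the model unless the chosen response outscores the
    rejected one by at least `margin`: loss = -log sigmoid(logit).
    """
    logit = beta * (score_chosen - score_rejected) - margin
    return -math.log(1.0 / (1.0 + math.exp(-logit)))
```

Raising `margin` makes the objective stricter even when the chosen response already wins, which is the usual motivation for margin-based variants.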

SegRet: An Efficient Design for Semantic Segmentation with Retentive Network

1 code implementation 19 Feb 2025 Zhiyuan Li, Yi Chang, Yuan Wu

With the ongoing advancement of autonomous driving technology and intelligent transportation systems, research into semantic segmentation has become increasingly pivotal.

Autonomous Driving Computational Efficiency +2

A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models

1 code implementation 21 Jan 2025 Qinggang Zhang, Shengyuan Chen, Yuanchen Bei, Zheng Yuan, Huachi Zhou, Zijin Hong, Junnan Dong, Hao Chen, Yi Chang, Xiao Huang

Large language models (LLMs) have demonstrated remarkable capabilities in a wide range of tasks, yet their application to specialized domains remains challenging due to the need for deep expertise.

RAG Text Retrieval

An archaeological Catalog Collection Method Based on Large Vision-Language Models

no code implementations 28 Dec 2024 Honglin Pang, Yi Chang, Tianjing Duan, Xi Yang

Archaeological catalogs, containing key elements such as artifact images, morphological descriptions, and excavation information, are essential for studying artifact evolution and cultural inheritance.

Learning Monocular Depth from Events via Egomotion Compensation

no code implementations 26 Dec 2024 Haitao Meng, Chonghao Zhong, Sheng Tang, Lian JunJia, Wenwei Lin, Zhenshan Bing, Yi Chang, Gang Chen, Alois Knoll

To achieve this, we propose a Focus Cost Discrimination (FCD) module that measures the clarity of edges as an essential indicator of focus level and integrates spatial surroundings to facilitate cost estimation.

Monocular Depth Estimation Motion Compensation

A Survey of RWKV

1 code implementation 19 Dec 2024 Zhiyuan Li, Tingyu Xia, Yi Chang, Yuan Wu

The Receptance Weighted Key Value (RWKV) model offers a novel alternative to the Transformer architecture, merging the benefits of recurrent and attention-based systems.

Natural Language Understanding Survey +1

Solving Continual Offline RL through Selective Weights Activation on Aligned Spaces

no code implementations 21 Oct 2024 Jifeng Hu, Sili Huang, Li Shen, Zhejian Yang, Shengchao Hu, Shisong Tang, Hechang Chen, Yi Chang, DaCheng Tao, Lichao Sun

In the quantized spaces alignment, we leverage vector quantization to align the different state and action spaces of various tasks, facilitating continual training in the same space.

Continual Learning Offline RL +1
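
The vector-quantization step used to align heterogeneous state and action spaces into one shared discrete space can be sketched as a nearest-codebook lookup. The codebook contents and distance choice here are illustrative.

```python
import numpy as np

def vector_quantize(x, codebook):
    """Map each row of `x` to its nearest codebook entry (basic VQ).

    Returns the quantized vectors and the chosen codebook indices;
    different tasks' inputs quantized against the same codebook end
    up in a common discrete space.
    """
    # pairwise distances: (n, 1, d) - (1, k, d) -> (n, k)
    d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx
```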

Large Language Model Evaluation via Matrix Nuclear-Norm

1 code implementation 14 Oct 2024 Yahan Li, Tingyu Xia, Yi Chang, Yuan Wu

While traditional metrics like Matrix Entropy offer valuable insights, they are computationally intensive for large-scale models due to their $O(n^3)$ time complexity with Singular Value Decomposition (SVD).

Computational Efficiency Data Compression +4
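
Both Matrix Entropy and the nuclear norm are functions of a matrix's singular values, and the abstract's complaint is that obtaining them via a full SVD costs $O(n^3)$. The definitions below are the standard textbook ones, not necessarily the paper's exact normalization.

```python
import numpy as np

def matrix_entropy_and_nuclear_norm(X):
    """Compute SVD-based Matrix Entropy and the nuclear norm of X.

    The entropy is the Shannon entropy of the normalized singular
    values; the nuclear norm is their plain sum. Both require the
    SVD here, which is the expensive step the paper works around.
    """
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()                        # normalized singular values
    entropy = -(p * np.log(p + 1e-12)).sum()
    nuclear_norm = s.sum()                 # sum of singular values
    return entropy, nuclear_norm
```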

Rethinking Data Selection at Scale: Random Selection is Almost All You Need

1 code implementation 12 Oct 2024 Tingyu Xia, Bowen Yu, Kai Dang, An Yang, Yuan Wu, Yuan Tian, Yi Chang, Junyang Lin

Supervised fine-tuning (SFT) is crucial for aligning Large Language Models (LLMs) with human instructions.


Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond

1 code implementation 10 Oct 2024 Qi Wang, Jindong Li, Shiqi Wang, Qianli Xing, Runliang Niu, He Kong, Rui Li, Guodong Long, Yi Chang, Chengqi Zhang

Large language models (LLMs) have not only revolutionized the field of natural language processing (NLP) but also have the potential to bring a paradigm shift in many other fields due to their remarkable abilities of language understanding, as well as impressive generalization capabilities and reasoning skills.

Large Language Model Recommendation Systems

Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms

no code implementations 3 Oct 2024 Bin Gu, Xiyuan Wei, Hualin Zhang, Yi Chang, Heng Huang

While the random ZO estimator introduces larger error and makes the convergence analysis more challenging compared to the coordinated ZO estimator, it requires only $\mathcal{O}(1)$ computation, which is significantly less than the $\mathcal{O}(d)$ computation of the coordinated ZO estimator, with $d$ being the dimension of the problem space.
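
The query-complexity trade-off the abstract describes is easy to see in code: the random estimator probes one random direction per gradient estimate, while the coordinated estimator probes every coordinate. These are standard forward-difference zeroth-order estimators, not the paper's specific algorithms.

```python
import numpy as np

def zo_grad_random(f, x, mu=1e-4, rng=None):
    """Random ZO gradient estimator: O(1) function queries per estimate."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)          # single random direction
    return (f(x + mu * u) - f(x)) / mu * u

def zo_grad_coordinated(f, x, mu=1e-4):
    """Coordinate-wise ZO estimator: O(d) queries, lower variance."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):                   # one query per coordinate
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - fx) / mu
    return g
```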

Adverse Weather Optical Flow: Cumulative Homogeneous-Heterogeneous Adaptation

no code implementations 25 Sep 2024 Hanyu Zhou, Yi Chang, Zhiwei Shi, Wending Yan, Gang Chen, Yonghong Tian, Luxin Yan

Under this unified framework, the proposed method can progressively and explicitly transfer knowledge from clean scenes to real adverse weather.

Domain Adaptation Knowledge Distillation +1

XTRUST: On the Multilingual Trustworthiness of Large Language Models

1 code implementation 24 Sep 2024 Yahan Li, Yi Wang, Yi Chang, Yuan Wu

Large language models (LLMs) have demonstrated remarkable capabilities across a range of natural language processing (NLP) tasks, capturing the attention of both practitioners and the broader public.

Ethics Fairness +2

CHBench: A Chinese Dataset for Evaluating Health in Large Language Models

1 code implementation 24 Sep 2024 Chenlu Guo, Nuo Xu, Yi Chang, Yuan Wu

With the rapid development of large language models (LLMs), assessing their performance on health-related inquiries has become increasingly essential.

Misinformation

Continual Diffuser (CoD): Mastering Continual Offline Reinforcement Learning with Experience Rehearsal

1 code implementation 4 Sep 2024 Jifeng Hu, Li Shen, Sili Huang, Zhejian Yang, Hechang Chen, Lichao Sun, Yi Chang, DaCheng Tao

Artificial neural networks, especially recent diffusion-based models, have shown remarkable superiority in gaming, control, and QA systems, where the training tasks' datasets are usually static.

Reinforcement Learning (RL)

SIGMA: Selective Gated Mamba for Sequential Recommendation

2 code implementations 21 Aug 2024 Ziwei Liu, Qidong Liu, Yejing Wang, Wanyu Wang, Pengyue Jia, Maolin Wang, Zitao Liu, Yi Chang, Xiangyu Zhao

In various domains, Sequential Recommender Systems (SRS) have become essential due to their superior capability to discern intricate user preferences.

Mamba Sequential Recommendation +1

An Empirical Examination of Balancing Strategy for Counterfactual Estimation on Time Series

no code implementations 16 Aug 2024 Qiang Huang, Chuizheng Meng, Defu Cao, Biwei Huang, Yi Chang, Yan Liu

Counterfactual estimation from observations represents a critical endeavor in numerous application fields, such as healthcare and finance, with the primary challenge being the mitigation of treatment bias.

counterfactual Time Series

CoSEC: A Coaxial Stereo Event Camera Dataset for Autonomous Driving

no code implementations 16 Aug 2024 Shihan Peng, Hanyu Zhou, Hao Dong, Zhiwei Shi, Haoyue Liu, Yuxing Duan, Yi Chang, Luxin Yan

In this work, we introduce hybrid coaxial event-frame devices to build the multimodal system, and propose a coaxial stereo event camera (CoSEC) dataset for autonomous driving.

Autonomous Driving Optical Flow Estimation

BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models

1 code implementation 8 Aug 2024 Yupeng Chang, Yi Chang, Yuan Wu

Large language models (LLMs) have demonstrated remarkable proficiency across various natural language processing (NLP) tasks.

Diversity Natural Language Understanding +2

Learning on Graphs with Large Language Models(LLMs): A Deep Dive into Model Robustness

no code implementations 16 Jul 2024 Kai Guo, Zewen Liu, Zhikai Chen, Hongzhi Wen, Wei Jin, Jiliang Tang, Yi Chang

To address this gap, our work aims to explore the potential of LLMs in the context of adversarial attacks on graphs.

Long-range Turbulence Mitigation: A Large-scale Dataset and A Coarse-to-fine Framework

no code implementations 11 Jul 2024 Shengqi Xu, Run Sun, Yi Chang, Shuning Cao, Xueyao Xiao, Luxin Yan

Long-range imaging inevitably suffers from atmospheric turbulence with severe geometric distortions due to random refraction of light.

PTaRL: Prototype-based Tabular Representation Learning via Space Calibration

1 code implementation International Conference on Learning Representations 2024 Hangting Ye, Wei Fan, Xiaozhuang Song, Shun Zheng, He Zhao, Dandan Guo, Yi Chang

With the recent success of deep learning, many tabular machine learning (ML) methods based on deep networks (e.g., Transformer, ResNet) have achieved competitive performance on tabular benchmarks.

Representation Learning

FANFOLD: Graph Normalizing Flows-driven Asymmetric Network for Unsupervised Graph-Level Anomaly Detection

1 code implementation29 Jun 2024 Rui Cao, Shijie Xue, Jindong Li, Qi Wang, Yi Chang

We introduce normalizing flows to unsupervised graph-level anomaly detection due to their successful application and superior quality in learning the underlying distribution of samples.

Knowledge Distillation Unsupervised Anomaly Detection

Double Momentum Method for Lower-Level Constrained Bilevel Optimization

no code implementations 25 Jun 2024 Wanli Shi, Yi Chang, Bin Gu

Bilevel optimization (BO) has recently gained prominence in many machine learning applications due to its ability to capture the nested structure inherent in these problems.

Bilevel Optimization

LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising

no code implementations CVPR 2024 Yuxing Duan, Shihan Peng, Lin Zhu, Wei Zhang, Yi Chang, Sheng Zhong, Luxin Yan

Event camera has significant advantages in capturing dynamic scene information while being prone to noise interference, particularly in challenging conditions like low threshold and low illumination.

Denoising

Concept Matching with Agent for Out-of-Distribution Detection

1 code implementation 27 May 2024 YuXiao Lee, Xiaofeng Cao, Jingcai Guo, Wei Ye, Qing Guo, Yi Chang

The remarkable achievements of Large Language Models (LLMs) have captivated the attention of both academia and industry, transcending their initial role in dialogue generation.

Dialogue Generation Out-of-Distribution Detection +1

Language Models can Evaluate Themselves via Probability Discrepancy

1 code implementation 17 May 2024 Tingyu Xia, Bowen Yu, Yuan Wu, Yi Chang, Chang Zhou

In this paper, we initiate our discussion by demonstrating how Large Language Models (LLMs), when tasked with responding to queries, display a more even probability distribution in their answers if they are more adept, as opposed to their less skilled counterparts.

Text Generation

Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom

1 code implementation 6 May 2024 Bo Wang, Jing Ma, Hongzhan Lin, Zhiwei Yang, Ruichao Yang, Yuan Tian, Yi Chang

To detect fake news from a sea of diverse, crowded and even competing narratives, in this paper, we propose a novel defense-based explainable fake news detection framework.

Fake News Detection Language Modeling +2

NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional Stimuli

1 code implementation 5 May 2024 Xu Wang, Cheng Li, Yi Chang, Jindong Wang, Yuan Wu

The results are revealing: NegativePrompt markedly enhances the performance of LLMs, evidenced by relative improvements of 12.89% in Instruction Induction tasks and 46.25% in BIG-Bench tasks.

Emotional Intelligence

CVTGAD: Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection

1 code implementation 3 May 2024 Jindong Li, Qianli Xing, Qi Wang, Yi Chang

In this paper, we propose a novel Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection, namely, CVTGAD.

Anomaly Detection Data Augmentation +1

Seeing Motion at Nighttime with an Event Camera

1 code implementation CVPR 2024 Haoyue Liu, Shihan Peng, Lin Zhu, Yi Chang, Hanyu Zhou, Luxin Yan

In this work, we present a novel nighttime dynamic imaging method with an event camera.

NER

Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for Scene Flow

no code implementations CVPR 2024 Hanyu Zhou, Yi Chang, Zhiwei Shi, Luxin Yan

In this work, we bring the event as a bridge between RGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework for scene flow, which explores a homogeneous space to fuse the cross-modal complementary knowledge for physical interpretation.

JSTR: Joint Spatio-Temporal Reasoning for Event-based Moving Object Detection

no code implementations 12 Mar 2024 Hanyu Zhou, Zhiwei Shi, Hao Dong, Shihan Peng, Yi Chang, Luxin Yan

In spatial reasoning stage, we project the compensated events into the same image coordinate, discretize the timestamp of events to obtain a time image that can reflect the motion confidence, and further segment the moving object through adaptive threshold on the time image.

Motion Compensation Moving Object Detection +3
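
The spatial-reasoning step described above, accumulating event timestamps into a "time image" and thresholding it, can be sketched as follows. The event format, normalization, and the choice of the mean of nonzero pixels as the adaptive threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def time_image(xs, ys, ts, shape):
    """Project events into a time image and threshold it.

    Each event (x, y, t) writes its normalized timestamp at its pixel;
    later events overwrite earlier ones via an elementwise maximum.
    Pixels above an adaptive threshold are flagged as likely motion.
    """
    img = np.zeros(shape)
    t_norm = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9)
    np.maximum.at(img, (ys, xs), t_norm)   # keep the largest timestamp per pixel
    thresh = img[img > 0].mean() if (img > 0).any() else 0.0
    return img, img > thresh
```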

DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning

1 code implementation 27 Feb 2024 Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, Jun Wang

In this work, we investigate the potential of large language models (LLMs) based agents to automate data science tasks, with the goal of comprehending task requirements, then building and training the best-fit machine learning models.

Code Generation

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)

1 code implementation 23 Feb 2024 Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang

In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems on leaking the private retrieval database.

Language Modeling Language Modelling +2

Investigating Out-of-Distribution Generalization of GNNs: An Architecture Perspective

no code implementations 13 Feb 2024 Kai Guo, Hongzhi Wen, Wei Jin, Yaming Guo, Jiliang Tang, Yi Chang

These insights have empowered us to develop a novel GNN backbone model, DGAT, designed to harness the robust properties of both graph self-attention mechanism and the decoupled architecture.

Out-of-Distribution Generalization

ScreenAgent: A Vision Language Model-driven Computer Control Agent

1 code implementation 9 Feb 2024 Runliang Niu, Jindong Li, Shiqi Wang, Yali Fu, Xiyu Hu, Xueyuan Leng, He Kong, Yi Chang, Qi Wang

Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences when completing a variety of daily computer tasks.

Language Modeling Language Modelling

Transductive Reward Inference on Graph

no code implementations 6 Feb 2024 Bohao Qu, Xiaofeng Cao, Qing Guo, Yi Chang, Ivor W. Tsang, Chengqi Zhang

In this study, we present a transductive inference approach on that reward information propagation graph, which enables the effective estimation of rewards for unlabelled data in offline reinforcement learning.

reinforcement-learning Reinforcement Learning

Contrastive Diffuser: Planning Towards High Return States via Contrastive Learning

no code implementations 5 Feb 2024 Yixiang Shan, Zhengbang Zhu, Ting Long, Qifan Liang, Yi Chang, Weinan Zhang, Liang Yin

The performance of offline reinforcement learning (RL) is sensitive to the proportion of high-return trajectories in the offline dataset.

Contrastive Learning D4RL +2

Copyright Protection in Generative AI: A Technical Perspective

no code implementations 4 Feb 2024 Jie Ren, Han Xu, Pengfei He, Yingqian Cui, Shenglai Zeng, Jiankun Zhang, Hongzhi Wen, Jiayuan Ding, Pei Huang, Lingjuan Lyu, Hui Liu, Yi Chang, Jiliang Tang

We examine from two distinct viewpoints: the copyrights pertaining to the source data held by the data owners and those of the generative models maintained by the model builders.

STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition

no code implementations 2 Feb 2024 Yi Chang, Zhao Ren, Zixing Zhang, Xin Jing, Kun Qian, Xi Shao, Bin Hu, Tanja Schultz, Björn W. Schuller

Speech contains rich information on the emotions of humans, and Speech Emotion Recognition (SER) has been an important topic in the area of human-computer interaction.

Adversarial Attack Speech Emotion Recognition

EPSD: Early Pruning with Self-Distillation for Efficient Model Compression

no code implementations 31 Jan 2024 Dong Chen, Ning Liu, Yichen Zhu, Zhengping Che, Rui Ma, Fachao Zhang, Xiaofeng Mou, Yi Chang, Jian Tang

Instead of a simple combination of pruning and SD, EPSD enables the pruned network to favor SD by keeping more distillable weights before training to ensure better distillation of the pruned network.

Knowledge Distillation Network Pruning +1

Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow

no code implementations 31 Jan 2024 Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan

In appearance adaptation, we employ the intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space.

Domain Adaptation Intrinsic Image Decomposition +1

A Survey on Data Augmentation in Large Model Era

1 code implementation 27 Jan 2024 Yue Zhou, Chenlu Guo, Xu Wang, Yi Chang, Yuan Wu

Leveraging large models, these data augmentation techniques have outperformed traditional approaches.

Audio Signal Processing Image Augmentation +2

Prospective Role of Foundation Models in Advancing Autonomous Vehicles

no code implementations 8 Dec 2023 Jianhua Wu, Bingzhao Gao, Jincheng Gao, Jianhao Yu, Hongqing Chu, Qiankun Yu, Xun Gong, Yi Chang, H. Eric Tseng, Hong Chen, Jie Chen

With the development of artificial intelligence and breakthroughs in deep learning, large-scale Foundation Models (FMs), such as GPT, Sora, etc., have achieved remarkable results in many fields including natural language processing and computer vision.

Autonomous Driving Scene Understanding +1

B-Spine: Learning B-Spline Curve Representation for Robust and Interpretable Spinal Curvature Estimation

no code implementations 14 Oct 2023 Hao Wang, Qiang Song, Ruofeng Yin, Rui Ma, Yizhou Yu, Yi Chang

In this paper, we propose B-Spine, a novel deep learning pipeline to learn B-spline curve representation of the spine and estimate the Cobb angles for spinal curvature estimation from low-quality X-ray images.

Image-to-Image Translation

Learning Generalizable Agents via Saliency-Guided Features Decorrelation

no code implementations NeurIPS 2023 Sili Huang, Yanchao Sun, Jifeng Hu, Siyuan Guo, Hechang Chen, Yi Chang, Lichao Sun, Bo Yang

Our experimental results demonstrate that SGFD can generalize well on a wide range of test environments and significantly outperforms state-of-the-art methods in handling both task-irrelevant variations and task-relevant variations.

Reinforcement Learning (RL)

Careful at Estimation and Bold at Exploration

no code implementations 22 Aug 2023 Xing Chen, Yijun Liu, Zhaogeng Liu, Hechang Chen, Hengshuai Yao, Yi Chang

In prior work, it has been shown that policy-based exploration is beneficial for continuous action space in deterministic policy reinforcement learning (DPRL).

MuJoCo

A Survey on Evaluation of Large Language Models

1 code implementation 6 Jul 2023 Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie

Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.

Ethics Survey

1st Solution Places for CVPR 2023 UG$^2$+ Challenge Track 2.2-Coded Target Restoration through Atmospheric Turbulence

1 code implementation 15 Jun 2023 Shengqi Xu, Shuning Cao, Haoyue Liu, Xueyao Xiao, Yi Chang, Luxin Yan

We subsequently select the sharpest set of registered frames by employing a frame selection approach based on image sharpness, and average them to produce an image that is largely free of geometric distortion, albeit with blurriness.

Deblurring Image Registration

1st Solution Places for CVPR 2023 UG$^{\textbf{2}}$+ Challenge Track 2.1-Text Recognition through Atmospheric Turbulence

1 code implementation 15 Jun 2023 Shengqi Xu, Xueyao Xiao, Shuning Cao, Yi Chang, Luxin Yan

In this technical report, we present the solution developed by our team VIELab-HUST for text recognition through atmospheric turbulence in Track 2.1 of the CVPR 2023 UG$^{2}$+ challenge.

Image Registration Optical Flow Estimation

UADB: Unsupervised Anomaly Detection Booster

1 code implementation 3 Jun 2023 Hangting Ye, Zhining Liu, Xinyi Shen, Wei Cao, Shun Zheng, Xiaofan Gui, Huishuai Zhang, Yi Chang, Jiang Bian

This is a challenging task given the heterogeneous model structures and assumptions adopted by existing UAD methods.

Unsupervised Anomaly Detection

A Two-Stage Real Image Deraining Method for GT-RAIN Challenge CVPR 2023 Workshop UG$^{\textbf{2}}$+ Track 3

1 code implementation 13 May 2023 Yun Guo, Xueyao Xiao, Xiaoxiong Wang, Yi Li, Yi Chang, Luxin Yan

Secondly, a transformer-based single image deraining network, Uformer, is pre-trained on a large real rain dataset and then fine-tuned on the pseudo GT to further improve image restoration.

Image Restoration Single Image Deraining +2

Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow

no code implementations 24 Mar 2023 Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan

In motion adaptation, we utilize the flow consistency knowledge to align the cross-domain optical flows into a motion-invariance common space, where the optical flow from clean weather is used as the guidance-knowledge to obtain a preliminary optical flow for adverse weather.

Domain Adaptation Optical Flow Estimation

Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow

no code implementations CVPR 2023 Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan

To handle the practical optical flow under real foggy scenes, in this work, we propose a novel unsupervised cumulative domain adaptation optical flow (UCDA-Flow) framework: depth-association motion adaptation and correlation-alignment motion adaptation.

Domain Adaptation Optical Flow Estimation

A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

1 code implementation 23 Jan 2023 Zhao Ren, Yi Chang, Thanh Tam Nguyen, Yang Tan, Kun Qian, Björn W. Schuller

This work introduces both classic machine learning and deep learning for comparison, and further offer insights about the advances and future research directions in deep learning for heart sound analysis.

Deep Learning

Both Diverse and Realism Matter: Physical Attribute and Style Alignment for Rainy Image Generation

no code implementations ICCV 2023 Changfeng Yu, Shiming Chen, Yi Chang, Yibing Song, Luxin Yan

To solve this dilemma, we propose a physical alignment and controllable generation network (PCGNet) for diverse and realistic rain generation.

Attribute Diversity +2

One-shot Machine Teaching: Cost Very Few Examples to Converge Faster

no code implementations 13 Dec 2022 Chen Zhang, Xiaofeng Cao, Yi Chang, Ivor W. Tsang

Then, relying on the surjective mapping from the teaching set to the parameter, we develop a design strategy for the optimal teaching set under appropriate settings, of which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, are special cases.

FastClass: A Time-Efficient Approach to Weakly-Supervised Text Classification

1 code implementation 11 Dec 2022 Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang

Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data.

text-classification Text Classification +1

Learning Semantic Textual Similarity via Topic-informed Discrete Latent Variables

1 code implementation 7 Nov 2022 Erxin Yu, Lan Du, Yuan Jin, Zhepei Wei, Yi Chang

Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions.

Language Modeling Language Modelling +5

Unsupervised Deraining: Where Asymmetric Contrastive Learning Meets Self-similarity

no code implementations 2 Nov 2022 Yi Chang, Yun Guo, Yuntong Ye, Changfeng Yu, Lin Zhu, XiLe Zhao, Luxin Yan, Yonghong Tian

In addition, considering that existing real rain datasets are of low quality, either small in scale or downloaded from the internet, we collect a large-scale real dataset under various kinds of rainy weather that contains high-resolution rainy images.

Contrastive Learning Rain Removal

Knowledge Transfer For On-Device Speech Emotion Recognition with Neural Structured Learning

1 code implementation 26 Oct 2022 Yi Chang, Zhao Ren, Thanh Tam Nguyen, Kun Qian, Björn W. Schuller

Our experiments demonstrate that training a lightweight SER model on the target dataset with speech samples and graphs can not only produce small SER models, but also enhance the model performance compared to models with speech samples only and those using classic transfer learning strategies.

Speech Emotion Recognition Transfer Learning

A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection

1 code implementation COLING 2022 Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang

Existing fake news detection methods aim to classify a piece of news as true or false and provide veracity explanations, achieving remarkable performances.

Fake News Detection

A Unified Collaborative Representation Learning for Neural-Network based Recommender Systems

no code implementations 19 May 2022 Yuanbo Xu, En Wang, Yongjian Yang, Yi Chang

On the other hand, ME models directly employ inner products as a default loss function metric that cannot project users and items into a proper latent space, which is a methodological disadvantage.

Metric Learning Recommendation Systems +1

Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis

1 code implementation 30 Mar 2022 Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller

Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19.

Sound Classification

Unsupervised Image Deraining: Optimization Model Driven Deep CNN

no code implementations 25 Mar 2022 Changfeng Yu, Yi Chang, Yi Li, XiLe Zhao, Luxin Yan

Consequently, we design an optimization model-driven deep CNN in which the unsupervised loss function of the optimization model is enforced on the proposed network for better generalization.

model Rain Removal

Climate Change & Computer Audition: A Call to Action and Overview on Audio Intelligence to Help Save the Planet

no code implementations 10 Mar 2022 Björn W. Schuller, Alican Akman, Yi Chang, Harry Coppock, Alexander Gebhard, Alexander Kathan, Esther Rituerto-González, Andreas Triantafyllopoulos, Florian B. Pokorny

We categorise potential computer audition applications according to the five elements of earth, water, air, fire, and aether, proposed by the ancient Greeks in their five element theory; this categorisation serves as a framework to discuss computer audition in relation to different ecological aspects.

Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition

no code implementations9 Mar 2022 Yi Chang, Sofiane Laridi, Zhao Ren, Gregory Palmer, Björn W. Schuller, Marco Fisichella

The proposed framework consists of i) federated learning for data privacy, and ii) adversarial training at the training stage and randomisation at the testing stage for model robustness.

Federated Learning Speech Emotion Recognition

Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

1 code implementation CVPR 2022 Lin Zhu, Xiao Wang, Yi Chang, Jianing Li, Tiejun Huang, Yonghong Tian

We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neuron and Membrane Potential (MP) neuron.

Computational Efficiency Event-Based Video Reconstruction +2
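EVSNN is built from Leaky-Integrate-and-Fire (LIF) and Membrane Potential neurons. A minimal discrete-time LIF update can be sketched as follows; the constants `tau`, `v_th`, and `v_reset` are illustrative assumptions, not values from the paper.

```python
def lif_step(v, inp, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete LIF update: leak toward the input, spike if the threshold is crossed."""
    v = v + (inp - v) / tau        # leaky integration of the input current
    if v >= v_th:                  # membrane potential crosses the firing threshold
        return v_reset, 1          # reset the potential and emit a spike
    return v, 0

# Drive the neuron with a constant input current and record its spike train.
v, spikes = 0.0, []
for _ in range(10):
    v, s = lif_step(v, 1.5)
    spikes.append(s)
# The constant drive produces a regular alternating spike pattern.
assert spikes == [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```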

Physically Disentangled Intra- and Inter-Domain Adaptation for Varicolored Haze Removal

1 code implementation CVPR 2022 Yi Li, Yi Chang, Yan Gao, Changfeng Yu, Luxin Yan

Consequently, we perform inter-domain adaptation between the synthetic and real images by mutually exchanging the background and other two components.

Domain Adaptation Image Dehazing

IMBENS: Ensemble Class-imbalanced Learning in Python

1 code implementation24 Nov 2021 Zhining Liu, Jian Kang, Hanghang Tong, Yi Chang

imbalanced-ensemble, abbreviated as imbens, is an open-source Python toolbox for leveraging the power of ensemble learning to address the class imbalance problem.

Ensemble Learning

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks

no code implementations28 Oct 2021 Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang

In this paper, we investigate GNNs through the lens of weight and feature loss landscapes, i.e., how the loss changes with respect to model weights and node features, respectively.

Orthogonal Graph Neural Networks

1 code implementation23 Sep 2021 Kai Guo, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, Xin Wang

Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.

Attribute Graph Classification

Parts2Words: Learning Joint Embedding of Point Clouds and Texts by Bidirectional Matching between Parts and Words

1 code implementation CVPR 2023 Chuan Tang, Xi Yang, Bojian Wu, Zhizhong Han, Yi Chang

Specifically, we first segment the point clouds into parts, and then leverage optimal transport method to match parts and words in an optimized feature space, where each part is represented by aggregating features of all points within it and each word is abstracted by its contextual information.

Retrieval Text Matching
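The part-word matching step relies on optimal transport. A minimal entropy-regularised Sinkhorn solver can be sketched as below; the tiny cost matrix, uniform marginals, and `eps` value are illustrative assumptions, not the paper's learned feature-space costs.

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=200):
    """Entropy-regularised OT: return a transport plan matching marginals a and b."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):  # alternating scaling of rows and columns
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Two "parts" vs. two "words": part 0 is cheap to match with word 0, and so on.
cost = [[0.0, 1.0],
        [1.0, 0.0]]
plan = sinkhorn(cost, a=[0.5, 0.5], b=[0.5, 0.5])
# Transport mass concentrates on the low-cost (diagonal) pairings.
assert plan[0][0] > plan[0][1] and plan[1][1] > plan[1][0]
```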

Image Restoration for Remote Sensing: Overview and Toolbox

no code implementations1 Jul 2021 Behnood Rasti, Yi Chang, Emanuele Dalsasso, Loïc Denis, Pedram Ghamisi

Additionally, this review paper accompanies a toolbox to provide a platform to encourage interested students and researchers in the field to further explore the restoration techniques and fast-forward the community.

Image Restoration

Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for Multi-Dimensional Image Recovery

no code implementations29 May 2021 Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yi Chang, Michael K. Ng, Chao Li

Recently, transform-based tensor nuclear norm minimization methods have been considered for capturing low-rank tensor structures to recover third-order tensors in multi-dimensional image processing applications.

Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation

1 code implementation28 May 2021 Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, Yi Chang

Based on it, a more robust doubly robust (MRDR) estimator has been proposed to further reduce its variance while retaining its double robustness.

counterfactual Imputation +2

Closing the Loop: Joint Rain Generation and Removal via Disentangled Image Translation

no code implementations CVPR 2021 Yuntong Ye, Yi Chang, Hanyu Zhou, Luxin Yan

Existing deep learning-based image deraining methods have achieved promising performance on synthetic rainy images, but they typically rely on pairs of sharp images and simulated rainy counterparts.

Disentanglement Rain Removal +1

Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks

1 code implementation22 Feb 2021 Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang

We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks.

Adversarial Active Learning based Heterogeneous Graph Neural Network for Fake News Detection

no code implementations27 Jan 2021 Yuxiang Ren, Bo Wang, Jiawei Zhang, Yi Chang

AA-HGNN utilizes an active learning framework to enhance learning performance, especially when facing the paucity of labeled data.

Active Learning Fake News Detection +3

ToHRE: A Top-Down Classification Strategy with Hierarchical Bag Representation for Distantly Supervised Relation Extraction

no code implementations COLING 2020 Erxin Yu, Wenjuan Han, Yuan Tian, Yi Chang

Distantly Supervised Relation Extraction (DSRE) has proven to be effective to find relational facts from texts, but it still suffers from two main problems: the wrong labeling problem and the long-tail problem.

Classification Relation +1

MESA: Boost Ensemble Imbalanced Learning with MEta-SAmpler

2 code implementations NeurIPS 2020 Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang

This makes MESA generally applicable to most of the existing learning models and the meta-sampler can be efficiently applied to new tasks.

imbalanced classification Meta-Learning

Unsupervised Hyperspectral Mixed Noise Removal Via Spatial-Spectral Constrained Deep Image Prior

no code implementations22 Aug 2020 Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yu-Bang Zheng, Yi Chang

Recently, convolutional neural network (CNN)-based methods have been proposed for hyperspectral image (HSI) denoising.

Denoising

Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion

1 code implementation30 Apr 2020 Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang

In experiments, we achieve state-of-the-art performance on three benchmarks and a zero-shot dataset for link prediction, with highlights of inference costs reduced by 1-2 orders of magnitude compared to a textual encoding method.

Graph Embedding Link Prediction +1

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

2 code implementations17 Jan 2020 Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi Chang

In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which is a nonlinear feature selection method.

Descriptive feature selection
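GraphLIME's HSIC Lasso builds on the Hilbert-Schmidt Independence Criterion. A minimal biased empirical HSIC estimate with Gaussian kernels can be sketched as follows; scalar inputs and `sigma=1.0` are simplifying assumptions for illustration.

```python
import math

def gaussian_gram(x, sigma=1.0):
    return [[math.exp(-((xi - xj) ** 2) / (2 * sigma ** 2)) for xj in x] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC(x, y) = trace(K H L H) / (n - 1)^2."""
    n = len(x)
    K, L = gaussian_gram(x, sigma), gaussian_gram(y, sigma)
    # H is the centering matrix I - (1/n) 11^T.
    H = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]
    KHLH = matmul(matmul(matmul(K, H), L), H)
    return sum(KHLH[i][i] for i in range(n)) / (n - 1) ** 2

x = [0.0, 1.0, 2.0, 3.0, 4.0]
dependent = hsic(x, [2 * xi for xi in x])   # y is a function of x: high dependence
constant = hsic(x, [1.0] * 5)               # y carries no information: HSIC is zero
assert dependent > 1e-3 and abs(constant) < 1e-9
```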

Self-paced Ensemble for Highly Imbalanced Massive Data Classification

1 code implementation8 Sep 2019 Zhining Liu, Wei Cao, Zhifeng Gao, Jiang Bian, Hechang Chen, Yi Chang, Tie-Yan Liu

To tackle this problem, we conduct deep investigations into the nature of class imbalance, which reveal that not only the disproportion between classes, but also other difficulties embedded in the nature of the data, especially noise and class overlapping, prevent us from learning effective classifiers.

Classification General Classification +1

Classical Chinese Sentence Segmentation for Tomb Biographies of Tang Dynasty

no code implementations28 Aug 2019 Chao-Lin Liu, Yi Chang

Chinese characters that are and are not followed by a punctuation mark are classified into two categories.

BIG-bench Machine Learning Sentence +1
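The two-category labelling described above (characters that are and are not followed by a punctuation mark) can be sketched as follows; the punctuation set and the example sentence are illustrative assumptions, not the paper's corpus.

```python
# Build per-character labels from a punctuated training sentence:
# label 1 if the character is followed by a punctuation mark, else 0.
PUNCT = set("，。；：？！")

def make_labels(text):
    chars, labels = [], []
    for ch in text:
        if ch in PUNCT:
            if labels:
                labels[-1] = 1   # the previous character precedes a mark
        else:
            chars.append(ch)
            labels.append(0)
    return chars, labels

chars, labels = make_labels("學而時習之，不亦說乎。")
assert "".join(chars) == "學而時習之不亦說乎"
assert labels == [0, 0, 0, 0, 1, 0, 0, 0, 1]
```

A segmenter trained on such labels can then restore sentence boundaries in unpunctuated classical text.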

Jointly Modeling Hierarchical and Horizontal Features for Relational Triple Extraction

no code implementations23 Aug 2019 Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang

In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to utilize such interactions with self-learned entity features through an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples.

Decoder Entity Extraction using GAN +3

Generative Question Refinement with Deep Reinforcement Learning in Retrieval-based QA System

1 code implementation13 Aug 2019 Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu

To improve the quality and retrieval performance of the generated questions, we make two major improvements: 1) To better encode the semantics of ill-formed questions, we enrich the representation of questions with character embedding and the recent proposed contextual word embedding such as BERT, besides the traditional context-free word embeddings; 2) To make it capable to generate desired questions, we train the model with deep reinforcement learning techniques that considers an appropriate wording of the generation as an immediate reward and the correlation between generated question and answer as time-delayed long-term rewards.

Deep Reinforcement Learning Question Answering +4

JIM: Joint Influence Modeling for Collective Search Behavior

no code implementations1 Mar 2019 Shubhra Kanti Karmaker Santu, Liangda Li, Yi Chang, ChengXiang Zhai

This assumption is unrealistic as there are many correlated events in the real world which influence each other and thus, would pose a joint influence on the user search behavior rather than posing influence independently.

Gradient-Coherent Strong Regularization for Deep Neural Networks

no code implementations20 Nov 2018 Dae Hoon Park, Chiu Man Ho, Yi Chang, Huaqing Zhang

However, we observe that imposing strong L1 or L2 regularization with stochastic gradient descent on deep neural networks easily fails, which limits the generalization ability of the underlying neural networks.

L2 Regularization

Adversarial Sampling and Training for Semi-Supervised Information Retrieval

no code implementations9 Nov 2018 Dae Hoon Park, Yi Chang

To solve the problems at the same time, we propose an adversarial sampling and training framework to learn ad-hoc retrieval models with implicit feedback.

Information Retrieval Question Answering +1

Rain Streak Removal for Single Image via Kernel Guided CNN

no code implementations26 Aug 2018 Ye-Tao Wang, Xi-Le Zhao, Tai-Xiang Jiang, Liang-Jian Deng, Yi Chang, Ting-Zhu Huang

Then, our framework starts by learning the motion blur kernel, which is determined by two factors, angle and length, with a plain neural network, denoted as the parameter net, from a patch of the texture component.
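A motion blur kernel determined by an angle and a length, as the snippet describes, can be rasterised as below; the sampling density and kernel size are arbitrary illustrative choices, not the paper's parameterisation.

```python
import math

def motion_blur_kernel(length, angle_deg, size):
    """Rasterise a line of the given length and angle into a normalised size x size kernel."""
    k = [[0.0] * size for _ in range(size)]
    c = size // 2
    rad = math.radians(angle_deg)
    steps = max(1, int(length * 4))          # oversample so the line has no gaps
    for t in range(steps + 1):
        r = (t / steps - 0.5) * length       # sample along [-length/2, length/2]
        x = int(round(c + r * math.cos(rad)))
        y = int(round(c - r * math.sin(rad)))
        if 0 <= x < size and 0 <= y < size:
            k[y][x] = 1.0
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]   # normalise to unit mass

k = motion_blur_kernel(length=4, angle_deg=0, size=5)
# A horizontal blur puts all mass on the centre row, summing to 1.
assert abs(sum(map(sum, k)) - 1.0) < 1e-9
assert all(v == 0.0 for i, row in enumerate(k) for v in row if i != 2)
```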

Abstract Meaning Representation for Paraphrase Detection

no code implementations NAACL 2018 Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang

Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, denoting only its meaning in a canonical form.

Abstract Meaning Representation AMR Parsing +1

Contextual and Position-Aware Factorization Machines for Sentiment Classification

no code implementations18 Jan 2018 Shuai Wang, Mianwei Zhou, Geli Fei, Yi Chang, Bing Liu

While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence).

Classification General Classification +6

Achieving Strong Regularization for Deep Neural Networks

no code implementations ICLR 2018 Dae Hoon Park, Chiu Man Ho, Yi Chang

L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.

L2 Regularization

Transformed Low-Rank Model for Line Pattern Noise Removal

no code implementations ICCV 2017 Yi Chang, Luxin Yan, Sheng Zhong

This paper addresses the problem of line pattern noise removal from a single image, such as rain streak, hyperspectral stripe and so on.

Weighted Low-rank Tensor Recovery for Hyperspectral Image Restoration

no code implementations1 Sep 2017 Yi Chang, Luxin Yan, Houzhang Fang, Sheng Zhong, Zhijun Zhang

To overcome these limitations, in this work, we propose a unified low-rank tensor recovery model for comprehensive HSI restoration tasks, in which non-local similarity between spectral-spatial cubes and spectral correlation are simultaneously captured by third-order tensors.

Deblurring Denoising +3

Hyper-Laplacian Regularized Unidirectional Low-Rank Tensor Recovery for Multispectral Image Denoising

no code implementations CVPR 2017 Yi Chang, Luxin Yan, Sheng Zhong

Recent low-rank based matrix/tensor recovery methods have been widely explored in multispectral images (MSI) denoising.

Image Denoising

Attributed Network Embedding for Learning in a Dynamic Environment

no code implementations6 Jun 2017 Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu

To the best of our knowledge, we are the first to tackle this problem, which poses the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, necessitating a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly.

Attribute Clustering +3

Streaming Recommender Systems

no code implementations21 Jul 2016 Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang

The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios.

Recommendation Systems

Scaling Submodular Maximization via Pruned Submodularity Graphs

no code implementations1 Jun 2016 Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin

We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization.

Video Summarization
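For context, the baseline that pruning methods such as submodular sparsification accelerate is the standard greedy algorithm for monotone submodular maximization. A max-coverage instance (the sets below are invented for illustration) can be sketched as:

```python
def greedy_max_coverage(sets, k):
    """Standard greedy for the (submodular) max-coverage objective: pick k sets."""
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the unchosen set with the largest marginal gain.
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered) if i not in chosen else -1)
        if len(sets[best] - covered) == 0:
            break                            # no marginal gain left anywhere
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
# Greedy picks the largest set first, then the one adding the most new elements.
assert chosen == [2, 0] and covered == {1, 2, 3, 4, 5, 6, 7}
```

This greedy scheme enjoys the classic (1 - 1/e) approximation guarantee; pruning the candidate ground set is what reduces its cost at scale.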

A Survey of Signed Network Mining in Social Media

no code implementations24 Nov 2015 Jiliang Tang, Yi Chang, Charu Aggarwal, Huan Liu

Many real-world relations can be represented by signed networks with positive and negative links, as a result of which signed network analysis has attracted increasing attention from multiple disciplines.

Survey

Convex Factorization Machine for Regression

1 code implementation4 Jul 2015 Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan Wimalawarne, Suleiman A. Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang

We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs).

regression
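For reference, the standard (non-convex) degree-2 FM prediction that CFM convexifies can be sketched as below, including the usual O(nk) pairwise computation trick; the weights and features are illustrative, not fitted values.

```python
def fm_predict(x, w0, w, V):
    """Degree-2 factorization machine: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j."""
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    # O(n k) trick: 0.5 * sum_f ((sum_i v_if x_i)^2 - sum_i (v_if x_i)^2)
    k = len(V[0])
    pair = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s2 = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        pair += 0.5 * (s * s - s2)
    return linear + pair

x = [1.0, 2.0, 0.0]
w0, w = 0.5, [0.1, -0.2, 0.3]
V = [[1.0, 0.0], [0.5, 1.0], [0.0, 2.0]]
y = fm_predict(x, w0, w, V)
# Only the (0, 1) pair is active: <V_0, V_1> * x_0 * x_1 = 0.5 * 1 * 2 = 1.0.
assert abs(y - (0.5 + 0.1 - 0.4 + 1.0)) < 1e-9
```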

Consistent Collective Matrix Completion under Joint Low Rank Structure

no code implementations5 Dec 2014 Suriya Gunasekar, Makoto Yamada, Dawei Yin, Yi Chang

We address the collective matrix completion problem of jointly recovering a collection of matrices with shared structure from partial (and potentially noisy) observations.

Matrix Completion

Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections

no code implementations19 Apr 2013 Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang

We consider stochastic strongly convex optimization with a complex inequality constraint.
