no code implementations • Findings (EMNLP) 2021 • Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, Yi Chang
Specifically, we first utilize a two-phase module to generate span representations by aggregating context information based on a bottom-up and top-down transformer network.
no code implementations • 20 Mar 2025 • Changlong Shi, He Zhao, Bingjie Zhang, Mingyuan Zhou, Dandan Guo, Yi Chang
However, adaptively adjusting aggregation weights while ensuring data security, without requiring additional proxy data, remains a significant challenge.
1 code implementation • 19 Mar 2025 • Changlong Shi, Jinmeng Li, He Zhao, Dandan Guo, Yi Chang
In Federated Learning (FL), weighted aggregation of local models is conducted to generate a new global model, and the aggregation weights are typically normalized so that they sum to 1.
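As a point of reference, a minimal FedAvg-style sketch of the normalized weighted aggregation described above; this is not the paper's adaptive weighting scheme, and all names are illustrative:

```python
import torch

def aggregate(local_states, weights):
    """Weighted aggregation of local model state dicts (FedAvg-style sketch).

    Weights are normalized so they sum to 1 before averaging, matching the
    convention described above.
    """
    total = sum(weights)
    norm_w = [w / total for w in weights]
    keys = local_states[0].keys()
    return {
        k: sum(w * s[k].float() for w, s in zip(norm_w, local_states))
        for k in keys
    }
```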
no code implementations • 10 Mar 2025 • Hanyu Zhou, Haonan Wang, Haoyue Liu, Yuxing Duan, Yi Chang, Luxin Yan
In this work, we propose a novel common spatiotemporal fusion between frame and event modalities for high-dynamic scene optical flow, including visual boundary localization and motion correlation fusion.
1 code implementation • 21 Feb 2025 • Yue Zhou, Yi Chang, Yuan Wu
In conclusion, M$^3$ is a simple yet effective model merging method that significantly enhances the performance of the merged model by randomly generating contribution ratios for two fine-tuned LLMs.
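A minimal sketch of the random-ratio merging idea described above; drawing a single scalar ratio per merge is an assumption for illustration, and M$^3$ itself may sample ratios differently:

```python
import torch

def random_ratio_merge(state_a, state_b, seed=0):
    """Merge two fine-tuned model state dicts with a randomly drawn
    contribution ratio (sketch of the M^3 idea described above)."""
    gen = torch.Generator().manual_seed(seed)
    alpha = torch.rand(1, generator=gen).item()  # contribution ratio in [0, 1]
    return {k: alpha * state_a[k] + (1.0 - alpha) * state_b[k] for k in state_a}
```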
1 code implementation • 21 Feb 2025 • Jinda Liu, Yi Chang, Yuan Wu
Fine-tuning large language models (LLMs) is prohibitively expensive in terms of computational and memory costs.
1 code implementation • 20 Feb 2025 • Yupeng Chang, Yi Chang, Yuan Wu
These candidate prompts are refined iteratively, while a scorer LLM evaluates their effectiveness using the multi-dimensional metrics designed in the objective prompts evaluator, a novel contribution in this work that provides a holistic evaluation of prompt quality and task performance.
1 code implementation • 20 Feb 2025 • Yuxing Cheng, Yi Chang, Yuan Wu
However, the reliability of performance evaluation has come under scrutiny due to data contamination, the unintended overlap between training and test datasets.
1 code implementation • 20 Feb 2025 • Chenlu Guo, Yuan Wu, Yi Chang
We first introduce StructuredLoRA (SLoRA), which investigates adding a small intermediate matrix between the low-rank matrices A and B. Second, we propose NyströmLoRA (NLoRA), which leverages Nyström-based initialization for SLoRA to improve its effectiveness and efficiency.
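A minimal sketch of the SLoRA idea as described above, with an r-by-r intermediate matrix M inserted between the low-rank factors; the Nystrom-based initialization of NLoRA is omitted, and all hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

class SLoRALinear(nn.Module):
    """LoRA-style adapter with a small intermediate matrix M between the
    low-rank factors A and B, i.e. delta_W = B M A (sketch)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.M = nn.Parameter(torch.eye(r))          # intermediate r x r matrix
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        delta = x @ self.A.T @ self.M.T @ self.B.T   # computes (B M A) x
        return self.base(x) + self.scale * delta
```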
1 code implementation • 20 Feb 2025 • Jinnan Li, Jinzhe Li, Yue Wang, Yi Chang, Yuan Wu
This structural dependency not only reflects user intent but also establishes a second dimension for instruction following evaluation beyond constraint satisfaction.
1 code implementation • 20 Feb 2025 • Yupeng Chang, Chenlu Guo, Yi Chang, Yuan Wu
By optimizing the sharpness of the loss landscape, LoRA-GGPO guides the model toward flatter minima, mitigating the double descent problem and improving generalization.
Natural Language Understanding • parameter-efficient fine-tuning
1 code implementation • 20 Feb 2025 • Gengxu Li, Tingyu Xia, Yi Chang, Yuan Wu
A key innovation of LMPO lies in its Length-Controlled Margin-Based loss function, integrated within the Bradley-Terry framework.
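A sketch of a Bradley-Terry preference loss with a length-dependent margin, in the spirit of the description above; the exact form of LMPO's margin and the hyperparameters are assumptions here:

```python
import torch
import torch.nn.functional as F

def length_controlled_bt_loss(logp_chosen, logp_rejected,
                              len_chosen, len_rejected,
                              beta=0.1, gamma=0.01):
    """Bradley-Terry preference loss with a length-controlled margin (sketch).

    The gamma * (len_chosen - len_rejected) term discounts preference wins
    that may stem purely from response length.
    """
    margin = gamma * (len_chosen - len_rejected)
    logits = beta * (logp_chosen - logp_rejected) - margin
    return -F.logsigmoid(logits).mean()
```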
1 code implementation • 19 Feb 2025 • Zhiyuan Li, Yi Chang, Yuan Wu
With the ongoing advancement of autonomous driving technology and intelligent transportation systems, research into semantic segmentation has become increasingly pivotal.
1 code implementation • 21 Jan 2025 • Qinggang Zhang, Shengyuan Chen, Yuanchen Bei, Zheng Yuan, Huachi Zhou, Zijin Hong, Junnan Dong, Hao Chen, Yi Chang, Xiao Huang
Large language models (LLMs) have demonstrated remarkable capabilities in a wide range of tasks, yet their application to specialized domains remains challenging due to the need for deep expertise.
no code implementations • 12 Jan 2025 • Peng Zheng, Linzhi Huang, Yizhou Yu, Yi Chang, Yilin Wang, Rui Ma
However, the high computational cost of NeRF presents challenges for synthesizing high-resolution (HR) images.
no code implementations • 28 Dec 2024 • Honglin Pang, Yi Chang, Tianjing Duan, Xi Yang
Archaeological catalogs, containing key elements such as artifact images, morphological descriptions, and excavation information, are essential for studying artifact evolution and cultural inheritance.
no code implementations • 26 Dec 2024 • Haitao Meng, Chonghao Zhong, Sheng Tang, Lian JunJia, Wenwei Lin, Zhenshan Bing, Yi Chang, Gang Chen, Alois Knoll
To achieve this, we propose a Focus Cost Discrimination (FCD) module that measures the clarity of edges as an essential indicator of focus level and integrates spatial surroundings to facilitate cost estimation.
1 code implementation • 19 Dec 2024 • Zhiyuan Li, Tingyu Xia, Yi Chang, Yuan Wu
The Receptance Weighted Key Value (RWKV) model offers a novel alternative to the Transformer architecture, merging the benefits of recurrent and attention-based systems.
no code implementations • 21 Oct 2024 • Jifeng Hu, Sili Huang, Li Shen, Zhejian Yang, Shengchao Hu, Shisong Tang, Hechang Chen, Yi Chang, DaCheng Tao, Lichao Sun
In the quantized spaces alignment, we leverage vector quantization to align the different state and action spaces of various tasks, facilitating continual training in the same space.
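A minimal sketch of the vector-quantization step described above: continuous embeddings from different tasks are snapped to the nearest vectors of a shared codebook, giving them a common discrete space (codebook learning is omitted):

```python
import torch

def vector_quantize(z, codebook):
    """Map embeddings onto their nearest codebook vectors (sketch).

    z: (batch, d) state/action embeddings; codebook: (K, d) code vectors.
    """
    dists = torch.cdist(z, codebook)   # (batch, K) pairwise distances
    idx = dists.argmin(dim=1)          # nearest code index per embedding
    return codebook[idx], idx
```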
1 code implementation • 14 Oct 2024 • Yahan Li, Tingyu Xia, Yi Chang, Yuan Wu
While traditional metrics like Matrix Entropy offer valuable insights, they are computationally intensive for large-scale models due to their $O(n^3)$ time complexity with Singular Value Decomposition (SVD).
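For reference, one common way to compute a matrix entropy through SVD, which is where the $O(n^3)$ cost noted above comes from; definitions of matrix entropy vary, so treat this as an illustrative variant rather than the paper's exact metric:

```python
import numpy as np

def matrix_entropy(X):
    """Shannon entropy of the normalized squared singular values of X.

    The SVD dominates the cost at O(n^3) for an n x n matrix.
    """
    s = np.linalg.svd(X, compute_uv=False)
    p = s**2 / np.sum(s**2)   # normalize into a probability distribution
    p = p[p > 0]              # drop zero singular values before the log
    return float(-np.sum(p * np.log(p)))
```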
1 code implementation • 12 Oct 2024 • Tingyu Xia, Bowen Yu, Kai Dang, An Yang, Yuan Wu, Yuan Tian, Yi Chang, Junyang Lin
Supervised fine-tuning (SFT) is crucial for aligning Large Language Models (LLMs) with human instructions.
1 code implementation • 10 Oct 2024 • Qi Wang, Jindong Li, Shiqi Wang, Qianli Xing, Runliang Niu, He Kong, Rui Li, Guodong Long, Yi Chang, Chengqi Zhang
Large language models (LLMs) have not only revolutionized the field of natural language processing (NLP) but also have the potential to bring a paradigm shift to many other fields, owing to their remarkable language understanding abilities, impressive generalization capabilities, and reasoning skills.
no code implementations • 3 Oct 2024 • Bin Gu, Xiyuan Wei, Hualin Zhang, Yi Chang, Heng Huang
While the random ZO estimator introduces larger error and makes convergence analysis more challenging compared to the coordinated ZO estimator, it requires only $\mathcal{O}(1)$ computation, which is significantly less than the $\mathcal{O}(d)$ computation of the coordinated ZO estimator, with $d$ being the dimension of the problem space.
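A minimal numpy sketch contrasting the two zeroth-order (ZO) gradient estimators discussed above; the smoothing parameter and sampling scheme are illustrative:

```python
import numpy as np

def zo_random(f, x, mu=1e-4):
    """Random two-point ZO estimator: O(1) function evaluations, higher variance."""
    u = np.random.randn(*x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zo_coordinate(f, x, mu=1e-4):
    """Coordinate-wise ZO estimator: O(d) evaluations, lower error."""
    g, fx = np.zeros_like(x), f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = 1.0
        g.flat[i] = (f(x + mu * e) - fx) / mu
    return g
```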
no code implementations • 25 Sep 2024 • Hanyu Zhou, Yi Chang, Zhiwei Shi, Wending Yan, Gang Chen, Yonghong Tian, Luxin Yan
Under this unified framework, the proposed method can progressively and explicitly transfer knowledge from clean scenes to real adverse weather.
1 code implementation • 24 Sep 2024 • Yahan Li, Yi Wang, Yi Chang, Yuan Wu
Large language models (LLMs) have demonstrated remarkable capabilities across a range of natural language processing (NLP) tasks, capturing the attention of both practitioners and the broader public.
1 code implementation • 24 Sep 2024 • Chenlu Guo, Nuo Xu, Yi Chang, Yuan Wu
With the rapid development of large language models (LLMs), assessing their performance on health-related inquiries has become increasingly essential.
1 code implementation • 4 Sep 2024 • Jifeng Hu, Li Shen, Sili Huang, Zhejian Yang, Hechang Chen, Lichao Sun, Yi Chang, DaCheng Tao
Artificial neural networks, especially recent diffusion-based models, have shown remarkable superiority in gaming, control, and QA systems, where the training tasks' datasets are usually static.
2 code implementations • 21 Aug 2024 • Ziwei Liu, Qidong Liu, Yejing Wang, Wanyu Wang, Pengyue Jia, Maolin Wang, Zitao Liu, Yi Chang, Xiangyu Zhao
In various domains, Sequential Recommender Systems (SRS) have become essential due to their superior capability to discern intricate user preferences.
no code implementations • 16 Aug 2024 • Qiang Huang, Chuizheng Meng, Defu Cao, Biwei Huang, Yi Chang, Yan Liu
Counterfactual estimation from observations represents a critical endeavor in numerous application fields, such as healthcare and finance, with the primary challenge being the mitigation of treatment bias.
no code implementations • 16 Aug 2024 • Shihan Peng, Hanyu Zhou, Hao Dong, Zhiwei Shi, Haoyue Liu, Yuxing Duan, Yi Chang, Luxin Yan
In this work, we introduce hybrid coaxial event-frame devices to build the multimodal system, and propose a coaxial stereo event camera (CoSEC) dataset for autonomous driving.
1 code implementation • 8 Aug 2024 • Yupeng Chang, Yi Chang, Yuan Wu
Large language models (LLMs) have demonstrated remarkable proficiency across various natural language processing (NLP) tasks.
no code implementations • 16 Jul 2024 • Kai Guo, Zewen Liu, Zhikai Chen, Hongzhi Wen, Wei Jin, Jiliang Tang, Yi Chang
To address this gap, our work aims to explore the potential of LLMs in the context of adversarial attacks on graphs.
no code implementations • 11 Jul 2024 • Shengqi Xu, Run Sun, Yi Chang, Shuning Cao, Xueyao Xiao, Luxin Yan
Long-range imaging inevitably suffers from atmospheric turbulence with severe geometric distortions due to random refraction of light.
1 code implementation • International Conference on Learning Representations 2024 • Hangting Ye, Wei Fan, Xiaozhuang Song, Shun Zheng, He Zhao, Dandan Guo, Yi Chang
With the recent success of deep learning, many tabular machine learning (ML) methods based on deep networks (e.g., Transformer, ResNet) have achieved competitive performance on tabular benchmarks.
1 code implementation • 29 Jun 2024 • Rui Cao, Shijie Xue, Jindong Li, Qi Wang, Yi Chang
We introduce normalizing flows to unsupervised graph-level anomaly detection due to their successful application and superior quality in learning the underlying distribution of samples.
no code implementations • 25 Jun 2024 • Wanli Shi, Yi Chang, Bin Gu
Bilevel optimization (BO) has recently gained prominence in many machine learning applications due to its ability to capture the nested structure inherent in these problems.
no code implementations • 21 Jun 2024 • Yi Chang, Zhao Ren, Zhonghao Zhao, Thanh Tam Nguyen, Kun Qian, Tanja Schultz, Björn W. Schuller
Speech emotion recognition (SER) plays a crucial role in human-computer interaction.
no code implementations • CVPR 2024 • Yuxing Duan, Shihan Peng, Lin Zhu, Wei Zhang, Yi Chang, Sheng Zhong, Luxin Yan
The event camera has significant advantages in capturing dynamic scene information but is prone to noise interference, particularly in challenging conditions such as low threshold and low illumination.
1 code implementation • 27 May 2024 • YuXiao Lee, Xiaofeng Cao, Jingcai Guo, Wei Ye, Qing Guo, Yi Chang
The remarkable achievements of Large Language Models (LLMs) have captivated the attention of both academia and industry, transcending their initial role in dialogue generation.
1 code implementation • 17 May 2024 • Tingyu Xia, Bowen Yu, Yuan Wu, Yi Chang, Chang Zhou
In this paper, we initiate our discussion by demonstrating how Large Language Models (LLMs), when tasked with responding to queries, display a more even probability distribution in their answers if they are more adept, as opposed to their less skilled counterparts.
1 code implementation • 6 May 2024 • Bo Wang, Jing Ma, Hongzhan Lin, Zhiwei Yang, Ruichao Yang, Yuan Tian, Yi Chang
To detect fake news from a sea of diverse, crowded and even competing narratives, in this paper, we propose a novel defense-based explainable fake news detection framework.
1 code implementation • 5 May 2024 • Xu Wang, Cheng Li, Yi Chang, Jindong Wang, Yuan Wu
The results are revealing: NegativePrompt markedly enhances the performance of LLMs, evidenced by relative improvements of 12.89% in Instruction Induction tasks and 46.25% in BIG-Bench tasks.
1 code implementation • 3 May 2024 • Jindong Li, Qianli Xing, Qi Wang, Yi Chang
In this paper, we propose a novel Simplified Transformer with Cross-View Attention for Unsupervised Graph-level Anomaly Detection, namely, CVTGAD.
1 code implementation • CVPR 2024 • Haoyue Liu, Shihan Peng, Lin Zhu, Yi Chang, Hanyu Zhou, Luxin Yan
In this work, we present a novel nighttime dynamic imaging method with an event camera.
no code implementations • 18 Mar 2024 • Howard Zhang, Yunhao Ba, Ethan Yang, Rishi Upadhyay, Alex Wong, Achuta Kadambi, Yun Guo, Xueyao Xiao, Xiaoxiong Wang, Yi Li, Yi Chang, Luxin Yan, Chaochao Zheng, Luping Wang, Bin Liu, Sunder Ali Khowaja, Jiseok Yoon, Ik-Hyun Lee, Zhao Zhang, Yanyan Wei, Jiahuan Ren, Suiyi Zhao, Huan Zheng
This report reviews the results of the GT-Rain challenge on single image deraining at the UG2+ workshop at CVPR 2023.
no code implementations • CVPR 2024 • Hanyu Zhou, Yi Chang, Zhiwei Shi, Luxin Yan
In this work, we bring the event as a bridge between RGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework for scene flow, which explores a homogeneous space to fuse the cross-modal complementary knowledge for physical interpretation.
no code implementations • 12 Mar 2024 • Hanyu Zhou, Zhiwei Shi, Hao Dong, Shihan Peng, Yi Chang, Luxin Yan
In the spatial reasoning stage, we project the compensated events into the same image coordinate system, discretize the timestamps of the events to obtain a time image that reflects motion confidence, and further segment the moving object through adaptive thresholding on the time image.
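A minimal sketch of the time-image construction described above; keeping the latest normalized timestamp per pixel is an assumption for illustration (an averaged timestamp would also fit the description):

```python
import numpy as np

def time_image(events, height, width):
    """Build a time image from motion-compensated events (sketch).

    events: iterable of (x, y, t) with t normalized to [0, 1]. Each pixel
    stores the latest timestamp projected onto it, serving as the
    motion-confidence map that is later adaptively thresholded.
    """
    img = np.zeros((height, width), dtype=np.float32)
    for x, y, t in events:
        xi, yi = int(x), int(y)
        if 0 <= yi < height and 0 <= xi < width:
            img[yi, xi] = max(img[yi, xi], t)
    return img
```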
1 code implementation • 27 Feb 2024 • Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, Jun Wang
In this work, we investigate the potential of large language models (LLMs) based agents to automate data science tasks, with the goal of comprehending task requirements, then building and training the best-fit machine learning models.
1 code implementation • 23 Feb 2024 • Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang
In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems on leaking the private retrieval database.
no code implementations • 13 Feb 2024 • Kai Guo, Hongzhi Wen, Wei Jin, Yaming Guo, Jiliang Tang, Yi Chang
These insights have empowered us to develop a novel GNN backbone model, DGAT, designed to harness the robust properties of both graph self-attention mechanism and the decoupled architecture.
1 code implementation • 9 Feb 2024 • Runliang Niu, Jindong Li, Shiqi Wang, Yali Fu, Xiyu Hu, Xueyuan Leng, He Kong, Yi Chang, Qi Wang
Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences when completing a variety of daily computer tasks.
no code implementations • 6 Feb 2024 • Bohao Qu, Xiaofeng Cao, Qing Guo, Yi Chang, Ivor W. Tsang, Chengqi Zhang
In this study, we present a transductive inference approach on the reward information propagation graph, which enables the effective estimation of rewards for unlabelled data in offline reinforcement learning.
no code implementations • 5 Feb 2024 • Yixiang Shan, Zhengbang Zhu, Ting Long, Qifan Liang, Yi Chang, Weinan Zhang, Liang Yin
The performance of offline reinforcement learning (RL) is sensitive to the proportion of high-return trajectories in the offline dataset.
no code implementations • 4 Feb 2024 • Jie Ren, Han Xu, Pengfei He, Yingqian Cui, Shenglai Zeng, Jiankun Zhang, Hongzhi Wen, Jiayuan Ding, Pei Huang, Lingjuan Lyu, Hui Liu, Yi Chang, Jiliang Tang
We examine from two distinct viewpoints: the copyrights pertaining to the source data held by the data owners and those of the generative models maintained by the model builders.
no code implementations • 2 Feb 2024 • Yi Chang, Zhao Ren, Zixing Zhang, Xin Jing, Kun Qian, Xi Shao, Bin Hu, Tanja Schultz, Björn W. Schuller
Speech contains rich information on the emotions of humans, and Speech Emotion Recognition (SER) has been an important topic in the area of human-computer interaction.
no code implementations • 31 Jan 2024 • Dong Chen, Ning Liu, Yichen Zhu, Zhengping Che, Rui Ma, Fachao Zhang, Xiaofeng Mou, Yi Chang, Jian Tang
Instead of a simple combination of pruning and SD, EPSD enables the pruned network to favor SD by keeping more distillable weights before training to ensure better distillation of the pruned network.
no code implementations • 31 Jan 2024 • Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan
In appearance adaptation, we employ the intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space.
1 code implementation • 27 Jan 2024 • Yue Zhou, Chenlu Guo, Xu Wang, Yi Chang, Yuan Wu
Leveraging large models, these data augmentation techniques have outperformed traditional approaches.
no code implementations • 8 Dec 2023 • Jianhua Wu, Bingzhao Gao, Jincheng Gao, Jianhao Yu, Hongqing Chu, Qiankun Yu, Xun Gong, Yi Chang, H. Eric Tseng, Hong Chen, Jie Chen
With the development of artificial intelligence and breakthroughs in deep learning, large-scale Foundation Models (FMs), such as GPT, Sora, etc., have achieved remarkable results in many fields including natural language processing and computer vision.
no code implementations • 14 Oct 2023 • Hao Wang, Qiang Song, Ruofeng Yin, Rui Ma, Yizhou Yu, Yi Chang
In this paper, we propose B-Spine, a novel deep learning pipeline to learn B-spline curve representation of the spine and estimate the Cobb angles for spinal curvature estimation from low-quality X-ray images.
no code implementations • NeurIPS 2023 • Sili Huang, Yanchao Sun, Jifeng Hu, Siyuan Guo, Hechang Chen, Yi Chang, Lichao Sun, Bo Yang
Our experimental results demonstrate that SGFD can generalize well on a wide range of test environments and significantly outperforms state-of-the-art methods in handling both task-irrelevant variations and task-relevant variations.
no code implementations • 22 Aug 2023 • Xing Chen, Yijun Liu, Zhaogeng Liu, Hechang Chen, Hengshuai Yao, Yi Chang
In prior work, it has been shown that policy-based exploration is beneficial for continuous action space in deterministic policy reinforcement learning (DPRL).
1 code implementation • ICCV 2023 • Yun Guo, Xueyao Xiao, Yi Chang, Shumin Deng, Luxin Yan
Learning-based image deraining methods have made great progress.
no code implementations • 19 Jul 2023 • Qingyao Ai, Ting Bai, Zhao Cao, Yi Chang, Jiawei Chen, Zhumin Chen, Zhiyong Cheng, Shoubin Dong, Zhicheng Dou, Fuli Feng, Shen Gao, Jiafeng Guo, Xiangnan He, Yanyan Lan, Chenliang Li, Yiqun Liu, Ziyu Lyu, Weizhi Ma, Jun Ma, Zhaochun Ren, Pengjie Ren, Zhiqiang Wang, Mingwen Wang, Ji-Rong Wen, Le Wu, Xin Xin, Jun Xu, Dawei Yin, Peng Zhang, Fan Zhang, Weinan Zhang, Min Zhang, Xiaofei Zhu
The research field of Information Retrieval (IR) has evolved significantly, expanding beyond traditional search to meet diverse user information needs.
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
1 code implementation • 15 Jun 2023 • Shengqi Xu, Shuning Cao, Haoyue Liu, Xueyao Xiao, Yi Chang, Luxin Yan
We subsequently select the sharpest set of registered frames by employing a frame selection approach based on image sharpness, and average them to produce an image that is largely free of geometric distortion, albeit with blurriness.
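A minimal sketch of the sharpness-based frame selection and averaging described above; variance of the Laplacian is a standard focus measure, but the paper's exact sharpness metric is an assumption here:

```python
import cv2
import numpy as np

def select_and_average(frames, keep_ratio=0.5):
    """Keep the sharpest registered frames and average them (sketch)."""
    scores = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]
    k = max(1, int(len(frames) * keep_ratio))
    best = np.argsort(scores)[-k:]   # indices of the k sharpest frames
    stack = [frames[i].astype(np.float64) for i in best]
    return np.mean(stack, axis=0)
```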
1 code implementation • 15 Jun 2023 • Shengqi Xu, Xueyao Xiao, Shuning Cao, Yi Chang, Luxin Yan
In this technical report, we present the solution developed by our team VIELab-HUST for text recognition through atmospheric turbulence in Track 2.1 of the CVPR 2023 UG$^{2}$+ challenge.
no code implementations • 13 Jun 2023 • Siyuan Guo, Yanchao Sun, Jifeng Hu, Sili Huang, Hechang Chen, Haiyin Piao, Lichao Sun, Yi Chang
However, constrained by the limited quality of the offline dataset, its performance is often sub-optimal.
no code implementations • 8 Jun 2023 • Jifeng Hu, Yanchao Sun, Sili Huang, Siyuan Guo, Hechang Chen, Li Shen, Lichao Sun, Yi Chang, DaCheng Tao
Recent works have shown the potential of diffusion models in computer vision and natural language processing.
1 code implementation • 3 Jun 2023 • Hangting Ye, Zhining Liu, Xinyi Shen, Wei Cao, Shun Zheng, Xiaofan Gui, Huishuai Zhang, Yi Chang, Jiang Bian
This is a challenging task given the heterogeneous model structures and assumptions adopted by existing UAD methods.
1 code implementation • 13 May 2023 • Yun Guo, Xueyao Xiao, Xiaoxiong Wang, Yi Li, Yi Chang, Luxin Yan
Secondly, a transformer-based single image deraining network, Uformer, is pre-trained on a large real rain dataset and then fine-tuned on the pseudo GT to further improve image restoration.
no code implementations • 24 Mar 2023 • Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan
In motion adaptation, we utilize the flow consistency knowledge to align the cross-domain optical flows into a motion-invariance common space, where the optical flow from clean weather is used as the guidance-knowledge to obtain a preliminary optical flow for adverse weather.
no code implementations • CVPR 2023 • Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan
To handle the practical optical flow under real foggy scenes, in this work, we propose a novel unsupervised cumulative domain adaptation optical flow (UCDA-Flow) framework: depth-association motion adaptation and correlation-alignment motion adaptation.
1 code implementation • 23 Jan 2023 • Zhao Ren, Yi Chang, Thanh Tam Nguyen, Yang Tan, Kun Qian, Björn W. Schuller
This work introduces both classic machine learning and deep learning for comparison, and further offers insights into the advances and future research directions in deep learning for heart sound analysis.
no code implementations • ICCV 2023 • Changfeng Yu, Shiming Chen, Yi Chang, Yibing Song, Luxin Yan
To solve this dilemma, we propose a physical alignment and controllable generation network (PCGNet) for diverse and realistic rain generation.
no code implementations • 13 Dec 2022 • Chen Zhang, Xiaofeng Cao, Yi Chang, Ivor W Tsang
Then, relying on the surjective mapping from the teaching set to the parameter, we develop a design strategy for the optimal teaching set under appropriate settings, of which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, are special cases.
1 code implementation • 11 Dec 2022 • Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data.
1 code implementation • 7 Nov 2022 • Erxin Yu, Lan Du, Yuan Jin, Zhepei Wei, Yi Chang
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions.
no code implementations • 2 Nov 2022 • Yi Chang, Yun Guo, Yuntong Ye, Changfeng Yu, Lin Zhu, XiLe Zhao, Luxin Yan, Yonghong Tian
In addition, considering that the existing real rain datasets are of low quality, either small in scale or downloaded from the internet, we collect a real large-scale dataset under various kinds of rainy weather that contains high-resolution rainy images.
1 code implementation • 26 Oct 2022 • Yi Chang, Zhao Ren, Thanh Tam Nguyen, Kun Qian, Björn W. Schuller
Our experiments demonstrate that training a lightweight SER model on the target dataset with speech samples and graphs can not only produce small SER models, but also enhance the model performance compared to models with speech samples only and those using classic transfer learning strategies.
1 code implementation • 14 Oct 2022 • Jifeng Hu, Yanchao Sun, Hechang Chen, Sili Huang, Haiyin Piao, Yi Chang, Lichao Sun
Our main idea is to design the multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training.
Deep Reinforcement Learning • Multi-agent Reinforcement Learning
1 code implementation • COLING 2022 • Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang
Existing fake news detection methods aim to classify a piece of news as true or false and provide veracity explanations, achieving remarkable performances.
Ranked #3 on Fake News Detection on RAWFC
1 code implementation • 20 May 2022 • Xing Chen, Dongcui Diao, Hechang Chen, Hengshuai Yao, Haiyin Piao, Zhixiao Sun, Zhiwei Yang, Randy Goebel, Bei Jiang, Yi Chang
The popular Proximal Policy Optimization (PPO) algorithm approximates the solution in a clipped policy space.
no code implementations • 19 May 2022 • Yuanbo Xu, En Wang, Yongjian Yang, Yi Chang
On the other hand, ME models directly employ inner products as a default loss function metric that cannot project users and items into a proper latent space, which is a methodological disadvantage.
1 code implementation • 30 Mar 2022 • Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller
Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19.
no code implementations • 25 Mar 2022 • Changfeng Yu, Yi Chang, Yi Li, XiLe Zhao, Luxin Yan
Consequently, we design an optimization model-driven deep CNN in which the unsupervised loss function of the optimization model is enforced on the proposed network for better generalization.
no code implementations • 10 Mar 2022 • Björn W. Schuller, Alican Akman, Yi Chang, Harry Coppock, Alexander Gebhard, Alexander Kathan, Esther Rituerto-González, Andreas Triantafyllopoulos, Florian B. Pokorny
We categorise potential computer audition applications according to the five elements of earth, water, air, fire, and aether, proposed by the ancient Greeks in their five element theory; this categorisation serves as a framework to discuss computer audition in relation to different ecological aspects.
no code implementations • 9 Mar 2022 • Yi Chang, Sofiane Laridi, Zhao Ren, Gregory Palmer, Björn W. Schuller, Marco Fisichella
The proposed framework consists of i) federated learning for data privacy, and ii) adversarial training at the training stage and randomisation at the testing stage for model robustness.
1 code implementation • CVPR 2022 • Lin Zhu, Xiao Wang, Yi Chang, Jianing Li, Tiejun Huang, Yonghong Tian
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neuron and Membrane Potential (MP) neuron.
Computational Efficiency • Event-Based Video Reconstruction
1 code implementation • CVPR 2022 • Yi Li, Yi Chang, Yan Gao, Changfeng Yu, Luxin Yan
Consequently, we perform inter-domain adaptation between the synthetic and real images by mutually exchanging the background and other two components.
1 code implementation • 24 Nov 2021 • Zhining Liu, Pengfei Wei, Zhepei Wei, Boyang Yu, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
Class-imbalance is a common problem in machine learning practice.
1 code implementation • 24 Nov 2021 • Zhining Liu, Jian Kang, Hanghang Tong, Yi Chang
imbalanced-ensemble, abbreviated as imbens, is an open-source Python toolbox for leveraging the power of ensemble learning to address the class imbalance problem.
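An assumed usage sketch based on the toolbox's scikit-learn-compatible interface; the import path and class name may differ across versions of imbalanced-ensemble:

```python
from imbens.ensemble import SelfPacedEnsembleClassifier  # assumed import path
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# A toy 9:1 imbalanced binary classification problem.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SelfPacedEnsembleClassifier(random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```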
no code implementations • 28 Oct 2021 • Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang
In this paper, we investigate GNNs from the lens of weight and feature loss landscapes, i.e., the loss changes with respect to model weights and node features, respectively.
1 code implementation • 23 Sep 2021 • Kai Guo, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, Xin Wang
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
1 code implementation • Findings (EMNLP) 2021 • Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang
Aspect-level sentiment classification (ALSC) aims at identifying the sentiment polarity of a specified aspect in a sentence.
Aspect-Based Sentiment Analysis (ABSA)
1 code implementation • CVPR 2023 • Chuan Tang, Xi Yang, Bojian Wu, Zhizhong Han, Yi Chang
Specifically, we first segment the point clouds into parts, and then leverage an optimal transport method to match parts and words in an optimized feature space, where each part is represented by aggregating the features of all points within it and each word is abstracted by its contextual information.
no code implementations • 1 Jul 2021 • Behnood Rasti, Yi Chang, Emanuele Dalsasso, Loïc Denis, Pedram Ghamisi
Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore the restoration techniques and fast-forward the community.
no code implementations • 29 May 2021 • Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yi Chang, Michael K. Ng, Chao Li
Recently, transform-based tensor nuclear norm minimization methods are considered to capture low-rank tensor structures to recover third-order tensors in multi-dimensional image processing applications.
1 code implementation • 28 May 2021 • Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, Yi Chang
Based on it, a more robust doubly robust (MRDR) estimator has been proposed to further reduce its variance while retaining its double robustness.
no code implementations • CVPR 2021 • Yuntong Ye, Yi Chang, Hanyu Zhou, Luxin Yan
Existing deep learning-based image deraining methods have achieved promising performance on synthetic rainy images, but typically rely on pairs of sharp images and their simulated rainy counterparts.
1 code implementation • 22 Feb 2021 • Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks.
no code implementations • 27 Jan 2021 • Yuxiang Ren, Bo Wang, Jiawei Zhang, Yi Chang
AA-HGNN utilizes an active learning framework to enhance learning performance, especially when facing the paucity of labeled data.
no code implementations • ICLR 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, ZongYuan Ge, Yi Chang
The early stopping method therefore can be exploited for learning with noisy labels.
Ranked #34 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
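A generic sketch of early stopping against a trusted validation set, as the entry above suggests for learning with noisy labels; the caller-supplied callables and the clean-validation assumption are illustrative:

```python
import copy

def train_with_early_stopping(model, train_one_epoch, evaluate,
                              max_epochs=100, patience=3):
    """Stop training once validation accuracy stops improving (sketch).

    With noisy labels, networks tend to fit clean patterns before memorizing
    noise, so halting early preserves generalization.
    """
    best_acc, best_state, stale = -1.0, None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, epoch)   # caller-supplied training step
        acc = evaluate(model)           # accuracy on a trusted validation set
        if acc > best_acc:
            best_acc = acc
            best_state = copy.deepcopy(model.state_dict())
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break
    model.load_state_dict(best_state)
    return model, best_acc
```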
no code implementations • COLING 2020 • Erxin Yu, Wenjuan Han, Yuan Tian, Yi Chang
Distantly Supervised Relation Extraction (DSRE) has proven to be effective to find relational facts from texts, but it still suffers from two main problems: the wrong labeling problem and the long-tail problem.
2 code implementations • NeurIPS 2020 • Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
This makes MESA generally applicable to most of the existing learning models and the meta-sampler can be efficiently applied to new tasks.
no code implementations • 22 Aug 2020 • Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yu-Bang Zheng, Yi Chang
Recently, convolutional neural network (CNN)-based methods have been proposed for hyperspectral image (HSI) denoising.
1 code implementation • 30 Apr 2020 • Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang
In experiments, we achieve state-of-the-art performance on three benchmarks and a zero-shot dataset for link prediction, with highlights of inference costs reduced by 1-2 orders of magnitude compared to a textual encoding method.
Ranked #4 on Link Prediction on UMLS
2 code implementations • 17 Jan 2020 • Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi Chang
In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which is a nonlinear feature selection method.
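For context, a numpy sketch of the empirical HSIC statistic that the HSIC Lasso in GraphLIME builds on, here between two one-dimensional variables; the kernel and bandwidth choices are illustrative:

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF kernel Gram matrix for a 1-D sample vector."""
    d = x[:, None] - x[None, :]
    return np.exp(-(d**2) / (2 * sigma**2))

def hsic(x, y, sigma=1.0):
    """Empirical HSIC between two 1-D variables: trace(KHLH) / (n-1)^2."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return float(np.trace(K @ H @ L @ H) / (n - 1) ** 2)
```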
1 code implementation • 8 Sep 2019 • Zhining Liu, Wei Cao, Zhifeng Gao, Jiang Bian, Hechang Chen, Yi Chang, Tie-Yan Liu
To tackle this problem, we conduct deep investigations into the nature of class imbalance, which reveals that not only the disproportion between classes, but also other difficulties embedded in the nature of data, especially, noises and class overlapping, prevent us from learning effective classifiers.
5 code implementations • ACL 2020 • Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, Yi Chang
Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction.
Ranked #5 on Relation Extraction on NYT11-HRL
no code implementations • 28 Aug 2019 • Chao-Lin Liu, Yi Chang
Chinese characters are classified into two categories according to whether or not they are followed by a punctuation mark.
no code implementations • 23 Aug 2019 • Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang
In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to utilize such interactions with self-learned entity features through an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples.
1 code implementation • 13 Aug 2019 • Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu
To improve the quality and retrieval performance of the generated questions, we make two major improvements: 1) to better encode the semantics of ill-formed questions, we enrich the representation of questions with character embeddings and recently proposed contextual word embeddings such as BERT, in addition to the traditional context-free word embeddings; 2) to make the model capable of generating the desired questions, we train it with deep reinforcement learning techniques that treat an appropriate wording of the generation as an immediate reward and the correlation between the generated question and the answer as a time-delayed long-term reward.
no code implementations • 1 Mar 2019 • Shubhra Kanti Karmaker Santu, Liangda Li, Yi Chang, ChengXiang Zhai
This assumption is unrealistic as there are many correlated events in the real world which influence each other and thus, would pose a joint influence on the user search behavior rather than posing influence independently.
no code implementations • 20 Nov 2018 • Dae Hoon Park, Chiu Man Ho, Yi Chang, Huaqing Zhang
However, we observe that imposing strong L1 or L2 regularization with stochastic gradient descent on deep neural networks easily fails, which limits the generalization ability of the underlying neural networks.
no code implementations • 9 Nov 2018 • Dae Hoon Park, Yi Chang
To solve the problems at the same time, we propose an adversarial sampling and training framework to learn ad-hoc retrieval models with implicit feedback.
no code implementations • ICLR 2019 • Chiu Man Ho, Dae Hoon Park, Wei Yang, Yi Chang
We propose sequenced-replacement sampling (SRS) for training deep neural networks.
4 code implementations • EMNLP 2018 • Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu
User intent detection plays a critical role in question-answering and dialog systems.
no code implementations • 26 Aug 2018 • Ye-Tao Wang, Xi-Le Zhao, Tai-Xiang Jiang, Liang-Jian Deng, Yi Chang, Ting-Zhu Huang
Then, our framework starts by learning the motion blur kernel, which is determined by two factors, angle and length, with a plain neural network, denoted as the parameter net, from a patch of the texture component.
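A minimal sketch of a linear motion-blur kernel parameterized by the two factors named above, angle and length; the discretization scheme is an assumption for illustration:

```python
import numpy as np

def motion_blur_kernel(length, angle_deg, size=31):
    """Linear motion-blur kernel determined by length and angle (sketch)."""
    kernel = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Rasterize a line segment of the given length through the kernel center.
    for t in np.linspace(-length / 2, length / 2, num=max(2, int(4 * length))):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()   # normalize to preserve image brightness
```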
1 code implementation • ACL 2018 • Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata, Jiangsheng Yu, Yi Chang
Document modeling is essential to a variety of natural language understanding tasks.
no code implementations • ACL 2018 • Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, Yi Chang
In MNs, attention mechanism plays a crucial role in detecting the sentiment context for the given target.
no code implementations • NAACL 2018 • Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang
Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence and denoting only its meaning in a canonical form.
no code implementations • 16 Feb 2018 • Shuai Wang, Mianwei Zhou, Sahisnu Mazumder, Bing Liu, Yi Chang
Stage one extracts/groups the target-related words (called t-words) for a given target.
no code implementations • 18 Jan 2018 • Shuai Wang, Mianwei Zhou, Geli Fei, Yi Chang, Bing Liu
While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence).
no code implementations • ICLR 2018 • Dae Hoon Park, Chiu Man Ho, Yi Chang
L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.
no code implementations • ICCV 2017 • Yi Chang, Luxin Yan, Sheng Zhong
This paper addresses the problem of line pattern noise removal from a single image, such as rain streaks, hyperspectral stripes, and so on.
no code implementations • 1 Sep 2017 • Yi Chang, Luxin Yan, Houzhang Fang, Sheng Zhong, Zhijun Zhang
To overcome these limitations, in this work, we propose a unified low-rank tensor recovery model for comprehensive HSI restoration tasks, in which the non-local similarity between spectral-spatial cubes and the spectral correlation are simultaneously captured by third-order tensors.
Ranked #12 on Hyperspectral Image Denoising on ICVL-HSI-Gaussian50
no code implementations • CVPR 2017 • Yi Chang, Luxin Yan, Sheng Zhong
Recent low-rank based matrix/tensor recovery methods have been widely explored in multispectral images (MSI) denoising.
no code implementations • 6 Jun 2017 • Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu
To the best of our knowledge, we are the first to tackle this problem, which poses the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, necessitating a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly.
no code implementations • 14 Aug 2016 • Makoto Yamada, Jiliang Tang, Jose Lugo-Martinez, Ermin Hodzic, Raunak Shrestha, Avishek Saha, Hua Ouyang, Dawei Yin, Hiroshi Mamitsuka, Cenk Sahinalp, Predrag Radivojac, Filippo Menczer, Yi Chang
However, sophisticated learning models are computationally infeasible for data with millions of features.
no code implementations • 21 Jul 2016 • Yilin Wang, Suhang Wang, Jiliang Tang, Neil O'Hare, Yi Chang, Baoxin Li
Understanding human actions in wild videos is an important task with a broad range of applications.
no code implementations • 21 Jul 2016 • Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang
The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios.
no code implementations • 1 Jun 2016 • Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin
We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization.
no code implementations • 24 Nov 2015 • Jiliang Tang, Yi Chang, Charu Aggarwal, Huan Liu
Many real-world relations can be represented by signed networks with positive and negative links, as a result of which signed network analysis has attracted increasing attention from multiple disciplines.
1 code implementation • 4 Jul 2015 • Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan Wimalawarne, Suleiman A. Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang
We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs).
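For context, the second-order FM prediction and the common convexification idea, sketched in LaTeX; CFM's exact parameterization may differ in its details:

```latex
% Standard second-order FM prediction with factorized interactions:
\hat{y}(\mathbf{x}) = w_0 + \mathbf{w}^\top \mathbf{x}
  + \sum_{i<j} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j
% A convex variant replaces the rank-constrained factorization with a single
% interaction matrix Z, penalized by its nuclear norm, yielding a convex
% objective:
\hat{y}(\mathbf{x}) = w_0 + \mathbf{w}^\top \mathbf{x}
  + \sum_{i<j} Z_{ij} \, x_i x_j
  \qquad \text{with regularizer } \lambda \|Z\|_*
```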
no code implementations • 5 Dec 2014 • Suriya Gunasekar, Makoto Yamada, Dawei Yin, Yi Chang
We address the collective matrix completion problem of jointly recovering a collection of matrices with shared structure from partial (and potentially noisy) observations.
no code implementations • 10 Nov 2014 • Makoto Yamada, Avishek Saha, Hua Ouyang, Dawei Yin, Yi Chang
We propose a feature selection method that finds non-redundant features from large and high-dimensional data in a nonlinear way.
no code implementations • 19 Apr 2013 • Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang
We consider stochastic strongly convex optimization with a complex inequality constraint.