Search Results for author: Yi Chang

Found 95 papers, 35 papers with code

Optimal Stochastic Strongly Convex Optimization with a Logarithmic Number of Projections

no code implementations19 Apr 2013 Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang

We consider stochastic strongly convex optimization with a complex inequality constraint.

Consistent Collective Matrix Completion under Joint Low Rank Structure

no code implementations5 Dec 2014 Suriya Gunasekar, Makoto Yamada, Dawei Yin, Yi Chang

We address the collective matrix completion problem of jointly recovering a collection of matrices with shared structure from partial (and potentially noisy) observations.

Matrix Completion

Convex Factorization Machine for Regression

1 code implementation4 Jul 2015 Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan Wimalawarne, Suleiman A. Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang

We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs).

regression
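
For context, a brief sketch of the modeling idea in our own notation (the exact CFM objective, loss, and constraints are given in the paper): a second-order FM factorizes the pairwise interactions, whereas the convex variant learns a full symmetric interaction matrix regularized towards low rank with a nuclear norm, making the training problem convex.

```latex
% Standard FM prediction with a rank-k factorization of pairwise interactions:
\hat{y}_{\mathrm{FM}}(\mathbf{x}) = w_0 + \mathbf{w}^{\top}\mathbf{x}
  + \sum_{i<j} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j .
% Convex variant (sketch): replace the factorization by a full symmetric matrix Z
% penalized by the nuclear (trace) norm, yielding a convex objective.
\hat{y}_{\mathrm{CFM}}(\mathbf{x}) = w_0 + \mathbf{w}^{\top}\mathbf{x}
  + \sum_{i<j} Z_{ij}\, x_i x_j ,
\qquad
\min_{w_0,\,\mathbf{w},\,Z}\ \sum_{n} \ell\!\big(y_n, \hat{y}_{\mathrm{CFM}}(\mathbf{x}_n)\big)
  + \lambda \lVert Z \rVert_{*} .
```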

A Survey of Signed Network Mining in Social Media

no code implementations24 Nov 2015 Jiliang Tang, Yi Chang, Charu Aggarwal, Huan Liu

Many real-world relations can be represented by signed networks with positive and negative links, as a result of which signed network analysis has attracted increasing attention from multiple disciplines.

Scaling Submodular Maximization via Pruned Submodularity Graphs

no code implementations1 Jun 2016 Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin

We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization.

Video Summarization
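
For reference, the standard greedy baseline whose evaluation cost pruning methods such as SS aim to reduce looks roughly as follows. This is a generic Python sketch (the function f and the toy coverage example are ours), not the paper's algorithm.

```python
def greedy_submodular_max(ground_set, f, k):
    """Pick k items by repeatedly adding the element with the largest
    marginal gain f(S + [e]) - f(S); costs O(n * k) evaluations of f."""
    selected, candidates = [], set(ground_set)
    for _ in range(k):
        base = f(selected)
        best, best_gain = None, float("-inf")
        for e in candidates:
            gain = f(selected + [e]) - base
            if gain > best_gain:
                best, best_gain = e, gain
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: maximum coverage, f(S) = size of the union of the chosen sets.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}}
cover = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular_max(sets, cover, k=2))
```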

Streaming Recommender Systems

no code implementations21 Jul 2016 Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang

Real-world recommender systems have grown increasingly popular and produce data continuously and rapidly, making it more realistic to study recommender systems under streaming scenarios.

Recommendation Systems

Attributed Network Embedding for Learning in a Dynamic Environment

no code implementations6 Jun 2017 Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu

To the best of our knowledge, we are the first to tackle this problem, which involves two challenges: (1) the inherently correlated network and node attributes may be noisy and incomplete, necessitating a robust consensus representation that captures their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly.

Attribute Clustering +3

Hyper-Laplacian Regularized Unidirectional Low-Rank Tensor Recovery for Multispectral Image Denoising

no code implementations CVPR 2017 Yi Chang, Luxin Yan, Sheng Zhong

Low-rank matrix/tensor recovery methods have recently been widely explored for multispectral image (MSI) denoising.

Image Denoising

Weighted Low-rank Tensor Recovery for Hyperspectral Image Restoration

no code implementations1 Sep 2017 Yi Chang, Luxin Yan, Houzhang Fang, Sheng Zhong, Zhijun Zhang

To overcome these limitations, in this work, we propose a unified low-rank tensor recovery model for comprehensive HSI restoration tasks, in which non-local similarity between spectral-spatial cubes and spectral correlation are simultaneously captured by third-order tensors.

Deblurring Denoising +3

Transformed Low-Rank Model for Line Pattern Noise Removal

no code implementations ICCV 2017 Yi Chang, Luxin Yan, Sheng Zhong

This paper addresses the problem of removing line pattern noise, such as rain streaks and hyperspectral stripes, from a single image.

Achieving Strong Regularization for Deep Neural Networks

no code implementations ICLR 2018 Dae Hoon Park, Chiu Man Ho, Yi Chang

L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.

L2 Regularization
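
As context for the discussion of strong regularization, a minimal PyTorch sketch of adding explicit L1 and L2 penalties to a stochastic-gradient step (a generic illustration only; it does not implement the scheme proposed in the paper):

```python
import torch

model = torch.nn.Linear(20, 2)                     # toy model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
l1, l2 = 1e-4, 1e-3                                # penalty strengths
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))

loss = torch.nn.functional.cross_entropy(model(x), y)
# Explicit L1 + L2 penalties on top of the data loss; large l1/l2 values
# correspond to the "strong regularization" regime studied in the paper.
for p in model.parameters():
    loss = loss + l1 * p.abs().sum() + l2 * p.pow(2).sum()
loss.backward()
opt.step()
```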

Contextual and Position-Aware Factorization Machines for Sentiment Classification

no code implementations18 Jan 2018 Shuai Wang, Mianwei Zhou, Geli Fei, Yi Chang, Bing Liu

While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence).

Classification General Classification +6

Abstract Meaning Representation for Paraphrase Detection

no code implementations NAACL 2018 Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang

Abstract Meaning Representation (AMR) parsing aims to abstract away from the syntactic realization of a sentence and to denote only its meaning in a canonical form.

AMR Parsing Sentence

Rain Streak Removal for Single Image via Kernel Guided CNN

no code implementations26 Aug 2018 Ye-Tao Wang, Xi-Le Zhao, Tai-Xiang Jiang, Liang-Jian Deng, Yi Chang, Ting-Zhu Huang

Then, our framework learns the motion blur kernel, which is determined by two factors (angle and length), using a plain neural network, denoted as the parameter net, applied to a patch of the texture component.

Sequenced-Replacement Sampling for Deep Learning

no code implementations ICLR 2019 Chiu Man Ho, Dae Hoon Park, Wei Yang, Yi Chang

We propose sequenced-replacement sampling (SRS) for training deep neural networks.

Adversarial Sampling and Training for Semi-Supervised Information Retrieval

no code implementations9 Nov 2018 Dae Hoon Park, Yi Chang

To solve the problems at the same time, we propose an adversarial sampling and training framework to learn ad-hoc retrieval models with implicit feedback.

Information Retrieval Question Answering +1

Gradient-Coherent Strong Regularization for Deep Neural Networks

no code implementations20 Nov 2018 Dae Hoon Park, Chiu Man Ho, Yi Chang, Huaqing Zhang

However, we observe that imposing strong L1 or L2 regularization with stochastic gradient descent on deep neural networks easily fails, which limits the generalization ability of the underlying neural networks.

L2 Regularization

JIM: Joint Influence Modeling for Collective Search Behavior

no code implementations1 Mar 2019 Shubhra Kanti Karmaker Santu, Liangda Li, Yi Chang, ChengXiang Zhai

This assumption is unrealistic, as many correlated events in the real world influence each other and thus pose a joint influence on user search behavior rather than influencing it independently.

Generative Question Refinement with Deep Reinforcement Learning in Retrieval-based QA System

1 code implementation13 Aug 2019 Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu

To improve the quality and retrieval performance of the generated questions, we make two major improvements: 1) to better encode the semantics of ill-formed questions, we enrich the question representations with character embeddings and recently proposed contextual word embeddings such as BERT, in addition to the traditional context-free word embeddings; 2) to make the model capable of generating the desired questions, we train it with deep reinforcement learning, taking an appropriate wording of the generation as an immediate reward and the correlation between the generated question and the answer as a time-delayed long-term reward.

Question Answering reinforcement-learning +3

Jointly Modeling Hierarchical and Horizontal Features for Relational Triple Extraction

no code implementations23 Aug 2019 Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang

In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to utilize such interactions with self-learned entity features through an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples.

Entity Extraction using GAN graph construction +2

Classical Chinese Sentence Segmentation for Tomb Biographies of Tang Dynasty

no code implementations28 Aug 2019 Chao-Lin Liu, Yi Chang

Chinese characters that are and are not followed by a punctuation mark are classified into two categories.

BIG-bench Machine Learning Sentence +1

Self-paced Ensemble for Highly Imbalanced Massive Data Classification

1 code implementation8 Sep 2019 Zhining Liu, Wei Cao, Zhifeng Gao, Jiang Bian, Hechang Chen, Yi Chang, Tie-Yan Liu

To tackle this problem, we conduct a deep investigation into the nature of class imbalance, which reveals that not only the disproportion between classes but also other difficulties embedded in the nature of the data, especially noise and class overlap, prevent us from learning effective classifiers.

Classification General Classification +1

GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

2 code implementations17 Jan 2020 Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi Chang

In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which is a nonlinear feature selection method.

Descriptive feature selection
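
For reference, the HSIC Lasso objective that GraphLIME solves over a node's sampled neighbourhood can be sketched as below (our notation; see the paper for the exact centring and normalisation). Features with non-zero coefficients are reported as the local explanation.

```latex
% \bar{K}^{(k)}: centred Gram matrix of feature k over the sampled neighbourhood
% \bar{L}: centred Gram matrix of the GNN's predictions on the same nodes
\min_{\beta \ge 0}\;
  \frac{1}{2}\Big\lVert \bar{L} - \sum_{k=1}^{d} \beta_k\, \bar{K}^{(k)} \Big\rVert_F^{2}
  + \rho\,\lVert \beta \rVert_{1}
```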

Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion

1 code implementation30 Apr 2020 Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang

In experiments, we achieve state-of-the-art performance on three benchmarks and a zero-shot dataset for link prediction, with highlights of inference costs reduced by 1-2 orders of magnitude compared to a textual encoding method.

Graph Embedding Link Prediction +1

Unsupervised Hyperspectral Mixed Noise Removal Via Spatial-Spectral Constrained Deep Image Prior

no code implementations22 Aug 2020 Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yu-Bang Zheng, Yi Chang

Recently, convolutional neural network (CNN)-based methods have been proposed for hyperspectral image (HSI) denoising.

Denoising

MESA: Boost Ensemble Imbalanced Learning with MEta-SAmpler

2 code implementations NeurIPS 2020 Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang

This makes MESA generally applicable to most of the existing learning models and the meta-sampler can be efficiently applied to new tasks.

imbalanced classification Meta-Learning

ToHRE: A Top-Down Classification Strategy with Hierarchical Bag Representation for Distantly Supervised Relation Extraction

no code implementations COLING 2020 Erxin Yu, Wenjuan Han, Yuan Tian, Yi Chang

Distantly Supervised Relation Extraction (DSRE) has proven to be effective to find relational facts from texts, but it still suffers from two main problems: the wrong labeling problem and the long-tail problem.

Classification Relation +1

Adversarial Active Learning based Heterogeneous Graph Neural Network for Fake News Detection

no code implementations27 Jan 2021 Yuxiang Ren, Bo Wang, Jiawei Zhang, Yi Chang

AA-HGNN utilizes an active learning framework to enhance learning performance, especially when facing the paucity of labeled data.

Active Learning Fake News Detection +2

Using Prior Knowledge to Guide BERT's Attention in Semantic Textual Matching Tasks

1 code implementation22 Feb 2021 Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang

We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks.

Closing the Loop: Joint Rain Generation and Removal via Disentangled Image Translation

no code implementations CVPR 2021 Yuntong Ye, Yi Chang, Hanyu Zhou, Luxin Yan

Existing deep learning-based image deraining methods have achieved promising performance on synthetic rainy images, but they typically rely on pairs of sharp images and their simulated rainy counterparts.

Disentanglement Rain Removal +1

Enhanced Doubly Robust Learning for Debiasing Post-click Conversion Rate Estimation

1 code implementation28 May 2021 Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, Yi Chang

Based on it, a more robust doubly robust (MRDR) estimator has been proposed to further reduce its variance while retaining its double robustness.

counterfactual Imputation +2
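
For context, the vanilla doubly robust (DR) estimator that MRDR builds on can be sketched as below (our notation); MRDR then changes how the imputation model is learned so that the variance of this estimator is further reduced while its double robustness is retained.

```latex
% o_{u,i}: observation (click) indicator   \hat{p}_{u,i}: estimated propensity
% e_{u,i}: prediction error                \hat{e}_{u,i}: imputed error
\mathcal{E}_{\mathrm{DR}}
  = \frac{1}{|\mathcal{D}|} \sum_{(u,i)\in\mathcal{D}}
    \left[ \hat{e}_{u,i}
      + \frac{o_{u,i}\,\big(e_{u,i} - \hat{e}_{u,i}\big)}{\hat{p}_{u,i}} \right]
```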

Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for Multi-Dimensional Image Recovery

no code implementations29 May 2021 Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yi Chang, Michael K. Ng, Chao Li

Recently, transform-based tensor nuclear norm minimization methods have been considered for capturing low-rank tensor structures when recovering third-order tensors in multi-dimensional image processing applications.

Image Restoration for Remote Sensing: Overview and Toolbox

no code implementations1 Jul 2021 Behnood Rasti, Yi Chang, Emanuele Dalsasso, Loïc Denis, Pedram Ghamisi

Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore the restoration techniques and fast-forward the community.

Image Restoration

Parts2Words: Learning Joint Embedding of Point Clouds and Texts by Bidirectional Matching between Parts and Words

1 code implementation CVPR 2023 Chuan Tang, Xi Yang, Bojian Wu, Zhizhong Han, Yi Chang

Specifically, we first segment the point clouds into parts, and then leverage an optimal transport method to match parts and words in an optimized feature space, where each part is represented by aggregating the features of all points within it and each word is abstracted by its contextual information.

Retrieval Text Matching
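
The part-word matching step relies on optimal transport; the sketch below shows a generic entropic-regularized (Sinkhorn) matching between part and word features, with cost defined as one minus cosine similarity. This is an illustrative baseline in our own code, not the paper's implementation.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iters=200):
    """Entropic-regularized optimal transport plan between histograms a and b."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # transport plan, shape (len(a), len(b))

# Toy usage: match 4 part embeddings to 6 word embeddings.
rng = np.random.default_rng(0)
parts, words = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))
parts /= np.linalg.norm(parts, axis=1, keepdims=True)
words /= np.linalg.norm(words, axis=1, keepdims=True)
cost = 1.0 - parts @ words.T                  # cosine distance
plan = sinkhorn(cost, np.full(4, 1 / 4), np.full(6, 1 / 6))
score = (plan * (1.0 - cost)).sum()           # OT-based part-word similarity score
```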

Orthogonal Graph Neural Networks

1 code implementation23 Sep 2021 Kai Guo, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, Xin Wang

Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.

Attribute Graph Classification

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks

no code implementations28 Oct 2021 Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang

In this paper, we investigate GNNs from the lens of weight and feature loss landscapes, i.e., the loss changes with respect to model weights and node features, respectively.

IMBENS: Ensemble Class-imbalanced Learning in Python

1 code implementation24 Nov 2021 Zhining Liu, Jian Kang, Hanghang Tong, Yi Chang

imbalanced-ensemble, abbreviated as imbens, is an open-source Python toolbox for leveraging the power of ensemble learning to address the class imbalance problem.

Ensemble Learning
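
A minimal usage sketch, assuming the package exposes a scikit-learn-style SelfPacedEnsembleClassifier under imbens.ensemble as in its documentation (module and argument names may differ across versions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imbens.ensemble import SelfPacedEnsembleClassifier  # assumed import path

# Highly imbalanced binary data (roughly 5% positives).
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SelfPacedEnsembleClassifier(n_estimators=10, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```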

Physically Disentangled Intra- and Inter-Domain Adaptation for Varicolored Haze Removal

1 code implementation CVPR 2022 Yi Li, Yi Chang, Yan Gao, Changfeng Yu, Luxin Yan

Consequently, we perform inter-domain adaptation between the synthetic and real images by mutually exchanging the background and other two components.

Domain Adaptation Image Dehazing

Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

1 code implementation CVPR 2022 Lin Zhu, Xiao Wang, Yi Chang, Jianing Li, Tiejun Huang, Yonghong Tian

We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neurons and Membrane Potential (MP) neurons.

Computational Efficiency Event-Based Video Reconstruction +2
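
For context, a generic discrete-time Leaky-Integrate-and-Fire update that such spiking networks build on is sketched below (our notation; the paper's neuron variants, including the MP neuron, differ in how the membrane potential is retained and read out).

```latex
% V_t: membrane potential   I_t: input current   S_t: binary spike output
% \lambda: leak factor      \vartheta: firing threshold (hard reset after a spike)
V_t = \lambda\, V_{t-1}\,(1 - S_{t-1}) + I_t,
\qquad
S_t = \mathbb{1}\left[ V_t > \vartheta \right]
```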

Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition

no code implementations9 Mar 2022 Yi Chang, Sofiane Laridi, Zhao Ren, Gregory Palmer, Björn W. Schuller, Marco Fisichella

The proposed framework consists of i) federated learning for data privacy, and ii) adversarial training at the training stage and randomisation at the testing stage for model robustness.

Federated Learning Speech Emotion Recognition

Climate Change & Computer Audition: A Call to Action and Overview on Audio Intelligence to Help Save the Planet

no code implementations10 Mar 2022 Björn W. Schuller, Alican Akman, Yi Chang, Harry Coppock, Alexander Gebhard, Alexander Kathan, Esther Rituerto-González, Andreas Triantafyllopoulos, Florian B. Pokorny

We categorise potential computer audition applications according to the five elements of earth, water, air, fire, and aether, proposed by the ancient Greeks in their five element theory; this categorisation serves as a framework to discuss computer audition in relation to different ecological aspects.

Unsupervised Image Deraining: Optimization Model Driven Deep CNN

no code implementations25 Mar 2022 Changfeng Yu, Yi Chang, Yi Li, XiLe Zhao, Luxin Yan

Consequently, we design an optimization model-driven deep CNN in which the unsupervised loss function of the optimization model is enforced on the proposed network for better generalization.

Rain Removal

Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis

1 code implementation30 Mar 2022 Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller

Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19.

Sound Classification

A Unified Collaborative Representation Learning for Neural-Network based Recommender Systems

no code implementations19 May 2022 Yuanbo Xu, En Wang, Yongjian Yang, Yi Chang

On the other hand, ME models directly employ inner products as a default loss function metric that cannot project users and items into a proper latent space, which is a methodological disadvantage.

Metric Learning Recommendation Systems +1

A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection

1 code implementation COLING 2022 Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang

Existing fake news detection methods aim to classify a piece of news as true or false and provide veracity explanations, achieving remarkable performance.

Fake News Detection

Knowledge Transfer For On-Device Speech Emotion Recognition with Neural Structured Learning

1 code implementation26 Oct 2022 Yi Chang, Zhao Ren, Thanh Tam Nguyen, Kun Qian, Björn W. Schuller

Our experiments demonstrate that training a lightweight SER model on the target dataset with speech samples and graphs can not only produce small SER models, but also enhance the model performance compared to models with speech samples only and those using classic transfer learning strategies.

Speech Emotion Recognition Transfer Learning

Unsupervised Deraining: Where Asymmetric Contrastive Learning Meets Self-similarity

no code implementations2 Nov 2022 Yi Chang, Yun Guo, Yuntong Ye, Changfeng Yu, Lin Zhu, XiLe Zhao, Luxin Yan, Yonghong Tian

In addition, considering that the existing real rain datasets are of low quality, either small in scale or downloaded from the internet, we collect a real large-scale dataset containing high-resolution rainy images captured under various kinds of rainy weather.

Contrastive Learning Rain Removal

Learning Semantic Textual Similarity via Topic-informed Discrete Latent Variables

1 code implementation7 Nov 2022 Erxin Yu, Lan Du, Yuan Jin, Zhepei Wei, Yi Chang

Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions.

Language Modelling Quantization +4

FastClass: A Time-Efficient Approach to Weakly-Supervised Text Classification

1 code implementation11 Dec 2022 Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang

Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data.

text-classification Text Classification +1

One-shot Machine Teaching: Cost Very Few Examples to Converge Faster

no code implementations13 Dec 2022 Chen Zhang, Xiaofeng Cao, Yi Chang, Ivor W Tsang

Then, relying on the surjective mapping from the teaching set to the parameter, we develop a design strategy for the optimal teaching set under appropriate settings, under which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, coincide.

Both Diverse and Realism Matter: Physical Attribute and Style Alignment for Rainy Image Generation

no code implementations ICCV 2023 Changfeng Yu, Shiming Chen, Yi Chang, Yibing Song, Luxin Yan

To solve this dilemma, we propose a physical alignment and controllable generation network (PCGNet) for diverse and realistic rain generation.

Attribute Image Generation +1

A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era

no code implementations23 Jan 2023 Zhao Ren, Yi Chang, Thanh Tam Nguyen, Yang Tan, Kun Qian, Björn W. Schuller

Deep learning has been successfully applied to heart sound analysis in the past years.

Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow

no code implementations CVPR 2023 Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan

To handle practical optical flow estimation under real foggy scenes, in this work we propose a novel unsupervised cumulative domain adaptation optical flow (UCDA-Flow) framework comprising depth-association motion adaptation and correlation-alignment motion adaptation.

Domain Adaptation Optical Flow Estimation

Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow

no code implementations24 Mar 2023 Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan

In motion adaptation, we utilize the flow consistency knowledge to align the cross-domain optical flows into a motion-invariance common space, where the optical flow from clean weather is used as the guidance-knowledge to obtain a preliminary optical flow for adverse weather.

Domain Adaptation Optical Flow Estimation

A Two-Stage Real Image Deraining Method for GT-RAIN Challenge CVPR 2023 Workshop UG$^{\textbf{2}}$+ Track 3

1 code implementation13 May 2023 Yun Guo, Xueyao Xiao, Xiaoxiong Wang, Yi Li, Yi Chang, Luxin Yan

Secondly, a transformer-based single-image deraining network, Uformer, is pre-trained on a large real rain dataset and then fine-tuned on the pseudo GT to further improve image restoration.

Image Restoration Single Image Deraining +1

UADB: Unsupervised Anomaly Detection Booster

1 code implementation3 Jun 2023 Hangting Ye, Zhining Liu, Xinyi Shen, Wei Cao, Shun Zheng, Xiaofan Gui, Huishuai Zhang, Yi Chang, Jiang Bian

This is a challenging task given the heterogeneous model structures and assumptions adopted by existing UAD methods.

Unsupervised Anomaly Detection

1st Solution Places for CVPR 2023 UG$^2$+ Challenge Track 2.2-Coded Target Restoration through Atmospheric Turbulence

1 code implementation15 Jun 2023 Shengqi Xu, Shuning Cao, Haoyue Liu, Xueyao Xiao, Yi Chang, Luxin Yan

We subsequently select the sharpest set of registered frames by employing a frame selection approach based on image sharpness, and average them to produce an image that is largely free of geometric distortion, albeit with blurriness.

Deblurring Image Registration
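
A minimal sketch of the sharpness-based frame selection and averaging step described above, using the common variance-of-Laplacian sharpness score (an illustrative approximation in our own code, assuming the frames are already registered; the team's actual metric and fusion may differ):

```python
import cv2
import numpy as np

def sharpness(img_gray):
    """Variance of the Laplacian: a simple image-sharpness score."""
    return cv2.Laplacian(img_gray, cv2.CV_64F).var()

def fuse_sharpest(frames, k=10):
    """Average the k sharpest frames from a list of registered BGR frames."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    order = np.argsort([sharpness(g) for g in grays])[::-1][:k]
    return np.mean([frames[i].astype(np.float32) for i in order], axis=0)
```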

1st Solution Places for CVPR 2023 UG$^{\textbf{2}}$+ Challenge Track 2.1-Text Recognition through Atmospheric Turbulence

1 code implementation15 Jun 2023 Shengqi Xu, Xueyao Xiao, Shuning Cao, Yi Chang, Luxin Yan

In this technical report, we present the solution developed by our team VIELab-HUST for text recognition through atmospheric turbulence in Track 2.1 of the CVPR 2023 UG$^{2}$+ challenge.

Image Registration Optical Flow Estimation

A Survey on Evaluation of Large Language Models

1 code implementation6 Jul 2023 Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie

Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.

Ethics

Careful at Estimation and Bold at Exploration

no code implementations22 Aug 2023 Xing Chen, Yijun Liu, Zhaogeng Liu, Hechang Chen, Hengshuai Yao, Yi Chang

In prior work, it has been shown that policy-based exploration is beneficial for continuous action spaces in deterministic policy reinforcement learning (DPRL).

Learning Generalizable Agents via Saliency-Guided Features Decorrelation

no code implementations NeurIPS 2023 Sili Huang, Yanchao Sun, Jifeng Hu, Siyuan Guo, Hechang Chen, Yi Chang, Lichao Sun, Bo Yang

Our experimental results demonstrate that SGFD can generalize well on a wide range of test environments and significantly outperforms state-of-the-art methods in handling both task-irrelevant variations and task-relevant variations.

Reinforcement Learning (RL)

B-Spine: Learning B-Spline Curve Representation for Robust and Interpretable Spinal Curvature Estimation

no code implementations14 Oct 2023 Hao Wang, Qiang Song, Ruofeng Yin, Rui Ma, Yizhou Yu, Yi Chang

In this paper, we propose B-Spine, a novel deep learning pipeline to learn B-spline curve representation of the spine and estimate the Cobb angles for spinal curvature estimation from low-quality X-ray images.

Image-to-Image Translation

A Survey on Data Augmentation in Large Model Era

1 code implementation27 Jan 2024 Yue Zhou, Chenlu Guo, Xu Wang, Yi Chang, Yuan Wu

Leveraging large models, these data augmentation techniques have outperformed traditional approaches.

Audio Signal Processing Image Augmentation +1

Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow

no code implementations31 Jan 2024 Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan

In appearance adaptation, we employ the intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space.

Domain Adaptation Intrinsic Image Decomposition +1

EPSD: Early Pruning with Self-Distillation for Efficient Model Compression

no code implementations31 Jan 2024 Dong Chen, Ning Liu, Yichen Zhu, Zhengping Che, Rui Ma, Fachao Zhang, Xiaofeng Mou, Yi Chang, Jian Tang

Instead of a simple combination of pruning and SD, EPSD enables the pruned network to favor SD by keeping more distillable weights before training to ensure better distillation of the pruned network.

Knowledge Distillation Network Pruning +1

STAA-Net: A Sparse and Transferable Adversarial Attack for Speech Emotion Recognition

no code implementations2 Feb 2024 Yi Chang, Zhao Ren, Zixing Zhang, Xin Jing, Kun Qian, Xi Shao, Bin Hu, Tanja Schultz, Björn W. Schuller

Speech contains rich information on the emotions of humans, and Speech Emotion Recognition (SER) has been an important topic in the area of human-computer interaction.

Adversarial Attack Speech Emotion Recognition

Copyright Protection in Generative AI: A Technical Perspective

no code implementations4 Feb 2024 Jie Ren, Han Xu, Pengfei He, Yingqian Cui, Shenglai Zeng, Jiankun Zhang, Hongzhi Wen, Jiayuan Ding, Hui Liu, Yi Chang, Jiliang Tang

We examine from two distinct viewpoints: the copyrights pertaining to the source data held by the data owners and those of the generative models maintained by the model builders.

Transductive Reward Inference on Graph

no code implementations6 Feb 2024 Bohao Qu, Xiaofeng Cao, Qing Guo, Yi Chang, Ivor W. Tsang, Chengqi Zhang

In this study, we present a transductive inference approach on a reward information propagation graph, which enables the effective estimation of rewards for unlabelled data in offline reinforcement learning.

reinforcement-learning

ScreenAgent: A Vision Language Model-driven Computer Control Agent

1 code implementation9 Feb 2024 Runliang Niu, Jindong Li, Shiqi Wang, Yali Fu, Xiyu Hu, Xueyuan Leng, He Kong, Yi Chang, Qi Wang

Additionally, we construct the ScreenAgent Dataset, which collects screenshots and action sequences when completing a variety of daily computer tasks.

Language Modelling

Investigating Out-of-Distribution Generalization of GNNs: An Architecture Perspective

no code implementations13 Feb 2024 Kai Guo, Hongzhi Wen, Wei Jin, Yaming Guo, Jiliang Tang, Yi Chang

These insights have empowered us to develop a novel GNN backbone model, DGAT, designed to harness the robust properties of both graph self-attention mechanism and the decoupled architecture.

Out-of-Distribution Generalization

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)

1 code implementation23 Feb 2024 Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang

In this work, we conduct extensive empirical studies with novel attack methods, which demonstrate the vulnerability of RAG systems to leaking their private retrieval databases.

Language Modelling Retrieval

DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning

1 code implementation27 Feb 2024 Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, Jun Wang

In the development stage, DS-Agent follows the CBR framework to structure an automatic iteration pipeline, which can flexibly capitalize on the expert knowledge from Kaggle, and facilitate consistent performance improvement through the feedback mechanism.

Code Generation

JSTR: Joint Spatio-Temporal Reasoning for Event-based Moving Object Detection

no code implementations12 Mar 2024 Hanyu Zhou, Zhiwei Shi, Hao Dong, Shihan Peng, Yi Chang, Luxin Yan

In the spatial reasoning stage, we project the compensated events into the same image coordinate frame, discretize the event timestamps to obtain a time image that reflects motion confidence, and further segment the moving object by adaptive thresholding on the time image.

Motion Compensation Moving Object Detection +2

Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for Scene Flow

no code implementations12 Mar 2024 Hanyu Zhou, Yi Chang, Zhiwei Shi, Luxin Yan

In this work, we bring the event as a bridge between RGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework for scene flow, which explores a homogeneous space to fuse the cross-modal complementary knowledge for physical interpretation.

HiTRANS: A Hierarchical Transformer Network for Nested Named Entity Recognition

no code implementations Findings (EMNLP) 2021 Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, Yi Chang

Specifically, we first utilize a two-phase module to generate span representations by aggregating context information based on a bottom-up and top-down transformer network.

named-entity-recognition Named Entity Recognition +3
