no code implementations • Findings (EMNLP) 2021 • Zhiwei Yang, Jing Ma, Hechang Chen, Yunke Zhang, Yi Chang
Specifically, we first utilize a two-phase module to generate span representations by aggregating context information based on a bottom-up and top-down transformer network.
no code implementations • 14 Oct 2023 • Hao Wang, Qiang Song, Ruofeng Yin, Rui Ma, Yizhou Yu, Yi Chang
In this paper, we propose B-Spine, a novel deep learning pipeline to learn B-spline curve representation of the spine and estimate the Cobb angles for spinal curvature estimation from low-quality X-ray images.
no code implementations • 22 Aug 2023 • Xing Chen, Yijun Liu, Zhaogeng Liu, Hechang Chen, Hengshuai Yao, Yi Chang
In prior work, it has been shown that policy-based exploration is beneficial for continuous action spaces in deterministic policy reinforcement learning (DPRL).
1 code implementation • ICCV 2023 • Yun Guo, Xueyao Xiao, Yi Chang, Shumin Deng, Luxin Yan
Learning-based image deraining methods have made great progress.
no code implementations • 19 Jul 2023 • Qingyao Ai, Ting Bai, Zhao Cao, Yi Chang, Jiawei Chen, Zhumin Chen, Zhiyong Cheng, Shoubin Dong, Zhicheng Dou, Fuli Feng, Shen Gao, Jiafeng Guo, Xiangnan He, Yanyan Lan, Chenliang Li, Yiqun Liu, Ziyu Lyu, Weizhi Ma, Jun Ma, Zhaochun Ren, Pengjie Ren, Zhiqiang Wang, Mingwen Wang, Ji-Rong Wen, Le Wu, Xin Xin, Jun Xu, Dawei Yin, Peng Zhang, Fan Zhang, Weinan Zhang, Min Zhang, Xiaofei Zhu
The research field of Information Retrieval (IR) has evolved significantly, expanding beyond traditional search to meet diverse user information needs.
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
no code implementations • 15 Jun 2023 • Shengqi Xu, Shuning Cao, Haoyue Liu, Xueyao Xiao, Yi Chang, Luxin Yan
We subsequently select the sharpest set of registered frames by employing a frame selection approach based on image sharpness, and average them to produce an image that is largely free of geometric distortion, albeit with blurriness.
no code implementations • 15 Jun 2023 • Shengqi Xu, Xueyao Xiao, Shuning Cao, Yi Chang, Luxin Yan
In this technical report, we present the solution developed by our team VIELab-HUST for text recognition through atmospheric turbulence in Track 2.1 of the CVPR 2023 UG$^{2}$+ challenge.
no code implementations • 13 Jun 2023 • Siyuan Guo, Yanchao Sun, Jifeng Hu, Sili Huang, Hechang Chen, Haiyin Piao, Lichao Sun, Yi Chang
However, constrained by the limited quality of the offline dataset, its performance is often sub-optimal.
no code implementations • 8 Jun 2023 • Jifeng Hu, Yanchao Sun, Sili Huang, Siyuan Guo, Hechang Chen, Li Shen, Lichao Sun, Yi Chang, Dacheng Tao
Recent works have shown the potential of diffusion models in computer vision and natural language processing.
1 code implementation • 3 Jun 2023 • Hangting Ye, Zhining Liu, Xinyi Shen, Wei Cao, Shun Zheng, Xiaofan Gui, Huishuai Zhang, Yi Chang, Jiang Bian
This is a challenging task given the heterogeneous model structures and assumptions adopted by existing UAD methods.
1 code implementation • 13 May 2023 • Yun Guo, Xueyao Xiao, Xiaoxiong Wang, Yi Li, Yi Chang, Luxin Yan
Secondly, a transformer-based single-image deraining network, Uformer, is pre-trained on a large real rain dataset and then fine-tuned on the pseudo GT to further improve image restoration.
no code implementations • 24 Mar 2023 • Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan
In motion adaptation, we utilize the flow consistency knowledge to align the cross-domain optical flows into a motion-invariance common space, where the optical flow from clean weather is used as the guidance-knowledge to obtain a preliminary optical flow for adverse weather.
no code implementations • CVPR 2023 • Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan
To handle the practical optical flow under real foggy scenes, in this work, we propose a novel unsupervised cumulative domain adaptation optical flow (UCDA-Flow) framework: depth-association motion adaptation and correlation-alignment motion adaptation.
no code implementations • 23 Jan 2023 • Zhao Ren, Yi Chang, Thanh Tam Nguyen, Yang Tan, Kun Qian, Björn W. Schuller
Deep learning has been successfully applied to heart sound analysis in recent years.
no code implementations • ICCV 2023 • Changfeng Yu, Shiming Chen, Yi Chang, Yibing Song, Luxin Yan
To solve this dilemma, we propose a physical alignment and controllable generation network (PCGNet) for diverse and realistic rain generation.
no code implementations • 13 Dec 2022 • Chen Zhang, Xiaofeng Cao, Yi Chang, Ivor W. Tsang
Then, relying on the surjective mapping from the teaching set to the parameter, we develop a design strategy for the optimal teaching set under appropriate settings, of which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, are special cases.
1 code implementation • 11 Dec 2022 • Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data.
1 code implementation • 7 Nov 2022 • Erxin Yu, Lan Du, Yuan Jin, Zhepei Wei, Yi Chang
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their performance being comparable to that of their continuous counterparts in representation learning while being more interpretable in their predictions.
no code implementations • 2 Nov 2022 • Yi Chang, Yun Guo, Yuntong Ye, Changfeng Yu, Lin Zhu, XiLe Zhao, Luxin Yan, Yonghong Tian
In addition, considering that the existing real rain datasets are of low quality, either small in scale or downloaded from the internet, we collect a real large-scale dataset under various kinds of rainy weather that contains high-resolution rainy images.
1 code implementation • 26 Oct 2022 • Yi Chang, Zhao Ren, Thanh Tam Nguyen, Kun Qian, Björn W. Schuller
Our experiments demonstrate that training a lightweight SER model on the target dataset with speech samples and graphs can not only produce small SER models, but also enhance the model performance compared to models with speech samples only and those using classic transfer learning strategies.
1 code implementation • 14 Oct 2022 • Jifeng Hu, Yanchao Sun, Hechang Chen, Sili Huang, Haiyin Piao, Yi Chang, Lichao Sun
Our main idea is to design the multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training.
Multi-agent Reinforcement Learning
reinforcement-learning
1 code implementation • COLING 2022 • Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang
Existing fake news detection methods aim to classify a piece of news as true or false and provide veracity explanations, achieving remarkable performance.
1 code implementation • 20 May 2022 • Xing Chen, Dongcui Diao, Hechang Chen, Hengshuai Yao, Haiyin Piao, Zhixiao Sun, Zhiwei Yang, Randy Goebel, Bei Jiang, Yi Chang
The popular Proximal Policy Optimization (PPO) algorithm approximates the solution in a clipped policy space.
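For reference, the clipping that defines this policy space is the standard PPO surrogate objective; below is a minimal PyTorch sketch of that baseline, not of the variant proposed in this paper:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate (Schulman et al., 2017).

    The probability ratio is clipped to [1 - eps, 1 + eps], which confines
    the update to a clipped policy space around the behavior policy.
    """
    ratio = torch.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the pessimistic (minimum) term; negate to get a loss.
    return -torch.min(unclipped, clipped).mean()
```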
no code implementations • 19 May 2022 • Yuanbo Xu, En Wang, Yongjian Yang, Yi Chang
On the other hand, ME models directly employ the inner product as the default metric in the loss function, which cannot project users and items into a proper latent space; this is a methodological disadvantage.
1 code implementation • 30 Mar 2022 • Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller
Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19.
no code implementations • 25 Mar 2022 • Changfeng Yu, Yi Chang, Yi Li, XiLe Zhao, Luxin Yan
Consequently, we design an optimization model-driven deep CNN in which the unsupervised loss function of the optimization model is enforced on the proposed network for better generalization.
no code implementations • 10 Mar 2022 • Björn W. Schuller, Alican Akman, Yi Chang, Harry Coppock, Alexander Gebhard, Alexander Kathan, Esther Rituerto-González, Andreas Triantafyllopoulos, Florian B. Pokorny
We categorise potential computer audition applications according to the five elements of earth, water, air, fire, and aether, proposed by the ancient Greeks in their five element theory; this categorisation serves as a framework to discuss computer audition in relation to different ecological aspects.
no code implementations • 9 Mar 2022 • Yi Chang, Sofiane Laridi, Zhao Ren, Gregory Palmer, Björn W. Schuller, Marco Fisichella
The proposed framework consists of i) federated learning for data privacy, and ii) adversarial training at the training stage and randomisation at the testing stage for model robustness.
1 code implementation • CVPR 2022 • Lin Zhu, Xiao Wang, Yi Chang, Jianing Li, Tiejun Huang, Yonghong Tian
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neuron and Membrane Potential (MP) neuron.
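As background for the neuron models named above, here is a minimal sketch of a Leaky-Integrate-and-Fire update; the constants are illustrative, not the paper's settings, and the MP neuron additionally retains its potential to carry information across time:

```python
import torch

def lif_step(v, input_current, beta=0.9, v_th=1.0):
    """One step of a Leaky-Integrate-and-Fire (LIF) neuron.

    The membrane potential leaks (beta < 1), integrates the input, and the
    neuron fires a binary spike when the threshold is crossed, after which
    the potential is reset. beta and v_th are illustrative values.
    """
    v = beta * v + input_current   # leaky integration
    spike = (v >= v_th).float()    # fire on threshold crossing
    v = v * (1.0 - spike)          # hard reset for neurons that spiked
    return v, spike
```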
1 code implementation • CVPR 2022 • Yi Li, Yi Chang, Yan Gao, Changfeng Yu, Luxin Yan
Consequently, we perform inter-domain adaptation between the synthetic and real images by mutually exchanging the background and other two components.
1 code implementation • 24 Nov 2021 • Zhining Liu, Pengfei Wei, Zhepei Wei, Boyang Yu, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
Class-imbalance is a common problem in machine learning practice.
1 code implementation • 24 Nov 2021 • Zhining Liu, Jian Kang, Hanghang Tong, Yi Chang
imbalanced-ensemble, abbreviated as imbens, is an open-source Python toolbox for leveraging the power of ensemble learning to address the class imbalance problem.
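A minimal usage sketch in the toolbox's scikit-learn style; the class name follows the project README but should be checked against the installed imbens version:

```python
# pip install imbalanced-ensemble
from imbens.ensemble import SelfPacedEnsembleClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# A toy 9:1 imbalanced binary classification task.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = SelfPacedEnsembleClassifier(random_state=42)  # scikit-learn-style API
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```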
no code implementations • 28 Oct 2021 • Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang
In this paper, we investigate GNNs through the lens of weight and feature loss landscapes, i.e., the loss changes with respect to model weights and node features, respectively.
1 code implementation • 23 Sep 2021 • Kai Guo, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, Xin Wang
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
1 code implementation • Findings (EMNLP) 2021 • Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang
Aspect-level sentiment classification (ALSC) aims at identifying the sentiment polarity of a specified aspect in a sentence.
Aspect-Based Sentiment Analysis (ABSA)
Representation Learning
1 code implementation • CVPR 2023 • Chuan Tang, Xi Yang, Bojian Wu, Zhizhong Han, Yi Chang
Specifically, we first segment the point clouds into parts, and then leverage an optimal transport method to match parts and words in an optimized feature space, where each part is represented by aggregating the features of all points within it and each word is abstracted by its contextual information.
no code implementations • 1 Jul 2021 • Behnood Rasti, Yi Chang, Emanuele Dalsasso, Loïc Denis, Pedram Ghamisi
Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore the restoration techniques and fast-forward the community.
no code implementations • 29 May 2021 • Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yi Chang, Michael K. Ng, Chao Li
Recently, transform-based tensor nuclear norm minimization methods have been considered for capturing low-rank tensor structures when recovering third-order tensors in multi-dimensional image processing applications.
1 code implementation • 28 May 2021 • Siyuan Guo, Lixin Zou, Yiding Liu, Wenwen Ye, Suqi Cheng, Shuaiqiang Wang, Hechang Chen, Dawei Yin, Yi Chang
Based on it, a more robust doubly robust (MRDR) estimator has been proposed to further reduce its variance while retaining its double robustness.
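For context, the standard doubly robust estimator that MRDR builds on, written in the usual notation (background only, not the paper's new estimator):

$$\hat{\mathcal{E}}_{\text{DR}} = \frac{1}{|\mathcal{D}|} \sum_{(u,i) \in \mathcal{D}} \left( \hat{e}_{u,i} + \frac{o_{u,i}\,\big(e_{u,i} - \hat{e}_{u,i}\big)}{\hat{p}_{u,i}} \right)$$

where $o_{u,i}$ indicates whether the pair $(u, i)$ is observed, $\hat{p}_{u,i}$ is its estimated propensity, and $\hat{e}_{u,i}$ imputes the prediction error $e_{u,i}$. The estimator remains unbiased if either the propensities or the imputed errors are accurate; MRDR redesigns how the imputation model is learned so that the variance of the inverse-propensity term shrinks.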
no code implementations • CVPR 2021 • Yuntong Ye, Yi Chang, Hanyu Zhou, Luxin Yan
Existing deep learning-based image deraining methods have achieved promising performance on synthetic rainy images, but they typically rely on pairs of sharp images and their simulated rainy counterparts.
1 code implementation • 22 Feb 2021 • Tingyu Xia, Yue Wang, Yuan Tian, Yi Chang
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks.
no code implementations • 27 Jan 2021 • Yuxiang Ren, Bo Wang, Jiawei Zhang, Yi Chang
AA-HGNN utilizes an active learning framework to enhance learning performance, especially when facing the paucity of labeled data.
no code implementations • ICLR 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, ZongYuan Ge, Yi Chang
The \textit{early stopping} method therefore can be exploited for learning with noisy labels.
Ranked #32 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
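A generic PyTorch-style early-stopping loop of the kind this observation motivates; the train_one_epoch and evaluate hooks are placeholders, and the paper's actual stopping criterion may differ:

```python
import copy

def train_with_early_stopping(model, train_one_epoch, evaluate,
                              max_epochs=100, patience=5):
    """Stop when validation accuracy plateaus, i.e., before the network
    starts memorizing the noisy labels late in training."""
    best_acc, bad_epochs = 0.0, 0
    best_state = copy.deepcopy(model.state_dict())
    for epoch in range(max_epochs):
        train_one_epoch(model)     # one pass over the (noisy) training set
        acc = evaluate(model)      # accuracy on a trusted validation set
        if acc > best_acc:
            best_acc, bad_epochs = acc, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # no improvement for `patience` epochs
                break
    model.load_state_dict(best_state)   # roll back to the best checkpoint
    return model
```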
no code implementations • COLING 2020 • Erxin Yu, Wenjuan Han, Yuan Tian, Yi Chang
Distantly Supervised Relation Extraction (DSRE) has proven to be effective to find relational facts from texts, but it still suffers from two main problems: the wrong labeling problem and the long-tail problem.
2 code implementations • NeurIPS 2020 • Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
This makes MESA generally applicable to most of the existing learning models and the meta-sampler can be efficiently applied to new tasks.
no code implementations • 22 Aug 2020 • Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yu-Bang Zheng, Yi Chang
Recently, convolutional neural network (CNN)-based methods have been proposed for hyperspectral image (HSI) denoising.
1 code implementation • 30 Apr 2020 • Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang
In experiments, we achieve state-of-the-art performance on three benchmarks and a zero-shot dataset for link prediction, with highlights of inference costs reduced by 1-2 orders of magnitude compared to a textual encoding method.
Ranked #4 on Link Prediction on UMLS
2 code implementations • 17 Jan 2020 • Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi Chang
In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which is a nonlinear feature selection method.
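Since the explanation reduces to nonlinear feature selection around the node being explained, here is a sketch with the pyHSICLasso package under assumed inputs; the neighborhood construction and GraphLIME's exact API are simplified:

```python
# pip install pyHSICLasso
import numpy as np
from pyHSICLasso import HSICLasso

# Hypothetical inputs: features of the sampled neighbors of the node being
# explained, and the GNN's outputs for those neighbors (regression view).
X_neighbors = np.random.randn(50, 16)   # 50 neighbors, 16 node features
y_outputs = np.random.randn(50)         # GNN predictions for the neighbors

hsic = HSICLasso()
hsic.input(X_neighbors, y_outputs)
hsic.regression(5)                      # select the 5 most relevant features
print(hsic.get_index())                 # indices of the explanatory features
```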
1 code implementation • 8 Sep 2019 • Zhining Liu, Wei Cao, Zhifeng Gao, Jiang Bian, Hechang Chen, Yi Chang, Tie-Yan Liu
To tackle this problem, we conduct deep investigations into the nature of class imbalance, which reveals that not only the disproportion between classes, but also other difficulties embedded in the nature of data, especially, noises and class overlapping, prevent us from learning effective classifiers.
5 code implementations • ACL 2020 • Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, Yi Chang
Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction.
Ranked #5 on Relation Extraction on NYT11-HRL
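The linked implementation is authoritative; as a reading aid only, a PyTorch sketch of a cascade binary tagging head in the spirit of this framework (layer sizes and the subject-conditioning step are simplifications, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class CascadeTaggingHead(nn.Module):
    """Sketch: first tag subject start/end positions, then, conditioned on
    one detected subject, tag object start/end positions per relation."""

    def __init__(self, hidden=768, num_relations=24):
        super().__init__()
        self.subj_start = nn.Linear(hidden, 1)
        self.subj_end = nn.Linear(hidden, 1)
        self.obj_start = nn.Linear(hidden, num_relations)
        self.obj_end = nn.Linear(hidden, num_relations)

    def forward(self, token_states, subj_repr):
        # token_states: (batch, seq_len, hidden) from an encoder such as BERT
        # subj_repr:    (batch, hidden) pooled vector of one candidate subject
        s_start = torch.sigmoid(self.subj_start(token_states))
        s_end = torch.sigmoid(self.subj_end(token_states))
        conditioned = token_states + subj_repr.unsqueeze(1)   # inject subject
        o_start = torch.sigmoid(self.obj_start(conditioned))  # per-relation
        o_end = torch.sigmoid(self.obj_end(conditioned))
        return s_start, s_end, o_start, o_end
```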
no code implementations • 28 Aug 2019 • Chao-Lin Liu, Yi Chang
Chinese characters are classified into two categories according to whether or not they are followed by a punctuation mark.
no code implementations • 23 Aug 2019 • Zhepei Wei, Yantao Jia, Yuan Tian, Mohammad Javad Hosseini, Sujian Li, Mark Steedman, Yi Chang
In this work, we first introduce the hierarchical dependency and horizontal commonality between the two levels, and then propose an entity-enhanced dual tagging framework that enables the triple extraction (TE) task to utilize such interactions with self-learned entity features through an auxiliary entity extraction (EE) task, without breaking the joint decoding of relational triples.
1 code implementation • 13 Aug 2019 • Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu
To improve the quality and retrieval performance of the generated questions, we make two major improvements: 1) to better encode the semantics of ill-formed questions, we enrich the representation of questions with character embeddings and recently proposed contextual word embeddings such as BERT, in addition to traditional context-free word embeddings; 2) to enable the model to generate the desired questions, we train it with deep reinforcement learning techniques that treat an appropriate wording of the generation as an immediate reward and the correlation between the generated question and the answer as a time-delayed long-term reward.
no code implementations • 1 Mar 2019 • Shubhra Kanti Karmaker Santu, Liangda Li, Yi Chang, ChengXiang Zhai
This assumption is unrealistic as there are many correlated events in the real world which influence each other and thus, would pose a joint influence on the user search behavior rather than posing influence independently.
no code implementations • 20 Nov 2018 • Dae Hoon Park, Chiu Man Ho, Yi Chang, Huaqing Zhang
However, we observe that imposing strong L1 or L2 regularization with stochastic gradient descent on deep neural networks easily fails, which limits the generalization ability of the underlying neural networks.
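A textbook illustration of this failure mode: a plain SGD subgradient step with an L1 penalty keeps perturbing weights away from zero, whereas the proximal (soft-thresholding) update applies the penalty in closed form. This is standard background, not the method proposed in the paper:

```python
import numpy as np

def sgd_l1_prox_step(w, grad, lr=0.01, lam=1e-4):
    """One proximal (ISTA-style) step for an L1-regularized loss:
    gradient step on the data loss, then closed-form soft-thresholding,
    which can set weights exactly to zero."""
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
```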
no code implementations • 9 Nov 2018 • Dae Hoon Park, Yi Chang
To solve the problems at the same time, we propose an adversarial sampling and training framework to learn ad-hoc retrieval models with implicit feedback.
no code implementations • ICLR 2019 • Chiu Man Ho, Dae Hoon Park, Wei Yang, Yi Chang
We propose sequenced-replacement sampling (SRS) for training deep neural networks.
6 code implementations • EMNLP 2018 • Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu
User intent detection plays a critical role in question-answering and dialog systems.
no code implementations • 26 Aug 2018 • Ye-Tao Wang, Xi-Le Zhao, Tai-Xiang Jiang, Liang-Jian Deng, Yi Chang, Ting-Zhu Huang
Then, our framework starts by learning the motion blur kernel, which is determined by two factors, angle and length, using a plain neural network, denoted as the parameter net, from a patch of the texture component.
1 code implementation • ACL 2018 • Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata, Jiangsheng Yu, Yi Chang
Document modeling is essential to a variety of natural language understanding tasks.
no code implementations • ACL 2018 • Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, Yi Chang
In MNs, attention mechanism plays a crucial role in detecting the sentiment context for the given target.
no code implementations • NAACL 2018 • Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang
Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, and denote only its meaning in a canonical form.
no code implementations • 16 Feb 2018 • Shuai Wang, Mianwei Zhou, Sahisnu Mazumder, Bing Liu, Yi Chang
Stage one extracts/groups the target-related words (called t-words) for a given target.
no code implementations • 18 Jan 2018 • Shuai Wang, Mianwei Zhou, Geli Fei, Yi Chang, Bing Liu
While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence).
no code implementations • ICLR 2018 • Dae Hoon Park, Chiu Man Ho, Yi Chang
L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions.
no code implementations • ICCV 2017 • Yi Chang, Luxin Yan, Sheng Zhong
This paper addresses the problem of line pattern noise removal from a single image, such as rain streak, hyperspectral stripe and so on.
no code implementations • 1 Sep 2017 • Yi Chang, Luxin Yan, Houzhang Fang, Sheng Zhong, Zhijun Zhang
To overcome these limitations, in this work, we propose a unified low-rank tensor recovery model for comprehensive HSI restoration tasks, in which the non-local similarity between spectral-spatial cubics and the spectral correlation are simultaneously captured by third-order tensors.
Ranked #12 on Hyperspectral Image Denoising on ICVL-HSI-Gaussian50
no code implementations • CVPR 2017 • Yi Chang, Luxin Yan, Sheng Zhong
Recent low-rank based matrix/tensor recovery methods have been widely explored in multispectral images (MSI) denoising.
no code implementations • 6 Jun 2017 • Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, Huan Liu
To the best of our knowledge, we are the first to tackle this problem, which involves two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, which necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly.
no code implementations • 14 Aug 2016 • Makoto Yamada, Jiliang Tang, Jose Lugo-Martinez, Ermin Hodzic, Raunak Shrestha, Avishek Saha, Hua Ouyang, Dawei Yin, Hiroshi Mamitsuka, Cenk Sahinalp, Predrag Radivojac, Filippo Menczer, Yi Chang
However, sophisticated learning models are computationally unfeasible for data with millions of features.
no code implementations • 21 Jul 2016 • Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang
The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios.
no code implementations • 21 Jul 2016 • Yilin Wang, Suhang Wang, Jiliang Tang, Neil O'Hare, Yi Chang, Baoxin Li
Understanding human actions in wild videos is an important task with a broad range of applications.
no code implementations • 1 Jun 2016 • Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin
We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization.
no code implementations • 24 Nov 2015 • Jiliang Tang, Yi Chang, Charu Aggarwal, Huan Liu
Many real-world relations can be represented by signed networks with positive and negative links, as a result of which signed network analysis has attracted increasing attention from multiple disciplines.
1 code implementation • 4 Jul 2015 • Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan Wimalawarne, Suleiman A. Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang
We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs).
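For intuition, the usual route to convexity in this setting, written in standard FM notation (the paper's exact formulation may differ): the rank-constrained factor product of an FM is replaced by a single interaction matrix penalized by the nuclear norm.

$$\hat{y}(\mathbf{x}) = w_0 + \mathbf{w}^{\top}\mathbf{x} + \sum_{i<j} Z_{ij}\, x_i x_j, \qquad \min_{w_0,\, \mathbf{w},\, Z} \; \sum_n \ell\big(y_n, \hat{y}(\mathbf{x}_n)\big) + \lambda \lVert Z \rVert_{*}$$

The standard FM fixes $Z = \mathbf{V}\mathbf{V}^{\top}$ with a hard rank constraint, which is non-convex; replacing that constraint with the nuclear-norm penalty $\lVert Z \rVert_{*}$ keeps the training objective convex.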
no code implementations • 5 Dec 2014 • Suriya Gunasekar, Makoto Yamada, Dawei Yin, Yi Chang
We address the collective matrix completion problem of jointly recovering a collection of matrices with shared structure from partial (and potentially noisy) observations.
no code implementations • 10 Nov 2014 • Makoto Yamada, Avishek Saha, Hua Ouyang, Dawei Yin, Yi Chang
We propose a feature selection method that finds non-redundant features from a large and high-dimensional data in nonlinear way.
no code implementations • 19 Apr 2013 • Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang
We consider stochastic strongly convex optimization with a complex inequality constraint.