no code implementations • Findings (EMNLP) 2021 • Sen yang, Qingyu Zhou, Dawei Feng, Yang Liu, Chao Li, Yunbo Cao, Dongsheng Li
Moreover, this task can be used to improve visual question generation and visual question answering.
no code implementations • ICML 2020 • Chao Li, Zhun Sun
Tensor network (TN) decomposition is a promising framework to represent extremely high-dimensional problems with few parameters.
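To make the parameter-saving idea concrete, here is a minimal sketch (not the paper's method; the shapes and ranks are arbitrary assumptions) of a tensor-train-style factorization, one common TN format:

```python
# Illustrative sketch: a tensor-train-style factorization stores a
# 10x10x10x10 tensor (10,000 entries) as a chain of small 3-way cores,
# so the parameter count grows linearly in the number of modes.
import numpy as np

dims, rank = [10, 10, 10, 10], 4
cores = [np.random.randn(1 if i == 0 else rank, d,
                         1 if i == len(dims) - 1 else rank)
         for i, d in enumerate(dims)]

def tt_to_full(cores):
    """Contract the chain of cores back into the full tensor."""
    full = cores[0]                              # shape (1, d0, r)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

full = tt_to_full(cores)
n_params = sum(c.size for c in cores)
print(full.shape, full.size, "entries vs", n_params, "parameters")  # 10000 vs 400
```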
no code implementations • COLING 2022 • Yuanzhou Yao, Zhao Zhang, Yongjun Xu, Chao Li
To this end, we propose to solve the FKGC problem with the data augmentation technique.
no code implementations • COLING 2022 • Xiaofeng Qi, Chao Li, Zhongping Liang, Jigang Liu, Cheng Zhang, Yuanxin Wei, Lin Yuan, Guang Yang, Lanxiao Huang, Min Li
This paper introduces a generative system for in-battle real-time commentary in mobile MOBA games.
no code implementations • 19 Jan 2023 • Hang Zhang, Rongguang Wang, Jinwei Zhang, Dongdong Liu, Chao Li, Jiahao Li
Compared to natural images, medical images usually exhibit stronger visual patterns; injecting such priors into neural networks therefore adds flexibility and elasticity to resource-limited clinical applications.
1 code implementation • 6 Jan 2023 • Chao Li, Chen Gong, Qiang He, Xinwen Hou, Yu Liu
To explicitly encourage exploration in continuous control tasks, we propose CCEP (Centralized Cooperative Exploration Policy), which utilizes underestimation and overestimation of value functions to maintain the capacity of exploration.
no code implementations • 7 Dec 2022 • Chao Li
Both imitation learning (IL) and learning from demonstrations (LfD) improve the training process by using expert demonstrations, but imperfect expert demonstrations can mislead policy improvement.
no code implementations • 27 Nov 2022 • Chao Li, Hao Xu, Kun He
To address these issues, we propose a novel method called Partial Message Meta Multigraph search (PMMM) to automatically optimize the neural architecture design on HINs.
1 code implementation • 17 Nov 2022 • Jiawei Jiang, Dayan Pan, Houxing Ren, Xiaohan Jiang, Chao Li, Jingyuan Wang
TRL aims to convert complicated raw trajectories into low-dimensional representation vectors, which can be applied to various downstream tasks, such as trajectory classification, clustering, and similarity computation.
no code implementations • 17 Nov 2022 • Yuxuan Zhou, Chao Li, Zhi-Qi Cheng, Yifeng Geng, Xuansong Xie, Margret Keuper
Transformers assume that the input is permutation-invariant and homogeneous (partially alleviated by positional encoding), which ignores an important characteristic of skeleton data, i.e., bone connectivity.
1 code implementation • 1 Nov 2022 • Jinwei Zhang, Pascal Spincemaille, Hang Zhang, Thanh D. Nguyen, Chao Li, Jiahao Li, Ilhami Kovanlikaya, Mert R. Sabuncu, Yi Wang
In this paper, we present our new framework, called Learned Acquisition and Reconstruction Optimization (LARO), which aims to accelerate the multi-echo gradient echo (mGRE) pulse sequence for QSM.
1 code implementation • 25 Oct 2022 • Lizhao Liu, Kunyang Lin, Shangxin Huang, Zhongli Li, Chao Li, Yunbo Cao, Qingyu Zhou
Moreover, there are no standardized benchmarks to provide a fair comparison between different stroke extraction methods, which, we believe, is a major impediment to the development of Chinese character stroke understanding and related tasks.
no code implementations • 19 Oct 2022 • Xu Yuan, Chen Xu, Qiwei Chen, Tao Zhuang, Hongjie Chen, Chao Li, Junfeng Ge
This paper proposes a Hierarchical Multi-Interest Co-Network (HCN) to capture users' diverse interests in the coarse-grained ranking stage.
1 code implementation • 19 Oct 2022 • Yinghui Li, Shirong Ma, Qingyu Zhou, Zhongli Li, Li Yangning, Shulin Huang, Ruiyang Liu, Chao Li, Yunbo Cao, Haitao Zheng
Chinese Spell Checking (CSC) aims to detect and correct Chinese spelling errors.
no code implementations • 18 Oct 2022 • Shentong Mo, Zhun Sun, Chao Li
In particular, on classification downstream tasks with linear probes, our proposed method outperforms the state-of-the-art instance-wise and prototypical contrastive learning methods by 2.96% on the ImageNet-100 dataset and by 2.46% on the ImageNet-1K dataset under the same batch-size and epoch settings.
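For readers unfamiliar with the linear-probe protocol mentioned above, the following is a generic sketch of how it is typically run (a frozen encoder plus a trained linear classifier); it is illustrative only and not the paper's code:

```python
# Minimal linear-probe sketch: freeze a pretrained encoder and train only
# a linear classifier on its features.
import torch
import torch.nn as nn

def linear_probe(encoder, loader, feat_dim, num_classes, epochs=10):
    encoder.eval()                               # freeze the backbone
    probe = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)          # no gradients to the encoder
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```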
no code implementations • 22 Sep 2022 • Chao Li, Jiancheng Cai, Ranran Huang, Xinmin Liu
Most existing learning-based image matching pipelines are designed for better feature detectors and descriptors which are robust to repeated textures, viewpoint changes, etc., while little attention has been paid to rotation invariance.
no code implementations • 16 Sep 2022 • Kai Zhang, Qinmin Yang, Chao Li
Multivariate time series (MTS) are a ubiquitous data type in many practical applications.
1 code implementation • ICLR 2022 • Chao Li, Aojun Zhou, Anbang Yao
Learning a single static convolutional kernel in each convolutional layer is the common training paradigm of modern Convolutional Neural Networks (CNNs).
no code implementations • 31 Aug 2022 • Yunhao Wang, Huixin Sun, Xiaodi Wang, Bin Zhang, Chao Li, Ying Xin, Baochang Zhang, Errui Ding, Shumin Han
We develop a simple but effective module to explore the full potential of transformers for visual representation by learning fine-grained and coarse-grained features at a token level and dynamically fusing them.
1 code implementation • COLING 2022 • Yusen Zhang, Zhongli Li, Qingyu Zhou, Ziyi Liu, Chao Li, Mina Ma, Yunbo Cao, Hongzhi Liu
To automatically correct handwritten assignments, the traditional approach is to use an OCR model to recognize characters and compare them to answers.
no code implementations • 18 Aug 2022 • Shentong Mo, Zhun Sun, Chao Li
One of the drawbacks of CSL is that the loss term requires a large number of negative samples to yield a sufficiently tight mutual information bound.
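The negative-sample dependence mentioned above comes from the InfoNCE-style objective commonly used in CSL; below is a standard in-batch-negatives implementation, shown only to illustrate why batch size matters, not as this paper's exact loss:

```python
# Standard InfoNCE with in-batch negatives: every other sample in the batch
# serves as a negative, which is why large batches tighten the bound.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(256, 128), torch.randn(256, 128))
```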
1 code implementation • 10 Aug 2022 • Wangmeng Xiang, Chao Li, Yuxuan Zhou, Biao Wang, Lei Zhang
More specifically, we employ a large-scale language model as the knowledge engine to provide text descriptions for the body-part movements of actions, and propose a multi-modal training scheme that utilizes the text encoder to generate feature vectors for different body parts and supervise the skeleton encoder for action representation learning.
Ranked #1 on Skeleton Based Action Recognition on N-UCLA
1 code implementation • 27 Jul 2022 • Wangmeng Xiang, Chao Li, Biao Wang, Xihan Wei, Xian-Sheng Hua, Lei Zhang
For 3D video-based tasks such as action recognition, however, directly applying spatiotemporal transformers on video data will bring heavy computation and memory burdens due to the largely increased number of patches and the quadratic complexity of self-attention computation.
Ranked #4 on Action Recognition on Something-Something V1
1 code implementation • 9 Jul 2022 • Shihao Zou, Yuanlu Xu, Chao Li, Lingni Ma, Li Cheng, Minh Vo
In this paper, we propose Snipper, a framework to perform multi-person 3D pose estimation, tracking and motion forecasting simultaneously in a single inference.
no code implementations • 29 Jun 2022 • Guan Shen, Jieru Zhao, Quan Chen, Jingwen Leng, Chao Li, Minyi Guo
However, the quadratic complexity of self-attention w.r.t. the sequence length incurs heavy computational and memory burdens, especially for tasks with long sequences.
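A quick back-of-envelope calculation (arbitrary head count and sequence lengths, not taken from the paper) illustrates this quadratic growth:

```python
# Why self-attention cost grows quadratically with sequence length:
# the score matrix alone has L*L entries per head.
def attention_matrix_entries(seq_len, heads=12):
    return heads * seq_len * seq_len

for L in (512, 2048, 8192):
    print(L, f"{attention_matrix_entries(L):,} score entries")
# 512  ->   3,145,728
# 2048 ->  50,331,648  (16x more for 4x the length)
# 8192 -> 805,306,368
```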
no code implementations • 15 Jun 2022 • Yuxuan Zhou, Wangmeng Xiang, Chao Li, Biao Wang, Xihan Wei, Lei Zhang, Margret Keuper, Xiansheng Hua
Unlike convolutional inductive biases, which are forced to focus exclusively on hard-coded local regions, our proposed SPs are learned by the model itself and take a variety of spatial relations into account.
1 code implementation • 14 Jun 2022 • Chao Li, Junhua Zeng, Zerui Tao, Qibin Zhao
Recent works have put much effort into tensor network structure search (TN-SS), which aims to select suitable tensor network (TN) structures (TN ranks, formats, and so on) for decomposition or learning tasks.
no code implementations • 27 Apr 2022 • Chao Li, Yanan You, Wenli Zhou
3) With the guidance of convolution features, we define the cost function from both positive and negative sides.
no code implementations • 10 Apr 2022 • Chao Li, Jia Ning, Han Hu, Kun He
Differentiable architecture search (DARTS) has attracted much attention due to its simplicity and significant improvement in efficiency.
no code implementations • 21 Mar 2022 • Yiran Wei, Xi Chen, Lei Zhu, Lipei Zhang, Carola-Bibiane Schönlieb, Stephen J. Price, Chao Li
In this study, we propose a multi-modal learning framework using three separate encoders to extract features of focal tumor image, tumor geometrics and global brain networks.
1 code implementation • Findings (ACL) 2022 • Shaopeng Lai, Qingyu Zhou, Jiali Zeng, Zhongli Li, Chao Li, Yunbo Cao, Jinsong Su
First, they simply mix additionally constructed training instances with the original ones to train models, which fails to make models explicitly aware of the procedure of gradual correction.
no code implementations • 8 Mar 2022 • Yiran Wei, Stephen J. Price, Carola-Bibiane Schönlieb, Chao Li
In this study, we develop a self-supervised contrastive learning approach to generate structural brain networks from routine anatomical MRI under the guidance of diffusion MRI.
no code implementations • Applied Intelligence 2022 • Chao Li, Shifei Ding, Xiao Xu, Shuying Du & Tianhao Shi
The density peaks clustering (DPC) algorithm provides an efficient way to quickly find cluster centers with decision graphs.
no code implementations • 8 Mar 2022 • Lipei Zhang, Yiran Wei, Ying Fu, Stephen Price, Carola-Bibiane Schönlieb, Chao Li
In the proposed scheme, we design a normalized modality contrastive loss (NMC-loss), which promotes disentangling the complementary multi-modality representations of FFPE and frozen sections from the same patient.
no code implementations • 7 Mar 2022 • Jialiang Sun, Wen Yao, Tingsong Jiang, Chao Li, Xiaoqian Chen
To alleviate these problems, in this paper, we first propose a novel platform called auto adversarial attack and defense ($A^{3}D$), which can help search for robust neural network architectures and efficient adversarial attacks.
no code implementations • 4 Mar 2022 • Youneng Bao, Fangyang Meng, Wen Tan, Chao Li, Yonghong Tian, Yongsheng Liang
From the viewpoint of TSM, existing transformation methods mathematically reduce to a linear modulation.
no code implementations • Findings (ACL) 2022 • Yinghui Li, Qingyu Zhou, Yangning Li, Zhongli Li, Ruiyang Liu, Rongyi Sun, Zizhen Wang, Chao Li, Yunbo Cao, Hai-Tao Zheng
However, there exists a gap between the learned knowledge of PLMs and the goal of CSC task.
no code implementations • 9 Feb 2022 • Shanzhi Yin, Chao Li, Wen Tan, Youneng Bao, Yongsheng Liang, Wei Liu
Neural image compression has reached or even surpassed traditional methods (such as JPEG, BPG, and WebP).
no code implementations • 14 Jan 2022 • Yiran Wei, Chao Li, Xi Chen, Carola-Bibiane Schönlieb, Stephen J. Price
Further, the collaborative learning model achieves better performance than either the CNN or the GNN alone.
1 code implementation • 11 Jan 2022 • Jinyu Lu, Guoqiang Liu, Bing Sun, Chao Li, Li Liu
In CRYPTO 2019, Gohr made a pioneering attempt and successfully applied deep learning to the differential cryptanalysis against NSA block cipher SPECK32/64, achieving higher accuracy than the pure differential distinguishers.
1 code implementation • Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2021 • Dixi Yao, Liyao Xiang, Zifan Wang, Jiayu Xu, Chao Li, Xinbing Wang
Experimental results show that our system not only adapts well to, but also draws on the varying contexts, delivering a practical and efficient solution to edge-cloud model training.
Ranked #2 on Recommendation Systems on MovieLens 1M (Precision metric)
no code implementations • 18 Nov 2021 • Shanzhi Yin, Chao Li, Youneng Bao, Yongsheng Liang
Recently, learning-based image compression has reached performance comparable with traditional image codecs (such as JPEG, BPG, and WebP).
no code implementations • 11 Nov 2021 • Zhao Zhang, Fuzhen Zhuang, HengShu Zhu, Chao Li, Hui Xiong, Qing He, Yongjun Xu
This will lead to low-quality and unreliable representations of KGs.
1 code implementation • International Conference on Advances in Geographic Information Systems 2021 • Jingyuan Wang, Jiawei Jiang, Wenjun Jiang, Chao Li, Wayne Xin Zhao
This paper presents LibCity, a unified, comprehensive, and extensible library for traffic prediction, which provides researchers with a credible experimental tool and a convenient development framework.
Multivariate Time Series Forecasting • Spatio-Temporal Forecasting • +2
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
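As one concrete example of such a bounded perturbation, the classic FGSM attack below perturbs an input along the sign of the loss gradient within an epsilon ball; it is a textbook illustration, not any of the competition entries' methods:

```python
# Classic FGSM (Goodfellow et al.): a bounded, nearly imperceptible
# perturbation crafted from the sign of the input gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()           # step along the loss gradient
    return x_adv.clamp(0, 1).detach()         # keep a valid image range
```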
1 code implementation • Findings (ACL) 2022 • Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, Yunbo Cao
Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
3 code implementations • CVPR 2022 • Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.
no code implementations • 4 Sep 2021 • Yiran Wei, Yonghao Li, Xi Chen, Carola-Bibiane Schönlieb, Chao Li, Stephen J. Price
Here we propose a method to predict IDH mutation using GNN, based on the structural brain network of patients.
no code implementations • 21 Aug 2021 • Zihao Huang, Zhe Sun, Feng Duan, Andrzej Cichocki, Peiying Ruan, Chao Li
To tackle this, we propose L3C-Stereo, a multi-scale lossless compression model consisting of two main modules: the warping module and the probability estimation module.
no code implementations • 21 Aug 2021 • YiFan Li, Chao Li, Yiran Wei, Stephen Price, Carola-Bibiane Schönlieb, Xi Chen
In this paper, we propose an adaptive unsupervised learning approach for efficient MRI intra-tumor partitioning and glioblastoma survival prediction.
no code implementations • 10 Aug 2021 • Balagopal Unnikrishnan, Cuong Nguyen, Shafa Balaram, Chao Li, Chuan Sheng Foo, Pavitra Krishnaswamy
Specifically, we describe adaptations for scenarios with 2D and 3D inputs, uni and multi-label classification, and class distribution mismatch between labeled and unlabeled portions of the training data.
1 code implementation • 10 Aug 2021 • Qiwei Chen, Changhua Pei, Shanshan Lv, Chao Li, Junfeng Ge, Wenwu Ou
Recently, researchers have found that the performance of CTR model can be improved greatly by taking user behavior sequence into consideration, especially long-term user behavior sequence.
1 code implementation • 6 Jul 2021 • Kun He, Chao Li, Yixiao Yang, Gao Huang, John E. Hopcroft
We first propose a simple yet efficient implementation of the convolution using circular kernels, and empirically show the significant advantages of large circular kernels over the counterpart square kernels.
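One simple way to approximate a circular kernel is to zero out the corners of a square kernel with a binary disk mask; the sketch below shows that idea only, and the paper's actual implementation may differ:

```python
# Emulate a circular kernel by masking the corners of a square kernel.
import torch
import torch.nn as nn

def circular_mask(k):
    ys, xs = torch.meshgrid(torch.arange(k), torch.arange(k), indexing="ij")
    c = (k - 1) / 2
    return (((ys - c) ** 2 + (xs - c) ** 2) <= c ** 2 + 1e-6).float()

conv = nn.Conv2d(64, 64, kernel_size=9, padding=4)
mask = circular_mask(9)                       # (9, 9) binary disk
with torch.no_grad():
    conv.weight.mul_(mask)                    # broadcasts over (out, in, 9, 9)
```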
1 code implementation • 5 Jul 2021 • Yipeng Zhou, Xuezheng Liu, Yao Fu, Di wu, Chao Li, Shui Yu
In this work, we study a crucial question that has been largely overlooked by existing works: what are the optimal numbers of queries and replies in FL with DP such that the final model accuracy is maximized?
no code implementations • 15 Jun 2021 • Akarsh Prabhakara, Diana Zhang, Chao Li, Sirajum Munir, Aswin Sankanaryanan, Anthony Rowe, Swarun Kumar
mmWave radars offer excellent depth resolution even at very long ranges owing to their high bandwidth.
no code implementations • 29 May 2021 • Yi-Si Luo, Xi-Le Zhao, Tai-Xiang Jiang, Yi Chang, Michael K. Ng, Chao Li
Recently, transform-based tensor nuclear norm minimization methods are considered to capture low-rank tensor structures to recover third-order tensors in multi-dimensional image processing applications.
1 code implementation • Findings (ACL) 2021 • Heng-Da Xu, Zhongli Li, Qingyu Zhou, Chao Li, Zizhen Wang, Yunbo Cao, Heyan Huang, Xian-Ling Mao
Chinese Spell Checking (CSC) aims to detect and correct erroneous characters for user-generated text in the Chinese language.
Ranked #2 on Chinese Spell Checking on SIGHAN 2015
no code implementations • NeurIPS 2021 • Chao Li, Junhua Zeng, Zerui Tao, Qibin Zhao
Recent works have devoted much effort to the structure search problem for tensor network (TN) representation, the aim of which is to select the optimal network for TN contraction to fit a tensor.
no code implementations • 20 May 2021 • Miao Liu, Lingni Ma, Kiran Somasundaram, Yin Li, Kristen Grauman, James M. Rehg, Chao Li
Given a video captured from a first person perspective and the environment context of where the video is recorded, can we recognize what the person is doing and identify where the action occurs in the 3D space?
no code implementations • 4 May 2021 • Chao Li, Hang Zhang, Jinwei Zhang, Pascal Spincemaille, Thanh D. Nguyen, Yi Wang
An approach to reduce motion artifacts in Quantitative Susceptibility Mapping using deep learning is proposed.
1 code implementation • 28 Apr 2021 • Manyu Zhu, Dongliang He, Xin Li, Chao Li, Fu Li, Xiao Liu, Errui Ding, Zhaoxiang Zhang
Inpainting arbitrary missing regions is challenging because learning valid features for various masked regions is nontrivial.
Ranked #3 on Image Inpainting on CelebA-HQ
no code implementations • 10 Mar 2021 • Chao Li, Yiran Wei, Xi Chen, Carola-Bibiane Schonlieb
The proposed BrainNetGAN is a generative adversarial network variant to augment the brain structural connectivity matrices for binary dementia classification tasks.
no code implementations • 10 Mar 2021 • Jinwei Zhang, Hang Zhang, Chao Li, Pascal Spincemaille, Mert Sabuncu, Thanh D. Nguyen, Yi Wang
Quantitative imaging in MRI usually involves acquisition and reconstruction of a series of images at multi-echo time points, which possibly requires more scan time and specific reconstruction technique compared to conventional qualitative imaging.
no code implementations • 6 Mar 2021 • Hang Zhang, Rongguang Wang, Jinwei Zhang, Chao Li, Gufeng Yang, Pascal Spincemaille, Thanh Nguyen, Yi Wang
We introduce Neural Representation of Distribution (NeRD) technique, a module for convolutional neural networks (CNNs) that can estimate the feature distribution by optimizing an underlying function mapping image coordinates to the feature distribution.
1 code implementation • 2 Mar 2021 • Hejia Qiu, Chao Li, Ying Weng, Zhun Sun, Xingyu He, Qibin Zhao
The tensor-power (TP) recurrent model is a family of non-linear dynamical systems whose recurrence relation consists of a p-fold (a.k.a. degree-p) tensor product.
1 code implementation • 2 Feb 2021 • Runhua Xu, Chao Li, James Joshi
We also formally show the security guarantee provided by TAB, and analyze the privacy guarantee and trustworthiness it provides.
Cryptography and Security • Networking and Internet Architecture
2 code implementations • 26 Jan 2021 • Qinwei Lin, Chao Li, Xifeng Zhao, Xianhai Chen
Decentralization has been widely acknowledged as a core virtue of blockchains.
Cryptography and Security • Databases
no code implementations • 21 Jan 2021 • Chao Li, Wenjian Huang, Xi Chen, Yiran Wei, Stephen J. Price, Carola-Bibiane Schönlieb
EMReDL was shown to effectively segment the infiltrated tumor from the partially labelled region of potential infiltration.
no code implementations • 11 Jan 2021 • Yao Fu, Yipeng Zhou, Di wu, Shui Yu, Yonggang Wen, Chao Li
Then, we theoretically derive: 1) the conditions for the DP based FedAvg to converge as the number of global iterations (GI) approaches infinity; 2) the method to set the number of local iterations (LI) to minimize the negative influence of DP noises.
no code implementations • 4 Jan 2021 • Jenn-Bing Ong, Wee-Keong Ng, Ivan Tjuawinata, Chao Li, Jielin Yang, Sai None Myne, Huaxiong Wang, Kwok-Yan Lam, C. -C. Jay Kuo
The distributed tensor representations are dispersed across multiple clouds/fogs or servers/devices with metadata privacy; this provides both distributed trust and management to seamlessly secure big data storage, communication, sharing, and computation.
no code implementations • ICCV 2021 • Chao Li, Shangqian Gao, Cheng Deng, Wei Liu, Heng Huang
Specifically, given a target model, we first construct a substitute model to exploit cross-modal correlations within the Hamming space, with which we create adversarial examples using only a limited number of queries to the target model.
1 code implementation • Findings (ACL) 2021 • Zhongli Li, Qingyu Zhou, Chao Li, Ke Xu, Yunbo Cao
Pre-trained Transformer-based neural language models, such as BERT, have achieved remarkable results on a variety of NLP tasks.
1 code implementation • 18 Dec 2020 • Runhua Xu, James Joshi, Chao Li
We propose a novel framework, NN-EMD, to train DNN over multiple encrypted datasets collected from multiple sources.
no code implementations • 5 Dec 2020 • YiFan Li, Chao Li, Stephen Price, Carola-Bibiane Schönlieb, Xi Chen
Although successful in tumor sub-region segmentation and survival prediction, radiomics based on machine learning algorithms is challenged in terms of robustness, due to its vague intermediate process and the difficulty of tracking changes.
no code implementations • COLING 2020 • Yue Guan, Jingwen Leng, Chao Li, Quan Chen, Minyi Guo
Recent research on the multi-head attention mechanism, especially that in pre-trained models such as BERT, has shown us heuristics and clues in analyzing various aspects of the mechanism.
no code implementations • 6 Nov 2020 • Yangchun Yan, Rongzuo Guo, Chao Li, Kang Yang, Yongjun Xu
However, these methods ignore the small portion of weights in the next layer that disappears when the corresponding feature map is removed.
no code implementations • 4 Nov 2020 • Nilanjan Goswami, Amer Qouneh, Chao Li, Tao Li
Growing deployment of power and energy efficient throughput accelerators (GPU) in data centers demands enhancement of power-performance co-optimization capabilities of GPUs.
Distributed, Parallel, and Cluster Computing • Hardware Architecture • Graphics
no code implementations • 2 Nov 2020 • Yue Guan, Jingwen Leng, Chao Li, Quan Chen, Minyi Guo
Recent research on the multi-head attention mechanism, especially that in pre-trained models such as BERT, has shown us heuristics and clues in analyzing various aspects of the mechanism.
1 code implementation • 24 Oct 2020 • wei he, Quanming Yao, Chao Li, Naoto Yokoya, Qibin Zhao, Hongyan zhang, Liangpei Zhang
Non-local low-rank tensor approximation has been developed as a state-of-the-art method for hyperspectral image (HSI) restoration, which includes the tasks of denoising, compressed HSI reconstruction and inpainting.
no code implementations • 2 Sep 2020 • Zhihui Zhang, Jingwen Leng, Lingxiao Ma, Youshan Miao, Chao Li, Minyi Guo
Graph neural networks (GNN) represent an emerging line of deep learning models that operate on graph structures.
no code implementations • 1 Aug 2020 • Guanghao Yin, Shou-qian Sun, Chao Li, Xin Min
First, the downsampling degradation GAN (DD-GAN) is trained to model the degradation and produce more varied LR images, which is validated to be effective for data augmentation.
1 code implementation • 29 Jul 2020 • Fanfan Ye, ShiLiang Pu, Qiaoyong Zhong, Chao Li, Di Xie, Huiming Tang
The key lies in the design of the graph structure, which encodes skeleton topology information.
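As a toy illustration of encoding skeleton topology in a graph structure, the snippet below builds a symmetrically normalized adjacency matrix for a hypothetical five-joint skeleton, the basic ingredient of skeleton GCNs (not this paper's specific graph design):

```python
# Encode skeleton topology as a normalized adjacency matrix.
import numpy as np

num_joints = 5
bones = [(0, 1), (1, 2), (1, 3), (3, 4)]      # hypothetical bone connections

A = np.eye(num_joints)                        # self-loops
for i, j in bones:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))       # D^-1/2 A D^-1/2 normalization

features = np.random.randn(num_joints, 16)    # per-joint features
aggregated = A_hat @ features                 # one graph-convolution message pass
```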
no code implementations • ECCV 2020 • Chao Li, Xiaohu Guo
In the classic volumetric-fusion-based framework, a mesh is usually extracted from the TSDF volume as the canonical surface representation to help estimate the deformation field.
no code implementations • 23 Jun 2020 • Jianrong Xu, Boyu Diao, Bifeng Cui, Kang Yang, Chao Li, Yongjun Xu
Deep learning has achieved impressive results in many areas, but its deployment on intelligent edge devices is still very slow.
no code implementations • 5 May 2020 • Dario Fuoli, Zhiwu Huang, Martin Danelljan, Radu Timofte, Hua Wang, Longcun Jin, Dewei Su, Jing Liu, Jaehoon Lee, Michal Kudelski, Lukasz Bala, Dmitry Hrybov, Marcin Mozejko, Muchen Li, Si-Yao Li, Bo Pang, Cewu Lu, Chao Li, Dongliang He, Fu Li, Shilei Wen
For track 2, some existing methods are evaluated, showing promising solutions to the weakly-supervised video quality mapping problem.
no code implementations • 3 May 2020 • Kai Zhang, Shuhang Gu, Radu Timofte, Taizhang Shang, Qiuju Dai, Shengchen Zhu, Tong Yang, Yandong Guo, Younghyun Jo, Sejong Yang, Seon Joo Kim, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Jing Liu, Kwangjin Yoon, Taegyun Jeon, Kazutoshi Akita, Takeru Ooba, Norimichi Ukita, Zhipeng Luo, Yuehan Yao, Zhenyu Xu, Dongliang He, Wenhao Wu, Yukang Ding, Chao Li, Fu Li, Shilei Wen, Jianwei Li, Fuzhi Yang, Huan Yang, Jianlong Fu, Byung-Hoon Kim, JaeHyun Baek, Jong Chul Ye, Yuchen Fan, Thomas S. Huang, Junyeop Lee, Bokyeung Lee, Jungki Min, Gwantae Kim, Kanghyu Lee, Jaihyun Park, Mykola Mykhailych, Haoyu Zhong, Yukai Shi, Xiaojun Yang, Zhijing Yang, Liang Lin, Tongtong Zhao, Jinjia Peng, Huibing Wang, Zhi Jin, Jiahao Wu, Yifu Chen, Chenming Shang, Huanrong Zhang, Jeongki Min, Hrishikesh P. S, Densen Puthussery, Jiji C. V
This paper reviews the NTIRE 2020 challenge on perceptual extreme super-resolution with focus on proposed solutions and results.
no code implementations • 18 Feb 2020 • Cong Guo, Yangjie Zhou, Jingwen Leng, Yuhao Zhu, Zidong Du, Quan Chen, Chao Li, Bin Yao, Minyi Guo
We propose Simultaneous Multi-mode Architecture (SMA), a novel architecture design and execution model that offers general-purpose programmability on DNN accelerators in order to accelerate end-to-end applications.
no code implementations • 29 Jan 2020 • Zihao Huang, Chao Li, Feng Duan, Qibin Zhao
It is a challenging task to restore images from their variants with combined distortions.
no code implementations • 6 Jan 2020 • Wei He, Yong Chen, Naoto Yokoya, Chao Li, Qibin Zhao
In this paper, we propose a new model, named coupled tensor ring factorization (CTRF), for HSR.
1 code implementation • NeurIPS 2019 • Chao Li, Shangqian Gao, Cheng Deng, De Xie, Wei Liu
Extensive experiments on two cross-modal benchmark datasets show that the adversarial examples produced by our CMLA are efficient in fooling a target deep cross-modal hashing network.
2 code implementations • CVPR 2020 • Chao Li, Yixiao Yang, Kun He, Stephen Lin, John E. Hopcroft
IBCLN is a cascaded network that iteratively refines the estimates of transmission and reflection layers in a manner that they can boost the prediction quality to each other, and information across steps of the cascade is transferred using an LSTM.
Ranked #1 on Reflection Removal on SIR^2(Postcard)
no code implementations • 14 Oct 2019 • Fan Yang, Xiao Liu, Dongliang He, Chuang Gan, Jian Wang, Chao Li, Fu Li, Shilei Wen
In this work, we introduce a new problem, named story-preserving long video truncation, that requires an algorithm to automatically truncate a long-duration video into multiple short and attractive sub-videos, each containing an unbroken story.
1 code implementation • 8 Oct 2019 • Chao Li, Kun He, Guangshuai Liu, John E. Hopcroft
We propose a method called HirHide (Hierarchical Hidden Community Detection), which can be combined with traditional community detection methods to enable them to discover hierarchical hidden communities.
Molecular Networks
1 code implementation • ICCV 2019 • Chaohao Xie, Shaohui Liu, Chao Li, Ming-Ming Cheng, WangMeng Zuo, Xiao Liu, Shilei Wen, Errui Ding
Most convolutional network (CNN)-based inpainting methods adopt standard convolution to indistinguishably treat valid pixels and holes, making them limited in handling irregular holes and more likely to generate inpainting results with color discrepancy and blurriness.
Ranked #2 on Image Inpainting on Paris StreetView
2 code implementations • 26 Aug 2019 • Xin Li, Tianwei Lin, Xiao Liu, Chuang Gan, WangMeng Zuo, Chao Li, Xiang Long, Dongliang He, Fu Li, Shilei Wen
In this paper, we empirically find that stacking more conventional temporal convolution layers actually deteriorates action classification performance, possibly because all channels of the 1D feature map, which are generally highly abstract and can be regarded as latent concepts, are excessively recombined in temporal convolution.
1 code implementation • 26 Aug 2019 • Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu
We propose a simple yet effective method to reduce the redundancy of DenseNet by substantially decreasing the number of stacked modules, replacing the original bottleneck with our SMG module, which is augmented by a local residual.
Ranked #59 on Image Classification on CIFAR-10
no code implementations • 10 Aug 2019 • Guanghao Yin, Shou-qian Sun, HUI ZHANG, Dian Yu, Chao Li, Ke-jun Zhang, Ning Zou
To the best of the authors' knowledge, our method is the first attempt to classify large-scale subject-independent emotion with 7,962 EDA signals from 457 subjects.
no code implementations • WS 2019 • Lena Shakurova, Beata Nyari, Chao Li, Mihai Rotaru
Cross-lingual embeddings aim to represent words in multiple languages in a shared vector space by capturing semantic similarities across languages.
no code implementations • 2 Jun 2019 • Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu
In this work, we propose a heuristic genetic algorithm (GA) for pruning convolutional neural networks (CNNs) according to the multi-objective trade-off among error, computation and sparsity.
no code implementations • CVPR 2019 • Chao Li, Wei He, Longhao Yuan, Zhun Sun, Qibin Zhao
Low-rank matrix completion (LRMC) is a classical model in both computer vision (CV) and machine learning, and has been successfully applied to various real-world applications.
1 code implementation • CVPR 2019 • Chao Li, Qiaoyong Zhong, Di Xie, Shiliang Pu
By sharing the convolution kernels of different views, spatial and temporal features are collaboratively learned and thus benefit from each other.
Ranked #17 on Action Classification on Moments in Time
no code implementations • 7 May 2019 • Chao Li, Dongliang He, Xiao Liu, Yukang Ding, Shilei Wen
Recently, image super-resolution has been widely studied and achieved significant progress by leveraging the power of deep convolutional neural networks.
1 code implementation • 6 May 2019 • Wen Chen, Pipei Huang, Jiaming Xu, Xin Guo, Cheng Guo, Fei Sun, Chao Li, Andreas Pfadler, Huan Zhao, Binqiang Zhao
In particular, there exist two requirements for fashion outfit recommendation: the Compatibility of the generated fashion outfits, and the Personalization in the recommendation process.
3 code implementations • 17 Apr 2019 • Chao Li, Zhiyuan Liu, Mengmeng Wu, Yuchi Xu, Pipei Huang, Huan Zhao, Guoliang Kang, Qiwei Chen, Wei Li, Dik Lun Lee
Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items.
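A hedged sketch of that matching-then-ranking split is shown below; all names (the embedding inputs and rank_model) are hypothetical stand-ins rather than the paper's components:

```python
# Two-stage retrieval: cheap matching over the full catalogue,
# then an expensive ranking model over the few retrieved candidates.
import numpy as np

def match_stage(user_vec, item_vecs, k=100):
    scores = item_vecs @ user_vec             # inner-product retrieval scores
    return np.argpartition(-scores, k)[:k]    # top-k candidate indices

def rank_stage(user_feats, candidate_feats, rank_model):
    scores = rank_model(user_feats, candidate_feats)
    return np.argsort(-scores)                # final ordering of candidates
```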
no code implementations • CVPR 2019 • Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chao Li, Minyi Guo, Yuhao Zhu
Recently, researchers have started decomposing deep neural network models according to their semantics or functions.
no code implementations • 16 Apr 2019 • Erkun Yang, Cheng Deng, Chao Li, Wei Liu, Jie Li, DaCheng Tao
In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search.
1 code implementation • 15 Apr 2019 • Runhua Xu, James B. D. Joshi, Chao Li
To tackle the above issue, we propose a CryptoNN framework that supports training a neural network model over encrypted data by using the emerging functional encryption scheme instead of SMC or HE.
no code implementations • 10 Apr 2019 • Cheng Deng, Xianglong Liu, Chao Li, DaCheng Tao
Recent years have witnessed rapid progress in hyperspectral image (HSI) classification.
no code implementations • 4 Apr 2019 • Cheng Deng, Yumeng Xue, Xianglong Liu, Chao Li, DaCheng Tao
The advantages of our proposed method are threefold: 1) the network can be effectively trained using only limited labeled samples with the help of novel active learning strategies; 2) the network is flexible and scalable enough to function across various transfer situations, including cross-dataset and intra-image; 3) the learned deep joint spectral-spatial feature representation is more generic and robust than many other joint spectral-spatial feature representations.
no code implementations • 21 Mar 2019 • Jinshi Yu, Chao Li, Qibin Zhao, Guoxu Zhou
Tensor ring (TR) decomposition has been successfully used to obtain the state-of-the-art performance in the visual data completion problem.
1 code implementation • 7 Mar 2019 • Steffen Eger, Chao Li, Florian Netzer, Iryna Gurevych
By extrapolation, we predict that these topics will remain lead problems/approaches in their fields in the short- and mid-term.
no code implementations • 6 Mar 2019 • De Xie, Cheng Deng, Hao Wang, Chao Li, Dapeng Tao
Two-stream architectures have shown strong performance in video classification tasks.
no code implementations • 6 Mar 2019 • Chao Li, Cheng Deng, Lei Wang, De Xie, Xianglong Liu
In recent years, hashing has attracted more and more attention owing to its low storage cost and high query efficiency in large-scale cross-modal retrieval.
1 code implementation • 4 Mar 2019 • Chao Li, Qiaoyong Zhong, Di Xie, ShiLiang Pu
By sharing the convolution kernels of different views, spatial and temporal features are collaboratively learned and thus benefit from each other.
no code implementations • 25 Feb 2019 • Jagdish Ramakrishnan, Elham Shaabani, Chao Li, Mátyás A. Sustik
Our system detects anomalies both in batch and real-time streaming settings, and the items flagged are reviewed and actioned based on priority and business impact.
no code implementations • 7 Jan 2019 • Longhao Yuan, Chao Li, Jianting Cao, Qibin Zhao
Dimensionality reduction is an essential technique for multi-way large-scale data, i.e., tensors.
2 code implementations • CVPR 2019 • Wei He, Quanming Yao, Chao Li, Naoto Yokoya, Qibin Zhao
This is done by first learning a low-dimensional projection and the related reduced image from the noisy HSI.
no code implementations • 19 Nov 2018 • Lili Yao, Ruijian Xu, Chao Li, Dongyan Zhao, Rui Yan
Building an open-domain multi-turn conversation system is one of the most interesting and challenging tasks in artificial intelligence.
no code implementations • 31 Oct 2018 • Chao Li, Zhun Sun, Jinshi Yu, Ming Hou, Qibin Zhao
We demonstrate this by compressing the convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a standard classification task using CIFAR-10.
no code implementations • 27 Sep 2018 • Yuxian Qiu, Jingwen Leng, Yuhao Zhu, Quan Chen, Chao Li, Minyi Guo
Despite their enormous success, there is still no solid understanding of the working mechanism of deep neural networks.
no code implementations • 7 Sep 2018 • Longhao Yuan, Chao Li, Danilo Mandic, Jianting Cao, Qibin Zhao
In this paper, by exploiting the low-rank structure of the TR latent space, we propose a novel tensor completion method which is robust to model selection.
no code implementations • ECCV 2018 • Chao Li, Zheheng Zhao, Xiaohu Guo
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing the motion, geometry, and segmentation simultaneously given live depth stream from a single RGB-D camera.
no code implementations • 22 May 2018 • Longhao Yuan, Chao Li, Danilo Mandic, Jianting Cao, Qibin Zhao
In low-rank tensor completion tasks, traditional methods suffer from high computational cost and high sensitivity to model complexity, owing to the multiple underlying large-scale singular value decomposition (SVD) operations and the rank selection problem.
no code implementations • 22 May 2018 • Chao Li, Mohammad Emtiyaz Khan, Zhun Sun, Gang Niu, Bo Han, Shengli Xie, Qibin Zhao
Exact recovery of tensor decomposition (TD) methods is a desirable property in both unsupervised learning and scientific data analysis.
no code implementations • 23 Apr 2018 • Peng Gao, Yipeng Ma, Ke Song, Chao Li, Fei Wang, Liyi Xiao, Yan Zhang
Based on the proposed circular and structural operators, a set of primal confidence score maps can be obtained by circularly correlating feature maps with their corresponding structural correlation filters.
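Circular correlation of this kind is usually computed in the Fourier domain; the generic sketch below shows that standard trick (an element-wise product of spectra), not the paper's exact circular and structural operators:

```python
# Circular correlation via the FFT, the standard trick behind
# correlation-filter trackers: correlation in the spatial domain is an
# element-wise product with the conjugate spectrum in the Fourier domain.
import numpy as np

def circular_correlation(feature_map, filt):
    F = np.fft.fft2(feature_map)
    H = np.fft.fft2(filt, s=feature_map.shape)
    return np.real(np.fft.ifft2(F * np.conj(H)))   # confidence score map

response = circular_correlation(np.random.randn(64, 64), np.random.randn(64, 64))
```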
no code implementations • 20 Apr 2018 • Peng Gao, Yipeng Ma, Chao Li, Ke Song, Fei Wang, Liyi Xiao
Tracking algorithms based on Discriminative Correlation Filters (DCF) that exploit conventional handcrafted features have achieved impressive results in terms of both accuracy and robustness.
no code implementations • 19 Apr 2018 • Peng Gao, Yipeng Ma, Ke Song, Chao Li, Fei Wang, Liyi Xiao
To the best of our knowledge, we are the first to incorporate the advantages of DCF and SOSVM for TIR object tracking.
6 code implementations • 17 Apr 2018 • Chao Li, Qiaoyong Zhong, Di Xie, ShiLiang Pu
Skeleton-based human action recognition has recently drawn increasing attention with the availability of large-scale skeleton datasets.
Ranked #2 on Skeleton Based Action Recognition on PKU-MMD
1 code implementation • CVPR 2018 • Chao Li, Cheng Deng, Ning li, Wei Liu, Xinbo Gao, DaCheng Tao
In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations.
1 code implementation • 23 Feb 2018 • Oliver Schulte, Yejia Liu, Chao Li
Successful previous approaches have built a predictive model based on player features, or derived performance predictions from the observed performance of comparable players in a cohort.
no code implementations • 16 Feb 2018 • Chao Li, Shohei Shimizu
Most existing causal discovery methods either ignore the discrete data and apply a continuous-valued algorithm or discretize all the continuous data and then apply a discrete Bayesian network approach.
no code implementations • 21 Nov 2017 • Ming Hou, Brahim Chaib-Draa, Chao Li, Qibin Zhao
However, given limited P data, the conventional PU models tend to suffer from overfitting when adapted to very flexible deep neural networks.
no code implementations • 30 Oct 2017 • Qiaoyong Zhong, Chao Li, Yingying Zhang, Di Xie, Shicai Yang, ShiLiang Pu
A deep region-based object detector consists of a region proposal step and a deep object recognition step.
1 code implementation • 23 Oct 2017 • Kai Han, Yunhe Wang, Chao Zhang, Chao Li, Chao Xu
High-dimensional data in many areas, such as computer vision and machine learning tasks, brings computational and analytical difficulties.
no code implementations • ICCV 2017 • Chao Li, Jiewei Cao, Zi Huang, Lei Zhu, Heng Tao Shen
In this paper, we propose a novel approach to automatically maximize the utility of weak semantic annotations (formalized as the semantic relevance of video shots to the target event) to facilitate video event classification.
15 code implementations • 5 May 2017 • Chao Li, Xiaokong Ma, Bing Jiang, Xiangang Li, Xuewei Zhang, Xiao Liu, Ying Cao, Ajay Kannan, Zhenyao Zhu
We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity.
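To illustrate the cosine-similarity scoring described above, here is a minimal sketch of verifying two embeddings on the unit hypersphere; the encoder producing the embeddings is a placeholder, not the Deep Speaker network itself, and the 0.6 threshold is an arbitrary assumption:

```python
# Cosine scoring of speaker embeddings: normalize onto the unit
# hypersphere, then compare directions.
import numpy as np

def cosine_score(emb_a, emb_b):
    emb_a = emb_a / np.linalg.norm(emb_a)     # project onto the unit hypersphere
    emb_b = emb_b / np.linalg.norm(emb_b)
    return float(emb_a @ emb_b)               # 1.0 = identical direction

# verification decision against a tuned threshold (0.6 is arbitrary here)
same_speaker = cosine_score(np.random.randn(512), np.random.randn(512)) > 0.6
```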
1 code implementation • 25 Apr 2017 • Chao Li, Qiaoyong Zhong, Di Xie, ShiLiang Pu
Current state-of-the-art approaches to skeleton-based action recognition are mostly based on recurrent neural networks (RNN).
Ranked #3 on Skeleton Based Action Recognition on PKU-MMD
no code implementations • 12 Oct 2016 • Chao Li, Yi Yang, Min Feng, Srimat Chakradhar, Huiyang Zhou
Leveraging large data sets, deep Convolutional Neural Networks (CNNs) achieve state-of-the-art recognition accuracy.
no code implementations • ICCV 2015 • Dingwen Zhang, Deyu Meng, Chao Li, Lu Jiang, Qian Zhao, Junwei Han
As an interesting and emerging topic, co-saliency detection aims at simultaneously extracting common salient objects in a group of images.
no code implementations • CVPR 2015 • Dingwen Zhang, Junwei Han, Chao Li, Jingdong Wang
In the proposed framework, wide and deep information is explored for the object proposal windows extracted in each image, and the co-saliency scores are calculated by integrating the intra-image contrast and intra-group consistency via a principled Bayesian formulation.
no code implementations • 23 Oct 2014 • Junhua Li, Chao Li, Andrzej Cichocki
Unlike vector-based methods that destroy data structure, Canonical Polyadic Decomposition (CPD) aims to process physiological signals in the form of multi-way arrays, which considers relationships between dimensions and preserves the structural information contained in the physiological signal.
no code implementations • 1 Sep 2014 • Chao Li, Lili Guo, Andrzej Cichocki
Thus, one question arises: can such relationships improve the performance of data completion?