no code implementations • CCL 2020 • Ting Jiang, Bing Xu, Tiejun Zhao, Sheng Li
In the first layer, in order to extract textual features of utterances, we propose a convolutional self-attention network (CAN).
no code implementations • CAI (COLING) 2022 • Zhuo Gong, Daisuke Saito, Sheng Li, Hisashi Kawai, Nobuaki Minematsu
The experiments show that we can enhance an ASR E2E model based on encoder-decoder architecture by pre-training the decoder with text data.
Automatic Speech Recognition (ASR)
no code implementations • LREC 2022 • Sheng Li, Jiyi Li, Qianying Liu, Zhuo Gong
Moreover, based on the speech collection, we propose a neural network-based frame-by-frame mapping method that recovers the speech content by converting the adversarial speech back to human speech.
no code implementations • ACL (WOAH) 2021 • Sumer Singh, Sheng Li
Our approach introduces domain adaptation (DA) training procedures to ALBERT, such that it can effectively exploit auxiliary data from source domains to improve the OLD performance in a target domain.
no code implementations • SIGDIAL (ACL) 2022 • Longfei Yang, Jiyi Li, Sheng Li, Takahiro Shinozaki
In the slot self-attention layers, we force each slot to involve information from the other k prominent slots and mask the rest out.
Dialogue State Tracking
Multi-domain Dialogue State Tracking
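As a rough illustration of the top-k slot masking described in this entry, the sketch below computes slot-to-slot attention scores, keeps only the k most prominent peers per slot, and masks the rest out before the softmax. Shapes, the scoring function, and all names are illustrative assumptions, not the authors' implementation.

```python
# A minimal NumPy sketch of top-k masked slot self-attention: each slot attends
# only to its k most prominent peers; the remaining scores are masked out.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_slot_attention(slots, k):
    """slots: (num_slots, dim) slot representations; k: peers each slot may attend to."""
    num_slots, dim = slots.shape
    scores = slots @ slots.T / np.sqrt(dim)          # (num_slots, num_slots) attention logits
    np.fill_diagonal(scores, -np.inf)                # a slot does not attend to itself here
    kth = np.sort(scores, axis=1)[:, -k][:, None]    # k-th largest score per row
    masked = np.where(scores >= kth, scores, -np.inf) # keep top-k, mask the rest
    weights = softmax(masked, axis=1)
    return weights @ slots                           # aggregate information from the k prominent slots

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    slots = rng.normal(size=(6, 16))                 # e.g. 6 dialogue slots, 16-dim features
    print(topk_slot_attention(slots, k=2).shape)     # (6, 16)
```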
no code implementations • 18 Sep 2023 • Zicong Luo, Sheng Li, Guobiao Li, Zhenxing Qian, Xinpeng Zhang
To deal with this issue, we propose a key-based FNNS scheme to improve the security of the FNNS, where we generate key-controlled perturbations from the FNN for data embedding.
no code implementations • 14 Sep 2023 • Fei Dou, Jin Ye, Geng Yuan, Qin Lu, Wei Niu, Haijian Sun, Le Guan, Guoyu Lu, Gengchen Mai, Ninghao Liu, Jin Lu, Zhengliang Liu, Zihao Wu, Chenjiao Tan, Shaochen Xu, Xianqiao Wang, Guoming Li, Lilong Chai, Sheng Li, Jin Sun, Hongyue Sun, Yunli Shao, Changying Li, Tianming Liu, WenZhan Song
Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas.
no code implementations • 23 Aug 2023 • Ronghang Zhu, Dongliang Guo, Daiqing Qi, Zhixuan Chu, Xiang Yu, Sheng Li
Inspired by the concepts in trustworthy AI, we propose the first framework for trustworthy representation learning across domains, which covers four concepts, i.e., robustness, privacy, fairness, and explainability, and we give a comprehensive literature review on this research direction.
no code implementations • 21 Aug 2023 • Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y. Zhang, Sheng Li
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs.
no code implementations • 21 Aug 2023 • Yan Wang, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, Siqiao Xue, James Y. Zhang, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
In this paper, we propose a novel approach that leverages large language models (LLMs) to construct personalized reasoning graphs.
no code implementations • 11 Aug 2023 • Yongqi Huang, Peng Ye, Xiaoshui Huang, Sheng Li, Tao Chen, Tong He, Wanli Ouyang
As Vision Transformers (ViTs) are gradually surpassing CNNs in various visual tasks, one may ask: does a training scheme specifically for ViTs exist that can also achieve performance improvements without increasing inference cost?
no code implementations • 7 Aug 2023 • Jordan Dotzel, Gang Wu, Andrew Li, Muhammad Umar, Yun Ni, Mohamed S. Abdelfattah, Zhiru Zhang, Liqun Cheng, Martin G. Dixon, Norman P. Jouppi, Quoc V. Le, Sheng Li
With the proposed integer quantization search, we increase the accuracy of ResNet-18 on ImageNet by 1.31 percentage points and ResNet-50 by 0.90 percentage points at equivalent model cost compared with previous methods.
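The search described above evaluates candidate integer formats per layer; the hedged sketch below shows the kind of primitive such a search would score, a symmetric uniform fake-quantizer at a chosen bit-width. It is a generic illustration, not the paper's search procedure, and all values are placeholders.

```python
# A minimal sketch of symmetric uniform integer quantization, the per-layer
# primitive a bit-width/quantization search would evaluate.
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Fake-quantize a weight tensor to signed integers of the given bit-width."""
    qmax = 2 ** (num_bits - 1) - 1                    # e.g. 127 for int8
    scale = np.abs(w).max() / qmax if np.abs(w).max() > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q * scale                                  # dequantized values used for evaluation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(64, 64))
    for bits in (8, 4, 2):
        err = np.abs(w - quantize_dequantize(w, bits)).mean()
        print(f"{bits}-bit mean abs error: {err:.5f}")  # error grows as bit-width shrinks
```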
no code implementations • ICCV 2023 • Xiaoxiao Hu, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang
RAW files are the initial measurement of scene radiance widely used in most cameras, and the ubiquitously-used RGB images are converted from RAW data through Image Signal Processing (ISP) pipelines.
no code implementations • 20 Jul 2023 • Qichao Ying, Jiaxin Liu, Sheng Li, Haisheng Xu, Zhenxing Qian, Xinpeng Zhang
However, the lack of large-scale and fine-grained face retouching datasets has been a major obstacle to progress in this field.
no code implementations • 19 Jul 2023 • Zhengliang Liu, Zihao Wu, Mengxuan Hu, Bokai Zhao, Lin Zhao, Tianyi Zhang, Haixing Dai, Xianyan Chen, Ye Shen, Sheng Li, Brian Murray, Tianming Liu, Andrea Sikora
In this study, we introduce PharmacyGPT, a novel framework to assess the capabilities of large language models (LLMs) such as ChatGPT and GPT-4 in emulating the role of clinical pharmacists.
no code implementations • 7 Jul 2023 • Guobiao Li, Sheng Li, Meiling Li, Zhenxing Qian, Xinpeng Zhang
In this paper, we propose deep network steganography for the covert communication of DNN models.
no code implementations • 3 Jul 2023 • Haixing Dai, Mengxuan Hu, Qing Li, Lu Zhang, Lin Zhao, Dajiang Zhu, Ibai Diez, Jorge Sepulcre, Fan Zhang, Xingyu Gao, Manhua Liu, Quanzheng Li, Sheng Li, Tianming Liu, Xiang Li
Alzheimer's disease (AD) is a neurodegenerative disorder that begins with amyloidosis, followed by neuronal loss and deterioration in structure, function, and cognition.
no code implementations • 20 Jun 2023 • Saed Rezayi, Zhengliang Liu, Zihao Wu, Chandra Dhakal, Bao Ge, Haixing Dai, Gengchen Mai, Ninghao Liu, Chen Zhen, Tianming Liu, Sheng Li
ChatGPT has been shown to be a strong baseline in many NLP tasks, and we believe it has the potential to improve our model on the task of semantic matching and enhance our model's understanding of food-related concepts and relationships.
no code implementations • 16 Jun 2023 • Haixing Dai, Yiwei Li, Zhengliang Liu, Lin Zhao, Zihao Wu, Suhang Song, Ye Shen, Dajiang Zhu, Xiang Li, Sheng Li, Xiaobai Yao, Lu Shi, Quanzheng Li, Zhuo Chen, Donglan Zhang, Gengchen Mai, Tianming Liu
In this pioneering study, inspired by AutoGPT, the state-of-the-art open-source application built on the GPT-4 large language model, we develop a novel tool called AD-AutoGPT, which can autonomously conduct data collection, processing, and analysis of complex health narratives about Alzheimer's Disease via users' textual prompts.
no code implementations • 16 May 2023 • Yunyi Zhou, Zhixuan Chu, Yijia Ruan, Ge Jin, Yuchen Huang, Sheng Li
However, the choice of model relies heavily on the characteristics of the input time series and the fixed distribution on which the model is based.
1 code implementation • 16 May 2023 • Shuichiro Shimizu, Chenhui Chu, Sheng Li, Sadao Kurohashi
We present a new task: speech dialogue translation, which mediates between speakers of different languages.
no code implementations • 10 May 2023 • Ping Wei, Ge Luo, Qi Song, Xinpeng Zhang, Zhenxing Qian, Sheng Li
In the forward mapping, secret data is hidden in the input latent of a Glow model to generate stego images.
no code implementations • 5 May 2023 • Ping Wei, Qing Zhou, Zichi Wang, Zhenxing Qian, Xinpeng Zhang, Sheng Li
However, existing GAN-based GS methods cannot completely recover the hidden secret data due to the lack of network invertibility, while Flow-based methods produce poor image quality due to the stringent reversibility restriction in each module.
no code implementations • 5 May 2023 • Zihan Guan, Mengxuan Hu, Zhongliang Zhou, Jielu Zhang, Sheng Li, Ninghao Liu
Recently, the Segment Anything Model (SAM) has gained significant attention as an image segmentation foundation model due to its strong performance on various downstream tasks.
no code implementations • 24 Apr 2023 • Ehsan Latif, Gengchen Mai, Matthew Nyaaba, Xuansheng Wu, Ninghao Liu, Guoyu Lu, Sheng Li, Tianming Liu, Xiaoming Zhai
Artificial general intelligence (AGI) has gained global recognition as a future technology due to the emergence of breakthrough large language models and chatbots such as GPT-4 and ChatGPT, respectively.
1 code implementation • 20 Apr 2023 • Jielu Zhang, Zhongliang Zhou, Gengchen Mai, Lan Mu, Mengxuan Hu, Sheng Li
We developed a pipeline that leverages multiple FMs to facilitate remote sensing image semantic segmentation tasks guided by text prompts, which we denote as Text2Seg.
Instance Segmentation
Segmentation Of Remote Sensing Imagery
no code implementations • 12 Apr 2023 • Guoyu Lu, Sheng Li, Gengchen Mai, Jin Sun, Dajiang Zhu, Lilong Chai, Haijian Sun, Xianqiao Wang, Haixing Dai, Ninghao Liu, Rui Xu, Daniel Petti, Tianming Liu, Changying Li
Artificial General Intelligence (AGI) is poised to revolutionize a variety of sectors, including healthcare, finance, transportation, and education.
no code implementations • 4 Apr 2023 • Norman P. Jouppi, George Kurian, Sheng Li, Peter Ma, Rahul Nagarajan, Lifeng Nai, Nishant Patil, Suvinay Subramanian, Andy Swing, Brian Towles, Cliff Young, Xiang Zhou, Zongwei Zhou, David Patterson
For similarly sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow, and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100.
1 code implementation • CVPR 2023 • Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu
To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly.
no code implementations • 3 Mar 2023 • Zhixuan Chu, Ruopeng Li, Stephen Rathbun, Sheng Li
We propose a Continual Causal Effect Representation Learning method for estimating causal effects with observational data, which are incrementally available from non-stationary data distributions.
no code implementations • 28 Feb 2023 • Guobiao Li, Sheng Li, Meiling Li, Xinpeng Zhang, Zhenxing Qian
We propose to disguise a steganographic network (termed the secret DNN model) as a stego DNN model that performs an ordinary machine learning task (termed the stego task).
no code implementations • 25 Feb 2023 • Daiqing Qi, Handong Zhao, Sheng Li
Federated learning is a technique that enables a centralized server to learn from distributed clients via communications without accessing the client local data.
1 code implementation • 25 Feb 2023 • Dongliang Guo, Zhixuan Chu, Sheng Li
To the best of our knowledge, FairAC is the first method that jointly addresses the graph attribute completion and graph unfairness problems.
no code implementations • 25 Feb 2023 • Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, Hongmin Cai, Lichao Sun, Quanzheng Li, Dinggang Shen, Tianming Liu, Xiang Li
Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks.
no code implementations • 21 Feb 2023 • Wenxiong Liao, Zhengliang Liu, Haixing Dai, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Yuzhong Chen, Xi Jiang, Wei Liu, Dajiang Zhu, Tianming Liu, Sheng Li, Xiang Li, Hongmin Cai
The main challenge of FSL is the difficulty of training robust models on small amounts of samples, which frequently leads to overfitting.
no code implementations • 13 Feb 2023 • Yizhou Wang, Dongliang Guo, Sheng Li, Yun Fu
Anomaly detection and localization of visual data, including images and videos, are of great significance in both machine learning academia and applied real-world scenarios.
no code implementations • 2 Feb 2023 • Zhixuan Chu, Jianmin Huang, Ruopeng Li, Wei Chu, Sheng Li
Causal inference has numerous real-world applications in many domains, such as health care, marketing, political science, and online advertising.
no code implementations • 3 Jan 2023 • Zhixuan Chu, Sheng Li
A further understanding of cause and effect within observational data is critical across many domains, such as economics, health care, public policy, web mining, online advertising, and marketing campaigns.
no code implementations • ICCV 2023 • Cheng Fu, Hanxian Huang, Zixuan Jiang, Yun Ni, Lifeng Nai, Gang Wu, Liqun Cheng, Yanqi Zhou, Sheng Li, Andrew Li, Jishen Zhao
One promising way to accelerate transformer training is to reuse small pretrained models to initialize the transformer, as their existing representation power facilitates faster model convergence.
no code implementations • 29 Dec 2022 • Haoyue Wang, Meiling Li, Sheng Li, Zhenxing Qian, Xinpeng Zhang
As one of the important face features, the face depth map, which has been shown to be effective in other areas such as face recognition and face detection, has unfortunately received little attention in the literature on detecting manipulated face images.
1 code implementation • 8 Dec 2022 • Xiaoshui Huang, Sheng Li, Wentao Qu, Tong He, Yifan Zuo, Wanli Ouyang
This paper introduces Efficient Point Cloud Learning (EPCL), an effective and efficient point cloud learner for directly training high-quality point cloud models with a frozen CLIP model.
no code implementations • 20 Nov 2022 • Zhaiming Shen, Ming-Jun Lai, Sheng Li
The local clustering problem aims to extract a small local structure inside a graph without the necessity of knowing the entire graph structure.
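For readers unfamiliar with the problem setting, the sketch below grows a cluster around a seed node using personalized PageRank and a conductance sweep, a standard local clustering baseline. It only illustrates the task; it is not the method proposed in this paper, and the toy graph is an assumption.

```python
# A generic local clustering baseline: personalized PageRank from a seed node,
# followed by a sweep cut that picks the prefix set with the best conductance.
import numpy as np

def personalized_pagerank(A, seed, alpha=0.15, iters=100):
    deg = A.sum(axis=1)
    P = A / deg[:, None]                              # row-stochastic transition matrix
    p = np.zeros(A.shape[0]); p[seed] = 1.0
    r = p.copy()
    for _ in range(iters):
        r = alpha * p + (1 - alpha) * (P.T @ r)
    return r

def sweep_cut(A, scores):
    deg = A.sum(axis=1); vol_total = deg.sum()
    order = np.argsort(-scores / deg)                 # sort by degree-normalized score
    best, best_cond = None, np.inf
    in_set = np.zeros(A.shape[0], dtype=bool)
    for i in range(len(order) - 1):
        in_set[order[i]] = True
        vol = deg[in_set].sum()
        cut = A[in_set][:, ~in_set].sum()             # edges leaving the candidate cluster
        cond = cut / min(vol, vol_total - vol)
        if cond < best_cond:
            best_cond, best = cond, np.where(in_set)[0].copy()
    return best, best_cond

if __name__ == "__main__":
    # two 4-cliques joined by a single edge; seed inside the first clique
    A = np.zeros((8, 8))
    for block in (range(0, 4), range(4, 8)):
        for i in block:
            for j in block:
                if i != j:
                    A[i, j] = 1
    A[3, 4] = A[4, 3] = 1
    cluster, cond = sweep_cut(A, personalized_pagerank(A, seed=0))
    print(cluster, round(cond, 3))                    # recovers the first clique
```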
no code implementations • 5 Nov 2022 • Hongmin Cai, Wenxiong Liao, Zhengliang Liu, Yiyang Zhang, Xiaoke Huang, Siqi Ding, Hui Ren, Zihao Wu, Haixing Dai, Sheng Li, Lingfei Wu, Ninghao Liu, Quanzheng Li, Tianming Liu, Xiang Li
In this framework, we apply distant supervision to cross-domain knowledge graph adaptation.
1 code implementation • 1 Nov 2022 • Yuhang Yang, HaiHua Xu, Hao Huang, Eng Siong Chng, Sheng Li
To let a state-of-the-art end-to-end ASR model benefit from data efficiency, as well as from much more unpaired text data via multi-modal training, one needs to address two problems: 1) the synchronicity of feature sampling rates between speech and language (i.e., text data); 2) the homogeneity of the representations learned by the two encoders.
no code implementations • 28 Oct 2022 • Qichao Ying, Hang Zhou, Zhenxing Qian, Sheng Li, Xinpeng Zhang
Image immunization (Imuge) is a technique for protecting images by introducing trivial perturbations, so that the protected images are immune to tampering in the sense that tampered contents can be automatically recovered.
1 code implementation • 22 Sep 2022 • Geng Yuan, Yanyu Li, Sheng Li, Zhenglun Kong, Sergey Tulyakov, Xulong Tang, Yanzhi Wang, Jian Ren
Therefore, we analyze the feasibility of using the layer freezing technique in sparse training and find that it has the potential to save considerable training costs.
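A minimal PyTorch-style sketch of the layer freezing idea is given below: gradients for an early layer are switched off partway through training to save compute. The tiny model, the freezing point, and the schedule are arbitrary placeholders, not the paper's policy.

```python
# Minimal sketch of layer freezing: stop computing gradients for an early layer
# partway through training so later steps cost less.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # "early" layer, candidate for freezing
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),              # head keeps training
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))

for step in range(100):
    if step == 50:                  # freeze the first linear layer halfway through (illustrative)
        for p in model[0].parameters():
            p.requires_grad_(False)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```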
no code implementations • 22 Aug 2022 • Zhixuan Chu, Hui Ding, Guang Zeng, Yuchen Huang, Tan Yan, Yulin Kang, Sheng Li
In this paper, we provide an in-depth analysis of the underlying parse tree-like structure involved in the effect prediction task and we further establish a Hierarchical Capsule Prediction Network (HapNet) for predicting the effects of marketing campaigns.
no code implementations • 28 Jul 2022 • Ping Wei, Sheng Li, Xinpeng Zhang, Ge Luo, Zhenxing Qian, Qing Zhou
A new steganographic approach called generative steganography (GS) has emerged recently, in which stego images (images containing secret data) are generated from secret data directly without cover media.
no code implementations • 21 Jul 2022 • Zhengxin You, Qichao Ying, Sheng Li, Zhenxing Qian, Xinpeng Zhang
Online social networks have stimulated communications over the Internet more than ever, making it possible for secret message transmission over such noisy channels.
no code implementations • 7 Jul 2022 • Yangming Zhou, Qichao Ying, Xiangyu Zhang, Zhenxing Qian, Sheng Li, Xinpeng Zhang
We jointly train a 3D-UNet-based watermark embedding network and a decoder that predicts the tampering mask.
no code implementations • 4 Jul 2022 • Sébastien Ollivier, Sheng Li, Yue Tang, Chayanika Chaudhuri, Peipei Zhou, Xulong Tang, Jingtong Hu, Alex K. Jones
In particular, we explore the use of processing-in-memory (PIM) approaches, mobile GPU accelerators, and recently released FPGAs, and compare them with novel Racetrack memory PIM.
no code implementations • 22 Jun 2022 • Lin Zhao, Haixing Dai, Zihao Wu, Zhenxiang Xiao, Lu Zhang, David Weizhong Liu, Xintao Hu, Xi Jiang, Sheng Li, Dajiang Zhu, Tianming Liu
However, whether there exist semantic correlations/connections between the visual representations in ANNs and those in BNNs remains largely unexplored, due both to the lack of an effective tool to link and couple the two different domains, and to the lack of a general and effective framework for representing the visual semantics in BNNs, such as human functional brain networks (FBNs).
no code implementations • 6 Jun 2022 • Qichao Ying, Hang Zhou, Xiaoxiao Hu, Zhenxing Qian, Sheng Li, Xinpeng Zhang
Existing image cropping detection schemes ignore the fact that recovering the cropped-out contents can unveil the purpose of the cropping attack.
no code implementations • 2 Jun 2022 • Yifei Wang, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang
To address this issue, we present a new video watermarking scheme based on the joint Dual-Tree Complex Wavelet Transform (DTCWT) and Singular Value Decomposition (SVD), which is resistant to frame rate conversion.
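As a simplified illustration of the SVD half of such a scheme, the sketch below embeds a watermark into the singular values of an image block and reconstructs it. The DTCWT stage, the frame-rate-robust design, and the extraction pipeline of the paper are omitted; alpha and the block size are made-up values.

```python
# A simplified, SVD-only sketch of transform-domain watermark embedding: add a
# scaled watermark to the singular values of an image block and reconstruct.
import numpy as np

def embed_watermark(block, watermark, alpha=0.05):
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    S_marked = S + alpha * watermark                  # perturb singular values with the watermark
    return U @ np.diag(S_marked) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.uniform(0, 255, size=(8, 8))          # stand-in for an 8x8 luminance block
    watermark = rng.choice([-1.0, 1.0], size=8)       # one watermark bit per singular value
    marked = embed_watermark(block, watermark)
    print(np.abs(marked - block).max())               # distortion stays small for small alpha
```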
no code implementations • 28 May 2022 • Yangming Zhou, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang
The results indicate that the proposed framework has a better capability in mining crucial features for fake news detection.
no code implementations • 11 Apr 2022 • Zhengdong Yang, Wangjin Zhou, Chenhui Chu, Sheng Li, Raj Dabre, Raphael Rubino, Yi Zhao
This challenge aims to predict MOS scores of synthetic speech on two tracks, the main track and a more challenging sub-track: out-of-domain (OOD).
1 code implementation • 8 Apr 2022 • Qianying Liu, Zhuo Gong, Zhengdong Yang, Yuhang Yang, Sheng Li, Chenchen Ding, Nobuaki Minematsu, Hao Huang, Fei Cheng, Chenhui Chu, Sadao Kurohashi
Low-resource speech recognition has long suffered from insufficient training data.
no code implementations • 10 Mar 2022 • Zhixuan Chu, Stephen L. Rathbun, Sheng Li
In our paper, the basket trial is employed as an intuitive example to present this new causal inference setting.
no code implementations • 22 Feb 2022 • Zhixuan Chu, Stephen Rathbun, Sheng Li
In this paper, we reveal the weaknesses of these strategies: they lead to a loss of predictive information when enforcing domain invariance, and the treatment effect estimation performance is unstable, heavily relying on the characteristics of the domain distributions and the choice of domain divergence metrics.
no code implementations • 29 Dec 2021 • Kejiang Chen, Xianhan Zeng, Qichao Ying, Sheng Li, Zhenxing Qian, Xinpeng Zhang
We develop a reversible adversarial example generator (RAEG) that introduces slight changes to the images to fool traditional classification models.
no code implementations • 23 Nov 2021 • Xiaoshui Huang, Zongyi Xu, Guofeng Mei, Sheng Li, Jian Zhang, Yifan Zuo, Yucheng Wang
To solve this challenge, we propose a new data-driven registration algorithm by applying deep generative neural networks to point cloud registration.
no code implementations • 15 Oct 2021 • Zhuowen Yuan, Zhengxin You, Sheng Li, Xinpeng Zhang, Zhenxing Qian, Alex Kot
Our virtual face images are visually different from the original ones for privacy protection.
no code implementations • 29 Sep 2021 • Weili Shi, Ronghang Zhu, Sheng Li
In this paper, we propose a pairwise adversarial training approach to augment training data for unsupervised class-imbalanced domain adaptation.
no code implementations • ICLR 2022 • Ronghang Zhu, Sheng Li
In this paper, we propose a challenging and untouched problem: Open-Set Single Domain Generalization (OS-SDG), where target domains include unseen categories outside the source label space.
no code implementations • 9 Jul 2021 • Ronghang Zhu, Zhiqiang Tao, Yaliang Li, Sheng Li
Owing to the remarkable capability of extracting effective graph embeddings, graph convolutional network (GCN) and its variants have been successfully applied to a broad range of tasks, such as node classification, link prediction, and graph classification.
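For context, a single GCN propagation layer of the kind referenced here can be written in a few lines; the sketch below applies the symmetric-normalized propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W) to a toy graph with random weights. It is a generic illustration of GCNs, not this paper's method.

```python
# A minimal NumPy sketch of one GCN propagation layer (Kipf & Welling style).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)             # a 4-node path graph
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                           # input node features
W = rng.normal(size=(8, 16))                          # layer weights
print(gcn_layer(A, H, W).shape)                       # (4, 16) updated node embeddings
```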
no code implementations • 8 Jun 2021 • Ziyu Guan, Hongchang Wu, Qingyu Cao, Hao Liu, Wei Zhao, Sheng Li, Cai Xu, Guang Qiu, Jian Xu, Bo Zheng
Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) They fail to avoid collusion solutions where all the advertisers involved in an auction collude to bid an extremely low price on purpose.
no code implementations • 5 Jun 2021 • Zhixuan Chu, Stephen L. Rathbun, Sheng Li
The foremost challenge in treatment effect estimation is how to capture hidden confounders.
no code implementations • 13 May 2021 • Matthew Junge, Sheng Li, Samitha Samaranayake, Matthew Zalesak
We construct an agent-based SEIR model to simulate COVID-19 spread at a 16,000-student, mostly non-residential urban university during the Fall 2021 Semester.
no code implementations • NAACL 2021 • Saed Rezayi, Handong Zhao, Sungchul Kim, Ryan A. Rossi, Nedim Lipka, Sheng Li
Knowledge graphs suffer from sparsity which degrades the quality of representations generated by various methods.
no code implementations • 24 Feb 2021 • Sheng Li, Yutai Zhou, Ross Allen, Mykel J. Kochenderfer
Communication is an important factor that enables agents to work cooperatively in multi-agent reinforcement learning (MARL).
Multi-agent Reinforcement Learning
Reinforcement Learning
no code implementations • CVPR 2021 • Sheng Li, Mingxing Tan, Ruoming Pang, Andrew Li, Liqun Cheng, Quoc Le, Norman P. Jouppi
On top of our DC-accelerator-optimized neural architecture search space, we further propose latency-aware compound scaling (LACS), the first multi-objective compound scaling method that optimizes both accuracy and latency.
no code implementations • 1 Jan 2021 • Zhixuan Chu, Stephen Rathbun, Sheng Li
We propose a Continual Causal Effect Representation Learning method for estimating causal effect with observational data, which are incrementally available from non-stationary data distributions.
no code implementations • 1 Jan 2021 • Ronghang Zhu, Xiaodong Jiang, Jiasen Lu, Sheng Li
In this paper, we propose a novel Transferable Feature Learning approach on Graphs (TFLG) for unsupervised adversarial domain adaptation, which jointly incorporates sample-level and class-level structure information across two domains.
no code implementations • 25 Oct 2020 • Xiaodong Jiang, Ronghang Zhu, Pengsheng Ji, Sheng Li
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
no code implementations • 15 Sep 2020 • Zhixuan Chu, Stephen L. Rathbun, Sheng Li
A dramatic growth in the availability of observational data is being witnessed across various domains of science and technology, facilitating the study of causal inference.
no code implementations • 14 Sep 2020 • Yue Bai, Zhiqiang Tao, Lichen Wang, Sheng Li, Yu Yin, Yun Fu
Extensive experiments on four action datasets illustrate the proposed CAM achieves better results for each view and also boosts multi-view performance.
1 code implementation • 19 Jun 2020 • Sheng Li, Jayesh K. Gupta, Peter Morales, Ross Allen, Mykel J. Kochenderfer
A coordination-graph-based formalization allows reasoning about the joint action based on the structure of interactions.
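A tiny illustration of that formalization: the joint action's value decomposes into per-edge payoffs on the coordination graph, so the best joint action maximizes their sum. The brute-force search below stands in for the message-passing inference (e.g., max-plus) used in practice; the graph and payoffs are made up and are not the paper's setup.

```python
# Coordination-graph reasoning in miniature: the joint value is a sum of
# per-edge payoffs, and the best joint action maximizes that sum.
import itertools
import numpy as np

n_agents, n_actions = 3, 2
edges = [(0, 1), (1, 2)]                              # coordination graph: agents 0-1 and 1-2 interact
rng = np.random.default_rng(0)
payoff = {e: rng.normal(size=(n_actions, n_actions)) for e in edges}

def joint_value(actions):
    return sum(payoff[(i, j)][actions[i], actions[j]] for (i, j) in edges)

best = max(itertools.product(range(n_actions), repeat=n_agents), key=joint_value)
print("best joint action:", best, "value:", round(joint_value(best), 3))
```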
no code implementations • 29 Apr 2020 • Mohammadhossein Toutiaee, Soheyla Amirian, John A. Miller, Sheng Li
The proposed approach aids labeling new data (fictitious output images) by minimizing a penalized version of the least squares cost function between realistic pictures and target pictures.
no code implementations • 13 Feb 2020 • Hongwei Yi, Shaoshuai Shi, Mingyu Ding, Jiankai Sun, Kui Xu, Hui Zhou, Zhe Wang, Sheng Li, Guoping Wang
First, the semantic context information in LiDAR, which may help identify ambiguous vehicles, is seldom explored in previous works.
1 code implementation • 5 Feb 2020 • Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, Aidong Zhang
Embraced with the rapidly developed machine learning area, various causal effect estimation methods for observational data have sprung up.
no code implementations • 27 Dec 2019 • Xugang Lu, Peng Shen, Sheng Li, Yu Tsao, Hisashi Kawai
However, a potential limitation of the network is that the discriminative features from the bottom layers (which can model the short-range dependency) are smoothed out in the final representation.
no code implementations • 26 Dec 2019 • Jiahuan Ren, Zhao Zhang, Sheng Li, Yang Wang, Guangcan Liu, Shuicheng Yan, Meng Wang
Specifically, J-RFDL performs the robust representation by DL in a factorized compressed space to eliminate the negative effects of noise and outliers on the results, which can also make the DL process efficient.
no code implementations • 20 Dec 2019 • Sheng Li, Maxim Egorov, Mykel Kochenderfer
New methodologies will be needed to ensure the airspace remains safe and efficient as traffic densities rise to accommodate new unmanned operations.
no code implementations • 24 Nov 2019 • Yue Bai, Lichen Wang, Zhiqiang Tao, Sheng Li, Yun Fu
Multi-view time series classification (MVTSC) aims to improve the performance by fusing the distinctive temporal information from multiple views.
no code implementations • 28 Sep 2019 • Zhengming Ding, Ming Shao, Handong Zhao, Sheng Li
It is always demanding to learn robust visual representations for various learning problems; however, this learning and maintenance process usually suffers from noise, incompleteness, or knowledge domain mismatch.
no code implementations • 2 Sep 2019 • Zhao Zhang, Yan Zhang, Sheng Li, Guangcan Liu, Dan Zeng, Shuicheng Yan, Meng Wang
For auto-weighting, RFA-LCF jointly preserves the manifold structures in the basis concept space and new coordinate space in an adaptive manner by minimizing the reconstruction errors on clean data, anchor points and coordinates.
no code implementations • 21 Aug 2019 • Zhao Zhang, Lei Wang, Sheng Li, Yang Wang, Zheng Zhang, Zheng-Jun Zha, Meng Wang
Specifically, AS-LRC performs the latent decomposition of given data into a low-rank reconstruction by a block-diagonal codes matrix, a group sparse locality-adaptive salient feature part and a sparse error part.
no code implementations • Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) 2019 • Xiaodong Jiang, Pengsheng Ji, Sheng Li
In this paper, we present CensNet, Convolution with Edge-Node Switching graph neural network, for semi-supervised classification and regression in graph-structured data with both node and edge features.
Ranked #1 on Graph Regression on Tox21
no code implementations • 4 Aug 2019 • Zhao Zhang, Jiahuan Ren, Sheng Li, Richang Hong, Zheng-Jun Zha, Meng Wang
Leveraging the Frobenius-norm-based latent low-rank representation model, rBDLR jointly learns the coding coefficients and salient features, and improves the results by enhancing robustness to outliers and errors in the given data, preserving the local information of salient features adaptively, and ensuring the block-diagonal structure of the coefficients.
no code implementations • 25 May 2019 • Zhao Zhang, Weiming Jiang, Zheng Zhang, Sheng Li, Guangcan Liu, Jie Qin
More importantly, LC-PDL avoids using the complementary data matrix to learn the sub-dictionary over each class.
no code implementations • 25 May 2019 • Zhao Zhang, Yan Zhang, Sheng Li, Guangcan Liu, Meng Wang, Shuicheng Yan
RFA-LCF integrates the robust flexible CF, robust sparse local-coordinate coding and the adaptive reconstruction weighting learning into a unified model.
no code implementations • CVPR 2019 • Sheng Li, Fengxiang He, Bo Du, Lefei Zhang, Yonghao Xu, DaCheng Tao
Recently, deep learning based video super-resolution (SR) methods have achieved promising performance.
no code implementations • 3 Apr 2019 • Zheng Zhang, Guo-Sen Xie, Yang Li, Sheng Li, Zi Huang
Due to its low storage cost and fast query speed, hashing has been recognized to accomplish similarity search in large-scale multimedia retrieval applications.
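The storage and speed argument in this excerpt can be seen with a generic random-hyperplane hashing baseline, sketched below: items become short binary codes and search reduces to Hamming-distance comparisons. This is not the learned hashing studied in the paper, only an illustration of the mechanism, with made-up sizes.

```python
# Generic random-hyperplane hashing (LSH): encode vectors as short binary codes,
# then search by cheap Hamming distance instead of full-precision comparisons.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits, n_items = 128, 32, 10_000
hyperplanes = rng.normal(size=(n_bits, dim))

def encode(x):
    return (hyperplanes @ x.T > 0).T.astype(np.uint8)  # (n, n_bits) binary codes

database = rng.normal(size=(n_items, dim))
codes = encode(database)

query = database[42] + 0.01 * rng.normal(size=dim)     # a slightly perturbed database item
q_code = encode(query[None, :])[0]
hamming = (codes != q_code).sum(axis=1)
print("nearest by Hamming distance:", int(hamming.argmin()))  # expect index 42
```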
no code implementations • CVPR 2019 • Jiuxiang Gu, Handong Zhao, Zhe Lin, Sheng Li, Jianfei Cai, Mingyang Ling
Scene graph generation has received growing attention with the advancements in image understanding tasks such as object detection, attribute and relationship prediction, etc.
no code implementations • 8 Jan 2019 • Tuan Manh Lai, Trung Bui, Nedim Lipka, Sheng Li
Popular e-commerce websites such as Amazon offer community question answering systems for users to pose product related questions and experienced customers may provide answers voluntarily.
1 code implementation • NeurIPS 2018 • Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, Aidong Zhang
Estimating individual treatment effect (ITE) is a challenging problem in causal inference, due to the missing counterfactuals and the selection bias.
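To make the missing-counterfactual problem concrete, the hedged sketch below uses a generic T-learner: separate outcome models for treated and control units impute each unit's missing counterfactual. It illustrates the setup only; the paper instead learns representations to handle selection bias, and the simulated data here is an assumption.

```python
# A generic T-learner sketch of individual treatment effect (ITE) estimation:
# fit outcome models for treated and control groups, then take their difference.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
propensity = 1 / (1 + np.exp(-X[:, 0]))               # selection bias: treatment depends on X
T = rng.binomial(1, propensity)
true_ite = 1.0 + 0.5 * X[:, 1]                        # heterogeneous true effect
Y = X[:, 0] + T * true_ite + rng.normal(scale=0.1, size=n)

m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], Y[T == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], Y[T == 0])
ite_hat = m1.predict(X) - m0.predict(X)                # imputed counterfactual difference
print("mean abs ITE error:", round(np.abs(ite_hat - true_ite).mean(), 3))
```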
no code implementations • ECCV 2018 • Zhengming Ding, Sheng Li, Ming Shao, Yun Fu
However, existing approaches separate target label optimization and domain-invariant feature learning as different steps.
no code implementations • 6 Aug 2018 • Longfei Liu, Sheng Li, Yisong Chen, Guoping Wang
Image reconstruction, including image restoration and denoising, is a challenging problem in the field of image computing.
no code implementations • COLING 2018 • Tuan Manh Lai, Trung Bui, Sheng Li
Given a question and a set of candidate answers, answer selection is the task of identifying which of the candidates answers the question correctly.
no code implementations • WS 2018 • Tuan Lai, Trung Bui, Sheng Li, Nedim Lipka
When evaluating a potential product purchase, customers may have many questions in mind.
no code implementations • ACL 2018 • Xinzhou Jiang, Zhenghua Li, Bo Zhang, Min Zhang, Sheng Li, Luo Si
Treebank conversion is a straightforward and effective way to exploit various heterogeneous treebanks for boosting parsing performance.
no code implementations • 5 Jan 2018 • Jingang Wang, Junfeng Tian, Long Qiu, Sheng Li, Jun Lang, Luo Si, Man Lan
It is a challenging and practical research problem to obtain effective compression of lengthy product titles for E-commerce.
no code implementations • NeurIPS 2017 • Sheng Li, Yun Fu
Estimating treatment effects from observational data is challenging due to the missing counterfactuals.
1 code implementation • 28 Feb 2017 • Sheng Li, Jongsoo Park, Ping Tak Peter Tang
Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs.
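The arithmetic saving behind Winograd convolutions can be seen in the smallest case, F(2,3) in 1D: two outputs from four multiplications instead of six. The sketch below checks the transform against direct correlation; combining it with sparsity, as the paper does, is not shown here.

```python
# Winograd F(2,3) in 1D: two convolution outputs from four multiplications.
import numpy as np

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs of the valid correlation."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 0.25])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)                     # the two results agree
```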
1 code implementation • 18 Nov 2016 • Shihao Ji, Nadathur Satish, Sheng Li, Pradeep Dubey
Word2vec is a widely used algorithm for extracting low-dimensional vector representations of words.
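For reference, the core computation inside word2vec, a single skip-gram-with-negative-sampling update, fits in a few lines of NumPy, sketched below with toy sizes and one training pair. Real implementations, including the scaled-up one discussed here, batch and parallelize this over a large corpus; all names and values below are illustrative.

```python
# A tiny NumPy sketch of one skip-gram-with-negative-sampling (SGNS) update.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, lr = 1000, 50, 0.025
W_in = rng.normal(scale=0.1, size=(vocab, dim))       # "input" (center word) vectors
W_out = rng.normal(scale=0.1, size=(vocab, dim))      # "output" (context word) vectors

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sgns_step(center, context, negatives):
    v = W_in[center].copy()
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word].copy()
        g = sigmoid(v @ u) - label                     # gradient of the logistic loss w.r.t. the score
        grad_v += g * u
        W_out[word] -= lr * g * v
    W_in[center] -= lr * grad_v

sgns_step(center=3, context=17, negatives=rng.integers(0, vocab, size=5))
print(W_in[3][:5])                                     # the center vector moved slightly
```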
1 code implementation • 4 Aug 2016 • Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey
Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels.
no code implementations • 15 Apr 2016 • Shihao Ji, Nadathur Satish, Sheng Li, Pradeep Dubey
In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.
no code implementations • ICCV 2015 • Sheng Li, Kang Li, Yun Fu
Subspace clustering is an effective technique for segmenting data drawn from multiple subspaces.