no code implementations • 12 May 2022 • Jian Zhang, Yuanqing Zhang, Huan Fu, Xiaowei Zhou, Bowen Cai, Jinchi Huang, Rongfei Jia, Binqiang Zhao, Xing Tang
Neural Radiance Fields (NeRF) have emerged as a potent paradigm for representing scenes and synthesizing photo-realistic images.
no code implementations • 10 May 2022 • Chong Mou, Yanze Wu, Xintao Wang, Chao Dong, Jian Zhang, Ying Shan
Instead of using known degradation levels as explicit supervision for the interactive mechanism, we propose a metric learning strategy that maps the unquantifiable degradation levels of real-world scenarios into a metric space and is trained in an unsupervised manner.
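The entry above describes mapping degradation levels into a metric space via metric learning. The paper trains this mapping unsupervised; a triplet margin loss is one common metric-learning instantiation, shown here purely as an illustrative sketch (the loss choice and toy vectors are my own, not the paper's):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: pull samples with similar degradation together,
    push dissimilar ones apart in the learned metric space."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor embedding
p = np.array([0.1, 0.0])   # similar degradation level
n = np.array([2.0, 0.0])   # dissimilar degradation level
loss = triplet_loss(a, p, n)  # negative is already far: loss is 0
```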
1 code implementation • 28 Apr 2022 • Chong Mou, Qian Wang, Jian Zhang
Concretely, without loss of interpretability, we integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm, driving it to deal with complex and real-world image degradation.
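The gradient descent step that this entry says is replaced with a learned estimator is the first half of a classical Proximal Gradient Descent iteration. A minimal numpy sketch, using an l1 prior and a toy sparse-recovery problem (both my own illustrative choices, not the paper's setup):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (soft shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pgd(y, Phi, steps=500, rho=0.1, lam=0.02):
    """Plain PGD for y = Phi @ x with an l1 prior. The gradient step below
    is the part that unrolling methods replace with a learned estimator."""
    x = Phi.T @ y  # crude initialization
    for _ in range(steps):
        grad = Phi.T @ (Phi @ x - y)                   # gradient of 0.5*||Phi x - y||^2
        x = soft_threshold(x - rho * grad, rho * lam)  # proximal step
    return x

# Recover a 5-sparse vector from 60 random measurements.
rng = np.random.default_rng(0)
n, m = 100, 60
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = pgd(Phi @ x_true, Phi)
```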
1 code implementation • 26 Apr 2022 • Minghao Zhao, Le Wu, Yile Liang, Lei Chen, Jian Zhang, Qilin Deng, Kai Wang, Xudong Shen, Tangjie Lv, Runze Wu
While conventional CF models are known to face the challenge of popularity bias that favors popular items, one may wonder: "Do the existing graph-based CF models alleviate or exacerbate the popularity bias of recommender systems?"
1 code implementation • 26 Apr 2022 • Yuqing Liu, Qi Jia, Jian Zhang, Xin Fan, Shanshe Wang, Siwei Ma, Wen Gao
Existing BDE methods have no unified solution for various BDE situations, and directly learn a mapping for each pixel from the LBD image to the desired value in the HBD image, which may change the given high-order bits and lead to a huge deviation from the ground truth.
1 code implementation • 24 Apr 2022 • Jingfen Xie, Jian Zhang, Yongbing Zhang, Xiangyang Ji
Compressed Sensing MRI (CS-MRI) aims at reconstructing de-aliased images from sub-Nyquist sampled k-space data to accelerate MR imaging, thus presenting two basic issues, i.e., where to sample and how to reconstruct.
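The two issues named above (where to sample, how to reconstruct) can be made concrete with a toy pipeline: a random k-space sampling mask plus the crudest possible reconstruction, a zero-filled inverse FFT. The phantom and mask are my own illustration, not the paper's learned scheme:

```python
import numpy as np

def zero_filled_recon(image, keep_fraction=0.3, seed=0):
    """Toy CS-MRI pipeline: undersample k-space with a random binary mask
    ("where to sample"), then reconstruct with a zero-filled inverse FFT
    ("how to reconstruct" in its simplest, non-learned form)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)
    mask = rng.random(image.shape) < keep_fraction  # keep ~30% of k-space
    recon = np.fft.ifft2(kspace * mask).real
    return recon, mask

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0  # simple square phantom
recon, mask = zero_filled_recon(img)
```

A learned method would replace both the random mask and the zero-filled inverse with trained components.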
no code implementations • 24 Mar 2022 • Qiankun Gao, Chen Zhao, Bernard Ghanem, Jian Zhang
After RRL, the classification head is fine-tuned with global class-balanced classification loss to address the data imbalance issue as well as learn the decision boundary between new and previous classes.
no code implementations • 21 Mar 2022 • Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, Jin Song Dong, Feifei Ma, Jian Zhang
Our attack technique targets the inherent vulnerabilities of NLP models, allowing us to generate samples even without interacting with the victim NLP model, as long as it is based on pre-trained language models (PLMs).
no code implementations • 18 Mar 2022 • Jin Huang, Lu Zhang, Yongshun Gong, Jian Zhang, Xiushan Nie, Yilong Yin
Series photo selection (SPS) is an important branch of the image aesthetics quality assessment, which focuses on finding the best one from a series of nearly identical photos.
1 code implementation • 16 Mar 2022 • Yinhuai Wang, Yujie Hu, Jian Zhang
Emerging high-quality face restoration (FR) methods often utilize pre-trained GAN models (i.e., StyleGAN2) as a GAN prior.
no code implementations • 10 Mar 2022 • Yinhuai Wang, Shuzhou Yang, Yujie Hu, Jian Zhang
Unlike the pinhole, the thin lens refracts the rays of a scene point, so its image on the sensor plane is spread into a circle of confusion (CoC).
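The CoC mentioned above has a closed form under standard thin-lens geometry (this is the textbook formula, not something taken from the paper itself):

```python
def coc_diameter(f, N, s_focus, s_obj):
    """Circle-of-confusion diameter on the sensor plane for a thin lens.

    f        focal length (mm)
    N        f-number, so the aperture diameter is A = f / N
    s_focus  distance the lens is focused at (mm)
    s_obj    distance of the scene point (mm)
    """
    A = f / N
    return A * abs(s_obj - s_focus) / s_obj * f / (s_focus - f)

# A point 2 m away seen by a 50 mm f/2 lens focused at 1 m:
c = coc_diameter(f=50.0, N=2.0, s_focus=1000.0, s_obj=2000.0)  # ~0.66 mm
```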
no code implementations • 5 Feb 2022 • Guofeng Mei, Litao Yu, Qiang Wu, Jian Zhang, Mohammed Bennamoun
This paper proposes a general unsupervised approach, named ConClu, to learn point-wise and global features by jointly leveraging point-level clustering and instance-level contrasting.
no code implementations • 11 Jan 2022 • Yuting Yang, Pei Huang, Feifei Ma, Juan Cao, Meishan Zhang, Jian Zhang, Jintao Li
Deep-learning-based NLP models are found to be vulnerable to word substitution perturbations.
no code implementations • 31 Dec 2021 • Dongjie Ye, Zhangkai Ni, Hanli Wang, Jian Zhang, Shiqi Wang, Sam Kwong
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
no code implementations • 29 Dec 2021 • Guofeng Mei, Xiaoshui Huang, Litao Yu, Jian Zhang, Mohammed Bennamoun
Generating a set of high-quality correspondences or matches is one of the most critical steps in point cloud registration.
no code implementations • 23 Dec 2021 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
In the test stage, to alleviate unstable predictions, we use multiple augmented images to produce multi-view predictions, which significantly improves model reliability by fusing the results of different views of a test image.
1 code implementation • 12 Dec 2021 • Xuanyu Zhang, Yongbing Zhang, Ruiqin Xiong, Qilin Sun, Jian Zhang
Hyperspectral imaging is an essential imaging modality for a wide range of applications, especially in remote sensing, agriculture, and medicine.
no code implementations • 23 Nov 2021 • Xiaoshui Huang, Zongyi Xu, Guofeng Mei, Sheng Li, Jian Zhang, Yifan Zuo, Yucheng Wang
To address this challenge, we propose a new data-driven registration algorithm that applies deep generative neural networks to point cloud registration.
no code implementations • 15 Nov 2021 • Minghao Liu, Fuqi Jia, Pei Huang, Fan Zhang, Yuchen Sun, Shaowei Cai, Feifei Ma, Jian Zhang
With the rapid development of deep learning techniques, various recent work has tried to apply graph neural networks (GNNs) to solve NP-hard problems such as Boolean Satisfiability (SAT), which shows the potential in bridging the gap between machine learning and symbolic reasoning.
no code implementations • 29 Oct 2021 • Pei Huang, Yuting Yang, Minghao Liu, Fuqi Jia, Feifei Ma, Jian Zhang
This paper introduces a notion of $\varepsilon$-weakened robustness for analyzing the reliability and stability of deep neural networks (DNNs).
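Setting the paper's precise definition aside, the core idea of tolerating a small fraction ε of violating points in a region can be sketched with a Monte-Carlo estimate (the toy classifier and sampling scheme below are my own illustration):

```python
import numpy as np

def estimate_violation_rate(predict, x, radius, n_samples=2000, seed=0):
    """Monte-Carlo estimate of the fraction of points in an l_inf ball
    around x whose predicted label differs from predict(x). A model is
    epsilon-weakened robust at x if this fraction is at most epsilon."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    noise = rng.uniform(-radius, radius, size=(n_samples,) + x.shape)
    labels = np.array([predict(x + d) for d in noise])
    return float(np.mean(labels != base))

# A toy 1-D "classifier": sign of the sum of the inputs.
predict = lambda x: int(np.sum(x) > 0)
rate = estimate_violation_rate(predict, np.array([0.5, 0.5]), radius=0.4)
```

With `radius=0.4` every perturbed sum stays positive, so the estimated violation rate is 0; larger radii start to produce violations.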
no code implementations • 26 Oct 2021 • Huichen Ma, Junjie Zhou, Jian Zhang, Lingyu Zhang
On the other hand, the material model is a complicated system with significant nonlinearity, non-stationarity, and uncertainty, making it challenging to develop an appropriate system model.
1 code implementation • 19 Oct 2021 • Jiechong Song, Bin Chen, Jian Zhang
By understanding DUNs from the perspective of the human brain's memory processing, we find that there exist two issues in existing DUNs.
1 code implementation • 17 Oct 2021 • Yinghuan Shi, Jian Zhang, Tong Ling, Jiwen Lu, Yefeng Zheng, Qian Yu, Lei Qi, Yang Gao
In semi-supervised medical image segmentation, most previous works draw on the common assumption that higher entropy means higher uncertainty.
Semantic Segmentation
Semi-supervised Medical Image Segmentation
no code implementations • 8 Oct 2021 • Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, Yi Liu
Backdoor attacks induce DLP methods to make wrong predictions via malicious training data, i.e., generating a subgraph sequence as the trigger and embedding it into the training data.
no code implementations • 18 Sep 2021 • Cheng Tan, Zhichao Li, Jian Zhang, Yu Cao, Sikai Qi, Zherui Liu, Yibo Zhu, Chuanxiong Guo
With MIG, A100 can be the most cost-efficient GPU ever for serving Deep Neural Networks (DNNs).
1 code implementation • ICCV 2021 • Zhuoyuan Wu, Jian Zhang, Chong Mou
To better exploit the spatial-temporal correlation among frames and address the problem of information loss between adjacent phases in existing DUNs, we propose to adopt the 3D-CNN prior in our proximal mapping module and develop a novel dense feature map (DFM) strategy, respectively.
1 code implementation • ICCV 2021 • Chong Mou, Jian Zhang, Zhuoyuan Wu
Specifically, we propose an improved graph model to perform patch-wise graph convolution with a dynamic and adaptive number of neighbors for each node.
1 code implementation • ICCV 2021 • Zeren Sun, Yazhou Yao, Xiu-Shen Wei, Yongshun Zhang, Fumin Shen, Jianxin Wu, Jian Zhang, Heng-Tao Shen
Learning from the web can ease the extreme dependence of deep learning on large-scale manually labeled datasets.
no code implementations • 22 Jul 2021 • Ke-Yue Zhang, Taiping Yao, Jian Zhang, Shice Liu, Bangjie Yin, Shouhong Ding, Jilin Li
In pursuit of consolidating the face verification systems, prior face anti-spoofing studies excavate the hidden cues in original images to discriminate real persons and diverse attack types with the assistance of auxiliary supervision.
no code implementations • 15 Jul 2021 • Qing Chen, Jian Zhang
Most current applications of contrastive learning benefit from only a single representation, taken from the last layer of an encoder. In this paper, we propose a multi-level contrastive learning approach that applies contrastive losses at different layers of an encoder to learn multiple representations from it.
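Applying contrastive losses at several layers, as described above, reduces to computing a per-layer contrastive loss and summing. The sketch below uses NT-Xent (InfoNCE), a common instantiation; the specific loss, weights, and random features are my own assumptions, not the paper's exact formulation:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE / NT-Xent loss for two batches of paired embeddings;
    matching rows of z1 and z2 are the positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives on the diagonal

def multi_level_loss(views1, views2, weights):
    """Weighted sum of contrastive losses over several encoder layers."""
    return sum(w * nt_xent(a, b) for w, a, b in zip(weights, views1, views2))

rng = np.random.default_rng(0)
layers1 = [rng.standard_normal((8, d)) for d in (32, 64)]   # two layers' features
layers2 = [v + 0.01 * rng.standard_normal(v.shape) for v in layers1]
loss = multi_level_loss(layers1, layers2, weights=(0.5, 1.0))
```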
1 code implementation • 15 Jul 2021 • Di You, Jian Zhang, Jingfen Xie, Bin Chen, Siwei Ma
In this paper, we propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems of arbitrary-sampling matrices (including unseen sampling matrices) with one single model.
no code implementations • 6 Jul 2021 • Mengxi Jia, Xinhua Cheng, Shijian Lu, Jian Zhang
To better eliminate interference from occlusions, we design a contrast feature learning technique (CFL) for better separation of occlusion features and discriminative ID features.
no code implementations • CVPR 2021 • Jing Zhao, Ruiqin Xiong, Hangfan Liu, Jian Zhang, Tiejun Huang
Different from the conventional digital cameras that compact the photoelectric information within the exposure interval into a single snapshot, the spike camera produces a continuous spike stream to record the dynamic light intensity variation process.
2 code implementations • NeurIPS 2021 • Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, Yongbing Zhang
Multiple instance learning (MIL) is a powerful tool to solve the weakly supervised classification in whole slide image (WSI) based pathology diagnosis.
no code implementations • 21 May 2021 • Yinglin Zhang, Risa Higashita, Huazhu Fu, Yanwu Xu, Yang Zhang, Haofeng Liu, Jian Zhang, Jiang Liu
Corneal endothelial cell segmentation plays a vital role in quantifying clinical indicators such as cell density, coefficient of variation, and hexagonality.
no code implementations • 18 May 2021 • Bofeng Wu, Guocheng Niu, Jun Yu, Xinyan Xiao, Jian Zhang, Hua Wu
This paper proposes an approach to Dense Video Captioning (DVC) without pairwise event-sentence annotation.
2 code implementations • 17 May 2021 • Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, Hanlin Goh
Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.
no code implementations • 9 May 2021 • Yong Dai, Jian Liu, Jian Zhang, Hongguang Fu, Zenglin Xu
The first mechanism is a selective domain adaptation (SDA) method, which transfers knowledge from the closest source domain.
no code implementations • 26 Apr 2021 • Jie Chen, Jie Liu, Chang Liu, Jian Zhang, Bing Han
To overcome this issue and to further improve the recognition performance, we adopt a deep learning approach for underwater target recognition and propose a LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme, which consists of preprocessing, offline training, and online testing.
no code implementations • 26 Mar 2021 • Dewang Hou, Yang Zhao, Yuyao Ye, Jiayu Yang, Jian Zhang, Ronggang Wang
Scaling and lossy coding are widely used in video transmission and storage.
1 code implementation • CVPR 2021 • Yazhou Yao, Tao Chen, GuoSen Xie, Chuanyi Zhang, Fumin Shen, Qi Wu, Zhenmin Tang, Jian Zhang
To further mine the non-salient region objects, we propose to exert the segmentation network's self-correction ability.
1 code implementation • CVPR 2021 • Yazhou Yao, Zeren Sun, Chuanyi Zhang, Fumin Shen, Qi Wu, Jian Zhang, Zhenmin Tang
Due to the memorization effect in Deep Neural Networks (DNNs), training with noisy labels usually results in inferior model performance.
1 code implementation • 22 Mar 2021 • Di You, Jingfen Xie, Jian Zhang
While deep neural networks have achieved impressive success in image compressive sensing (CS), most of them lack flexibility when dealing with multi-ratio tasks and multi-scene images in practical applications.
no code implementations • 12 Mar 2021 • Jianhui Chang, Zhenghui Zhao, Lingbo Yang, Chuanmin Jia, Jian Zhang, Siwei Ma
To this end, we propose a novel end-to-end semantic prior modeling-based conceptual coding scheme towards extremely low bitrate image compression, which leverages semantic-wise deep representations as a unified prior for entropy estimation and texture synthesis.
2 code implementations • 10 Mar 2021 • Chong Mou, Jian Zhang, Xiaopeng Fan, Hangfan Liu, Ronggang Wang
Local and non-local attention-based methods have been well studied in various image restoration tasks while leading to promising performance.
no code implementations • 3 Mar 2021 • Xiaoshui Huang, Guofeng Mei, Jian Zhang, Rana Abbas
This paper conducts a comprehensive survey, covering both same-source and cross-source registration methods, and summarizes the connections between optimization-based and deep learning methods to provide further research insight.
no code implementations • 25 Feb 2021 • Shengran Lin, Changfeng Weng, Yuanjie Yang, Jiaxin Zhao, Yuhang Guo, Jian Zhang, Liren Lou, Wei Zhu, Guanzhong Wang
Nitrogen-vacancy (NV) center in diamond is an ideal candidate for quantum sensors because of its excellent optical and coherence property.
Quantum Physics Mesoscale and Nanoscale Physics
1 code implementation • 22 Feb 2021 • Tao Chen, GuoSen Xie, Yazhou Yao, Qiong Wang, Fumin Shen, Zhenmin Tang, Jian Zhang
Then we utilize the fused prototype to guide the final segmentation of the query image.
no code implementations • 16 Feb 2021 • Yunyi Xie, Jie Jin, Jian Zhang, Shanqing Yu, Qi Xuan
With the wide application of blockchain in the financial field, the rise of various types of cybercrimes has brought great challenges to the security of blockchain.
no code implementations • 1 Feb 2021 • Jian Zhang, Ying Tai, Taiping Yao, Jia Meng, Shouhong Ding, Chengjie Wang, Jilin Li, Feiyue Huang, Rongrong Ji
Face authentication on mobile end has been widely applied in various scenarios.
1 code implementation • 23 Jan 2021 • Huafeng Liu, Chuanyi Zhang, Yazhou Yao, Xiushen Wei, Fumin Shen, Jian Zhang, Zhenmin Tang
Labeling objects at a subordinate level typically requires expert knowledge, which is not always available when using random annotators.
no code implementations • ICCV 2021 • Jing Zhao, Jiyu Xie, Ruiqin Xiong, Jian Zhang, Zhaofei Yu, Tiejun Huang
In this paper, we properly exploit the relative motion and derive the relationship between light intensity and each spike, so as to recover the external scene with both high temporal and high spatial resolution.
no code implementations • 1 Jan 2021 • Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua M. Susskind, Jian Zhang, Ruslan Salakhutdinov, Hanlin Goh
Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.
no code implementations • 1 Jan 2021 • Pedram Zamirai, Jian Zhang, Christopher R Aberger, Christopher De Sa
We ask whether we can do pure 16-bit training, which requires only 16-bit compute units, while still matching the model accuracy attained by 32-bit training.
no code implementations • 1 Jan 2021 • Qing Chen, Jian Zhang
Deep neural networks (DNNs) compute representations in a layer by layer fashion, producing a final representation at the top layer of the pipeline, and classification or regression is made using the final representation.
no code implementations • 28 Dec 2020 • Jian Zhang, Cunjing Ge, Feifei Ma
Compared with constraint satisfaction problems, counting problems have received less attention.
no code implementations • 22 Dec 2020 • Yi Ding, Qiqi Yang, Guozheng Wu, Jian Zhang, Zhiguang Qin
In this paper, a network called Brachial Plexus Multi-instance Segmentation Network (BPMSegNet) is proposed to identify different tissues (nerves, arteries, veins, muscles) in ultrasound images.
no code implementations • 20 Dec 2020 • Huaxi Huang, Junjie Zhang, Jian Zhang, Qiang Wu, Chang Xu
Second, the extra unlabeled samples are employed to transfer the knowledge from base classes to novel classes through contrastive learning.
no code implementations • 10 Dec 2020 • Hugues Thomas, Ben Agro, Mona Gridseth, Jian Zhang, Timothy D. Barfoot
We provide insights into our network predictions and show that our approach can also improve the performance of common localization techniques.
no code implementations • 9 Dec 2020 • Radu Horaud, Florence Forbes, Manuel Yguel, Guillaume Dewaele, Jian Zhang
This paper addresses the issue of matching rigid and articulated shapes through probabilistic point registration.
1 code implementation • NeurIPS 2020 • Zhibin Li, Jian Zhang, Yongshun Gong, Yazhou Yao, Qiang Wu
We present a model that utilizes linear models with variance and low-rank constraints, to help it generalize better and reduce the number of parameters.
no code implementations • 18 Nov 2020 • Jinyin Chen, Yunyi Xie, Jian Zhang, Xincheng Shu, Qi Xuan
In this paper, we introduce time-series snapshot network (TSSN) which is a mixture network to model the interactions among users and developers.
Social and Information Networks
2 code implementations • 10 Nov 2020 • Jianhui Chang, Zhenghui Zhao, Chuanmin Jia, Shiqi Wang, Lingbo Yang, Qi Mao, Jian Zhang, Siwei Ma
To this end, we propose a novel conceptual compression framework that encodes visual data into compact structure and texture representations, then decodes in a deep synthesis fashion, aiming to achieve better visual reconstruction quality, flexible content manipulation, and potential support for various vision tasks.
no code implementations • 4 Nov 2020 • Litao Yu, Yongsheng Gao, Jun Zhou, Jian Zhang, Qiang Wu
The proposed module can auto-select the intermediate visual features to correlate the spatial and semantic information.
Ranked #22 on Semantic Segmentation on NYU Depth v2
1 code implementation • 3 Nov 2020 • Litao Yu, Yongsheng Gao, Jun Zhou, Jian Zhang
Recent research on deep neural networks (DNNs) has primarily focused on improving the model accuracy.
no code implementations • 3 Nov 2020 • Zhibin Li, Litao Yu, Jian Zhang
In this paper, we present a novel data-distribution-aware margin calibration method for a better generalization of the mIoU over the whole data-distribution, underpinned by a rigid lower bound.
no code implementations • 2 Nov 2020 • Litao Yu, Jian Zhang, Qiang Wu
In this paper, we propose to apply dual attention on pyramid image feature maps to fully explore the visual-semantic correlations and improve the quality of generated sentences.
no code implementations • 20 Oct 2020 • Yunlu Wang, Cheng Yang, Menghan Hu, Jian Zhang, Qingli Li, Guangtao Zhai, Xiao-Ping Zhang
This paper presents an unobtrusive solution that can automatically identify deep breath when a person is walking past the global depth camera.
no code implementations • NeurIPS 2020 • Yikang Zhang, Jian Zhang, Zhao Zhong
Neural network architecture design mostly focuses on new convolutional operators or special topological structures of network blocks; little attention is paid to the configuration of stacking each block, called Block Stacking Style (BSS).
no code implementations • 13 Oct 2020 • Pedram Zamirai, Jian Zhang, Christopher R. Aberger, Christopher De Sa
State-of-the-art generic low-precision training algorithms use a mix of 16-bit and 32-bit precision, creating the folklore that 16-bit hardware compute units alone are not enough to maximize model accuracy.
no code implementations • 9 Oct 2020 • Yuzhen Chen, Menghan Hu, Chunjun Hua, Guangtao Zhai, Jian Zhang, Qingli Li, Simon X. Yang
Aimed at solving the problem of not knowing which service stage a mask belongs to, we propose a detection system based on the mobile phone.
1 code implementation • 6 Oct 2020 • Jialiang Shen, Yucheng Wang, Jian Zhang
For SR of small-scales (between 1 and 2), images are constructed by interpolation from a sparse set of precalculated Laplacian pyramid levels.
no code implementations • 7 Sep 2020 • Caiqing Jian, Xinyu Cheng, Jian Zhang, Lihui Wang
The experimental results demonstrate that, compared with traditional chemical bond structure representations, the rotation- and translation-invariant structure representations proposed in this work improve the SCC prediction accuracy; with the graph-embedded local self-attention, the mean absolute error (MAE) of the prediction model on the validation set decreases from 0.1603 Hz to 0.1067 Hz; using the classification-based loss function instead of the scaled regression loss, the MAE of the predicted SCC can be decreased to 0.0963 Hz, which is close to the quantum chemistry standard on the CHAMPS dataset.
1 code implementation • 7 Sep 2020 • Rongzheng Bian, Yumeng Xue, Liang Zhou, Jian Zhang, Baoquan Chen, Daniel Weiskopf, Yunhai Wang
We propose a visualization method to understand the effect of multidimensional projection on local subspaces, using implicit function differentiation.
no code implementations • 3 Sep 2020 • Bin Huang, Yuanyang Du, Shuai Zhang, Wenfei Li, Jun Wang, Jian Zhang
RNAs play crucial and versatile roles in biological processes.
no code implementations • ECCV 2020 • Ke-Yue Zhang, Taiping Yao, Jian Zhang, Ying Tai, Shouhong Ding, Jilin Li, Feiyue Huang, Haichuan Song, Lizhuang Ma
Face anti-spoofing is crucial to security of face recognition systems.
1 code implementation • 6 Aug 2020 • Zeren Sun, Xian-Sheng Hua, Yazhou Yao, Xiu-Shen Wei, Guosheng Hu, Jian Zhang
To this end, we propose a certainty-based reusable sample selection and correction approach, termed as CRSSC, for coping with label noise in training deep FG models with web images.
no code implementations • 3 Aug 2020 • Haoqiang Guo, Lu Peng, Jian Zhang, Fang Qi, Lide Duan
Recent studies identify that Deep Neural Networks (DNNs) are vulnerable to subtle perturbations that are not perceptible to the human visual system but can fool the models and lead to wrong outputs.
no code implementations • 28 Jul 2020 • Baoyan Ma, Jian Zhang, Feng Cao, Yongjun He
We design a fixed proposal module to generate fixed-sized feature maps of nuclei, which allows the new nucleus information to be used for classification.
no code implementations • 3 Jul 2020 • Mengxi Jia, Yunpeng Zhai, Shijian Lu, Siwei Ma, Jian Zhang
RGB-Infrared (IR) cross-modality person re-identification (re-ID), which aims to search an IR image in RGB gallery or vice versa, is a challenging task due to the large discrepancy between IR and RGB modalities.
Cross-Modality Person Re-identification
Person Re-Identification
no code implementations • 27 Jun 2020 • Qian Li, Qingyuan Hu, Yong Qi, Saiyu Qi, Jie Ma, Jian Zhang
SBA stochastically decides whether to augment at iterations controlled by the batch scheduler, and introduces a "distilled" dynamic soft-label regularization by incorporating the similarity of the vicinity distribution with respect to raw samples.
no code implementations • 18 Jun 2020 • Shuai Zhang, Xiaoyan Xin, Yang Wang, Yachong Guo, Qiuqiao Hao, Xianfeng Yang, Jun Wang, Jian Zhang, Bing Zhang, Wei Wang
The model provides automated recognition of given scans and generation of reports.
1 code implementation • 8 Jun 2020 • Guoji Fu, Yifan Hou, Jian Zhang, Kaili Ma, Barakeel Fanseu Kamhoua, James Cheng
This paper aims to provide a theoretical framework to understand GNNs, specifically, spectral graph convolutional networks and graph attention networks, from graph signal denoising perspectives.
no code implementations • 28 May 2020 • Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Qiang Wu, Chang Xu
The challenges of high intra-class variance yet low inter-class fluctuations in fine-grained visual categorization are more severe with few labeled samples, i.e., Fine-Grained categorization problems under the Few-Shot setting (FGFS).
1 code implementation • 20 May 2020 • Yuqing Liu, Shiqi Wang, Jian Zhang, Shanshe Wang, Siwei Ma, Wen Gao
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
no code implementations • ACL 2020 • Simran Arora, Avner May, Jian Zhang, Christopher Ré
We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe) and an even simpler baseline, random word embeddings, focusing on the impact of the training set size and the linguistic properties of the task.
no code implementations • 13 May 2020 • Lu Zhang, Jian Zhang, Zhibin Li, Jingsong Xu
Inspired by the fact that spreading and collecting information through the Internet becomes the norm, more and more people choose to post for-profit contents (images and texts) in social networks.
1 code implementation • CVPR 2020 • Xiaoshui Huang, Guofeng Mei, Jian Zhang
We present a fast feature-metric point cloud registration framework, which enforces the optimisation of registration by minimising a feature-metric projection error without correspondences.
1 code implementation • ICLR 2020 • Yifan Hou, Jian Zhang, James Cheng, Kaili Ma, Richard T. B. Ma, Hongzhi Chen, Ming-Chang Yang
Graph neural networks (GNNs) have been widely used for representation learning on graph data.
no code implementations • 22 Apr 2020 • Yikang Zhang, Jian Zhang, Qiang Wang, Zhao Zhong
On one hand, we can reduce the computation cost remarkably while maintaining the performance.
1 code implementation • 27 Mar 2020 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Semantic segmentation in a supervised learning manner has achieved significant progress in recent years.
no code implementations • 11 Mar 2020 • Dongming Yang, Yuexian Zou, Jian Zhang, Ge Li
GID block breaks through the local neighborhoods and captures long-range dependency of pixels both in global-level and instance-level from the scene to help detecting interactions between instances.
1 code implementation • 29 Feb 2020 • Megan Leszczynski, Avner May, Jian Zhang, Sen Wu, Christopher R. Aberger, Christopher Ré
To theoretically explain this tradeoff, we introduce a new measure of embedding instability, the eigenspace instability measure, which we prove bounds the disagreement in downstream predictions introduced by the change in word embeddings.
no code implementations • 18 Jan 2020 • Zhengping Liang, Jian Zhang, Liang Feng, Zexuan Zhu
However, as demand for cloud services grows, existing EAs fail to scale to the large-scale virtual machine placement (LVMP) problem due to their high time complexity and poor scalability.
no code implementations • 30 Dec 2019 • Jie Wu, Ying Peng, Chenghao Zheng, Zongbo Hao, Jian Zhang
Recently, generative adversarial networks (GANs) have shown great advantages in synthesizing images, leading to a boost of explorations of using faked images to augment data.
no code implementations • ICLR 2020 • Xin-Yu Zhang, Qiang Wang, Jian Zhang, Zhao Zhong
The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization.
Ranked #335 on Image Classification on ImageNet
no code implementations • 18 Dec 2019 • Lionel Blondé, Yichuan Charlie Tang, Jian Zhang, Russ Webb
In this work, we introduce a new method for imitation learning from video demonstrations.
no code implementations • 7 Dec 2019 • Yongshun Gong, Zhibin Li, Jian Zhang, Wei Liu, Jin-Feng Yi
In this paper, this specific problem is termed as potential passenger flow (PPF) prediction, which is a novel and important study connected with urban computing and intelligent transportation systems.
1 code implementation • 4 Dec 2019 • Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, Wenjun Zhang
In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to source and target domains.
no code implementations • 24 Nov 2019 • Jinyin Chen, Jian Zhang, Zhi Chen, Min Du, Qi Xuan
In this work, we present the first study of adversarial attack on dynamic network link prediction (DNLP).
1 code implementation • 9 Nov 2019 • Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov
Recent advances in deep reinforcement learning have demonstrated the capability of learning complex control policies from many types of environments.
no code implementations • 17 Oct 2019 • Yijie Mao, Bruno Clerckx, Jian Zhang, Victor O. K. Li, Mohammed Arafah
Cooperative Rate-Splitting (CRS) strategy, relying on linearly precoded rate-splitting at the transmitter and opportunistic transmission of the common message by the relaying user, has recently been shown to outperform typical Non-cooperative Rate-Splitting (NRS), Cooperative Non-Orthogonal Multiple Access (C-NOMA) and Space Division Multiple Access (SDMA) in a two-user Multiple Input Single Output (MISO) Broadcast Channel (BC) with user relaying.
no code implementations • 9 Oct 2019 • Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher R. Aberger, Christopher De Sa
Pipeline parallelism (PP) when training neural networks enables larger models to be partitioned spatially, leading to both lower network communication and overall higher hardware utilization.
no code implementations • ICCV 2019 • Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, Xiaokang Yang
We propose a variational Bayesian framework for enhancing few-shot learning performance.
no code implementations • 25 Sep 2019 • Kane Zhang, Jian Zhang, Qiang Wang, Zhao Zhong
To verify the scalability, we also apply DyNet to a segmentation task; the results show that DyNet can reduce FLOPs by 69.3% while maintaining the mean IoU.
1 code implementation • NeurIPS 2019 • Avner May, Jian Zhang, Tri Dao, Christopher Ré
Finally, we show that by using the eigenspace overlap score as a selection criterion between embeddings drawn from a representative set we compressed, we can efficiently identify the better performing embedding with up to $2\times$ lower selection error rates than the next best measure of compression quality, and avoid the cost of training a model for each task of interest.
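The eigenspace overlap score used above compares the subspaces spanned by two embedding matrices. A plausible sketch via orthonormal bases from the SVD follows; the exact normalization in the paper may differ, so treat this as an assumption-laden illustration:

```python
import numpy as np

def eigenspace_overlap(X, Y):
    """Overlap between the left singular subspaces of two embedding
    matrices, normalized so that identical subspaces score 1."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)  # orthonormal basis of X's column space
    V, _, _ = np.linalg.svd(Y, full_matrices=False)
    d = min(U.shape[1], V.shape[1])
    return np.linalg.norm(U.T @ V) ** 2 / d

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16))                      # "uncompressed" embeddings
same = eigenspace_overlap(X, 2.0 * X)                    # same subspace -> 1
diff = eigenspace_overlap(X, rng.standard_normal((1000, 16)))  # random -> near 0
```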
no code implementations • 19 Aug 2019 • Dongming Yang, Yuexian Zou, Jian Zhang, Ge Li
Although two-stage detectors like Faster R-CNN have achieved big successes in object detection thanks to the strategy of extracting region proposals with a region proposal network, they adapt poorly to real-world object detection because they do not mine hard samples when extracting region proposals.
no code implementations • 6 Aug 2019 • Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng
To address the limitations of existing magnitude-based pruning algorithms in cases where model weights or activations are of large and similar magnitude, we propose a novel perspective to discover parameter redundancy among channels and accelerate deep CNNs via channel pruning.
no code implementations • 4 Aug 2019 • Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Jingsong Xu, Qiang Wu
A novel low-rank pairwise bilinear pooling operation is proposed to capture the nuanced differences between the support and query images for learning an effective distance metric.
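A generic low-rank pairwise bilinear pooling can be written as projecting both feature vectors to a rank-r space, taking a Hadamard product, and mapping to the comparison feature; the factorization below is a standard form, and the dimensions and matrices are illustrative assumptions, not the paper's exact operator:

```python
import numpy as np

def low_rank_bilinear(x, y, U, V, P):
    """Low-rank pairwise bilinear pooling: project the support and query
    features to a rank-r space, take the element-wise (Hadamard) product,
    then map to the output comparison feature."""
    return P.T @ ((U.T @ x) * (V.T @ y))

rng = np.random.default_rng(0)
d, r, o = 64, 8, 16  # input dim, rank, output dim
U, V, P = (rng.standard_normal(s) for s in ((d, r), (d, r), (r, o)))
z = low_rank_bilinear(rng.standard_normal(d), rng.standard_normal(d), U, V, P)
```

The full bilinear form x^T W y would need a d x d matrix per output unit; the factored version needs only O(d r + r o) parameters.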
no code implementations • 2 Jul 2019 • Zhibin Li, Jian Zhang, Qiang Wu, Yongshun Gong, Jin-Feng Yi, Christina Kirsch
In this paper, we formulate our prediction task as a multiple kernel learning problem with missing kernels.
no code implementations • 7 Jun 2019 • Yazhou Yao, Jian Zhang, Xian-Sheng Hua, Fumin Shen, Zhenmin Tang
Recent successes in visual recognition can be primarily attributed to feature representation, learning algorithms, and the ever-increasing size of labeled training data.
1 code implementation • 4 Jun 2019 • Guodong Ding, Salman Khan, Zhenmin Tang, Jian Zhang, Fatih Porikli
With this insight, we design a novel Dispersion-based Clustering (DBC) approach which can discover the underlying patterns in data.
Ranked #10 on Unsupervised Person Re-Identification on Market-1501
no code implementations • 23 May 2019 • Qijian Chen, Lihui Wang, Li Wang, Zeyu Deng, Jian Zhang, Yuemin Zhu
Glioma grading before surgery is critical for prognosis prediction and treatment planning.
no code implementations • 11 May 2019 • Songsong Wu, Yan Yan, Hao Tang, Jianjun Qian, Jian Zhang, Xiao-Yuan Jing
However, the number of labeled source samples is always limited in practice due to the expensive annotation cost, leading to sub-optimal performance.
no code implementations • 1 May 2019 • Yongshun Gong, Jin-Feng Yi, Dong-Dong Chen, Jian Zhang, Jiayu Zhou, Zhihua Zhou
In this paper, we aim to infer the significance of every item's appearance in consumer decision making and identify the group of items that are suitable for screenless shopping.
no code implementations • 24 Apr 2019 • Nimit S. Sohoni, Christopher R. Aberger, Megan Leszczynski, Jian Zhang, Christopher Ré
In this paper we study a fundamental question: How much memory is actually needed to train a neural network?
no code implementations • 22 Apr 2019 • Jian Zhang, Jun Yu, DaCheng Tao
Next, we exploit an affine transformation to align the local deep features of each neighbourhood with the global features.
no code implementations • 16 Apr 2019 • Jun Yu, Jinghan Yao, Jian Zhang, Zhou Yu, DaCheng Tao
In this paper, we propose a one-stage framework, SPRNet, which performs efficient instance segmentation by introducing a single pixel reconstruction (SPR) branch to off-the-shelf one-stage detectors.
no code implementations • 7 Apr 2019 • Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Qiang Wu, Jingsong Xu
Unlike traditional deep bilinear networks for fine-grained classification, which adopt the self-bilinear pooling to capture the subtle features of images, the proposed model uses a novel pairwise bilinear pooling to compare the nuanced differences between base images and query images for learning a deep distance metric.
no code implementations • 6 Apr 2019 • Muming Zhao, Jian Zhang, Chongyang Zhang, Wenjun Zhang
To address this problem, in this paper we propose a constrained multi-stage Convolutional Neural Network (CNN) to jointly pursue a locally consistent density map from two aspects.
no code implementations • 11 Mar 2019 • Xiaoshui Huang, Lixin Fan, Qiang Wu, Jian Zhang, Chun Yuan
Accurate and fast registration of cross-source 3D point clouds from different sensors is an emerging research problem in computer vision.
no code implementations • 26 Feb 2019 • Runsheng Zhang, Jian Zhang, Yaping Huang, Qi Zou
To tackle this issue, we propose a fully unsupervised part mining (UPM) approach to localize the discriminative parts without even image-level annotations, which largely improves the fine-grained classification performance.
1 code implementation • 26 Feb 2019 • Runsheng Zhang, Yaping Huang, Mengyang Pu, Jian Zhang, Qingji Guan, Qi Zou, Haibin Ling
To tackle this problem, we propose a simple but effective pattern mining-based method, called Object Location Mining (OLM), which exploits the advantages of data mining and feature representation of pre-trained convolutional neural networks (CNNs).
no code implementations • 25 Feb 2019 • Fan Fei, Zhan Tu, Jian Zhang, Xinyan Deng
Inspired by the hummingbirds' near-maximal performance during such extreme maneuvers, we developed a flight control strategy and experimentally demonstrated that such maneuverability can be achieved by an at-scale 12-gram hummingbird robot equipped with just two actuators.
1 code implementation • 25 Feb 2019 • Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, Xinyan Deng
Here, we present an open source high fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs.
1 code implementation • 22 Feb 2019 • Jinyin Chen, Jian Zhang, Xuanheng Xu, Chengbo Fu, Dan Zhang, Qingpeng Zhang, Qi Xuan
Predicting the potential relations between nodes in networks, known as link prediction, has long been a challenge in network science.
no code implementations • 14 Jan 2019 • Yi Zhen, Hang Chen, Xu Zhang, Meng Liu, Xin Meng, Jian Zhang, Jiantao Pu
To investigate whether and to what extent central serous chorioretinopathy (CSC) depicted on color fundus photographs can be assessed using deep learning technology.
1 code implementation • 31 Oct 2018 • Jian Zhang, Avner May, Tri Dao, Christopher Ré
We investigate how to train kernel approximation methods that generalize well under a memory budget.
no code implementations • 31 Oct 2018 • Yi Zhen, Lei Wang, Han Liu, Jian Zhang, Jiantao Pu
Among these CNNs, the DenseNet had the highest classification accuracy (i.e., 75.50%) based on pre-trained weights when using global ROIs, as compared to 65.50% when using local ROIs.
1 code implementation • 11 Oct 2018 • Yucheng Wang, Jialiang Shen, Jian Zhang
In this way, feature information propagates from a single dense block to all subsequent blocks, instead of to a single successor.
no code implementations • 18 Sep 2018 • Minghui Liao, Jian Zhang, Zhaoyi Wan, Fengming Xie, Jiajun Liang, Pengyuan Lyu, Cong Yao, Xiang Bai
Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem.
Ranked #15 on Scene Text Recognition on ICDAR2013
no code implementations • ECCV 2018 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be an inscrutable challenge.
no code implementations • 4 Jun 2018 • Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Re, Matei Zaharia
In this work, we analyze the entries from DAWNBench, which received optimized submissions from multiple industrial groups, to investigate the behavior of TTA as a metric as well as trends in the best-performing entries.
no code implementations • CVPR 2018 • Huanyu Yu, Shuo Cheng, Bingbing Ni, Minsi Wang, Jian Zhang, Xiaokang Yang
First, to facilitate this novel research on fine-grained video captioning, we collected a new dataset called the Fine-grained Sports Narrative dataset (FSN), which contains 2K sports videos with ground-truth narratives from YouTube.com.
no code implementations • 16 May 2018 • Guodong Ding, Shanshan Zhang, Salman Khan, Zhenmin Tang, Jian Zhang, Fatih Porikli
Our approach measures the affinity of unlabeled samples with the underlying clusters of labeled data samples using the intermediate feature representations from deep networks.
1 code implementation • 9 Mar 2018 • Christopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R. Aberger, Kunle Olukotun, Christopher Ré
Low-precision computation is often used to lower the time and energy cost of machine learning, and recently hardware accelerators have been developed to support it.
1 code implementation • ICML 2018 • Mario Srouji, Jian Zhang, Ruslan Salakhutdinov
The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module.
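The split described above is simple to write down. The sketch below is an illustrative reading of the architecture, not the paper's exact implementation (layer sizes, initialization, and the tanh nonlinearity are assumptions): the action is the sum of a linear term and a small-MLP nonlinear term, both computed from the same state.

```python
import numpy as np

rng = np.random.default_rng(0)
s_dim, a_dim, hidden = 4, 2, 16   # illustrative dimensions

# Linear control module: u_linear = K s + b
K = 0.1 * rng.standard_normal((a_dim, s_dim))
b = np.zeros(a_dim)

# Nonlinear control module: a small one-hidden-layer MLP
W1 = 0.1 * rng.standard_normal((hidden, s_dim))
W2 = 0.1 * rng.standard_normal((a_dim, hidden))

def scn_policy(s):
    """Structured Control Net sketch: the output action is the sum of
    the linear module and the nonlinear (MLP) module."""
    u_linear = K @ s + b
    u_nonlinear = W2 @ np.tanh(W1 @ s)
    return u_linear + u_nonlinear

a = scn_policy(rng.standard_normal(s_dim))
```

The appeal of the split is that the linear module can capture local stabilizing feedback (as in classical control) while the MLP handles the residual nonlinear behavior.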
no code implementations • 19 Feb 2018 • Emilio Parisotto, Devendra Singh Chaplot, Jian Zhang, Ruslan Salakhutdinov
The ability for an agent to localize itself within an environment is crucial for many real-world applications.
no code implementations • 7 Feb 2018 • Yuxin Peng, Jian Zhang, Zhaoda Ye
Inspired by the sequential decision ability of deep reinforcement learning, we propose a new Deep Reinforcement Learning approach for Image Hashing (DRLIH).
no code implementations • 7 Feb 2018 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
(2) They ignore the rich information contained in the large amount of unlabeled data across different modalities, especially the marginal examples that are easily retrieved incorrectly, which can help to model the correlations.
no code implementations • 21 Jan 2018 • Yan Huang, Jinsong Xu, Qiang Wu, Zhedong Zheng, Zhao-Xiang Zhang, Jian Zhang
Unlike the traditional label, which is usually a single integer, the virtual label proposed in this work is a set of weight-based values, each a number in (0, 1] called a multi-pseudo label, reflecting the degree of relation between each generated sample and every pre-defined class of real data.
no code implementations • 1 Dec 2017 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
To address the above problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data.
no code implementations • 21 Nov 2017 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be an inscrutable challenge.
no code implementations • 19 Nov 2017 • Jun-Jie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu
These comments can be a description of the image, or of some objects, attributes, or scenes in it, and are normally used as user-provided tags.
2 code implementations • ICLR 2018 • Yichen Gong, Heng Luo, Jian Zhang
Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis.
Ranked #12 on Paraphrase Identification on Quora Question Pairs (Accuracy metric)
no code implementations • 22 Aug 2017 • Yazhou Yao, Jian Zhang, Fumin Shen, Li Liu, Fan Zhu, Dongxiang Zhang, Heng-Tao Shen
To eliminate manual annotation, in this work, we propose a novel image dataset construction framework by employing multiple textual queries.
no code implementations • 17 Aug 2017 • Thorsten Kurth, Jian Zhang, Nadathur Satish, Ioannis Mitliagkas, Evan Racah, Mostofa Ali Patwary, Tareq Malas, Narayanan Sundaram, Wahid Bhimji, Mikhail Smorkalov, Jack Deslippe, Mikhail Shiryaev, Srinivas Sridharan, Prabhat, Pradeep Dubey
This paper presents the first 15-PetaFLOP deep learning system for solving scientific pattern classification problems on contemporary HPC architectures.
1 code implementation • CVPR 2018 • Jian Zhang, Bernard Ghanem
With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones.
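The "structure insight" being combined here is the classical iterative shrinkage-thresholding algorithm (ISTA); the network unrolls its iterations and learns the parameters. Below is a sketch of one plain ISTA iteration, not the learned network itself (the step size `rho` and threshold `theta` are fixed by hand here, whereas the network would learn them along with a transform):

```python
import numpy as np

def soft_threshold(v, theta):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_step(x, y, Phi, rho, theta):
    """One ISTA iteration for min_x 0.5||Phi x - y||^2 + theta||x||_1:
    a gradient step on the data-fidelity term, then soft-thresholding."""
    r = x - rho * Phi.T @ (Phi @ x - y)
    return soft_threshold(r, theta)

# Toy CS recovery of a 4-sparse signal from 32 random measurements.
rng = np.random.default_rng(0)
n, m = 64, 32
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:4] = 1.0
y = Phi @ x_true

x = np.zeros(n)
for _ in range(200):
    x = ista_step(x, y, Phi, rho=0.1, theta=0.01)
err = np.linalg.norm(x - x_true)
```

Unrolling a fixed number of such steps into network layers, with the shrinkage replaced by learned nonlinear transforms, is what gives the method both interpretability and speed.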
no code implementations • 17 Jun 2017 • Zhiqiang Zeng, Jian Zhang, Xiaodong Wang, Yuming Chen, Chaoyang Zhu
Place recognition is one of the most fundamental topics in computer vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image.
no code implementations • 13 Jun 2017 • Cunjing Ge, Feifei Ma, Tian Liu, Jian Zhang
Constrained counting is important in domains ranging from artificial intelligence to software analysis.
2 code implementations • ICLR 2018 • Jian Zhang, Ioannis Mitliagkas
We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam.
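The two scalars in question are exactly the two hyperparameters of the classical momentum update. A minimal sketch on a toy quadratic (the quadratic objective and all values here are illustrative, not from the paper):

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr, momentum, steps):
    """Classical momentum SGD: a velocity accumulates past gradients,
    and the only knobs are one learning rate and one momentum value."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = momentum * v - lr * grad_fn(w)
        w = w + v
    return w

# Minimize f(w) = 0.5 ||w||^2, whose gradient is simply w.
w_final = sgd_momentum(lambda w: w, np.ones(3),
                       lr=0.1, momentum=0.9, steps=200)
```

The paper's point is that tuning these two scalars well, rather than switching to a per-coordinate adaptive method, is often sufficient.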
no code implementations • 16 Mar 2017 • Yazhou Yao, Jian Zhang, Fumin Shen, Xian-Sheng Hua, Wankou Yang, Zhenmin Tang
To tackle these problems, in this work, we exploit general corpus information to automatically select and subsequently classify web images into semantic rich (sub-)categories.
no code implementations • 8 Dec 2016 • Jian Zhang, Yuxin Peng
On the other hand, different hash bits actually contribute to image retrieval differently, and treating them equally greatly degrades retrieval accuracy.
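Weighting hash bits rather than treating them equally amounts to replacing the plain Hamming distance with a weighted one. A minimal sketch (the weights here are illustrative constants; in the paper they would be learned):

```python
import numpy as np

def weighted_hamming(h1, h2, w):
    """Weighted Hamming distance sketch: each differing hash bit
    contributes its own weight instead of a uniform 1."""
    return float(np.sum(w * (h1 != h2)))

h1 = np.array([0, 1, 1, 0])
h2 = np.array([1, 1, 0, 0])
w = np.array([0.5, 1.0, 2.0, 1.0])   # hypothetical per-bit weights
d = weighted_hamming(h1, h2, w)      # bits 0 and 2 differ -> 0.5 + 2.0
```

Ranking retrieval results by such a weighted distance lets informative bits dominate the ordering.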
no code implementations • 4 Dec 2016 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu
Recent state-of-the-art approaches to multi-label image classification exploit the label dependencies in an image, at global level, largely improving the labeling capacity.
no code implementations • COLING 2016 • Jian Zhang, Xiaofeng Wu, Andy Way, Qun Liu
We show that the neural LM perplexity can be reduced by 7.395 and 12.011 using the proposed domain adaptation mechanism on the Penn Treebank and News data, respectively.
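For reference, perplexity is the exponential of the average per-token negative log-likelihood, so the reported reductions correspond directly to a lower average NLL. A minimal sketch (the uniform-model example is illustrative, not from the paper):

```python
import math

def perplexity(neg_log_probs):
    """Perplexity from per-token negative log-probabilities
    (natural log): exp of the mean NLL."""
    return math.exp(sum(neg_log_probs) / len(neg_log_probs))

# A uniform model over a 100-token vocabulary assigns each token
# probability 1/100, so its perplexity is exactly 100.
nlls = [math.log(100)] * 50
ppl = perplexity(nlls)
```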
no code implementations • COLING 2016 • Jian Zhang, Liangyou Li, Andy Way, Qun Liu
In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance.
no code implementations • 22 Nov 2016 • Yazhou Yao, Jian Zhang, Fumin Shen, Xian-Sheng Hua, Jingsong Xu, Zhenmin Tang
To reduce the cost of manual labelling, there has been increased research interest in automatically constructing image datasets by exploiting web images.
no code implementations • 24 Oct 2016 • Xiaoshui Huang, Jian Zhang, Qiang Wu, Lixin Fan, Chun Yuan
In this paper, different from previous ICP-based methods, and from a statistical view, we propose an effective coarse-to-fine algorithm to detect and register a small-scale SFM point cloud in a large-scale Lidar point cloud.
no code implementations • 18 Aug 2016 • Xiaoshui Huang, Jian Zhang, Lixin Fan, Qiang Wu, Chun Yuan
We propose a systematic approach for registering cross-source point clouds.
no code implementations • 17 Aug 2016 • Xiang Zhang, Jiarui Sun, Siwei Ma, Zhouchen Lin, Jian Zhang, Shiqi Wang, Wen Gao
Therefore, introducing an accurate rate-constraint in sparse coding and dictionary learning becomes meaningful, which has not been fully exploited in the context of sparse representation.
no code implementations • 28 Jul 2016 • Jian Zhang, Yuxin Peng
(2) A semi-supervised deep hashing network is designed to extensively exploit both labeled and unlabeled data, in which we propose an online graph construction method to benefit from the evolving deep features during training to better capture semantic neighbors.
no code implementations • 23 Jun 2016 • Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, Christopher Ré
Consider a number of workers running SGD independently on the same pool of data and averaging the models every once in a while -- a common but not well understood practice.
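The practice being analyzed is easy to simulate. The toy below is an illustrative sketch, not the paper's experimental setup (the quadratic per-worker objective, noise level, and averaging period are all assumptions): several workers run independent noisy SGD and periodically replace their models with the average.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim = 4, 5
target = rng.standard_normal(dim)   # shared optimum of every worker's loss

# Each worker minimizes f(w) = 0.5 ||w - target||^2 with noisy gradients;
# every `avg_every` steps all models are synchronized to their average.
workers = [np.zeros(dim) for _ in range(n_workers)]
lr, avg_every, total_steps = 0.1, 10, 100
for t in range(total_steps):
    for i in range(n_workers):
        noise = 0.1 * rng.standard_normal(dim)   # stochastic gradient noise
        grad = (workers[i] - target) + noise
        workers[i] = workers[i] - lr * grad
    if (t + 1) % avg_every == 0:
        mean_w = sum(workers) / n_workers
        workers = [mean_w.copy() for _ in range(n_workers)]

final_err = np.linalg.norm(workers[0] - target)
```

Averaging reduces the gradient-noise variance roughly by the number of workers, which is the intuition the paper makes precise.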
18 code implementations • EMNLP 2016 • Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
no code implementations • ICCV 2015 • Jian Zhang, Josip Djolonga, Andreas Krause
Higher-order models have been shown to be very useful for a plethora of computer vision tasks.
no code implementations • 1 Jul 2015 • Cunjing Ge, Feifei Ma, Jian Zhang
There are already quite a few tools for solving the Satisfiability Modulo Theories (SMT) problems.
no code implementations • CVPR 2015 • Hangfan Liu, Ruiqin Xiong, Jian Zhang, Wen Gao
To estimate the expectation and variance parameters for the transform bands of a particular patch, we exploit the non-local correlation of image and collect a set of similar patches as data samples to form the distribution.
no code implementations • NeurIPS 2014 • Jian Zhang, Alex Schwing, Raquel Urtasun
To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields.
no code implementations • 28 Oct 2014 • Tianfei Zhou, Yao Lu, Feng Lv, Huijun Di, Qingjie Zhao, Jian Zhang
Stochastic sampling based trackers have shown good performance for abrupt motion tracking and have therefore gained popularity in recent years.
1 code implementation • 14 May 2014 • Jian Zhang, Debin Zhao, Wen Gao
In this paper, instead of using patch as the basic unit of sparse representation, we exploit the concept of group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation modeling of natural images, called group-based sparse representation (GSR).
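The group-construction step, stacking nonlocal patches with similar structures into one unit, can be sketched as follows. This is an illustrative fragment under assumptions (patch size, similarity measure, and group size are choices made here, not taken from the paper):

```python
import numpy as np

def build_group(patches, ref_idx, k):
    """GSR-style group construction sketch: find the k patches most
    similar (Euclidean distance) to a reference patch and stack them
    as columns; sparse coding then operates on the whole group matrix
    rather than on a single patch."""
    ref = patches[ref_idx]
    dists = np.linalg.norm(patches - ref, axis=1)
    idx = np.argsort(dists)[:k]
    return patches[idx].T, idx   # (patch_dim, k) group matrix

rng = np.random.default_rng(0)
patches = rng.standard_normal((50, 16))   # 50 vectorized 4x4 patches
group, idx = build_group(patches, ref_idx=0, k=5)
```

Because similar patches share structure, the group matrix is close to low-rank, which is what makes a shared sparse representation over the group effective.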
no code implementations • 11 May 2014 • Jian Zhang, Debin Zhao, Ruiqin Xiong, Siwei Ma, Wen Gao
This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner.
no code implementations • 30 Apr 2014 • Jian Zhang, Chen Zhao, Debin Zhao, Wen Gao
From many fewer acquired measurements than suggested by the Nyquist sampling theory, compressive sensing (CS) theory demonstrates that a signal can be reconstructed with high probability when it exhibits sparsity in some domain.
no code implementations • 29 Apr 2014 • Jian Zhang, Debin Zhao, Feng Jiang
At the encoder, for each block of compressive sensing (CS) measurements, the optimal prediction is selected from a set of prediction candidates generated by four designed directional predictive modes.
no code implementations • 29 Apr 2014 • Jian Zhang, Debin Zhao, Feng Jiang, Wen Gao
Compressive Sensing (CS) theory shows that a signal can be decoded from many fewer measurements than suggested by the Nyquist sampling theory, when the signal is sparse in some domain.
no code implementations • 24 May 2013 • Yin Song, Longbing Cao, Xuhui Fan, Wei Cao, Jian Zhang
The sequence-level latent parameters of each sequence are modeled as latent Dirichlet random variables and parameterized by a set of deterministic database-level hyper-parameters.
no code implementations • NeurIPS 2011 • Dan Zhang, Yan Liu, Luo Si, Jian Zhang, Richard D. Lawrence
Ignoring this structure information limits the performance of existing MIL algorithms.
no code implementations • 29 Sep 2010 • Sakrapee Paisitkriangkrai, Chunhua Shen, Jian Zhang
There is an abundant literature on face detection due to its important role in many vision applications.