2 code implementations • 16 Feb 2023 • Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, XiaoHu Qie
In this paper, we aim to "dig out" the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation with finer granularity.
19 code implementations • EMNLP 2016 • Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
3 code implementations • 1 Dec 2022 • Yinhuai Wang, Jiwen Yu, Jian Zhang
Most existing Image Restoration (IR) models are task-specific and cannot be generalized to different degradation operators.
Ranked #1 on Image Compressed Sensing on CelebA
1 code implementation • 1 Mar 2023 • Yinhuai Wang, Jiwen Yu, Runyi Yu, Jian Zhang
Our simple, parameter-free approaches can be used not only for image restoration but also for image generation of unlimited sizes, with the potential to be a general tool for diffusion models.
1 code implementation • 5 Jul 2023 • Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, Jian Zhang
Specifically, we construct classifier guidance based on the strong correspondence of intermediate features in the diffusion model.
1 code implementation • 4 Feb 2024 • Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, Jian Zhang
Large-scale Text-to-Image (T2I) diffusion models have revolutionized image generation over the last few years.
2 code implementations • ICLR 2018 • Jian Zhang, Ioannis Mitliagkas
We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam.
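As a rough illustration of the update this entry refers to, here is plain heavy-ball momentum SGD on a toy quadratic; the function name and hyperparameters are illustrative, not the paper's tuned values:

```python
import numpy as np

def momentum_sgd_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One heavy-ball momentum SGD update: v <- m*v - lr*g; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([2.0, -3.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = momentum_sgd_step(w, grad=w, velocity=v)
# ||w|| has shrunk toward zero after the loop
```

Hand-tuning here means picking just `lr` and `momentum`, the two scalars the abstract argues are sufficient to compete with Adam.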
1 code implementation • 6 Dec 2023 • Jiwen Yu, Xiaodong Cun, Chenyang Qi, Yong Zhang, Xintao Wang, Ying Shan, Jian Zhang
For appearance control, we borrow intermediate latents and their features from text-to-image (T2I) generation to ensure that the generated first frame is equal to the given generated image.
3 code implementations • NeurIPS 2021 • Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, Yongbing Zhang
Multiple instance learning (MIL) is a powerful tool to solve the weakly supervised classification in whole slide image (WSI) based pathology diagnosis.
2 code implementations • ICLR 2018 • Yichen Gong, Heng Luo, Jian Zhang
The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis.
Ranked #12 on Paraphrase Identification on Quora Question Pairs (Accuracy metric)
1 code implementation • ICCV 2023 • Jiwen Yu, Yinhuai Wang, Chen Zhao, Bernard Ghanem, Jian Zhang
In this work, we propose a Training-Free conditional Diffusion Model (FreeDoM) for various conditions.
1 code implementation • CVPR 2018 • Jian Zhang, Bernard Ghanem
With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones.
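The optimization-based backbone alluded to here is proximal gradient descent; a minimal ISTA sketch for sparse CS recovery (a generic illustration of the structure being combined with networks, not the paper's learned model) might look like:

```python
import numpy as np

def ista(Phi, y, lam=0.1, step=None, iters=200):
    """Iterative shrinkage-thresholding for min_x 0.5||Phi x - y||^2 + lam||x||_1:
    a gradient step on the data term, then a soft-threshold proximal step."""
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # 1/L, L = Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        r = x - step * Phi.T @ (Phi @ x - y)           # gradient descent step
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)     # CS sampling matrix
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]                 # sparse ground truth
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.01, iters=500)
```

Deep unfolding methods replace pieces of each iteration (step sizes, the proximal operator) with learned modules while keeping this two-step structure.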
1 code implementation • 25 Feb 2019 • Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, Xinyan Deng
Here, we present an open source high fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs.
1 code implementation • ICCV 2023 • Shuzhou Yang, Moxuan Ding, Yanmin Wu, Zihan Li, Jian Zhang
Finally, extensive experiments demonstrate the robustness and superior effectiveness of our proposed NeRCo.
1 code implementation • 4 Dec 2019 • Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, Wenjun Zhang
In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to source and target domains.
1 code implementation • CVPR 2020 • Xiaoshui Huang, Guofeng Mei, Jian Zhang
We present a fast feature-metric point cloud registration framework, which enforces the optimisation of registration by minimising a feature-metric projection error without correspondences.
1 code implementation • 10 May 2022 • Chong Mou, Yanze Wu, Xintao Wang, Chao Dong, Jian Zhang, Ying Shan
Instead of using known degradation levels as explicit supervision to the interactive mechanism, we propose a metric learning strategy to map the unquantifiable degradation levels in real-world scenarios to a metric space, which is trained in an unsupervised manner.
2 code implementations • 12 Jul 2022 • Yuyang Long, Qilong Zhang, Boheng Zeng, Lianli Gao, Xianglong Liu, Jian Zhang, Jingkuan Song
Specifically, we apply a spectrum transformation to the input and thus perform the model augmentation in the frequency domain.
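A generic sketch of a spectrum transformation for input augmentation (transform to the frequency domain, apply a random per-frequency gain plus noise, invert; the function name and parameters are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def spectrum_transform(x, sigma=0.5, rho=0.1, rng=None):
    """Augment an input in the frequency domain: FFT -> random spectral
    scaling plus additive spectral noise -> inverse FFT."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.fft2(x)
    scale = 1.0 + sigma * rng.standard_normal(x.shape)   # random per-frequency gain
    noise = rho * rng.standard_normal(x.shape)           # additive spectral noise
    return np.real(np.fft.ifft2(X * scale + noise))

img = np.ones((8, 8))
aug = spectrum_transform(img, rng=np.random.default_rng(0))
```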
1 code implementation • 25 May 2023 • Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, Jian Zhang
In this paper, we ask: can we enhance open-source LLMs to be competitive with leading closed LLM APIs in tool manipulation, given a practical amount of human supervision?
1 code implementation • CVPR 2022 • Chong Mou, Qian Wang, Jian Zhang
Concretely, without loss of interpretability, we integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm, driving it to deal with complex and real-world image degradation.
1 code implementation • 16 Mar 2022 • Yinhuai Wang, Yujie Hu, Jian Zhang
Emerging high-quality face restoration (FR) methods often utilize pre-trained GAN models (i.e., StyleGAN2) as a GAN prior.
2 code implementations • CVPR 2023 • Yanmin Wu, Xinhua Cheng, Renrui Zhang, Zesen Cheng, Jian Zhang
3D visual grounding aims to find the object within point clouds mentioned by free-form natural language descriptions with rich semantic cues.
1 code implementation • ICCV 2021 • Chong Mou, Jian Zhang, Zhuoyuan Wu
Specifically, we propose an improved graph model to perform patch-wise graph convolution with a dynamic and adaptive number of neighbors for each node.
1 code implementation • ICLR 2020 • Yifan Hou, Jian Zhang, James Cheng, Kaili Ma, Richard T. B. Ma, Hongzhi Chen, Ming-Chang Yang
Graph neural networks (GNNs) have been widely used for representation learning on graph data.
1 code implementation • ICCV 2023 • Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, Jian Zhang
1) Learning: the pre-trained model adapts to the new task by tuning an online PET module, with our adaptation speed calibration to align different PET modules. 2) Accumulation: the task-specific knowledge learned by the online PET module is accumulated into an offline PET module through momentum updates. 3) Ensemble: during inference, we construct two experts from the online and offline PET modules (favored by novel and historical tasks, respectively) for prediction ensembling.
1 code implementation • 24 Nov 2022 • Yinhuai Wang, Yujie Hu, Jiwen Yu, Jian Zhang
Consistency and realness have always been the two critical issues of image super-resolution.
1 code implementation • 10 Mar 2022 • Yinhuai Wang, Shuzhou Yang, Yujie Hu, Jian Zhang
Unlike the pinhole, the thin lens refracts rays of a scene point, so its imaging on the sensor plane is scattered as a circle of confusion (CoC).
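The scattering described here is quantified by the standard thin-lens circle-of-confusion diameter; a small sketch assuming the textbook formula (not code from the paper):

```python
def coc_diameter(f, N, s_focus, s_obj):
    """Thin-lens circle-of-confusion diameter (all lengths in the same unit).
    f: focal length, N: f-number, s_focus: focused distance, s_obj: object distance."""
    A = f / N                                           # aperture diameter
    return A * abs(s_obj - s_focus) / s_obj * f / (s_focus - f)

# A 50 mm f/2 lens focused at 2 m, with an object at 3 m (units: mm):
c = coc_diameter(50.0, 2.0, 2000.0, 3000.0)
# an object at the focused distance maps to a point (zero CoC)
```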
1 code implementation • 26 Jun 2022 • Hao Bian, Zhuchen Shao, Yang Chen, Yifeng Wang, Haoqian Wang, Jian Zhang, Yongbing Zhang
We achieve state-of-the-art performance on the SICAPv2 dataset, and visual analysis shows accurate prediction results at the instance level.
1 code implementation • 22 Mar 2021 • Di You, Jingfen Xie, Jian Zhang
While deep neural networks have achieved impressive success in image compressive sensing (CS), most of them lack flexibility when dealing with multi-ratio tasks and multi-scene images in practical applications.
1 code implementation • 31 Dec 2023 • Weijian Mai, Jian Zhang, Pengfei Fang, Zhijun Zhang
This survey comprehensively examines the emerging field of AIGC-based Brain-conditional Multimodal Synthesis, termed AIGC-Brain, to delineate the current landscape and future directions.
2 code implementations • 10 Mar 2021 • Chong Mou, Jian Zhang, Xiaopeng Fan, Hangfan Liu, Ronggang Wang
Local and non-local attention-based methods have been well studied in various image restoration tasks while leading to promising performance.
1 code implementation • 17 Oct 2021 • Yinghuan Shi, Jian Zhang, Tong Ling, Jiwen Lu, Yefeng Zheng, Qian Yu, Lei Qi, Yang Gao
In semi-supervised medical image segmentation, most previous works draw on the common assumption that higher entropy means higher uncertainty.
1 code implementation • ICCV 2023 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
To deal with the domain shift between training and test samples, current methods have primarily focused on learning generalizable features during training and ignore the specificity of unseen samples that are also critical during the test.
1 code implementation • CVPR 2022 • Xiyao Liu, Ziping Ma, Junxing Ma, Jian Zhang, Gerald Schaefer, Hui Fang
Conventional steganography approaches embed a secret message into a carrier for concealed communication but are prone to attack by recent advanced steganalysis tools.
1 code implementation • CVPR 2022 • Xuanyu Zhang, Yongbing Zhang, Ruiqin Xiong, Qilin Sun, Jian Zhang
Hyperspectral imaging is an essential imaging modality for a wide range of applications, especially in remote sensing, agriculture, and medicine.
1 code implementation • CVPR 2021 • Yazhou Yao, Tao Chen, GuoSen Xie, Chuanyi Zhang, Fumin Shen, Qi Wu, Zhenmin Tang, Jian Zhang
To further mine the non-salient region objects, we propose to exert the segmentation network's self-correction ability.
1 code implementation • 4 Jun 2019 • Guodong Ding, Salman Khan, Zhenmin Tang, Jian Zhang, Fatih Porikli
With this insight, we design a novel Dispersion-based Clustering (DBC) approach which can discover the underlying patterns in data.
Ranked #19 on Unsupervised Person Re-Identification on Market-1501
2 code implementations • 17 May 2021 • Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, Hanlin Goh
Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.
1 code implementation • 17 Mar 2024 • Dian Zheng, Xiao-Ming Wu, Shuzhou Yang, Jian Zhang, Jian-Fang Hu, Wei-Shi Zheng
Universal image restoration is a practical and potential computer vision task for real-world applications.
1 code implementation • 14 May 2014 • Jian Zhang, Debin Zhao, Wen Gao
In this paper, instead of using patch as the basic unit of sparse representation, we exploit the concept of group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation modeling of natural images, called group-based sparse representation (GSR).
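The "group" unit can be illustrated by simple block matching: stack the k patches most similar to a reference patch as the columns of one matrix (a simplified sketch of the grouping step, not the paper's implementation):

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p-by-p patches, keyed by top-left corner, flattened."""
    H, W = img.shape
    return {(i, j): img[i:i + p, j:j + p].ravel()
            for i in range(H - p + 1) for j in range(W - p + 1)}

def build_group(img, ref_ij, p=4, k=8):
    """Group matrix: the k patches nearest (in Euclidean distance) to the
    reference patch, stacked as columns of a (p*p) x k matrix."""
    patches = extract_patches(img, p)
    ref = patches[ref_ij]
    ranked = sorted(patches, key=lambda ij: np.sum((patches[ij] - ref) ** 2))
    return np.stack([patches[ij] for ij in ranked[:k]], axis=1)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
G = build_group(img, (0, 0), p=4, k=8)   # 16 x 8 group matrix
```

Sparse coding is then performed on `G` as a whole, so nonlocal self-similarity is exploited directly inside the representation unit.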
1 code implementation • ICCV 2021 • Zhuoyuan Wu, Jian Zhang, Chong Mou
To better exploit the spatial-temporal correlation among frames and address the problem of information loss between adjacent phases in existing DUNs, we propose to adopt the 3D-CNN prior in our proximal mapping module and develop a novel dense feature map (DFM) strategy, respectively.
1 code implementation • 25 Jul 2022 • Chong Mou, Jian Zhang
Compressive learning (CL) is an emerging framework that integrates signal acquisition via compressed sensing (CS) and machine learning for inference tasks directly on a small number of measurements.
1 code implementation • 9 Nov 2022 • Jie Wu, Ying Peng, Shengming Zhang, Weigang Qi, Jian Zhang
MVLT is trained in two stages: in the first stage, we design an STR-tailored pretraining method based on a masking strategy; in the second stage, we fine-tune our model and adopt an iterative correction method to improve performance.
1 code implementation • 27 Mar 2020 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Semantic segmentation in a supervised learning manner has achieved significant progress in recent years.
1 code implementation • 19 Jul 2022 • Bin Chen, Jian Zhang
To more efficiently address image compressed sensing (CS) problems, we present a novel content-aware scalable network dubbed CASNet which collectively achieves adaptive sampling rate allocation, fine granular scalability and high-quality reconstruction.
Ranked #1 on Image Compressed Sensing on CBSD68
1 code implementation • 16 Oct 2023 • Tingyu Xie, Qi Li, Jian Zhang, Yan Zhang, Zuozhu Liu, Hongwei Wang
Large language models (LLMs) exhibited powerful capability in various natural language processing tasks.
1 code implementation • 17 Oct 2022 • Guofeng Mei, Fabio Poiesi, Cristiano Saltori, Jian Zhang, Elisa Ricci, Nicu Sebe
Probabilistic 3D point cloud registration methods have shown competitive performance in overcoming noise, outliers, and density variations.
1 code implementation • 28 Jun 2023 • Jiechong Song, Bin Chen, Jian Zhang
Deep unfolding network (DUN) that unfolds the optimization algorithm into a deep neural network has achieved great success in compressive sensing (CS) due to its good interpretability and high performance.
1 code implementation • ICCV 2021 • Zeren Sun, Yazhou Yao, Xiu-Shen Wei, Yongshun Zhang, Fumin Shen, Jianxin Wu, Jian Zhang, Heng-Tao Shen
Learning from the web can ease the extreme dependence of deep learning on large-scale manually labeled datasets.
1 code implementation • 30 Jun 2023 • Zhuchen Shao, Yang Chen, Hao Bian, Jian Zhang, Guojun Liu, Yongbing Zhang
Many studies adopt a random sampling pre-processing strategy and WSI-level aggregation models, which inevitably lose critical prognostic information in the patient-level bag.
1 code implementation • CVPR 2023 • Jiechong Song, Chong Mou, Shiqi Wang, Siwei Ma, Jian Zhang
And, PGCA block achieves an enhanced information interaction, which introduces the inertia force into the gradient descent step through a cross attention block.
1 code implementation • 20 May 2020 • Yuqing Liu, Shiqi Wang, Jian Zhang, Shanshe Wang, Siwei Ma, Wen Gao
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
1 code implementation • 8 Jun 2020 • Guoji Fu, Yifan Hou, Jian Zhang, Kaili Ma, Barakeel Fanseu Kamhoua, James Cheng
This paper aims to provide a theoretical framework to understand GNNs, specifically, spectral graph convolutional networks and graph attention networks, from graph signal denoising perspectives.
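The denoising view can be made concrete: one smoothing step solves min_F ||F − X||² + c·tr(FᵀLF), whose closed form F = (I + cL)⁻¹X smooths node features over the graph. A tiny numpy sketch (illustrative of the framework, not the paper's code):

```python
import numpy as np

def laplacian(A):
    """Unnormalized graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

def denoise(X, A, c=1.0):
    """Graph signal denoising: argmin_F ||F - X||_F^2 + c * tr(F^T L F),
    solved in closed form as F = (I + cL)^{-1} X."""
    L = laplacian(A)
    return np.linalg.solve(np.eye(len(A)) + c * L, X)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = np.array([[1.0], [0.0], [1.0]])                           # "noisy" node signal
F = denoise(X, A, c=1.0)  # neighboring values are pulled toward each other
```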
1 code implementation • 31 Dec 2021 • Dongjie Ye, Zhangkai Ni, Hanli Wang, Jian Zhang, Shiqi Wang, Sam Kwong
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
1 code implementation • 22 Feb 2019 • Jinyin Chen, Jian Zhang, Xuanheng Xu, Chengbo Fu, Dan Zhang, Qingpeng Zhang, Qi Xuan
Predicting the potential relations between nodes in networks, known as link prediction, has long been a challenge in network science.
1 code implementation • CVPR 2023 • Chong Mou, Youmin Xu, Jiechong Song, Chen Zhao, Bernard Ghanem, Jian Zhang
For large-capacity, we present a reversible pipeline to perform multiple videos hiding and recovering through a single invertible neural network (INN).
1 code implementation • Tiny Papers @ ICLR 2023 • Xiao Liu, Jian Zhang, Heng Zhang, Fuzhao Xue, Yang You
We evaluate our model on various dialogue understanding tasks including dialogue relation extraction, dialogue emotion recognition, and dialogue act classification.
Ranked #1 on Dialog Relation Extraction on DialogRE
1 code implementation • NeurIPS 2019 • Avner May, Jian Zhang, Tri Dao, Christopher Ré
Finally, we show that by using the eigenspace overlap score as a selection criterion between embeddings drawn from a representative set we compressed, we can efficiently identify the better-performing embedding with up to 2× lower selection error rates than the next best measure of compression quality, and avoid the cost of training a model for each task of interest.
1 code implementation • 15 Jul 2021 • Di You, Jian Zhang, Jingfen Xie, Bin Chen, Siwei Ma
In this paper, we propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems of arbitrary-sampling matrices (including unseen sampling matrices) with one single model.
1 code implementation • 9 Mar 2018 • Christopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R. Aberger, Kunle Olukotun, Christopher Ré
Low-precision computation is often used to lower the time and energy cost of machine learning, and recently hardware accelerators have been developed to support it.
1 code implementation • 31 Oct 2018 • Jian Zhang, Avner May, Tri Dao, Christopher Ré
We investigate how to train kernel approximation methods that generalize well under a memory budget.
1 code implementation • 6 Oct 2022 • Guofeng Mei, Cristiano Saltori, Fabio Poiesi, Jian Zhang, Elisa Ricci, Nicu Sebe, Qiang Wu
Unsupervised learning on 3D point clouds has undergone a rapid evolution, especially thanks to data augmentation-based contrastive methods.
1 code implementation • 18 Jul 2023 • Bin Chen, Jiechong Song, Jingfen Xie, Jian Zhang
By absorbing the merits of both model- and data-driven methods, the deep physics-engaged learning scheme achieves high-accuracy and interpretable image reconstruction.
1 code implementation • 26 Feb 2019 • Runsheng Zhang, Yaping Huang, Mengyang Pu, Jian Zhang, Qingji Guan, Qi Zou, Haibin Ling
To tackle this problem, we propose a simple but effective pattern mining-based method, called Object Location Mining (OLM), which exploits the advantages of data mining and feature representation of pre-trained convolutional neural networks (CNNs).
1 code implementation • 11 Jul 2023 • Jian Zhang, Runwei Ding, Miaoju Ban, Ge Yang
It follows the unsupervised setting, in which only normal (defect-free) images are used for training.
1 code implementation • 10 Dec 2022 • Runyi Yu, Zhennan Wang, Yinhuai Wang, Kehan Li, Yian Zhao, Jian Zhang, Guoli Song, Jie Chen
By analyzing the input and output of each encoder layer in VTs using reparameterization and visualization, we find that the default PE joining method (simply adding the PE and patch embedding together) operates the same affine transformation to token embedding and PE, which limits the expressiveness of PE and hence constrains the performance of VTs.
1 code implementation • 29 Feb 2020 • Megan Leszczynski, Avner May, Jian Zhang, Sen Wu, Christopher R. Aberger, Christopher Ré
To theoretically explain this tradeoff, we introduce a new measure of embedding instability, the eigenspace instability measure, which we prove bounds the disagreement in downstream predictions introduced by the change in word embeddings.
1 code implementation • 19 Oct 2021 • Jiechong Song, Bin Chen, Jian Zhang
By understanding DUNs from the perspective of the human brain's memory processing, we find there exists two issues in existing DUNs.
1 code implementation • 8 Jan 2023 • Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling Zhang
(2) How to enhance the perception of reasoning types for the models?
2 code implementations • 10 Nov 2020 • Jianhui Chang, Zhenghui Zhao, Chuanmin Jia, Shiqi Wang, Lingbo Yang, Qi Mao, Jian Zhang, Siwei Ma
To this end, we propose a novel conceptual compression framework that encodes visual data into compact structure and texture representations, then decodes in a deep synthesis fashion, aiming to achieve better visual reconstruction quality, flexible content manipulation, and potential support for various vision tasks.
1 code implementation • ICCV 2023 • Xiran Wang, Jian Zhang, Lei Qi, Yinghuan Shi
Domain generalization (DG) is proposed to deal with the issue of domain shift, which occurs when statistical differences exist between source and target domains.
1 code implementation • 26 Aug 2023 • Bin Chen, Xuanyu Zhang, Shuai Liu, Yongbing Zhang, Jian Zhang
Compressed sensing (CS) is a promising tool for reducing sampling costs.
2 code implementations • 10 Dec 2020 • Hugues Thomas, Ben Agro, Mona Gridseth, Jian Zhang, Timothy D. Barfoot
We provide insights into our network predictions and show that our approach can also improve the performances of common localization techniques.
1 code implementation • 24 Mar 2022 • Qiankun Gao, Chen Zhao, Bernard Ghanem, Jian Zhang
After RRL, the classification head is refined with global class-balanced classification loss to address the data imbalance issue as well as learn the decision boundaries between new and previous classes.
1 code implementation • 15 Jun 2023 • Zhili He, Wang Chen, Jian Zhang, Yu-Hsing Wang
Cracks provide an essential indicator of infrastructure performance degradation, and achieving high-precision pixel-level crack segmentation is an issue of concern.
1 code implementation • 27 Jul 2023 • Bo Yang, Xinyu Zhang, Jian Zhang, Jun Luo, Mingliang Zhou, Yangjun Pi
To address this problem, we propose a new adaptive threshold focal loss (ATFL) function that decouples the target and the background, and utilizes the adaptive mechanism to adjust the loss weight to force the model to allocate more attention to target features.
1 code implementation • NeurIPS 2020 • Zhibin Li, Jian Zhang, Yongshun Gong, Yazhou Yao, Qiang Wu
We present a model that utilizes linear models with variance and low-rank constraints, to help it generalize better and reduce the number of parameters.
1 code implementation • 23 Dec 2021 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Beyond the training stage, overfitting could also cause unstable prediction in the test stage.
1 code implementation • 24 Apr 2022 • Jingfen Xie, Jian Zhang, Yongbing Zhang, Xiangyang Ji
Compressed Sensing MRI (CS-MRI) aims at reconstructing de-aliased images from sub-Nyquist sampling k-space data to accelerate MR Imaging, thus presenting two basic issues, i.e., where to sample and how to reconstruct.
1 code implementation • 17 Oct 2023 • Yuxi Wei, Juntong Peng, Tong He, Chenxin Xu, Jian Zhang, Shirui Pan, Siheng Chen
To analyze multivariate time series, most previous methods assume regular subsampling of time series, where the interval between adjacent measurements and the number of samples remain unchanged.
1 code implementation • 7 Sep 2020 • Rongzheng Bian, Yumeng Xue, Liang Zhou, Jian Zhang, Baoquan Chen, Daniel Weiskopf, Yunhai Wang
We propose a visualization method to understand the effect of multidimensional projection on local subspaces, using implicit function differentiation.
1 code implementation • 7 Apr 2019 • Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Qiang Wu, Jingsong Xu
Unlike traditional deep bilinear networks for fine-grained classification, which adopt the self-bilinear pooling to capture the subtle features of images, the proposed model uses a novel pairwise bilinear pooling to compare the nuanced differences between base images and query images for learning a deep distance metric.
1 code implementation • 26 Dec 2023 • Weisong Sun, Chunrong Fang, Yudu You, Yuchen Chen, Yi Liu, Chong Wang, Jian Zhang, Quanjun Zhang, Hanwei Qian, Wei Zhao, Yang Liu, Zhenyu Chen
PromptCS trains a prompt agent that can generate continuous prompts to unleash the potential for LLMs in code summarization.
1 code implementation • 23 Jan 2021 • Huafeng Liu, Chuanyi Zhang, Yazhou Yao, Xiushen Wei, Fumin Shen, Jian Zhang, Zhenmin Tang
Labeling objects at a subordinate level typically requires expert knowledge, which is not always available when using random annotators.
1 code implementation • 26 Apr 2022 • Minghao Zhao, Le Wu, Yile Liang, Lei Chen, Jian Zhang, Qilin Deng, Kai Wang, Xudong Shen, Tangjie Lv, Runze Wu
While conventional CF models are known for facing the challenges of the popularity bias that favors popular items, one may wonder "Whether the existing graph-based CF models alleviate or exacerbate popularity bias of recommender systems?"
1 code implementation • 13 Apr 2024 • Qinghe Ma, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
To fully utilize the information within the intermediate domain, we propose a symmetric Guidance training strategy (SymGD), which additionally offers direct guidance to unlabeled data by merging pseudo labels from intermediate samples.
1 code implementation • 11 Oct 2018 • Yucheng Wang, Jialiang Shen, Jian Zhang
In this way, feature information propagates from a single dense block to all subsequent blocks, instead of to a single successor.
1 code implementation • 6 Oct 2020 • Jialiang Shen, Yucheng Wang, Jian Zhang
For SR of small-scales (between 1 and 2), images are constructed by interpolation from a sparse set of precalculated Laplacian pyramid levels.
1 code implementation • 7 Nov 2023 • Zhili He, Yu-Hsing Wang, Jian Zhang
This study proposes a comprehensive solution.
1 code implementation • 12 Feb 2024 • Meng-Chieh Lee, Haiyang Yu, Jian Zhang, Vassilis N. Ioannidis, Xiang Song, Soji Adeshina, Da Zheng, Christos Faloutsos
Given a node-attributed graph, and a graph task (link prediction or node classification), can we tell if a graph neural network (GNN) will perform well?
1 code implementation • ICML 2018 • Mario Srouji, Jian Zhang, Ruslan Salakhutdinov
The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module.
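The split described here can be sketched in a few lines (shapes and initialization are illustrative, not the paper's trained modules):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(s, W1, W2):
    """Nonlinear control module: a one-hidden-layer tanh MLP."""
    return W2 @ np.tanh(W1 @ s)

def scn_action(s, W1, W2, K):
    """Structured Control Net sketch: the action is the sum of a nonlinear
    (MLP) term and a linear feedback term, u = f_nl(s) + K s."""
    return mlp(s, W1, W2) + K @ s

s = rng.standard_normal(4)                         # state
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((2, 8))
K = rng.standard_normal((2, 4))                    # linear gain matrix
u = scn_action(s, W1, W2, K)                       # 2-dim action
```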
1 code implementation • 3 Nov 2020 • Litao Yu, Yongsheng Gao, Jun Zhou, Jian Zhang
Recent research on deep neural networks (DNNs) has primarily focused on improving the model accuracy.
1 code implementation • 22 Feb 2021 • Tao Chen, GuoSen Xie, Yazhou Yao, Qiong Wang, Fumin Shen, Zhenmin Tang, Jian Zhang
Then we utilize the fused prototype to guide the final segmentation of the query image.
1 code implementation • 26 Apr 2022 • Yuqing Liu, Qi Jia, Jian Zhang, Xin Fan, Shanshe Wang, Siwei Ma, Wen Gao
Existing BDE methods have no unified solution for various BDE situations, and directly learn a mapping for each pixel from LBD image to the desired value in HBD image, which may change the given high-order bits and lead to a huge deviation from the ground truth.
1 code implementation • 10 Jul 2022 • Litao Yu, Zhibin Li, Jian Zhang, Qiang Wu
Scene segmentation in images is a fundamental yet challenging problem in visual content understanding, which is to learn a model to assign every image pixel to a categorical label.
no code implementations • 4 Jun 2018 • Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Re, Matei Zaharia
In this work, we analyze the entries from DAWNBench, which received optimized submissions from multiple industrial groups, to investigate the behavior of TTA as a metric as well as trends in the best-performing entries.
no code implementations • 16 May 2018 • Guodong Ding, Shanshan Zhang, Salman Khan, Zhenmin Tang, Jian Zhang, Fatih Porikli
Our approach measures the affinity of unlabeled samples with the underlying clusters of labeled data samples using the intermediate feature representations from deep networks.
no code implementations • 17 Aug 2016 • Xiang Zhang, Jiarui Sun, Siwei Ma, Zhouchen Lin, Jian Zhang, Shiqi Wang, Wen Gao
Therefore, introducing an accurate rate-constraint in sparse coding and dictionary learning becomes meaningful, which has not been fully exploited in the context of sparse representation.
no code implementations • 19 Feb 2018 • Emilio Parisotto, Devendra Singh Chaplot, Jian Zhang, Ruslan Salakhutdinov
The ability for an agent to localize itself within an environment is crucial for many real-world applications.
no code implementations • 7 Feb 2018 • Yuxin Peng, Jian Zhang, Zhaoda Ye
Inspired by the sequential decision ability of deep reinforcement learning, we propose a new Deep Reinforcement Learning approach for Image Hashing (DRLIH).
no code implementations • 7 Feb 2018 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
(2) They ignore the rich information contained in the large amount of unlabeled data across different modalities, especially the margin examples that are easily retrieved incorrectly, which can help to model the correlations.
no code implementations • 21 Jan 2018 • Yan Huang, Jinsong Xu, Qiang Wu, Zhedong Zheng, Zhao-Xiang Zhang, Jian Zhang
Unlike the traditional label, which is usually a single integer, the virtual label proposed in this work is a set of weight-based values, each of which is a number in (0, 1] called a multi-pseudo label and reflects the degree of relation between each generated sample and every pre-defined class of real data.
no code implementations • 1 Dec 2017 • Jian Zhang, Yuxin Peng, Mingkuan Yuan
To address the above problem, in this paper we propose an Unsupervised Generative Adversarial Cross-modal Hashing approach (UGACH), which makes full use of GAN's ability for unsupervised representation learning to exploit the underlying manifold structure of cross-modal data.
no code implementations • 21 Nov 2017 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images is proven to be an inscrutable challenge.
no code implementations • 19 Nov 2017 • Jun-Jie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu
These comments can be a description of the image, or some objects, attributes, scenes in it, which are normally used as the user-provided tags.
no code implementations • 28 Jul 2016 • Jian Zhang, Yuxin Peng
(2) A semi-supervised deep hashing network is designed to extensively exploit both labeled and unlabeled data, in which we propose an online graph construction method to benefit from the evolving deep features during training to better capture semantic neighbors.
no code implementations • 22 Aug 2017 • Yazhou Yao, Jian Zhang, Fumin Shen, Li Liu, Fan Zhu, Dongxiang Zhang, Heng-Tao Shen
To eliminate manual annotation, in this work, we propose a novel image dataset construction framework by employing multiple textual queries.
no code implementations • 17 Aug 2017 • Thorsten Kurth, Jian Zhang, Nadathur Satish, Ioannis Mitliagkas, Evan Racah, Mostofa Ali Patwary, Tareq Malas, Narayanan Sundaram, Wahid Bhimji, Mikhail Smorkalov, Jack Deslippe, Mikhail Shiryaev, Srinivas Sridharan, Prabhat, Pradeep Dubey
This paper presents the first, 15-PetaFLOP Deep Learning system for solving scientific pattern classification problems on contemporary HPC architectures.
no code implementations • 17 Jun 2017 • Zhiqiang Zeng, Jian Zhang, Xiaodong Wang, Yuming Chen, Chaoyang Zhu
Place recognition is one of the most fundamental topics in computer vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image.
no code implementations • 13 Jun 2017 • Cunjing Ge, Feifei Ma, Tian Liu, Jian Zhang
Constrained counting is important in domains ranging from artificial intelligence to software analysis.
no code implementations • 8 Dec 2016 • Jian Zhang, Yuxin Peng
On the other hand, different hash bits actually contribute to image retrieval differently, and treating them equally greatly affects retrieval accuracy.
no code implementations • 22 Nov 2016 • Yazhou Yao, Jian Zhang, Fumin Shen, Xian-Sheng Hua, Jingsong Xu, Zhenmin Tang
To reduce the cost of manual labelling, there has been increased research interest in automatically constructing image datasets by exploiting web images.
no code implementations • 16 Mar 2017 • Yazhou Yao, Jian Zhang, Fumin Shen, Xian-Sheng Hua, Wankou Yang, Zhenmin Tang
To tackle these problems, in this work, we exploit general corpus information to automatically select and subsequently classify web images into semantic rich (sub-)categories.
no code implementations • 4 Dec 2016 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu
Recent state-of-the-art approaches to multi-label image classification exploit the label dependencies in an image, at global level, largely improving the labeling capacity.
no code implementations • 24 Oct 2016 • Xiaoshui Huang, Jian Zhang, Qiang Wu, Lixin Fan, Chun Yuan
In this paper, different from previous ICP-based methods, and from a statistical view, we propose an effective coarse-to-fine algorithm to detect and register a small-scale SFM point cloud in a large-scale Lidar point cloud.
no code implementations • 18 Aug 2016 • Xiaoshui Huang, Jian Zhang, Lixin Fan, Qiang Wu, Chun Yuan
We propose a systematic approach for registering cross-source point clouds.
no code implementations • 23 Jun 2016 • Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, Christopher Ré
Consider a number of workers running SGD independently on the same pool of data and averaging the models every once in a while -- a common but not well understood practice.
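The practice described here can be sketched in a few lines: several workers run SGD independently on the same data and periodically replace their models with the average. This is an illustrative toy on noiseless linear regression, not the paper's exact protocol; the problem sizes, learning rate, and sync interval are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem shared by all workers.
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def sgd_step(w, lr=0.05):
    i = rng.integers(len(X))          # sample one data point
    grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x.w - y)^2
    return w - lr * grad

# Four workers run SGD independently, averaging models every 50 steps.
workers = [np.zeros(4) for _ in range(4)]
for step in range(2000):
    workers = [sgd_step(w) for w in workers]
    if (step + 1) % 50 == 0:          # periodic model averaging
        avg = np.mean(workers, axis=0)
        workers = [avg.copy() for _ in workers]

w_final = np.mean(workers, axis=0)
print(np.linalg.norm(w_final - w_true))  # averaged model recovers w_true
```

The interesting design question the paper studies is how the averaging frequency trades off communication cost against the variance reduction it provides.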
no code implementations • 1 Jul 2015 • Cunjing Ge, Feifei Ma, Jian Zhang
There are already quite a few tools for solving the Satisfiability Modulo Theories (SMT) problems.
no code implementations • 28 Oct 2014 • Tianfei Zhou, Yao Lu, Feng Lv, Huijun Di, Qingjie Zhao, Jian Zhang
Stochastic sampling based trackers have shown good performance for abrupt motion tracking so that they have gained popularity in recent years.
no code implementations • 11 May 2014 • Jian Zhang, Debin Zhao, Ruiqin Xiong, Siwei Ma, Wen Gao
This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner.
no code implementations • 30 Apr 2014 • Jian Zhang, Chen Zhao, Debin Zhao, Wen Gao
From many fewer acquired measurements than suggested by the Nyquist sampling theory, compressive sensing (CS) theory demonstrates that a signal can be reconstructed with high probability when it exhibits sparsity in some domain.
no code implementations • 29 Apr 2014 • Jian Zhang, Debin Zhao, Feng Jiang, Wen Gao
Compressive Sensing (CS) theory shows that a signal can be decoded from many fewer measurements than suggested by the Nyquist sampling theory, when the signal is sparse in some domain.
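The CS claim — exact decoding from far fewer measurements than the signal dimension — can be demonstrated with a generic sparse decoder. The sketch below uses Orthogonal Matching Pursuit, a classic textbook recovery algorithm rather than the paper's method; the dimensions `n`, `m`, `k` are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse signal in n dimensions, observed through m << n measurements.
n, m, k = 64, 24, 3
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k) * 5

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x                                 # compressed measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build the support set."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)  # recovers x from only 24 of 64 "Nyquist" samples
```

With a Gaussian sensing matrix and this level of sparsity, recovery is exact with overwhelming probability, which is the phenomenon the CS literature formalizes.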
no code implementations • 29 Apr 2014 • Jian Zhang, Debin Zhao, Feng Jiang
At the encoder, for each block of compressive sensing (CS) measurements, the optimal prediction is selected from a set of prediction candidates that are generated by four designed directional predictive modes.
no code implementations • 24 May 2013 • Yin Song, Longbing Cao, Xuhui Fan, Wei Cao, Jian Zhang
These sequence-level latent parameters for each sequence are modeled as latent Dirichlet random variables and parameterized by a set of deterministic database-level hyper-parameters.
no code implementations • 18 Sep 2018 • Minghui Liao, Jian Zhang, Zhaoyi Wan, Fengming Xie, Jiajun Liang, Pengyuan Lyu, Cong Yao, Xiang Bai
Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem.
Ranked #30 on Scene Text Recognition on SVT
no code implementations • 31 Oct 2018 • Yi Zhen, Lei Wang, Han Liu, Jian Zhang, Jiantao Pu
Among these CNNs, the DenseNet had the highest classification accuracy (i.e., 75.50%) based on pre-trained weights when using global ROIs, as compared to 65.50% when using local ROIs.
no code implementations • 29 Sep 2010 • Sakrapee Paisitkriangkrai, Chunhua Shen, Jian Zhang
There is an abundant literature on face detection due to its important role in many vision applications.
no code implementations • COLING 2016 • Jian Zhang, Xiaofeng Wu, Andy Way, Qun Liu
We show that the neural LM perplexity can be reduced by 7.395 and 12.011 using the proposed domain adaptation mechanism on the Penn Treebank and News data, respectively.
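For readers unfamiliar with the metric: perplexity is the exponential of the average negative log-likelihood the model assigns to held-out tokens, so an absolute drop like the one reported means the adapted LM assigns higher probability to the test data. A minimal sketch with made-up token probabilities:

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-likelihood over tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities, before and after domain adaptation.
baseline = perplexity([0.10, 0.05, 0.20, 0.10])
adapted  = perplexity([0.15, 0.08, 0.25, 0.15])
print(adapted < baseline)  # adaptation should lower perplexity
```

A model that always gave each token probability 0.5 would score a perplexity of exactly 2, which is a handy sanity check for the formula.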
no code implementations • COLING 2016 • Jian Zhang, Liangyou Li, Andy Way, Qun Liu
In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance.
no code implementations • NeurIPS 2014 • Jian Zhang, Alex Schwing, Raquel Urtasun
To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields.
no code implementations • NeurIPS 2011 • Dan Zhang, Yan Liu, Luo Si, Jian Zhang, Richard D. Lawrence
Ignoring this structure information limits the performance of existing MIL algorithms.
no code implementations • CVPR 2018 • Huanyu Yu, Shuo Cheng, Bingbing Ni, Minsi Wang, Jian Zhang, Xiaokang Yang
First, to facilitate this novel research of fine-grained video caption, we collected a novel dataset called Fine-grained Sports Narrative dataset (FSN) that contains 2K sports videos with ground-truth narratives from YouTube.com.
no code implementations • ECCV 2018 • Jun-Jie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, Anton Van Den Hengel
Despite significant progress in a variety of vision-and-language problems, developing a method capable of asking intelligent, goal-oriented questions about images has proven to be an inscrutable challenge.
no code implementations • 14 Jan 2019 • Yi Zhen, Hang Chen, Xu Zhang, Meng Liu, Xin Meng, Jian Zhang, Jiantao Pu
To investigate whether and to what extent central serous chorioretinopathy (CSC) depicted on color fundus photographs can be assessed using deep learning technology.
no code implementations • CVPR 2015 • Hangfan Liu, Ruiqin Xiong, Jian Zhang, Wen Gao
To estimate the expectation and variance parameters for the transform bands of a particular patch, we exploit the non-local correlation of image and collect a set of similar patches as data samples to form the distribution.
no code implementations • ICCV 2015 • Jian Zhang, Josip Djolonga, Andreas Krause
Higher-order models have been shown to be very useful for a plethora of computer vision tasks.
no code implementations • 26 Feb 2019 • Runsheng Zhang, Jian Zhang, Yaping Huang, Qi Zou
To tackle this issue, we propose a fully unsupervised part mining (UPM) approach to localize the discriminative parts without even image-level annotations, which largely improves the fine-grained classification performance.
no code implementations • 25 Feb 2019 • Fan Fei, Zhan Tu, Jian Zhang, Xinyan Deng
Inspired by the hummingbirds' near-maximal performance during such extreme maneuvers, we developed a flight control strategy and experimentally demonstrated that such maneuverability can be achieved by an at-scale 12-gram hummingbird robot equipped with just two actuators.
no code implementations • 11 Mar 2019 • Xiaoshui Huang, Lixin Fan, Qiang Wu, Jian Zhang, Chun Yuan
Accurate and fast registration of cross-source 3D point clouds from different sensors is an emerged research problem in computer vision.
no code implementations • 6 Apr 2019 • Muming Zhao, Jian Zhang, Chongyang Zhang, Wenjun Zhang
Towards this problem, in this paper we propose a constrained multi-stage Convolutional Neural Networks (CNNs) to jointly pursue locally consistent density map from two aspects.
no code implementations • 16 Apr 2019 • Jun Yu, Jinghan Yao, Jian Zhang, Zhou Yu, DaCheng Tao
In this paper, we propose a one-stage framework, SPRNet, which performs efficient instance segmentation by introducing a single pixel reconstruction (SPR) branch to off-the-shelf one-stage detectors.
no code implementations • 22 Apr 2019 • Jian Zhang, Jun Yu, DaCheng Tao
Next, we exploit an affine transformation to align the local deep features of each neighbourhood with the global features.
no code implementations • 24 Apr 2019 • Nimit S. Sohoni, Christopher R. Aberger, Megan Leszczynski, Jian Zhang, Christopher Ré
In this paper we study a fundamental question: How much memory is actually needed to train a neural network?
no code implementations • 1 May 2019 • Yongshun Gong, Jin-Feng Yi, Dong-Dong Chen, Jian Zhang, Jiayu Zhou, Zhihua Zhou
In this paper, we aim to infer the significance of every item's appearance in consumer decision making and identify the group of items that are suitable for screenless shopping.
no code implementations • 11 May 2019 • Songsong Wu, Yan Yan, Hao Tang, Jianjun Qian, Jian Zhang, Xiao-Yuan Jing
However, the number of labeled source samples is always limited due to the expensive annotation cost in practice, which often leads to sub-optimal performance.
no code implementations • 23 May 2019 • Qijian Chen, Lihui Wang, Li Wang, Zeyu Deng, Jian Zhang, Yuemin Zhu
Glioma grading before surgery is very critical for the prognosis prediction and treatment plan making.
no code implementations • 7 Jun 2019 • Yazhou Yao, Jian Zhang, Xian-Sheng Hua, Fumin Shen, Zhenmin Tang
Recent successes in visual recognition can be primarily attributed to feature representation, learning algorithms, and the ever-increasing size of labeled training data.
no code implementations • 2 Jul 2019 • Zhibin Li, Jian Zhang, Qiang Wu, Yongshun Gong, Jin-Feng Yi, Christina Kirsch
In this paper, we formulate our prediction task as a multiple kernel learning problem with missing kernels.
no code implementations • 4 Aug 2019 • Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Jingsong Xu, Qiang Wu
A novel low-rank pairwise bilinear pooling operation is proposed to capture the nuanced differences between the support and query images for learning an effective distance metric.
no code implementations • 6 Aug 2019 • Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng
To address the limitations of existing magnitude-based pruning algorithms in cases where model weights or activations are of large and similar magnitude, we propose a novel perspective to discover parameter redundancy among channels and accelerate deep CNNs via channel pruning.
no code implementations • 19 Aug 2019 • Dongming Yang, Yuexian Zou, Jian Zhang, Ge Li
Although two-stage detectors like Faster R-CNN have achieved big successes in object detection thanks to the strategy of extracting region proposals with a region proposal network, they adapt poorly to real-world object detection because they do not mine hard samples when extracting region proposals.
no code implementations • 9 Oct 2019 • Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher R. Aberger, Christopher De Sa
Pipeline parallelism (PP) when training neural networks enables larger models to be partitioned spatially, leading to both lower network communication and overall higher hardware utilization.
no code implementations • ICCV 2019 • Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, Xiaokang Yang
We propose a variational Bayesian framework for enhancing few-shot learning performance.
no code implementations • 9 Nov 2019 • Yichuan Charlie Tang, Jian Zhang, Ruslan Salakhutdinov
Recent advances in deep reinforcement learning have demonstrated the capability of learning complex control policies from many types of environments.
no code implementations • 24 Nov 2019 • Jinyin Chen, Jian Zhang, Zhi Chen, Min Du, Qi Xuan
In this work, we present the first study of adversarial attack on dynamic network link prediction (DNLP).
no code implementations • 7 Dec 2019 • Yongshun Gong, Zhibin Li, Jian Zhang, Wei Liu, Jin-Feng Yi
In this paper, this specific problem is termed as potential passenger flow (PPF) prediction, which is a novel and important study connected with urban computing and intelligent transportation systems.
no code implementations • 18 Dec 2019 • Lionel Blondé, Yichuan Charlie Tang, Jian Zhang, Russ Webb
In this work, we introduce a new method for imitation learning from video demonstrations.
no code implementations • ICLR 2020 • Xin-Yu Zhang, Qiang Wang, Jian Zhang, Zhao Zhong
The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve the generalization.
Ranked #594 on Image Classification on ImageNet
no code implementations • 30 Dec 2019 • Jie Wu, Ying Peng, Chenghao Zheng, Zongbo Hao, Jian Zhang
Recently, generative adversarial networks (GANs) have shown great advantages in synthesizing images, leading to a boost of explorations of using faked images to augment data.
no code implementations • 18 Jan 2020 • Zhengping Liang, Jian Zhang, Liang Feng, Zexuan Zhu
However, as demand for cloud services grows, existing EAs cannot handle the large-scale virtual machine placement (LVMP) problem due to their high time complexity and poor scalability.
no code implementations • 11 Mar 2020 • Dongming Yang, Yuexian Zou, Jian Zhang, Ge Li
GID block breaks through the local neighborhoods and captures long-range dependencies of pixels at both global level and instance level from the scene to help detect interactions between instances.
no code implementations • 22 Apr 2020 • Yikang Zhang, Jian Zhang, Qiang Wang, Zhao Zhong
On one hand, we can reduce the computation cost remarkably while maintaining the performance.
no code implementations • 13 May 2020 • Lu Zhang, Jian Zhang, Zhibin Li, Jingsong Xu
Inspired by the fact that spreading and collecting information through the Internet has become the norm, more and more people choose to post for-profit content (images and texts) in social networks.
no code implementations • ACL 2020 • Simran Arora, Avner May, Jian Zhang, Christopher Ré
We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and an even simpler baseline, random word embeddings, focusing on the impact of the training set size and the linguistic properties of the task.
no code implementations • 28 May 2020 • Huaxi Huang, Jun-Jie Zhang, Jian Zhang, Qiang Wu, Chang Xu
The challenges of high intra-class variance yet low inter-class fluctuations in fine-grained visual categorization are more severe with few labeled samples, i.e., Fine-Grained categorization problems under the Few-Shot setting (FGFS).
no code implementations • 18 Jun 2020 • Shuai Zhang, Xiaoyan Xin, Yang Wang, Yachong Guo, Qiuqiao Hao, Xianfeng Yang, Jun Wang, Jian Zhang, Bing Zhang, Wei Wang
The model provides automated recognition of given scans and generation of reports.
no code implementations • 27 Jun 2020 • Qian Li, Qingyuan Hu, Yong Qi, Saiyu Qi, Jie Ma, Jian Zhang
SBA stochastically decides whether to augment at iterations controlled by the batch scheduler, and introduces a "distilled" dynamic soft-label regularization that incorporates the similarity of the vicinity distribution with respect to raw samples.
no code implementations • 3 Jul 2020 • Mengxi Jia, Yunpeng Zhai, Shijian Lu, Siwei Ma, Jian Zhang
RGB-Infrared (IR) cross-modality person re-identification (re-ID), which aims to search an IR image in RGB gallery or vice versa, is a challenging task due to the large discrepancy between IR and RGB modalities.
Cross-Modality Person Re-identification • Person Re-Identification
no code implementations • 3 Aug 2020 • Haoqiang Guo, Lu Peng, Jian Zhang, Fang Qi, Lide Duan
Recent studies identify that Deep Neural Networks (DNNs) are vulnerable to subtle perturbations that are imperceptible to the human visual system but can fool DNN models and lead to wrong outputs.
1 code implementation • 6 Aug 2020 • Zeren Sun, Xian-Sheng Hua, Yazhou Yao, Xiu-Shen Wei, Guosheng Hu, Jian Zhang
To this end, we propose a certainty-based reusable sample selection and correction approach, termed as CRSSC, for coping with label noise in training deep FG models with web images.
no code implementations • ECCV 2020 • Ke-Yue Zhang, Taiping Yao, Jian Zhang, Ying Tai, Shouhong Ding, Jilin Li, Feiyue Huang, Haichuan Song, Lizhuang Ma
Face anti-spoofing is crucial to security of face recognition systems.
no code implementations • 3 Sep 2020 • Bin Huang, Yuanyang Du, Shuai Zhang, Wenfei Li, Jun Wang, Jian Zhang
RNAs play crucial and versatile roles in biological processes.
no code implementations • 7 Sep 2020 • Caiqing Jian, Xinyu Cheng, Jian Zhang, Lihui Wang
The experimental results demonstrate that, compared to traditional chemical bond structure representations, the rotation- and translation-invariant structure representations proposed in this work improve the SCC prediction accuracy. With the graph-embedded local self-attention, the mean absolute error (MAE) of the prediction model on the validation set decreases from 0.1603 Hz to 0.1067 Hz; using the classification-based loss function instead of the scaled regression loss, the MAE of the predicted SCC decreases further to 0.0963 Hz, which is close to the quantum chemistry standard on the CHAMPS dataset.
no code implementations • 1 Jan 2021 • Qing Chen, Jian Zhang
Deep neural networks (DNNs) compute representations in a layer by layer fashion, producing a final representation at the top layer of the pipeline, and classification or regression is made using the final representation.
no code implementations • 1 Jan 2021 • Pedram Zamirai, Jian Zhang, Christopher R Aberger, Christopher De Sa
We ask whether we can do pure 16-bit training, which requires only 16-bit compute units, while still matching the model accuracy attained by 32-bit training.
no code implementations • 1 Jan 2021 • Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua M. Susskind, Jian Zhang, Ruslan Salakhutdinov, Hanlin Goh
Offline Reinforcement Learning promises to learn effective policies from previously-collected, static datasets without the need for exploration.
no code implementations • 13 Oct 2020 • Pedram Zamirai, Jian Zhang, Christopher R. Aberger, Christopher De Sa
State-of-the-art generic low-precision training algorithms use a mix of 16-bit and 32-bit precision, creating the folklore that 16-bit hardware compute units alone are not enough to maximize model accuracy.
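The folklore the paper challenges has a simple numerical root: accumulating many small updates in a pure float16 accumulator can silently stall once the running sum grows, because increments smaller than half a unit in the last place round away. A toy sketch (illustrative values, not the paper's experiments):

```python
import numpy as np

# 10,000 tiny updates of ~1e-4 each; the true sum is ~1.0.
updates = np.full(10000, 1e-4, dtype=np.float16)

acc16 = np.float16(0.0)
for u in updates:                 # naive pure-fp16 accumulator
    acc16 = np.float16(acc16 + u)

acc32 = np.float32(0.0)
for u in updates:                 # fp32 accumulator, as in mixed precision
    acc32 += np.float32(u)

# The fp16 sum stalls well below the true value (adding 1e-4 stops
# changing the accumulator once it exceeds roughly 0.25), while the
# fp32 accumulator lands near 1.0.
print(float(acc16), float(acc32))
```

This is why standard recipes keep a 32-bit master copy of the weights; the paper's question is whether such 32-bit state is actually necessary for final model accuracy.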
no code implementations • 9 Oct 2020 • Yuzhen Chen, Menghan Hu, Chunjun Hua, Guangtao Zhai, Jian Zhang, Qingli Li, Simon X. Yang
To address the problem of not knowing which service stage a mask is in, we propose a detection system based on the mobile phone.
no code implementations • NeurIPS 2020 • Yikang Zhang, Jian Zhang, Zhao Zhong
Neural network architecture design mostly focuses on new convolutional operators or special topological structures of network blocks; little attention is paid to the configuration of stacking the blocks, called Block Stacking Style (BSS).
no code implementations • 20 Oct 2020 • Yunlu Wang, Cheng Yang, Menghan Hu, Jian Zhang, Qingli Li, Guangtao Zhai, Xiao-Ping Zhang
This paper presents an unobtrusive solution that can automatically identify deep breath when a person is walking past the global depth camera.
no code implementations • 3 Nov 2020 • Zhibin Li, Litao Yu, Jian Zhang
In this paper, we present a novel data-distribution-aware margin calibration method for a better generalization of the mIoU over the whole data-distribution, underpinned by a rigid lower bound.
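Since the target metric here is mIoU, it helps to see how it is computed: per-class intersection-over-union from a confusion matrix, averaged over classes. The sketch below shows only this standard metric, not the paper's margin calibration; the confusion-matrix counts are toy numbers.

```python
import numpy as np

def mean_iou(conf):
    """mIoU from a confusion matrix (rows = ground truth, cols = prediction).

    IoU_c = TP_c / (TP_c + FP_c + FN_c), averaged over classes.
    """
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp        # predicted c but truly another class
    fn = conf.sum(axis=1) - tp        # truly c but predicted another class
    iou = tp / (tp + fp + fn)
    return iou.mean()

# Toy 3-class confusion matrix.
conf = np.array([[50,  2,  3],
                 [ 4, 40,  1],
                 [ 6,  0, 30]])
print(round(mean_iou(conf), 4))
```

Because each class contributes equally to the mean regardless of its pixel count, rare classes dominate the metric's sensitivity, which is what makes a distribution-aware calibration of per-class margins attractive.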
no code implementations • 2 Nov 2020 • Litao Yu, Jian Zhang, Qiang Wu
In this paper, we propose to apply dual attention on pyramid image feature maps to fully explore the visual-semantic correlations and improve the quality of generated sentences.
no code implementations • 4 Nov 2020 • Litao Yu, Yongsheng Gao, Jun Zhou, Jian Zhang, Qiang Wu
The proposed module can auto-select the intermediate visual features to correlate the spatial and semantic information.
Ranked #47 on Semantic Segmentation on NYU Depth v2
no code implementations • 18 Nov 2020 • Jinyin Chen, Yunyi Xie, Jian Zhang, Xincheng Shu, Qi Xuan
In this paper, we introduce time-series snapshot network (TSSN) which is a mixture network to model the interactions among users and developers.
Social and Information Networks
no code implementations • 9 Dec 2020 • Radu Horaud, Florence Forbes, Manuel Yguel, Guillaume Dewaele, Jian Zhang
This paper addresses the issue of matching rigid and articulated shapes through probabilistic point registration.
no code implementations • 20 Dec 2020 • Huaxi Huang, Junjie Zhang, Jian Zhang, Qiang Wu, Chang Xu
Second, the extra unlabeled samples are employed to transfer the knowledge from base classes to novel classes through contrastive learning.
no code implementations • 22 Dec 2020 • Yi Ding, Qiqi Yang, Guozheng Wu, Jian Zhang, Zhiguang Qin
In this paper, a network called Brachial Plexus Multi-instance Segmentation Network (BPMSegNet) is proposed to identify different tissues (nerves, arteries, veins, muscles) in ultrasound images.
no code implementations • 28 Dec 2020 • Jian Zhang, Cunjing Ge, Feifei Ma
Compared with constraint satisfaction problems, counting problems have received less attention.
no code implementations • 1 Feb 2021 • Jian Zhang, Ying Tai, Taiping Yao, Jia Meng, Shouhong Ding, Chengjie Wang, Jilin Li, Feiyue Huang, Rongrong Ji
Face authentication on mobile end has been widely applied in various scenarios.
no code implementations • 16 Feb 2021 • Yunyi Xie, Jie Jin, Jian Zhang, Shanqing Yu, Qi Xuan
With the wide application of blockchain in the financial field, the rise of various types of cybercrimes has brought great challenges to the security of blockchain.
no code implementations • 25 Feb 2021 • Shengran Lin, Changfeng Weng, Yuanjie Yang, Jiaxin Zhao, Yuhang Guo, Jian Zhang, Liren Lou, Wei Zhu, Guanzhong Wang
Nitrogen-vacancy (NV) center in diamond is an ideal candidate for quantum sensors because of its excellent optical and coherence property.
Quantum Physics • Mesoscale and Nanoscale Physics
no code implementations • 3 Mar 2021 • Xiaoshui Huang, Guofeng Mei, Jian Zhang, Rana Abbas
This paper conducts a comprehensive survey, covering both same-source and cross-source registration methods, and summarizes the connections between optimization-based and deep learning methods to provide further research insights.
no code implementations • 28 Jul 2020 • Baoyan Ma, Jian Zhang, Feng Cao, Yongjun He
We design a fixed proposal module to generate fixed-size feature maps of nuclei, which allows the new nucleus information to be used for classification.
no code implementations • 12 Mar 2021 • Jianhui Chang, Zhenghui Zhao, Lingbo Yang, Chuanmin Jia, Jian Zhang, Siwei Ma
To this end, we propose a novel end-to-end semantic prior modeling-based conceptual coding scheme towards extremely low bitrate image compression, which leverages semantic-wise deep representations as a unified prior for entropy estimation and texture synthesis.
no code implementations • CVPR 2021 • Yazhou Yao, Zeren Sun, Chuanyi Zhang, Fumin Shen, Qi Wu, Jian Zhang, Zhenmin Tang
Due to the memorization effect in Deep Neural Networks (DNNs), training with noisy labels usually results in inferior model performance.
no code implementations • 26 Mar 2021 • Dewang Hou, Yang Zhao, Yuyao Ye, Jiayu Yang, Jian Zhang, Ronggang Wang
Scaling and lossy coding are widely used in video transmission and storage.
no code implementations • 17 Oct 2019 • Yijie Mao, Bruno Clerckx, Jian Zhang, Victor O. K. Li, Mohammed Arafah
Cooperative Rate-Splitting (CRS) strategy, relying on linearly precoded rate-splitting at the transmitter and opportunistic transmission of the common message by the relaying user, has recently been shown to outperform typical Non-cooperative Rate-Splitting (NRS), Cooperative Non-Orthogonal Multiple Access (C-NOMA) and Space Division Multiple Access (SDMA) in a two-user Multiple Input Single Output (MISO) Broadcast Channel (BC) with user relaying.