1 code implementation • CVPR 2020 • Yecheng Lyu, Xinming Huang, Ziming Zhang
In contrast to the literature where local patterns in 3D point clouds are captured by customized convolutional operators, in this paper we study the problem of how to effectively and efficiently project such point clouds into a 2D image space so that traditional 2D convolutional neural networks (CNNs) such as U-Net can be applied for segmentation.
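The projection idea can be sketched with a spherical range-image mapping, a common choice for turning a LiDAR point cloud into a 2D grid (an illustrative stand-in; the paper's specific projection, field-of-view, and resolution may differ):

```python
import numpy as np

def spherical_project(points, H=64, W=512, fov_up=15.0, fov_down=-15.0):
    """Project Nx3 points into an HxW range image (one common 2D mapping)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8            # range of each point
    yaw = np.arctan2(y, x)                               # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                             # elevation angle
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int)      # column index
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int)  # row index
    u, v = np.clip(u, 0, W - 1), np.clip(v, 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)
    img[v, u] = r                                        # store range per pixel
    return img
```

The resulting HxW image can then be fed to a standard 2D CNN such as U-Net.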
1 code implementation • CVPR 2021 • Yiming Zhao, Xinming Huang, Ziming Zhang
With those properties, directly updating the Lucas-Kanade algorithm on our feature maps will precisely align image pairs with large appearance changes.
1 code implementation • 17 Apr 2021 • Yiming Zhao, Lin Bai, Ziming Zhang, Xinming Huang
Therefore, those pixels are assumed to share the same surface with the nearest LiDAR point, and their depths can be estimated as the nearest LiDAR depth value plus a residual error.
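A toy sketch of this assumption (here the residual is a plain scalar argument; in the paper it is predicted by a network):

```python
import numpy as np

def estimate_depth(query_uv, lidar_uv, lidar_depth, residual=0.0):
    """Depth at a pixel = depth of the nearest projected LiDAR point + residual.
    query_uv: (2,) pixel; lidar_uv: (N, 2) projected points; lidar_depth: (N,)."""
    d2 = np.sum((lidar_uv - query_uv) ** 2, axis=1)  # squared pixel distances
    nearest = np.argmin(d2)                          # index of nearest LiDAR point
    return lidar_depth[nearest] + residual
```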
1 code implementation • 20 Jul 2022 • Chengxin Liu, Kewei Wang, Hao Lu, Zhiguo Cao, Ziming Zhang
Because the crowd-sourced labeling process and the ambiguity of objects can introduce noisy bounding box annotations, object detectors suffer from the degraded training data.
1 code implementation • 2 Oct 2020 • Xin Zhang, Yanhua Li, Ziming Zhang, Zhi-Li Zhang
This naturally gives rise to the following question: Given a set of expert demonstrations, which divergence can recover the expert policy more accurately with higher data efficiency?
1 code implementation • ICLR 2020 • Anil Kag, Ziming Zhang, Venkatesh Saligrama
Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate.
1 code implementation • CVPR 2022 • Kaidong Li, Ziming Zhang, Cuncong Zhong, Guanghui Wang
Deep neural networks for 3D point cloud classification, such as PointNet, have been demonstrated to be vulnerable to adversarial attacks.
1 code implementation • ICCV 2021 • Ming Li, Xinming Huang, Ziming Zhang
To learn distinguishable patterns, most recent works in vehicle re-identification (ReID) struggle to redevelop official benchmarks to provide various supervisions, which requires prohibitive human labor.
1 code implementation • 14 Oct 2021 • M. Caner Tol, Saad Islam, Andrew J. Adiletta, Berk Sunar, Ziming Zhang
To this end, we first investigate the viability of backdoor injection attacks in real-life deployments of DNNs on hardware and address such practical issues in hardware implementation from a novel optimization perspective.
1 code implementation • 5 Feb 2023 • Ming Li, Xinming Huang, Ziming Zhang
To learn distinguishable patterns, most recent works in vehicle re-identification (ReID) struggle to redevelop official benchmarks to provide various supervisions, which requires prohibitive human labor.
1 code implementation • 13 Sep 2019 • Zudi Lin, Hanspeter Pfister, Ziming Zhang
In this paper, we study the problem of how to defend classifiers against adversarial attacks that fool the classifiers using subtly modified input data.
1 code implementation • 21 Jun 2020 • Yecheng Lyu, Ming Li, Xinming Huang, Ulkuhan Guler, Patrick Schaumont, Ziming Zhang
General graphs are difficult to learn from due to their irregular structures.
1 code implementation • 5 Apr 2021 • Yuanwei Wu, Ziming Zhang, Guanghui Wang
In this paper, we propose BPGrad, a novel approximate algorithm for deep neural network training, based on adaptive estimates of the feasible region via branch-and-bound.
no code implementations • 22 May 2018 • Ziming Zhang, Rongmei Lin, Alan Sullivan
In this paper we propose novel Deformable Part Networks (DPNs) to learn pose-invariant representations for 2D object recognition.
no code implementations • 22 May 2018 • Ziming Zhang
In contrast to previous works, as a learning principle we propose parameterizing both the gating function for learning kernel combination weights and the multiclass classifier in LMKL using an attentional network (AN) and a multilayer perceptron (MLP), respectively.
no code implementations • NeurIPS 2017 • Ziming Zhang, Matthew Brand
By lifting the ReLU function into a higher dimensional space, we develop a smooth multi-convex formulation for training feed-forward deep neural networks (DNNs).
no code implementations • CVPR 2018 • Ziming Zhang, Yuanwei Wu, Guanghui Wang
Understanding global optimality in deep learning (DL) has been attracting increasing attention recently.
no code implementations • 14 Nov 2015 • Ziming Zhang, Yun Liu, Xi Chen, Yanjun Zhu, Ming-Ming Cheng, Venkatesh Saligrama, Philip H. S. Torr
We propose a novel object proposal algorithm, BING++, which inherits the virtue of good computational efficiency of BING but significantly improves its proposal localization quality.
no code implementations • 23 Nov 2016 • Ziming Zhang, Venkatesh Saligrama
In this paper we propose a novel framework of learning data-dependent feature transforms for scoring similarity between an arbitrary pair of source and target data instances to account for the wide variability in target domain.
no code implementations • 29 Aug 2016 • Yao Sui, Ziming Zhang, Guanghui Wang, Yafei Tang, Li Zhang
By exploiting the anisotropy of the filter response, three sparsity related loss functions are proposed to alleviate the overfitting issue of previous methods and improve the overall tracking performance.
no code implementations • CVPR 2016 • Ziming Zhang, Venkatesh Saligrama
It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e., whether there is a match.
no code implementations • CVPR 2016 • Ziming Zhang, Yu-Ting Chen, Venkatesh Saligrama
In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes.
no code implementations • ICCV 2015 • Ziming Zhang, Venkatesh Saligrama
In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided.
no code implementations • ICCV 2015 • Ziming Zhang, Yu-Ting Chen, Venkatesh Saligrama
In this context we propose a novel probability model and introduce latent view-specific and view-shared random variables to jointly account for the view-specific appearance and cross-view similarities among data instances.
no code implementations • 13 Jun 2014 • Ziming Zhang, Venkatesh Saligrama
From a visual perspective, re-id is challenging due to significant changes in the visual appearance of individuals across cameras with different poses, illumination, and calibration.
no code implementations • 24 Oct 2014 • Ziming Zhang, Yu-Ting Chen, Venkatesh Saligrama
We first map each pixel of an image to a visual word using a codebook, which is learned in an unsupervised manner.
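A minimal sketch of the codebook assignment step: each pixel feature is mapped to the index of its nearest codeword. The codebook itself would come from unsupervised clustering (e.g., k-means); here it is passed in as a given array:

```python
import numpy as np

def pixels_to_visual_words(image, codebook):
    """Map each pixel feature to the index of its nearest codeword.
    image: (H, W, C) per-pixel features; codebook: (K, C) learned centers."""
    H, W, C = image.shape
    flat = image.reshape(-1, C)                      # (H*W, C) pixel features
    # squared distance from every pixel to every codeword via broadcasting
    d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(H, W)           # visual word index per pixel
```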
no code implementations • 20 Jul 2014 • Ziming Zhang, Philip H. S. Torr
Specifically, we explain our scale/aspect-ratio quantization scheme, and investigate the effects of combinations of $\ell_1$ and $\ell_2$ regularizers in cascade SVMs with/without ranking constraints in learning.
no code implementations • 13 Jun 2014 • Ziming Zhang, Venkatesh Saligrama
In this paper, we propose a new algorithm to speed up the convergence of accelerated proximal gradient (APG) methods.
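For reference, the baseline APG scheme can be sketched in its standard FISTA-style form (illustrative; the paper's contribution is a further speed-up on top of this iteration):

```python
import numpy as np

def apg(grad_f, prox_g, x0, step, iters=100):
    """Baseline accelerated proximal gradient (FISTA-style) for
    min_x f(x) + g(x), with f smooth and g having a cheap prox."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_g(y - step * grad_f(y), step)       # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # extrapolation step
        x, t = x_new, t_new
    return x
```

For example, with f(x) = 0.5||x - b||^2 and g(x) = lambda*||x||_1, the prox is the soft-thresholding operator and the minimizer is the soft-thresholded b.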
no code implementations • 13 Feb 2014 • Ziming Zhang
In this paper, we are interested in constructing general graph-based regularizers for multiple kernel learning (MKL) given a structure which is used to describe the way of combining basis kernels.
no code implementations • NeurIPS 2011 • Ziming Zhang, Lubor Ladicky, Philip Torr, Amir Saffari
It provides a set of anchor points which form a local coordinate system, such that each data point on the manifold can be approximated by a linear combination of its anchor points, and the linear weights become the local coordinate coding.
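The anchor-point approximation can be sketched as follows; an LLE-style constrained least-squares solve is one common way to obtain the local weights (an illustrative choice, the exact formulation may differ):

```python
import numpy as np

def local_coordinate_coding(x, anchors, knn=3):
    """Approximate x as a weighted combination of its nearest anchor points,
    with weights solved under a sum-to-one constraint (LLE-style closed form)."""
    d2 = ((anchors - x) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:knn]            # nearest anchors: local coordinate system
    A = anchors[idx]                      # (knn, D) selected anchor points
    G = (A - x) @ (A - x).T               # local Gram matrix of centered anchors
    G += 1e-8 * np.eye(knn)               # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(knn))  # solve G w = 1
    w /= w.sum()                          # enforce weights summing to one
    coords = np.zeros(len(anchors))
    coords[idx] = w                       # sparse local coordinate coding
    return coords
```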
no code implementations • CVPR 2014 • Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin, Philip Torr
Training a generic objectness measure to produce a small set of candidate object windows has been shown to speed up the classical sliding window object detection paradigm.
no code implementations • 2 Mar 2019 • Ziming Zhang, Wenju Xu, Alan Sullivan
In this paper we study the problem of convergence and generalization error bound of stochastic momentum for deep learning from the perspective of regularization.
no code implementations • 2 Mar 2019 • Ziming Zhang, Anil Kag, Alan Sullivan, Venkatesh Saligrama
We show that such self-feedback helps stabilize the hidden state transitions leading to fast convergence during training while efficiently learning discriminative latent features that result in state-of-the-art results on several benchmark datasets at test-time.
no code implementations • 26 Mar 2019 • Esra Ataer-Cansizoglu, Michael Jones, Ziming Zhang, Alan Sullivan
Face super-resolution methods usually aim at producing visually appealing results rather than preserving distinctive features for further face identification.
no code implementations • 22 Aug 2019 • Anil Kag, Ziming Zhang, Venkatesh Saligrama
Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate.
no code implementations • 27 Aug 2019 • Yuanwei Wu, Ziming Zhang, Guanghui Wang
We use a pre-trained ConvNet to extract features for both high- and low-resolution images, and then feed them into a two-layer feature transfer network for knowledge transfer.
no code implementations • 31 Aug 2019 • Wenju Xu, Guanghui Wang, Alan Sullivan, Ziming Zhang
In this paper we propose integrating a priori knowledge into both design and training of convolutional neural networks (CNNs) to learn object representations that are invariant to affine transformations (i.e., translation, scale, rotation).
no code implementations • 26 Sep 2019 • Yecheng Lyu, Xinming Huang, Ziming Zhang
Graph convolutional networks (GCNs) suffer from the irregularity of graphs, while more widely-used convolutional neural networks (CNNs) benefit from regular grids.
no code implementations • 5 Jan 2020 • Ziming Zhang, Wenchi Ma, Yuanwei Wu, Guanghui Wang
In this paper, we investigate the empirical impact of orthogonality regularization (OR) in deep learning, either solo or collaboratively.
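One common form of OR is the soft orthogonality regularizer $\|W^\top W - I\|_F^2$, which can be sketched as a penalty added to the task loss (an illustrative variant; the paper studies the empirical effect of such regularizers rather than prescribing this exact one):

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality regularizer ||W^T W - I||_F^2,
    pushing the columns of a weight matrix toward orthonormality."""
    gram = W.T @ W                          # (out, out) Gram matrix of columns
    I = np.eye(gram.shape[0])
    return float(((gram - I) ** 2).sum())   # squared Frobenius norm of deviation
```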
no code implementations • 1 Jun 2020 • Mahdi Elhousni, Yecheng Lyu, Ziming Zhang, Xinming Huang
This approach speeds up the process of building and labeling HD maps, which can make a meaningful contribution to the deployment of autonomous vehicles.
no code implementations • 1 Sep 2020 • Ce Zheng, Yecheng Lyu, Ming Li, Ziming Zhang
Deep learning based LiDAR odometry (LO) estimation has attracted increasing research interest in the fields of autonomous driving and robotics.
1 code implementation • 12 Oct 2020 • Yun Yue, Ming Li, Venkatesh Saligrama, Ziming Zhang
We propose to utilize the Frank-Wolfe (FW) algorithm in this context.
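The vanilla FW iteration can be sketched as follows; the probability-simplex linear minimization oracle used here is an illustrative choice of constraint set, not necessarily the one used in the paper:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Projection-free Frank-Wolfe: at each step move toward the vertex
    returned by a linear minimization oracle (lmo) over the feasible set."""
    x = x0.copy()
    for t in range(iters):
        s = lmo(grad(x))                  # s = argmin_{s in C} <grad(x), s>
        gamma = 2.0 / (t + 2.0)           # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s # convex combination stays inside C
    return x
```

For the simplex, the oracle simply returns the basis vector of the most negative gradient coordinate, so each iterate remains a valid probability vector without any projection.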
no code implementations • NeurIPS 2020 • Xin Zhang, Yanhua Li, Ziming Zhang, Zhi-Li Zhang
This naturally gives rise to the following question: Given a set of expert demonstrations, which divergence can recover the expert policy more accurately with higher data efficiency?
no code implementations • 3 Mar 2021 • Yecheng Lyu, Xinming Huang, Ziming Zhang
In recent years, point cloud representation in 2D space has attracted increasing research interest since it exposes local geometric features in a 2D space.
no code implementations • 23 May 2021 • Yecheng Lyu, Xinming Huang, Ziming Zhang
Graph convolutional networks (GCNs) are widely used in graph-based applications such as graph classification and segmentation.
no code implementations • 29 Sep 2021 • M. Caner Tol, Saad Islam, Berk Sunar, Ziming Zhang
Recent works focus on software simulation of backdoor injection during the inference phase by modifying network weights, which we find often unrealistic in practice due to hardware restrictions such as bit allocation in memory.
no code implementations • 29 Sep 2021 • Xin Zhang, Yanhua Li, Ziming Zhang, Christopher Brinton, Zhenming Liu, Zhi-Li Zhang, Hui Lu, Zhihong Tian
State-of-the-art imitation learning (IL) approaches, e.g., GAIL, apply adversarial training to minimize the discrepancy between expert and learner behaviors, which is prone to unstable training and mode collapse.
no code implementations • 29 Sep 2021 • Kaifeng Zhang, Rui Zhao, Ziming Zhang, Yang Gao
Reinforcement learning (RL) provides a powerful framework for decision-making, but its application in practice often requires a carefully designed reward function.
no code implementations • 29 Sep 2021 • Guojun Wu, Yun Yue, Yanhua Li, Ziming Zhang
Lightweight neural networks refer to deep networks with small numbers of parameters, which can be deployed on resource-limited hardware such as embedded systems.
no code implementations • 19 Oct 2021 • Yecheng Lyu, Xinming Huang, Ziming Zhang
In addition, we propose a map-based LiDAR localization algorithm that extracts semantic feature points from the LiDAR frames and applies CoFi to estimate the pose on an efficient point cloud map.
no code implementations • NeurIPS 2021 • Ziming Zhang, Yun Yue, Guojun Wu, Yanhua Li, Haichong Zhang
In this paper we consider the training stability of recurrent neural networks (RNNs) and propose a family of RNNs, namely SBO-RNN, that can be formulated using stochastic bilevel optimization (SBO).
no code implementations • 22 Jun 2022 • Kaifeng Zhang, Rui Zhao, Ziming Zhang, Yang Gao
In this work, we propose Auto-Encoding Adversarial Imitation Learning (AEAIL), a robust and scalable AIL framework.
no code implementations • 27 Sep 2022 • Yichen Ding, Ziming Zhang, Yanhua Li, Xun Zhou
Speed-control forecasting, a challenging problem in driver behavior analysis, aims to predict the future actions of a driver in controlling vehicle speed such as braking or acceleration.
no code implementations • 29 Nov 2022 • Keshav Bimbraw, Christopher J. Nycz, Matt Schueler, Ziming Zhang, Haichong K. Zhang
Hand configuration classification and MCP joint angle detection are important for a comprehensive reconstruction of hand motion.
no code implementations • 2 Feb 2023 • Yun Yue, Fangzhou Lin, Kazunori D Yamada, Ziming Zhang
Learning good image representations that are beneficial to downstream tasks is a challenging task in computer vision.
no code implementations • CVPR 2023 • Yiqing Zhang, Xinming Huang, Ziming Zhang
The Lucas-Kanade (LK) method is a classic iterative homography estimation algorithm for image alignment, but often suffers from poor local optimality especially when image pairs have large distortions.
1 code implementation • ICCV 2023 • Fangzhou Lin, Yun Yue, Songlin Hou, Xuechu Yu, Yajun Xu, Kazunori D Yamada, Ziming Zhang
Chamfer distance (CD) is a standard metric to measure the shape dissimilarity between point clouds in point cloud completion, as well as a loss function for (deep) learning.
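The standard symmetric (squared-distance) form of CD can be sketched as:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between two point sets: the mean nearest
    squared distance from P to Q, plus the same term from Q to P."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because each point is matched only to its single nearest neighbor, CD is cheap to compute but insensitive to point density, which is part of what makes it both a standard metric and an imperfect training loss.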
1 code implementation • 23 Apr 2024 • Yun Yue, Fangzhou Lin, Guanyi Mou, Ziming Zhang
In recent years, there has been a growing trend of incorporating hyperbolic geometry methods into computer vision.