Search Results for author: Mo Zhou

Found 29 papers, 9 papers with code

Deployment Prior Injection for Run-time Calibratable Object Detection

no code implementations27 Feb 2024 Mo Zhou, Yiding Yang, Haoxiang Li, Vishal M. Patel, Gang Hua

When the training and test distributions are strongly aligned, object relations serve as a context prior that facilitates object detection.

Object Detection +1

MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers

1 code implementation3 Feb 2024 Yatong Bai, Mo Zhou, Vishal M. Patel, Somayeh Sojoudi

Adversarial robustness often comes at the cost of degraded accuracy, impeding the real-life application of robust classification models.

Adversarial Robustness Robust classification
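The accuracy-robustness balancing idea lends itself to a small sketch. Below is a minimal, hypothetical illustration of mixing two classifiers' outputs, with a temperature-scaled softmax standing in for the paper's nonlinear transformation; the actual MixedNUTS transformation and mixing weights differ.

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def mixed_classifier(logits_accurate, logits_robust, alpha=0.4, temperature=2.0):
    """Convex combination of the accurate model's probabilities with a
    nonlinearly (here: temperature-scaled) transformed robust model's output."""
    p_acc = softmax(logits_accurate)
    p_rob = softmax(logits_robust, temperature=temperature)
    return (1.0 - alpha) * p_acc + alpha * p_rob

probs = mixed_classifier([3.0, 1.0, 0.2], [0.5, 2.0, 0.1])
print(probs.argmax(), round(float(probs.sum()), 6))
```

Because the mix is a convex combination of two probability vectors, the result is again a valid probability vector; the nonlinearity controls how much the robust model's confidence is tempered before mixing.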

Securing Deep Generative Models with Universal Adversarial Signature

1 code implementation25 May 2023 Yu Zeng, Mo Zhou, Yuan Xue, Vishal M. Patel

Prior research attempted to mitigate these threats by detecting generated images, but the varying traces left by different generative models make it challenging to create a universal detector capable of generalizing to new, unseen generative models.

T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities

no code implementations24 May 2023 Kangfu Mei, Mo Zhou, Vishal M. Patel

The model can be scaled to generate high-resolution data while unifying multiple modalities.

Depth Separation with Multilayer Mean-Field Networks

no code implementations3 Apr 2023 Yunwei Ren, Mo Zhou, Rong Ge

Depth separation -- why a deeper network is more powerful than a shallower one -- has been a major problem in deep learning theory.

Learning Theory

Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression

no code implementations1 Feb 2023 Mo Zhou, Rong Ge

In this work, we give a different parametrization of the model which leads to a new implicit regularization effect that combines the benefit of $\ell_1$ and $\ell_2$ interpolators.

regression
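The snippet above does not quote the parametrization itself. As a hedged illustration of the general phenomenon, the classic Hadamard-product reparametrization w = u ⊙ u − v ⊙ v with small initialization is known to bias gradient descent toward sparse interpolators in the underdetermined regime; the paper's specific parametrization may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 40                                   # underdetermined: n < d
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [1.0, -0.5, 2.0]                   # sparse ground truth
y = X @ w_true

# Reparametrize w = u*u - v*v; gradient descent from a small initialization
# implicitly regularizes toward a sparse interpolating solution.
u = np.full(d, 1e-3)
v = np.full(d, 1e-3)
lr = 0.005
for _ in range(20000):
    g = X.T @ (X @ (u * u - v * v) - y) / n     # gradient w.r.t. w
    u -= lr * 2 * u * g                          # chain rule through u*u
    v += lr * 2 * v * g                          # chain rule through -v*v
w_hat = u * u - v * v
print(np.round(w_hat[:5], 2))
```

With the same data, plain gradient descent on an unreparametrized w would converge to the minimum-l2-norm interpolator instead, which is generally dense.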

A Neural Network Warm-Start Approach for the Inverse Acoustic Obstacle Scattering Problem

1 code implementation16 Dec 2022 Mo Zhou, Jiequn Han, Manas Rachh, Carlos Borges

We present a neural network warm-start approach for solving the inverse scattering problem, where an initial guess for the optimization problem is obtained using a trained neural network.
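The warm-start pattern can be sketched on a toy one-dimensional inverse problem: learn a cheap surrogate inverse from simulated pairs, then use its prediction to initialize a local solver. Everything below (the cubic forward model, the linear "network", Newton refinement) is a hypothetical stand-in for the paper's acoustic forward model and trained network.

```python
import numpy as np

def g(theta):
    """Toy monotone forward model standing in for the scattering operator."""
    return theta ** 3 + theta

# (1) Simulate training pairs and fit a cheap surrogate inverse y -> theta.
thetas = np.linspace(-2.0, 2.0, 200)
ys = g(thetas)
a, b = np.polyfit(ys, thetas, 1)                 # linear "network" surrogate

# (2) Warm-start a Newton iteration on the residual g(theta) - y_obs.
y_obs = g(1.3)
theta = a * y_obs + b                            # initial guess from surrogate
for _ in range(50):
    theta -= (g(theta) - y_obs) / (3 * theta ** 2 + 1)
print(round(float(theta), 4))                    # -> 1.3
```

The surrogate only needs to land in the right basin; the local optimizer then supplies the accuracy.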

Understanding Edge-of-Stability Training Dynamics with a Minimalist Example

no code implementations7 Oct 2022 Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, Rong Ge

Globally we observe that the training dynamics for our example has an interesting bifurcating behavior, which was also observed in the training of neural nets.

Plateau in Monotonic Linear Interpolation -- A "Biased" View of Loss Landscape for Deep Networks

no code implementations3 Oct 2022 Xiang Wang, Annie N. Wang, Mo Zhou, Rong Ge

Monotonic linear interpolation (MLI), the phenomenon that the loss and accuracy are monotonic along the line connecting a random initialization with the minimizer it converges to, is commonly observed in the training of neural networks.
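MLI is easy to probe numerically. A minimal sketch on a linear least-squares model, where the loss restricted to the line is an exact parabola with its minimum at the endpoint, so monotonicity is guaranteed (deep networks are the interesting, non-obvious case the paper studies):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
w_star = rng.standard_normal(5)
y = X @ w_star                                   # realizable targets

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

w0 = rng.standard_normal(5)                      # random initialization
w_min, *_ = np.linalg.lstsq(X, y, rcond=None)    # "minimizer it converges to"

# Evaluate the loss along the straight line from w0 to w_min.
alphas = np.linspace(0.0, 1.0, 21)
losses = [loss((1 - a) * w0 + a * w_min) for a in alphas]
monotone = all(l1 >= l2 - 1e-12 for l1, l2 in zip(losses, losses[1:]))
print(monotone)
```

For non-quadratic losses the same loop is the standard way to check MLI empirically; only the `loss` and the endpoints change.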

On Trace of PGD-Like Adversarial Attacks

no code implementations19 May 2022 Mo Zhou, Vishal M. Patel

Adversarial attacks pose safety and security concerns to deep learning applications, but their characteristics are under-explored.

Enhancing Adversarial Robustness for Deep Metric Learning

2 code implementations CVPR 2022 Mo Zhou, Vishal M. Patel

Owing to security implications of adversarial vulnerability, adversarial robustness of deep metric learning models has to be improved.

Adversarial Robustness Metric Learning

Single Time-scale Actor-critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees

no code implementations31 Jan 2022 Mo Zhou, Jianfeng Lu

We propose a single time-scale actor-critic algorithm to solve the linear quadratic regulator (LQR) problem.

Bilevel Optimization
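For reference, the LQR problem targeted by the algorithm has a classical model-based solution via the Riccati recursion. The sketch below computes the optimal feedback gain for a scalar system; it is not the paper's model-free actor-critic method, and the system constants are arbitrary.

```python
# Scalar discrete-time LQR: x_{t+1} = a*x_t + b*u_t, cost sum of q*x^2 + r*u^2.
a, b, q, r = 1.1, 0.5, 1.0, 0.2

p = q                                        # value-function iterate
for _ in range(500):                         # Riccati recursion to fixed point
    k = (b * p * a) / (r + b * p * b)        # gain induced by current p
    p = q + a * p * a - a * p * b * k        # Riccati update
print(round(k, 4))                           # optimal feedback u = -k * x
```

An actor-critic method learns p (the critic) and k (the actor) from trajectories instead of from known (a, b, q, r); the single time-scale variant updates both with comparable step sizes.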

SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory Prediction

no code implementations CVPR 2021 Liushuai Shi, Le Wang, Chengjiang Long, Sanping Zhou, Mo Zhou, Zhenxing Niu, Gang Hua

Specifically, the SGCN explicitly models the sparse directed interaction with a sparse directed spatial graph to capture the adaptive interactions among pedestrians.

Pedestrian Trajectory Prediction Trajectory Prediction

Understanding Deflation Process in Over-parametrized Tensor Decomposition

no code implementations NeurIPS 2021 Rong Ge, Yunwei Ren, Xiang Wang, Mo Zhou

In this paper we study the training dynamics for gradient flow on over-parametrized tensor decomposition problems.

Tensor Decomposition

Adversarial Attack and Defense in Deep Ranking

1 code implementation7 Jun 2021 Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Nanning Zheng, Gang Hua

In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations.

Adversarial Attack Adversarial Robustness
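A heavily simplified sketch of the Candidate Attack idea, operating directly in a toy embedding space: nudge a candidate within a small L-infinity ball so its similarity to the query rises, raising its rank. The real attack perturbs candidate images and backpropagates through a deep ranking network; all names and sizes here are hypothetical.

```python
import numpy as np

def candidate_attack(query, cand, eps=0.1, steps=10, lr=0.05):
    """PGD-style ascent on <query, x> within an L-inf ball of radius eps."""
    x = cand.copy()
    for _ in range(steps):
        grad = query                              # d/dx of <query, x>
        x = x + lr * np.sign(grad)                # signed ascent step
        x = cand + np.clip(x - cand, -eps, eps)   # project back to the ball
    return x

rng = np.random.default_rng(0)
q = rng.standard_normal(8)                        # query embedding
c = rng.standard_normal(8)                        # candidate embedding
c_adv = candidate_attack(q, c)
print(float(q @ c_adv) > float(q @ c))            # similarity increased
```

The Query Attack is the mirror image: perturb the query instead of the candidates, shifting the whole ranking at once.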

SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory Prediction

2 code implementations4 Apr 2021 Liushuai Shi, Le Wang, Chengjiang Long, Sanping Zhou, Mo Zhou, Zhenxing Niu, Gang Hua

Meanwhile, we use a sparse directed temporal graph to model the motion tendency, which facilitates prediction based on the observed direction.

Pedestrian Trajectory Prediction Trajectory Prediction

Practical Relative Order Attack in Deep Ranking

2 code implementations ICCV 2021 Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Yinghui Xu, Nanning Zheng, Gang Hua

In this paper, we formulate a new adversarial attack against deep ranking systems, i.e., the Order Attack, which covertly alters the relative order among a selected set of candidates according to an attacker-specified permutation, with limited interference to other unrelated candidates.

Adversarial Attack

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network

no code implementations4 Feb 2021 Mo Zhou, Rong Ge, Chi Jin

We show that as long as the loss is already lower than a threshold (polynomial in relevant parameters), all student neurons in an over-parameterized two-layer neural network will converge to one of the teacher neurons, and the loss will go to 0.

Practical Order Attack in Deep Ranking

no code implementations1 Jan 2021 Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Yinghui Xu, Nanning Zheng, Gang Hua

The objective of this paper is to formalize and practically implement a new adversarial attack against deep ranking systems, i.e., the Order Attack, which covertly alters the relative order of a selected set of candidates according to a permutation vector predefined by the attacker, with only limited interference to other unrelated candidates.

Adversarial Attack Image Retrieval

Adversarial Ranking Attack and Defense

3 code implementations ECCV 2020 Mo Zhou, Zhenxing Niu, Le Wang, Qilin Zhang, Gang Hua

In this paper, we propose two attacks against deep ranking systems, i.e., Candidate Attack and Query Attack, that can raise or lower the rank of chosen candidates by adversarial perturbations.

Adversarial Attack Image Retrieval

Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach

no code implementations7 Feb 2020 Jiequn Han, Jianfeng Lu, Mo Zhou

We propose a new method to solve eigenvalue problems for linear and semilinear second order differential operators in high dimensions based on deep neural networks.

Ladder Loss for Coherent Visual-Semantic Embedding

2 code implementations18 Nov 2019 Mo Zhou, Zhenxing Niu, Le Wang, Zhanning Gao, Qilin Zhang, Gang Hua

For visual-semantic embedding, the existing methods normally treat the relevance between queries and candidates in a bipolar way -- relevant or irrelevant, and all "irrelevant" candidates are uniformly pushed away from the query by an equal margin in the embedding space, regardless of their various proximity to the query.

Retrieval
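A toy sketch of the ladder idea: instead of one margin for all "irrelevant" candidates, assign larger margins to pairs whose relevance degrees are further apart. The pairwise hinge form and the margin schedule below are illustrative, not the paper's exact loss.

```python
def ladder_loss(sim_query, relevance, margins=(0.1, 0.2, 0.3)):
    """Toy ladder loss: a candidate at relevance level L must score higher
    than every candidate at any lower level, by a margin that grows with
    the gap in levels. sim_query[i] is candidate i's similarity to the
    query; relevance[i] is an integer degree (higher = more relevant)."""
    loss = 0.0
    for s_i, l_i in zip(sim_query, relevance):
        for s_j, l_j in zip(sim_query, relevance):
            if l_i > l_j:                              # i should outrank j
                m = margins[min(l_i - l_j - 1, len(margins) - 1)]
                loss += max(0.0, m - (s_i - s_j))      # hinge on the margin
    return loss

# Correctly ordered similarities incur zero loss; a reversal is penalized.
print(ladder_loss([0.9, 0.5, 0.1], [2, 1, 0]))         # -> 0.0
print(ladder_loss([0.1, 0.5, 0.9], [2, 1, 0]) > 0)     # -> True
```

The graded margins are what pull moderately relevant candidates closer to the query than completely irrelevant ones, rather than pushing both away equally.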

Towards Understanding the Importance of Shortcut Connections in Residual Networks

no code implementations NeurIPS 2019 Tianyi Liu, Minshuo Chen, Mo Zhou, Simon S. Du, Enlu Zhou, Tuo Zhao

We show, however, that gradient descent combined with proper normalization, avoids being trapped by the spurious local optimum, and converges to a global optimum in polynomial time, when the weight of the first layer is initialized at 0, and that of the second layer is initialized arbitrarily in a ball.

Towards Understanding the Importance of Noise in Training Neural Networks

no code implementations7 Sep 2019 Mo Zhou, Tianyi Liu, Yan Li, Dachao Lin, Enlu Zhou, Tuo Zhao

Numerous empirical results have corroborated that noise plays a crucial role in the effective and efficient training of neural networks.

Hierarchical Multimodal LSTM for Dense Visual-Semantic Embedding

no code implementations ICCV 2017 Zhenxing Niu, Mo Zhou, Le Wang, Xinbo Gao, Gang Hua

We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space.

Sentence

Ordinal Regression With Multiple Output CNN for Age Estimation

no code implementations CVPR 2016 Zhenxing Niu, Mo Zhou, Le Wang, Xinbo Gao, Gang Hua

To address the non-stationary property of aging patterns, age estimation can be cast as an ordinal regression problem.

Age Estimation Binary Classification +3
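The ordinal-regression casting can be sketched as K−1 binary "is the age greater than k?" tasks whose positive answers are counted at decode time. A minimal illustration of the decoding step only; the multiple-output CNN that produces the probabilities is omitted, and the threshold of 0.5 is a common convention rather than a detail taken from the paper.

```python
def rank_from_binary_outputs(probs_gt, threshold=0.5):
    """Decode an ordinal prediction: probs_gt[k] = P(rank > k) for
    k = 0..K-2. The predicted rank is the number of 'greater-than'
    subtasks answered yes."""
    return sum(p > threshold for p in probs_gt)

# K = 5 age bins; the outputs say age > 0, > 1, > 2, but not > 3.
print(rank_from_binary_outputs([0.9, 0.8, 0.7, 0.2]))   # -> 3
```

Unlike treating each age as an unrelated class, this decoding respects the ordering: flipping one subtask changes the prediction by exactly one rank.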
