Search Results for author: Jiaxin Zhang

Found 21 papers, 5 papers with code

Atomic structure generation from reconstructing structural fingerprints

1 code implementation · 27 Jul 2022 · Victor Fung, Shuyi Jia, Jiaxin Zhang, Sirui Bi, Junqi Yin, P. Ganesh

These methods would help identify or, in the case of generative models, even create novel crystal structures of materials with a set of specified functional properties to then be synthesized or isolated in the laboratory.

BIG-bench Machine Learning

Marior: Margin Removal and Iterative Content Rectification for Document Dewarping in the Wild

no code implementations · 23 Jul 2022 · Jiaxin Zhang, Canjie Luo, Lianwen Jin, Fengjun Guo, Kai Ding

To address this issue, we propose a novel approach called Marior (Margin Removal and Iterative Content Rectification).

Optical Character Recognition

Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage

1 code implementation · CVPR 2022 · Zhuohang Li, Jiaxin Zhang, Luyang Liu, Jian Liu

The Federated Learning (FL) framework brings privacy benefits to distributed learning systems by allowing multiple clients to participate in a learning task under the coordination of a central server without exchanging their private data.

Federated Learning

Road-aware Monocular Structure from Motion and Homography Estimation

no code implementations · 16 Dec 2021 · Wei Sui, Teng Chen, Jiaxin Zhang, Jiao Lu, Qian Zhang

The Depth-CNN and Pose-CNN estimate the dense depth map and ego-motion, respectively, solving SfM, while the Pose-CNN and Ground-CNN, followed by a homography layer, solve the ground-plane estimation problem.

Autonomous Driving · Homography Estimation
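For reference, a homography layer of the kind mentioned in the snippet builds on the textbook plane-induced homography H = K (R - t n^T / d) K^{-1} for a camera with intrinsics K moving by (R, t) relative to a plane with unit normal n at distance d. The sketch below is a generic numpy illustration of that standard relation, not the paper's implementation:

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography induced by the plane n . X = d under camera motion (R, t):
    H = K (R - t n^T / d) K^{-1}. Standard two-view relation; a generic
    sketch, not the paper's homography layer."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

# Toy example: identity rotation, small forward translation, ground plane.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.1])      # small forward motion
n = np.array([0.0, -1.0, 0.0])     # ground-plane normal (camera y points down)
d = 1.5                            # camera height above the ground
H = plane_induced_homography(K, R, t, n, d)

# Map a pixel lying on the ground from one view into the other.
p = np.array([320.0, 400.0, 1.0])
q = H @ p
q = q / q[2]
```

With zero motion the relation collapses to the identity mapping, which makes a convenient sanity check.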

SPTS: Single-Point Text Spotting

no code implementations · 15 Dec 2021 · Dezhi Peng, Xinyu Wang, Yuliang Liu, Jiaxin Zhang, Mingxin Huang, Songxuan Lai, Shenggao Zhu, Jing Li, Dahua Lin, Chunhua Shen, Xiang Bai, Lianwen Jin

For the first time, we demonstrate that training scene text spotting models can be achieved with an extremely low-cost annotation of a single point for each instance.

Language Modelling · Text Spotting

On the Stochastic Stability of Deep Markov Models

no code implementations · NeurIPS 2021 · Ján Drgoňa, Sayak Mukherjee, Jiaxin Zhang, Frank Liu, Mahantesh Halappanavar

Deep Markov models (DMMs) are scalable and expressive generative models that generalize Markov models for representation, learning, and inference problems.

Representation Learning
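As a hedged illustration of the generative structure the snippet describes (not the paper's model), a deep Markov model keeps the Markov factorization z_t ~ p(z_t | z_{t-1}), x_t ~ p(x_t | z_t) but replaces fixed transition and emission tables with learned nonlinear maps. A minimal numpy sketch with toy random tanh networks standing in for the learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dmm(T, z_dim=2, x_dim=3):
    """Ancestral sampling from a toy deep Markov model:
        z_t ~ N(f(z_{t-1}), 0.1^2 I),  x_t ~ N(g(z_t), 0.1^2 I),
    where f and g are fixed random tanh maps standing in for learned networks."""
    Wz = rng.normal(size=(z_dim, z_dim)) / np.sqrt(z_dim)  # transition weights
    Wx = rng.normal(size=(x_dim, z_dim)) / np.sqrt(z_dim)  # emission weights
    z = np.zeros(z_dim)
    zs, xs = [], []
    for _ in range(T):
        z = np.tanh(Wz @ z) + 0.1 * rng.normal(size=z_dim)  # latent transition
        x = Wx @ z + 0.1 * rng.normal(size=x_dim)           # emission
        zs.append(z)
        xs.append(x)
    return np.array(zs), np.array(xs)

zs, xs = sample_dmm(T=50)
```

The stochastic latent chain is what the paper's stability analysis concerns; this sketch only shows the sampling structure.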

AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation

no code implementations · 29 Sep 2021 · Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, Peng Li

Although various flow models based on different transformations have been proposed, a quantitative analysis of the performance-cost trade-offs between different flows, as well as a systematic way of constructing the best flow architecture, is still lacking.

Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates

no code implementations · 3 Jul 2021 · Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu

Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data local to each client.

Federated Learning
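The coordination pattern described in the snippet can be sketched as federated averaging (FedAvg): each client takes a few local gradient steps on its private data, and only model parameters travel to the server, which averages them. This is a generic illustration of the FL setting, not the robust aggregation scheme studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.1, epochs=5):
    """A client's local training: a few gradient steps of linear regression
    on its private data; only the updated weights leave the device."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each holding private data drawn around the same true model.
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ w_true + 0.01 * rng.normal(size=40)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                            # communication rounds
    local_models = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)   # server averages the updates
```

A Byzantine-robust variant, as in the paper, would replace the plain mean with an aggregation rule that screens anomalous client updates.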

Inverse design of two-dimensional materials with invertible neural networks

1 code implementation · 6 Jun 2021 · Victor Fung, Jiaxin Zhang, Guoxiang Hu, P. Ganesh, Bobby G. Sumpter

The ability to readily design novel materials with chosen functional properties on-demand represents a next frontier in materials discovery.

Band Gap

Deep Online Correction for Monocular Visual Odometry

no code implementations · 18 Mar 2021 · Jiaxin Zhang, Wei Sui, Xinggang Wang, Wenming Meng, Hongmei Zhu, Qian Zhang

Second, the poses predicted by CNNs are further improved by minimizing photometric errors via gradient updates of the poses during the inference phase.

Monocular Visual Odometry · online learning
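The inference-time refinement the snippet describes can be illustrated in one dimension: start from a coarse shift estimate (standing in for a CNN pose prediction) and descend the photometric error by numerical gradient. A hedged toy, not the paper's pipeline:

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 400)

def warp(shift):
    return np.sin(x - shift)          # a 1-D "frame" rendered at a given shift

target = warp(0.7)                    # observed frame; true shift is 0.7

def photometric_error(shift):
    return np.mean((warp(shift) - target) ** 2)

shift = 0.3                           # coarse initial estimate ("CNN output")
lr, eps = 0.5, 1e-4
for _ in range(200):                  # gradient updates at inference time
    g = (photometric_error(shift + eps) - photometric_error(shift - eps)) / (2 * eps)
    shift -= lr * g                   # shift converges toward the true 0.7
```

In the real system the "pose" is a 6-DoF camera motion and the photometric error is computed over warped image pixels, but the optimization loop has the same shape.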

A Hybrid Gradient Method to Designing Bayesian Experiments for Implicit Models

no code implementations · 14 Mar 2021 · Jiaxin Zhang, Sirui Bi, Guannan Zhang

However, the approach of Kleinegesse et al. (2020) requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a pathwise gradient is usually inaccessible for implicit models.

Experimental Design

A Scalable Gradient-Free Method for Bayesian Experimental Design with Implicit Models

no code implementations · 14 Mar 2021 · Jiaxin Zhang, Sirui Bi, Guannan Zhang

However, the approach requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a pathwise gradient is usually inaccessible for implicit models.

Experimental Design

Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution

1 code implementation · 24 Jan 2021 · Jiapeng Wang, Chongyu Liu, Lianwen Jin, Guozhi Tang, Jiaxin Zhang, Shuaitao Zhang, Qianying Wang, Yaqiang Wu, Mingxiang Cai

Visual information extraction (VIE) has attracted considerable attention recently owing to its various advanced applications such as document understanding, automatic marking and intelligent education.

3D Feature Matching · Text Spotting

Thermodynamic Consistent Neural Networks for Learning Material Interfacial Mechanics

no code implementations · 28 Nov 2020 · Jiaxin Zhang, Congjie Wei, Chenglin Wu

In this paper, we propose a thermodynamic consistent neural network (TCNN) approach to build a data-driven model of the TSR with sparse experimental data.

Scalable Deep-Learning-Accelerated Topology Optimization for Additively Manufactured Materials

no code implementations · 28 Nov 2020 · Sirui Bi, Jiaxin Zhang, Guannan Zhang

Unlike the existing studies of DL for TO, our framework accelerates TO by learning the iterative history data and simultaneously training on the mapping between the given design and its gradient.

A Novel Evolution Strategy with Directional Gaussian Smoothing for Blackbox Optimization

1 code implementation · 7 Feb 2020 · Jiaxin Zhang, Hoang Tran, Dan Lu, Guannan Zhang

Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators.
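The estimator whose variance the snippet refers to can be sketched directly: smooth f with an isotropic Gaussian and estimate its gradient by Monte Carlo, grad f_sigma(x) ≈ (1/N) sum_i [f(x + sigma u_i) - f(x)] u_i / sigma with u_i ~ N(0, I). This is a generic sketch of vanilla Gaussian-smoothing ES, not the paper's directional method:

```python
import numpy as np

rng = np.random.default_rng(2)

def gs_gradient(f, x, sigma=0.1, n_samples=100):
    """Monte Carlo estimate of the Gaussian-smoothed gradient
        grad f_sigma(x) ~ (1/N) sum_i [f(x + sigma u_i) - f(x)] u_i / sigma,
    u_i ~ N(0, I). Its variance grows with the dimension d, which is the
    issue the paper's directional smoothing is designed to avoid."""
    d = len(x)
    u = rng.normal(size=(n_samples, d))
    fx = f(x)
    fs = np.array([f(x + sigma * ui) for ui in u])
    return ((fs - fx)[:, None] * u).mean(axis=0) / sigma

# Sanity check on a quadratic, where the true gradient is 2x.
f = lambda z: np.sum(z ** 2)
x = np.ones(5)
g = gs_gradient(f, x, sigma=0.05, n_samples=20000)
```

Even on this easy 5-dimensional quadratic, tens of thousands of samples are needed for a tight estimate, which hints at the curse of dimensionality the paper addresses.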

Robust data-driven approach for predicting the configurational energy of high entropy alloys

no code implementations · 10 Aug 2019 · Jiaxin Zhang, Xianglin Liu, Sirui Bi, Junqi Yin, Guannan Zhang, Markus Eisenbach

In this study, a robust data-driven framework based on Bayesian approaches is proposed and demonstrated on the accurate and efficient prediction of configurational energy of high entropy alloys.

feature selection · Small Data Image Classification

Learning nonlinear level sets for dimensionality reduction in function approximation

no code implementations · NeurIPS 2019 · Guannan Zhang, Jiaxin Zhang, Jacob Hinkle

We developed a Nonlinear Level-set Learning (NLL) method for dimensionality reduction in high-dimensional function approximation with small data.

Functional Analysis

PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence

no code implementations · 26 Feb 2018 · Jinglan Liu, Jiaxin Zhang, Yukun Ding, Xiaowei Xu, Meng Jiang, Yiyu Shi

This work explores the binarization of the deconvolution-based generator in a GAN for memory saving and speedup of image construction.

Binarization

Power-Law Graph Cuts

no code implementations · 29 Oct 2014 · Xiangyang Zhou, Jiaxin Zhang, Brian Kulis

Despite strong performance for a number of clustering tasks, spectral graph cut algorithms still suffer from several limitations: first, they require the number of clusters to be known in advance, but this information is often unknown a priori; second, they tend to produce clusters with uniform sizes.

Semantic Segmentation
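The first limitation quoted in the snippet, needing the number of clusters up front, is visible in the basic spectral cut recipe: build the graph Laplacian, take the eigenvectors for the k smallest eigenvalues, and cluster their rows, where k must be supplied. A generic textbook sketch (not the paper's power-law method), specialized to k = 2 via the sign of the Fiedler vector:

```python
import numpy as np

def spectral_cut(W):
    """Basic two-way spectral partitioning. The cluster count must be chosen
    in advance, which is the limitation noted above; here it is hardwired to 2
    by splitting on the sign of the Fiedler vector (the eigenvector of the
    second-smallest Laplacian eigenvalue). k > 2 would require k-means on
    the rows of the first k eigenvectors."""
    D = np.diag(W.sum(axis=1))
    L = D - W                          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Two dense 4-node cliques joined by a single weak edge.
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 0.1                # weak bridge between the blocks
labels = spectral_cut(W)               # recovers the two cliques
```

The weak bridge makes the minimum cut obvious, so the sign pattern of the Fiedler vector cleanly separates the two blocks.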
