Search Results for author: Shihua Zhang

Found 17 papers, 4 papers with code

Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation

1 code implementation • 10 Feb 2023 • Rui Zhang, Qi Meng, Rongchan Zhu, Yue Wang, Wenlei Shi, Shihua Zhang, Zhi-Ming Ma, Tie-Yan Liu

To address these limitations, we propose the Monte Carlo Neural PDE Solver (MCNP Solver) for training unsupervised neural solvers via the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
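
For background, the probabilistic representation this line of work builds on can be illustrated with the Feynman-Kac formula for the 1-D heat equation, where the solution is an expectation over random particle endpoints. The sketch below is a plain Monte Carlo illustration of that representation (function names and parameters are illustrative), not the MCNP Solver itself.

import numpy as np

def heat_mc(u0, x, t, nu=1.0, n_particles=100_000, rng=None):
    # Feynman-Kac for u_t = nu * u_xx with initial condition u0:
    #   u(x, t) = E[ u0(x + sqrt(2 * nu * t) * Z) ],  Z ~ N(0, 1).
    # Each sample of Z plays the role of one "random particle".
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n_particles)
    return u0(x + np.sqrt(2.0 * nu * t) * z).mean()

# Example: a diffusing Gaussian bump evaluated at a single space-time point.
u0 = lambda s: np.exp(-s**2 / 0.1)
print(heat_mc(u0, x=0.3, t=0.05))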

Rethinking Influence Functions of Neural Networks in the Over-parameterized Regime

no code implementations • 15 Dec 2021 • Rui Zhang, Shihua Zhang

However, the classic implicit Hessian-vector product (IHVP) method for calculating IF is fragile, and theoretical analysis of IF in the context of neural networks is still lacking.
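
For context, the influence function (IF) discussed here is classically computed as $-\nabla_\theta L(z_{test})^T H^{-1} \nabla_\theta L(z_i)$, with $H$ a (damped) Hessian of the training loss; IHVP methods estimate the $H^{-1}v$ product without forming $H$. The sketch below forms the Hessian explicitly for a tiny logistic-regression model, purely to show the quantity being approximated (names and the damping parameter are illustrative).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def influences(X, y, w, x_test, y_test, damp=1e-2):
    # Influence of each training point on the test loss:
    #   I_i = -g_test^T (H + damp * I)^{-1} g_i,
    # where g_i are per-example gradients of the logistic loss at the fitted
    # weights w and H is the empirical Hessian. A dense H only works for small
    # models; IHVP methods avoid forming it for neural networks.
    p = sigmoid(X @ w)
    grads = (p - y)[:, None] * X                        # per-example gradients
    H = (X * (p * (1 - p))[:, None]).T @ X / len(y)     # empirical Hessian
    H += damp * np.eye(len(w))                          # damping for stability
    g_test = (sigmoid(x_test @ w) - y_test) * x_test
    return -grads @ np.linalg.solve(H, g_test)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200) > 0).astype(float)
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]        # rough stand-in for fitted weights
print(influences(X, y, w, X[0], y[0])[:5])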

Information-theoretic Classification Accuracy: A Criterion that Guides Data-driven Combination of Ambiguous Outcome Labels in Multi-class Classification

1 code implementation • 1 Sep 2021 • Chihao Zhang, Yiling Elaine Chen, Shihua Zhang, Jingyi Jessica Li

While practitioners commonly combine ambiguous outcome labels for all data points (instances) in an ad hoc way to improve the accuracy of multi-class classification, there is no principled approach that guides the label combination for all data points according to any optimality criterion.

Classification • Multi-class Classification • +1

Adversarial Information Bottleneck

no code implementations • 28 Feb 2021 • Penglong Zhai, Shihua Zhang

The information bottleneck (IB) principle has been adopted to explain deep learning in terms of information compression and prediction, which are balanced by a trade-off hyperparameter.

Adversarial Robustness
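
For reference, the IB trade-off mentioned in the entry above balances a prediction term against a compression term through a hyperparameter $\beta$. The snippet below is a textbook variational-IB style loss with a Gaussian encoder, included only to make that trade-off concrete; it is not the adversarial formulation proposed in this paper, and all names are illustrative.

import torch
import torch.nn.functional as F

def vib_loss(logits, y, mu, logvar, beta=1e-3):
    # Variational information-bottleneck style objective:
    #   E[-log q(y|t)]  +  beta * KL( p(t|x) || N(0, I) ),
    # i.e. a prediction term balanced against a compression term by beta.
    # logits: (B, C) class scores decoded from t; y: (B,) integer labels;
    # mu, logvar: (B, d) parameters of the Gaussian encoder p(t|x).
    pred = F.cross_entropy(logits, y)
    kl = -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return pred + beta * kl

B, C, d = 8, 3, 16
print(vib_loss(torch.randn(B, C), torch.randint(0, C, (B,)),
               torch.randn(B, d), torch.zeros(B, d)))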

A Mathematical Principle of Deep Learning: Learn the Geodesic Curve in the Wasserstein Space

no code implementations • 18 Feb 2021 • Kuo Gai, Shihua Zhang

In a word, we conclude that a mathematical principle of deep learning is to learn the geodesic curve in the Wasserstein space; deep learning is thus a great engineering realization of continuous transformation in high-dimensional space.
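
For readers unfamiliar with the term, the geodesic referred to here is McCann's displacement interpolation: for measures $\mu_0$ and $\mu_1$ with optimal transport map $T$, the constant-speed Wasserstein-2 geodesic is

$\mu_t = \big((1-t)\,\mathrm{id} + t\,T\big)_{\#}\,\mu_0, \qquad t \in [0,1],$

so intermediate distributions are obtained by moving mass linearly along optimal transport paths. This is standard optimal-transport background, not a result specific to this paper.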

Learnable Graph-regularization for Matrix Decomposition

no code implementations • 16 Oct 2020 • Penglong Zhai, Shihua Zhang

Low-rank approximation models of data matrices have become important machine learning and data mining tools in many fields, including computer vision, text mining, and bioinformatics, among others.

graph construction

Tessellated Wasserstein Auto-Encoders

no code implementations • 20 May 2020 • Kuo Gai, Shihua Zhang

Non-adversarial generative models such as the variational auto-encoder (VAE), the Wasserstein auto-encoder with maximum mean discrepancy (WAE-MMD), and the sliced-Wasserstein auto-encoder (SWAE) are relatively easy to train and suffer less from mode collapse than the Wasserstein auto-encoder with a generative adversarial network (WAE-GAN).

Generative Adversarial Network
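
As background for the SWAE family mentioned in the entry above, the sliced Wasserstein distance reduces a d-dimensional comparison to many one-dimensional ones via random projections. The sketch below is a generic estimator (equal sample sizes assumed; names are illustrative), not the tessellation scheme introduced in this paper.

import numpy as np

def sliced_wasserstein2(x, y, n_proj=200, rng=None):
    # Monte Carlo sliced Wasserstein-2 distance between point clouds x, y of
    # shape (n, d): project onto random unit directions, sort the 1-D
    # projections, and compare them quantile-by-quantile.
    # Assumes x and y contain the same number of points.
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.standard_normal((n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    return np.sqrt(((px - py) ** 2).mean())

rng = np.random.default_rng(0)
print(sliced_wasserstein2(rng.normal(0, 1, (500, 2)),
                          rng.normal(1, 1, (500, 2)), rng=rng))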

Distributed Bayesian Matrix Decomposition for Big Data Mining and Clustering

2 code implementations • 10 Feb 2020 • Chihao Zhang, Yang Yang, Wei Zhang, Shihua Zhang

Such a method should scale up well, model the heterogeneous noise, and address the communication issue in a distributed system.

Clustering • Distributed Computing

Towards Understanding Residual and Dilated Dense Neural Networks via Convolutional Sparse Coding

no code implementations • 5 Dec 2019 • Zhiyang Zhang, Shihua Zhang

Inspired by these considerations, we propose two novel multi-layer models, the residual convolutional sparse coding model (Res-CSC) and the mixed-scale dense convolutional sparse coding model (MSD-CSC), which are closely related to the residual neural network (ResNet) and the mixed-scale (dilated) dense neural network (MSDNet), respectively.
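
The kind of structural link being drawn here can be seen in the iterative soft-thresholding (ISTA) update for sparse coding, whose "add a correction to the current code, then apply a nonlinearity" form resembles a residual block. The sketch below is a generic ISTA step (not the Res-CSC or MSD-CSC models themselves; names are illustrative).

import numpy as np

def ista_step(z, x, D, lam, step):
    # One ISTA iteration for  min_z 0.5 * ||x - D z||^2 + lam * ||z||_1.
    # Note the skip-connection-like structure: z plus a correction term,
    # followed by a (soft-)thresholding nonlinearity.
    r = z + step * D.T @ (x - D @ z)
    return np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
x = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
z = np.zeros(50)
for _ in range(500):
    z = ista_step(z, x, D, lam=1.0, step=step)
print(np.count_nonzero(z))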

Matrix Normal PCA for Interpretable Dimension Reduction and Graphical Noise Modeling

no code implementations • 25 Nov 2019 • Chihao Zhang, Kuo Gai, Shihua Zhang

However, most of the existing methods assume that the noise is correlated only in the feature space, while two-way structured noise may exist.

Dimensionality Reduction
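
Concretely, "two-way structured noise" can be modeled with a matrix-normal distribution, where one covariance matrix captures correlation across samples (rows) and another across features (columns). The sampler below illustrates that noise model in isolation (a generic sketch, not the MN-PCA estimator; names are illustrative).

import numpy as np

def matrix_normal_sample(M, U, V, rng=None):
    # Draw E ~ MN(M, U, V), i.e. vec(E) ~ N(vec(M), V kron U): U models
    # row (sample) correlation and V models column (feature) correlation,
    # which together give two-way structured noise.
    rng = np.random.default_rng() if rng is None else rng
    Lu = np.linalg.cholesky(U)
    Lv = np.linalg.cholesky(V)
    Z = rng.standard_normal(M.shape)
    return M + Lu @ Z @ Lv.T

n, p = 5, 4
U = 0.5 * np.ones((n, n)) + 0.5 * np.eye(n)   # correlated rows
V = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)   # correlated columns
print(matrix_normal_sample(np.zeros((n, p)), U, V).shape)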

Group-sparse SVD Models and Their Applications in Biological Data

no code implementations • 28 Jul 2018 • Wenwen Min, Juan Liu, Shihua Zhang

We employ an alternating direction method of multipliers (ADMM) to solve the proximal operator.

Variable Selection
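
For context, the proximal operator that the ADMM step mentioned above has to solve is, in the plain group-lasso case, block soft-thresholding. The sketch below shows that operator for a generic grouping (illustrative names; the paper's group-sparse SVD penalties add further structure).

import numpy as np

def prox_group_l2(v, groups, tau):
    # Proximal operator of the group-lasso penalty tau * sum_g ||v_g||_2:
    # each group is either zeroed out or shrunk toward zero as a block.
    out = np.zeros_like(v)
    for g in groups:                         # g is an index array for one group
        norm = np.linalg.norm(v[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * v[g]
    return out

v = np.array([0.5, -0.2, 1.5, 0.1, -1.2, 0.05])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(prox_group_l2(v, groups, tau=0.6))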

Bayesian Joint Matrix Decomposition for Data Integration with Heterogeneous Noise

no code implementations • 9 Dec 2017 • Chihao Zhang, Shihua Zhang

A few matrix decomposition methods have been extended for such multi-view data integration and pattern discovery.

Bayesian Inference • Data Integration

Sparse Weighted Canonical Correlation Analysis

no code implementations • 13 Oct 2017 • Wenwen Min, Juan Liu, Shihua Zhang

Given two data matrices $X$ and $Y$, sparse canonical correlation analysis (SCCA) seeks two sparse canonical vectors $u$ and $v$ that maximize the correlation between $Xu$ and $Yv$.
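
One common heuristic for this objective alternates power-iteration-style updates of $u$ and $v$ with soft-thresholding to induce sparsity. The sketch below illustrates that generic scheme (not the weighted formulation of this paper; thresholds and names are illustrative).

import numpy as np

def soft(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def scca(X, Y, t_u=0.1, t_v=0.1, n_iter=100):
    # Alternating sparse CCA: update u from X^T Y v and v from Y^T X u,
    # soft-threshold for sparsity, and rescale to unit norm.
    C = X.T @ Y
    v = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
    for _ in range(n_iter):
        u = soft(C @ v, t_u)
        u /= np.linalg.norm(u) + 1e-12
        v = soft(C.T @ u, t_v)
        v /= np.linalg.norm(v) + 1e-12
    return u, v

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((100, 8)), rng.standard_normal((100, 6))
u, v = scca(X, Y)
print(np.corrcoef(X @ u, Y @ v)[0, 1])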

Sparse Deep Nonnegative Matrix Factorization

no code implementations • 28 Jul 2017 • Zhenxing Guo, Shihua Zhang

Deep learning, however, with its carefully designed hierarchical structure, is able to combine hidden features to form more representative features for pattern recognition.

Benchmarking • Dimensionality Reduction • +2

A Unified Joint Matrix Factorization Framework for Data Integration

1 code implementation • 25 Jul 2017 • Lihua Zhang, Shihua Zhang

In this paper, we introduce a sparse multiple relationship data regularized joint matrix factorization (JMF) framework and two adapted prediction models for pattern recognition and data integration.

Data Integration

Network-regularized Sparse Logistic Regression Models for Clinical Risk Prediction and Biomarker Discovery

no code implementations • 21 Sep 2016 • Wenwen Min, Juan Liu, Shihua Zhang

To address this, we introduce a novel network-regularized sparse LR model with a new penalty $\lambda \|\bm{w}\|_1 + \eta|\bm{w}|^T\bm{M}|\bm{w}|$ to account for the difference between the absolute values of the coefficients.

regression
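
The penalty quoted above is straightforward to evaluate: an $\ell_1$ term plus a quadratic form in the absolute values of the coefficients, with $\bm{M}$ derived from a network. The snippet below just computes the regularized logistic objective for arbitrary inputs ($M$ here is a placeholder identity matrix; all names are illustrative).

import numpy as np

def network_penalty(w, M, lam, eta):
    # lam * ||w||_1 + eta * |w|^T M |w|  -- using |w| lets coefficients of
    # connected features share magnitude regardless of sign.
    a = np.abs(w)
    return lam * a.sum() + eta * a @ M @ a

def objective(w, X, y, M, lam=0.1, eta=0.1):
    # Average logistic loss (labels y in {0, 1}) plus the network penalty.
    s = 2 * y - 1
    return np.mean(np.log1p(np.exp(-s * (X @ w)))) + network_penalty(w, M, lam, eta)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = rng.integers(0, 2, 50)
M = np.eye(10)                               # placeholder for a network-derived matrix
print(objective(0.1 * np.ones(10), X, y, M))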

L0-norm Sparse Graph-regularized SVD for Biclustering

no code implementations • 19 Mar 2016 • Wenwen Min, Juan Liu, Shihua Zhang

Motivated by the development of sparse coding and graph-regularized norms, we propose a novel sparse graph-regularized SVD as a powerful biclustering tool for analyzing high-dimensional data.

Blocking
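
To make the connection to biclustering concrete: a rank-1 SVD with hard-thresholded (L0-constrained) singular vectors selects a subset of rows and columns, and those subsets form a bicluster. The sketch below is a plain L0 variant without the graph-regularization term of the paper (all names are illustrative).

import numpy as np

def topk(v, k):
    # Keep the k largest-magnitude entries of v (an L0-style projection).
    out = np.zeros_like(v)
    idx = np.argsort(-np.abs(v))[:k]
    out[idx] = v[idx]
    return out

def sparse_rank1_svd(X, ku, kv, n_iter=50):
    # Rank-1 sparse SVD by alternating power iterations with hard thresholding
    # on the left/right singular vectors; the supports of u and v index a
    # bicluster (rows x columns).
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(n_iter):
        u = topk(X @ v, ku)
        u /= np.linalg.norm(u) + 1e-12
        v = topk(X.T @ u, kv)
        v /= np.linalg.norm(v) + 1e-12
    return u, v

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 30))
X[:10, :8] += 3.0                             # planted bicluster
u, v = sparse_rank1_svd(X, ku=10, kv=8)
print(np.nonzero(u)[0], np.nonzero(v)[0])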
