Search Results for author: Xiangxiang Xu

Found 9 papers, 2 papers with code

Operator SVD with Neural Networks via Nested Low-Rank Approximation

1 code implementation • 6 Feb 2024 • J. Jon Ryu, Xiangxiang Xu, H. S. Melihcan Erol, Yuheng Bu, Lizhong Zheng, Gregory W. Wornell

Computing eigenvalue decomposition (EVD) of a given linear operator, or finding its leading eigenvalues and eigenfunctions, is a fundamental task in many machine learning and scientific computing problems.
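For context on the classical task this paper addresses with neural networks, a minimal NumPy sketch of power iteration for the leading eigenpair of a symmetric matrix (illustrative only; the paper's nested low-rank method is not shown here):

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    """Estimate the leading eigenvalue/eigenvector of a symmetric matrix A."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    eigval = 0.0
    for _ in range(num_iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        new_eigval = v_new @ A @ v_new  # Rayleigh quotient
        if abs(new_eigval - eigval) < tol:
            return new_eigval, v_new
        eigval, v = new_eigval, v_new
    return eigval, v

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, vec = power_iteration(A)
# leading eigenvalue of [[2, 1], [1, 2]] is 3
```

Further eigenpairs can be obtained by deflating the matrix and repeating; the paper's contribution is doing this kind of spectral computation for general linear operators via learned feature functions.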

A Geometric Framework for Neural Feature Learning

1 code implementation • 18 Sep 2023 • Xiangxiang Xu, Lizhong Zheng

We present a novel framework for learning system design based on neural feature extractors.

Density Ratio Estimation

Kernel Subspace and Feature Extraction

no code implementations • 4 Jan 2023 • Xiangxiang Xu, Lizhong Zheng

We study kernel methods in machine learning from the perspective of feature subspace.
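A common concrete instance of the feature-subspace view of kernel methods is extracting the dominant subspace from the eigendecomposition of the centered Gram matrix, as in kernel PCA. A minimal sketch under that standard setup (illustrative only, not the paper's specific construction):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_feature_subspace(X, k=2, gamma=1.0):
    """Top-k eigenpairs of the centered Gram matrix; the eigenvectors
    span the dominant feature subspace (kernel PCA directions)."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    return vals[::-1][:k], vecs[:, ::-1][:, :k]

X = np.random.default_rng(0).normal(size=(50, 3))
vals, vecs = kernel_feature_subspace(X, k=2)
```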

A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning

no code implementations • NeurIPS 2021 • Xinyi Tong, Xiangxiang Xu, Shao-Lun Huang, Lizhong Zheng

Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impacts of the sample sizes of these tasks are often not sufficiently addressed.

Image Classification • Transfer Learning

On Distributed Learning with Constant Communication Bits

no code implementations • 14 Sep 2021 • Xiangxiang Xu, Shao-Lun Huang

Specifically, we consider the distributed hypothesis testing (DHT) problem where two distributed nodes are constrained to transmit a constant number of bits to a central decoder.

Maximum Likelihood Estimation for Multimodal Learning with Missing Modality

no code implementations • 24 Aug 2021 • Fei Ma, Xiangxiang Xu, Shao-Lun Huang, Lin Zhang

Moreover, we develop a generalized form of the softmax function to effectively implement maximum likelihood estimation in an end-to-end manner.
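For reference, the standard (non-generalized) softmax likelihood that such a construction extends: minimizing the negative log-likelihood of a categorical model parameterized by logits performs maximum likelihood estimation end to end. A minimal sketch (the paper's generalized softmax for missing modalities is not reproduced here):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels):
    """Negative log-likelihood of the labels under softmax(logits);
    minimizing this loss is maximum likelihood estimation."""
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

logits = np.array([[2.0, 0.5, -1.0], [0.1, 3.0, 0.2]])
labels = np.array([0, 1])
loss = nll(logits, labels)
```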

An Information-theoretic Approach to Unsupervised Feature Selection for High-Dimensional Data

no code implementations • 8 Oct 2019 • Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng

In this paper, we propose an information-theoretic approach to design the functional representations to extract the hidden common structure shared by a set of random variables.

Feature Selection

An Information Theoretic Interpretation to Deep Neural Networks

no code implementations • 16 May 2019 • Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng, Gregory W. Wornell

It is commonly believed that the hidden layers of deep neural networks (DNNs) attempt to extract informative features for learning tasks.

Feature Selection
