Search Results for author: Ji Xu

Found 26 papers, 3 papers with code

Underwater Acoustic Target Recognition based on Smoothness-inducing Regularization and Spectrogram-based Data Augmentation

no code implementations · 12 Jun 2023 · Ji Xu, Yuan Xie, Wenchao Wang

Underwater acoustic target recognition is a challenging task owing to the intricate underwater environments and limited data availability.

Data Augmentation

Underwater-Art: Expanding Information Perspectives With Text Templates For Underwater Acoustic Target Recognition

no code implementations · 31 May 2023 · Yuan Xie, Jiawei Ren, Ji Xu

In our work, we propose to implement Underwater Acoustic Recognition based on Templates made up of rich relevant information (hereinafter "UART").

Contrastive Learning · Descriptive

Adaptive ship-radiated noise recognition with learnable fine-grained wavelet transform

no code implementations · 31 May 2023 · Yuan Xie, Jiawei Ren, Ji Xu

Background noise and variable channel transmission environment make it complicated to implement accurate ship-radiated noise recognition.

Transfer Learning

Advancing underwater acoustic target recognition via adaptive data pruning and smoothness-inducing regularization

no code implementations · 24 Apr 2023 · Yuan Xie, Tianyu Chen, Ji Xu

Underwater acoustic recognition for ship-radiated signals has high practical application value due to the ability to recognize non-line-of-sight targets.

Determinate Node Selection for Semi-supervised Classification Oriented Graph Convolutional Networks

no code implementations · 11 Jan 2023 · Yao Xiao, Ji Xu, Jing Yang, Shaobo Li

Graph Convolutional Networks (GCNs) have proven successful in the field of semi-supervised node classification by extracting structural information from graph data.

Node Classification
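As an illustrative aside (not code from this paper), the GCN propagation the abstract refers to can be sketched in numpy; the tiny graph, features, and weights below are made up for the example:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# tiny 3-node path graph, 2 input features, identity weights
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.eye(2)
H = gcn_layer(A, X, W)
```

Each node's new representation mixes its own features with its neighbors', which is the "structural information" a GCN extracts.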

Semi-supervised Learning with Deterministic Labeling and Large Margin Projection

1 code implementation · 17 Aug 2022 · Ji Xu, Gang Ren, Yao Xiao, Shaobo Li, Guoyin Wang

Optimal leading forest (OLF) has been observed to have the advantage of revealing the difference evolution along a path within a subtree.

Active Learning · Attribute

Open Source MagicData-RAMC: A Rich Annotated Mandarin Conversational (RAMC) Speech Dataset

no code implementations · 31 Mar 2022 · Zehui Yang, Yifan Chen, Lei Luo, Runyan Yang, Lingxuan Ye, Gaofeng Cheng, Ji Xu, Yaohui Jin, Qingqing Zhang, Pengyuan Zhang, Lei Xie, Yonghong Yan

As a Mandarin speech dataset designed for dialog scenarios with high quality and rich annotations, MagicData-RAMC enriches the data diversity in the Mandarin speech community and allows extensive research on a series of speech-related tasks, including automatic speech recognition, speaker diarization, topic detection, keyword search, text-to-speech, etc.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +3

Improving CTC-based speech recognition via knowledge transferring from pre-trained language models

1 code implementation · 22 Feb 2022 · Keqi Deng, Songjun Cao, Yike Zhang, Long Ma, Gaofeng Cheng, Ji Xu, Pengyuan Zhang

Recently, end-to-end automatic speech recognition models based on connectionist temporal classification (CTC) have achieved impressive results, especially when fine-tuned from wav2vec 2.0 models.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +3

Towards Designing Optimal Sensing Matrices for Generalized Linear Inverse Problems

no code implementations · NeurIPS 2021 · Junjie Ma, Ji Xu, Arian Maleki

We consider an inverse problem $\mathbf{y}= f(\mathbf{Ax})$, where $\mathbf{x}\in\mathbb{R}^n$ is the signal of interest, $\mathbf{A}$ is the sensing matrix, $f$ is a nonlinear function and $\mathbf{y} \in \mathbb{R}^m$ is the measurement vector.

Retrieval
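A minimal sketch of generating data from this measurement model, with the absolute value standing in for the nonlinear link $f$ (dimensions and link are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 200                                 # signal dimension, measurements
x = rng.standard_normal(n)                     # signal of interest
A = rng.standard_normal((m, n)) / np.sqrt(n)   # i.i.d. Gaussian sensing matrix
f = np.abs                                     # one possible nonlinear link
y = f(A @ x)                                   # measurement vector y = f(Ax)
```

The design question the paper studies is how the choice of A (beyond i.i.d. Gaussian) affects recovery of x from y.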

Analysis of Sensing Spectral for Signal Recovery under a Generalized Linear Model

no code implementations · NeurIPS 2021 · Junjie Ma, Ji Xu, Arian Maleki

We define a notion for the spikiness of the spectrum of $\mathbf{A}$ and show the importance of this measure in the performance of the EP.

Retrieval

Weak decays of doubly heavy baryons: four-body nonleptonic decay channels

no code implementations · 29 Jan 2021 · De-Min Li, Xi-Ruo Zhang, Ye Xing, Ji Xu

In this work, we analyze the four-body weak decays of doubly heavy baryons $\Xi_{cc}^{++}, \Xi_{cc}^+$, and $\Omega_{cc}^+$.

High Energy Physics - Phenomenology

On the proliferation of support vectors in high dimensions

no code implementations · 22 Sep 2020 · Daniel Hsu, Vidya Muthukumar, Ji Xu

The support vector machine (SVM) is a well-established classification method whose name refers to the particular training examples, called support vectors, that determine the maximum margin separating hyperplane.

General Classification · Vocal Bursts Intensity Prediction
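An illustrative experiment (assuming scikit-learn; not code from the paper): in the high-dimensional regime with many more features than examples, which is the setting the title's "proliferation" refers to, most or all training points tend to become support vectors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 40, 200                                   # few examples, many features
X = rng.standard_normal((n, d))
y = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)   # arbitrary balanced labels
clf = SVC(kernel="linear", C=1e6).fit(X, y)      # large C approximates hard margin
n_sv = clf.n_support_.sum()                      # number of support vectors
```

Comparing `n_sv` to `n` for growing `d` shows the fraction of support vectors rising toward one.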

When Does Preconditioning Help or Hurt Generalization?

no code implementations · ICLR 2021 · Shun-ichi Amari, Jimmy Ba, Roger Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu

While second order optimizers such as natural gradient descent (NGD) often speed up optimization, their effect on generalization has been called into question.

regression · Second-order methods
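A toy illustration (mine, not the paper's analysis) of why second-order preconditioning speeds up optimization: with an exact inverse-Hessian preconditioner, a single preconditioned gradient step solves an ill-conditioned quadratic:

```python
import numpy as np

# ill-conditioned quadratic loss L(w) = 0.5 (w - w*)^T H (w - w*)
H = np.diag([100.0, 1.0])          # condition number 100
w_star = np.array([2.0, -3.0])
w = np.zeros(2)

grad = H @ (w - w_star)            # gradient at the current iterate
P = np.linalg.inv(H)               # exact preconditioner (Newton / NGD on a quadratic)
w = w - P @ grad                   # one step lands exactly on the optimum
```

Plain gradient descent on the same loss would need a small step size and many iterations; the generalization question the paper asks is what this rescaling does off the training set.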

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression

no code implementations · NeurIPS 2020 · Denny Wu, Ji Xu

Finally, we determine the optimal weighting matrix $\mathbf{\Sigma}_w$ for both the ridgeless ($\lambda\to 0$) and optimally regularized ($\lambda = \lambda_{\rm opt}$) case, and demonstrate the advantage of the weighted objective over standard ridge regression and PCR.

regression
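The weighted objective has a closed-form minimizer, which can be sketched in numpy (the data and the weighting matrix below are illustrative, not from the paper):

```python
import numpy as np

def weighted_ridge(X, y, lam, Sigma_w):
    """Minimize ||y - Xw||^2 + lam * w^T Sigma_w w via the normal equations."""
    return np.linalg.solve(X.T @ X + lam * Sigma_w, X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
y = rng.standard_normal(30)
w_std = weighted_ridge(X, y, 1.0, np.eye(5))                      # standard ridge
w_wtd = weighted_ridge(X, y, 1.0, np.diag([1.0, 1.0, 1.0, 10.0, 10.0]))  # weighted
```

Choosing Sigma_w amounts to penalizing some directions of w more than others; the paper characterizes the optimal such choice.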

On the number of variables to use in principal component regression

no code implementations · NeurIPS 2019 · Ji Xu, Daniel Hsu

We study least squares linear regression over $N$ uncorrelated Gaussian features that are selected in order of decreasing variance.

regression
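A minimal sketch of the setup (my simulation, not the paper's experiments): uncorrelated Gaussian features ordered by decreasing variance, with least squares fit on the first p of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 100, 20
variances = np.sort(rng.uniform(0.1, 5.0, N))[::-1]    # decreasing variances
X = rng.standard_normal((n, N)) * np.sqrt(variances)   # uncorrelated Gaussian features
beta = rng.standard_normal(N)
y = X @ beta + rng.standard_normal(n)

def fit_first_p(X, y, p):
    """Least squares using only the p highest-variance features."""
    w, *_ = np.linalg.lstsq(X[:, :p], y, rcond=None)
    return w

w5 = fit_first_p(X, y, 5)
```

The paper's question is how prediction risk behaves as p varies, including p larger than what classical guidance would suggest.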

Two models of double descent for weak features

no code implementations · 18 Mar 2019 · Mikhail Belkin, Daniel Hsu, Ji Xu

The "double descent" risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models.

BIG-bench Machine Learning · Vocal Bursts Valence Prediction
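A qualitative double-descent-style experiment (my setup, not the exact models analyzed in the paper): min-norm least squares with a growing number of features, passing through the interpolation threshold p = n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 40, 120
beta = rng.standard_normal(N) / np.sqrt(N)
X_tr = rng.standard_normal((n, N))
y_tr = X_tr @ beta + 0.1 * rng.standard_normal(n)
X_te = rng.standard_normal((1000, N))
y_te = X_te @ beta

def risk_with_p(p):
    """Min-norm least squares on the first p features; test MSE."""
    w = np.linalg.pinv(X_tr[:, :p]) @ y_tr   # pinv gives the min-norm solution
    return np.mean((X_te[:, :p] @ w - y_te) ** 2)

risks = [risk_with_p(p) for p in (10, 30, 40, 80, 120)]
```

Plotting risk against p typically shows the classical U-shape below p = n, a spike near p = n, and a second descent beyond it.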

Consistent Risk Estimation in Moderately High-Dimensional Linear Regression

no code implementations · 5 Feb 2019 · Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu

This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.

regression · Vocal Bursts Intensity Prediction
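For ridge regression specifically, LOOCV admits an exact shortcut via the hat matrix (a standard identity, not the paper's ALO/AMP estimators); a sketch verifying it against brute-force leave-one-out on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
lam = 1.0

# full-data ridge fit and hat matrix H = X (X'X + lam I)^-1 X'
G = np.linalg.inv(X.T @ X + lam * np.eye(p))
H = X @ G @ X.T
resid = y - H @ y
loo_shortcut = np.mean((resid / (1 - np.diag(H))) ** 2)

# brute-force leave-one-out: refit n times, one point held out each time
errs = []
for i in range(n):
    mask = np.arange(n) != i
    Gi = np.linalg.inv(X[mask].T @ X[mask] + lam * np.eye(p))
    w = Gi @ X[mask].T @ y[mask]
    errs.append((y[i] - X[i] @ w) ** 2)
loo_brute = np.mean(errs)
```

The shortcut costs one fit instead of n, which is why approximations of this flavor (ALO) matter when n and p are large.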

Benefits of over-parameterization with EM

no code implementations · NeurIPS 2018 · Ji Xu, Daniel Hsu, Arian Maleki

Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally guaranteed only to find stationary points of the log-likelihood objective.
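A minimal EM sketch for a balanced two-component Gaussian mixture with unit variances, estimating only the means (illustrative; not the over-parameterized setting the paper analyzes):

```python
import numpy as np

def em_two_gaussians(x, mu0, mu1, n_iter=50):
    """EM for the mixture 0.5 N(mu0, 1) + 0.5 N(mu1, 1), means only."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 (a sigmoid in x)
        r = 1.0 / (1.0 + np.exp(-(mu1 - mu0) * x + 0.5 * (mu1**2 - mu0**2)))
        # M-step: responsibility-weighted mean updates
        mu0 = np.sum((1 - r) * x) / np.sum(1 - r)
        mu1 = np.sum(r * x) / np.sum(r)
    return mu0, mu1

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
mu0, mu1 = em_two_gaussians(x, -0.5, 0.5)
```

Each iteration alternates a posterior computation with a weighted re-estimation of the means; whether such iterates reach a global rather than merely stationary point is exactly the question these EM papers study.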

Approximate message passing for amplitude based optimization

no code implementations · ICML 2018 · Junjie Ma, Ji Xu, Arian Maleki

We consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations.
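A minimal sketch of one common $\ell_2$-regularized amplitude-based objective consistent with this setup (the exact formulation is mine, for illustration): recover $x$ from noisy magnitudes $|Ax|$ by minimizing a non-convex loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 80
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(n)
y = np.abs(A @ x_true)              # phaseless (amplitude-only) observations

def amplitude_loss(x, lam=0.1):
    """0.5 * || |Ax| - y ||^2 + 0.5 * lam * ||x||^2 (non-convex in x)."""
    return 0.5 * np.sum((np.abs(A @ x) - y) ** 2) + 0.5 * lam * np.sum(x ** 2)

# in the noiseless case the data-fit term vanishes at the true signal
loss_true = amplitude_loss(x_true, lam=0.0)
```

The non-convexity comes from the absolute value inside the residual, which is what makes message-passing analyses of this problem nontrivial.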

Non-iterative Label Propagation in Optimal Leading Forest

no code implementations · 25 Sep 2017 · Ji Xu, Guoyin Wang

We propose the assumption that neighboring data points are not in a peer-to-peer relation but in a partial-order relation induced by local density and the distance between the data, and that the label of a center can be regarded as the contribution of its followers.

graph construction · Relation

An Improved Residual LSTM Architecture for Acoustic Modeling

no code implementations · 17 Aug 2017 · Lu Huang, Jiasong Sun, Ji Xu, Yi Yang

Long Short-Term Memory (LSTM) is the primary recurrent neural networks architecture for acoustic modeling in automatic speech recognition systems.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +1

Global analysis of Expectation Maximization for mixtures of two Gaussians

no code implementations · NeurIPS 2016 · Ji Xu, Daniel Hsu, Arian Maleki

Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models.

Vocal Bursts Valence Prediction

Leading Tree in DPCLUS and Its Impact on Building Hierarchies

no code implementations · 12 Jun 2015 · Ji Xu, Guoyin Wang

The LT offers two major advantages. One is that it dramatically reduces the running time of assigning non-center data points to their cluster IDs, because the assignment process reduces to disconnecting the link from each center to its parent.

Clustering
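A toy sketch of the assignment step described above (the tree and centers are made up; this is not the paper's code): once the link from each center to its parent is cut, every non-center point inherits its cluster ID by following parent pointers up to a center:

```python
import numpy as np

# a tiny leading tree given as parent pointers (root has parent -1);
# nodes 0 and 3 are density-peak centers, so their incoming links are "cut"
parent = np.array([-1, 0, 1, 0, 3, 3])
centers = {0: 0, 3: 1}              # center node -> cluster ID

def assign_labels(parent, centers):
    """Assign each node the cluster ID of its nearest ancestor center."""
    labels = np.full(len(parent), -1)
    for i in range(len(parent)):
        j = i
        while j not in centers:     # walk up the tree until a center is hit
            j = parent[j]
        labels[i] = centers[j]
    return labels

labels = assign_labels(parent, centers)
```

No distance computations are needed at assignment time; the tree built during density estimation already encodes the cluster structure.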
