Search Results for author: Shaojie Li

Found 16 papers, 6 papers with code

Leveraging Deep Learning and Xception Architecture for High-Accuracy MRI Classification in Alzheimer Diagnosis

no code implementations • 24 Mar 2024 • Shaojie Li, Haichen Qu, Xinqi Dong, Bo Dang, Hengyi Zang, Yulu Gong

In exploring the application of deep learning technologies to medical diagnostics, Magnetic Resonance Imaging (MRI) provides a unique perspective for observing and diagnosing complex neurodegenerative diseases such as Alzheimer Disease (AD).

Image Classification

Utilizing the LightGBM Algorithm for Operator User Credit Assessment Research

no code implementations • 21 Mar 2024 • Shaojie Li, Xinqi Dong, Danqing Ma, Bo Dang, Hengyi Zang, Yulu Gong

First, key features are extracted from the massive user-evaluation data provided by operators through data preprocessing and feature engineering, constructing a statistically meaningful multi-dimensional feature set. Then, multiple base models are built with linear regression, decision trees, LightGBM, and other machine learning algorithms to identify the best base model. Finally, ensemble methods such as Averaging, Voting, Blending, and Stacking are integrated to derive several fusion models, from which the fusion model best suited to operator user evaluation is established.

Feature Engineering
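The paper lists no code, but the fusion step it describes (Averaging and Voting over base-model outputs) is straightforward to illustrate. A minimal NumPy sketch, assuming each base model (e.g. linear regression, decision tree, LightGBM) has already produced class-probability predictions:

```python
import numpy as np

def averaging_fusion(prob_list):
    """Soft fusion: average the class-probability matrices of the base models."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def voting_fusion(prob_list):
    """Hard fusion: each base model votes for its argmax class per sample."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])   # (n_models, n_samples)
    n_classes = prob_list[0].shape[1]
    tallies = np.zeros((votes.shape[1], n_classes), dtype=int)
    for v in votes:                                           # tally one model's votes
        tallies[np.arange(len(v)), v] += 1
    return tallies.argmax(axis=1)

# Toy predictions from three hypothetical base models (2 samples, 2 classes).
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.6, 0.4], [0.7, 0.3]])
soft = averaging_fusion([p1, p2, p3])   # averaged probabilities
hard = voting_fusion([p1, p2, p3])      # majority-vote class labels
```

Blending and Stacking follow the same pattern but train a second-level model on the base-model outputs instead of combining them with a fixed rule.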

BROW: Better featuRes fOr Whole slide image based on self-distillation

no code implementations • 15 Sep 2023 • Yuanfeng Wu, Shaojie Li, Zhiqiang Du, Wentao Zhu

Hence, we propose BROW, a foundation model for extracting better feature representations from WSIs, which can be conveniently adapted to downstream tasks with little or no fine-tuning.

Instance Segmentation • Semantic Segmentation
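No code accompanies this entry, and BROW's exact training recipe is not given here; as a generic illustration of the self-distillation pattern such foundation models build on, where a teacher network is maintained as an exponential moving average (EMA) of the student, a minimal sketch:

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.996):
    """Self-distillation teacher update: t <- m * t + (1 - m) * s per parameter."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# One update step on a toy single-parameter model.
teacher = [np.array([0.0])]
student = [np.array([1.0])]
teacher = ema_update(teacher, student, momentum=0.9)  # teacher moves 10% toward student
```

The student is trained to match the teacher's outputs while the teacher only ever moves through this slow average, which stabilizes the targets.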

High Probability Analysis for Non-Convex Stochastic Optimization with Clipping

no code implementations • 25 Jul 2023 • Shaojie Li, Yong Liu

Gradient clipping is a commonly used technique to stabilize the training process of neural networks.

Stochastic Optimization
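The paper is theoretical and ships no code, but the clipping operator it analyzes is simple to state. A minimal NumPy sketch of global-norm gradient clipping:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """If the joint L2 norm of all gradient arrays exceeds max_norm,
    rescale every array by max_norm / norm; otherwise leave them unchanged."""
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / total) for g in grads]
    return grads, total

# A gradient of global norm 5 clipped down to norm 1.
clipped, norm = clip_by_global_norm([np.array([3.0]), np.array([4.0])], max_norm=1.0)
```

Clipping preserves the gradient's direction while bounding its magnitude, which is exactly the property the high-probability analysis exploits.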

Open-TransMind: A New Baseline and Benchmark for 1st Foundation Model Challenge of Intelligent Transportation

1 code implementation • 12 Apr 2023 • Yifeng Shi, Feng Lv, Xinliang Wang, Chunlong Xia, Shaojie Li, Shujie Yang, Teng Xi, Gang Zhang

To address these challenges, we designed the 1st Foundation Model Challenge, with the goal of increasing the popularity of foundation model technology in traffic scenarios and promoting the rapid development of the intelligent transportation industry.

2D Object Detection • Image Retrieval • +1

Understanding the Generalization Performance of Spectral Clustering Algorithms

no code implementations • 30 Apr 2022 • Shaojie Li, Sheng Ouyang, Yong Liu

The theoretical analysis of spectral clustering mainly focuses on consistency, while there is relatively little research on its generalization performance.

Clustering
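For reference, the algorithm under analysis embeds points with the bottom eigenvectors of a graph Laplacian before clustering them; a minimal NumPy sketch (the paper itself is theoretical and provides no code):

```python
import numpy as np

def spectral_embedding(W, k):
    """Embed n nodes with the k bottom eigenvectors of the unnormalized
    graph Laplacian L = D - W; rows are then clustered (e.g. with k-means)."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    return vecs[:, :k]

# Two disconnected pairs of nodes: the embedding is constant on each component.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
E = spectral_embedding(W, 2)
```

On a graph with k connected components, the bottom-k eigenvectors are constant on each component, so the embedded rows separate cleanly.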

Towards Sharper Generalization Bounds for Structured Prediction

no code implementations • NeurIPS 2021 • Shaojie Li, Yong Liu

In the smoothness scenario, we provide generalization bounds that exhibit not only a logarithmic dependency on the label set cardinality but also a faster convergence rate of order $\mathcal{O}(\frac{1}{n})$ on the sample size $n$.

Generalization Bounds • Structured Prediction
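The paper provides no code; schematically (illustrative notation, not the paper's exact statement), the smoothness-scenario bound combines the two properties named above:

```latex
% Excess risk decays at rate 1/n, with only a logarithmic
% dependence on the cardinality of the label set \mathcal{Y}.
R(h) - \widehat{R}_n(h) \;\le\; \mathcal{O}\!\left(\frac{\log |\mathcal{Y}|}{n}\right)
```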

Learning Rates for Nonconvex Pairwise Learning

no code implementations • 9 Nov 2021 • Shaojie Li, Yong Liu

We first establish learning rates for these algorithms in a general nonconvex setting, where the analysis sheds light on the trade-off between optimization and generalization and on the role of early stopping.

Metric Learning
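No code is provided; as a concrete instance of the pairwise objectives the analysis covers, a minimal sketch of a pairwise hinge (ranking) risk, where the loss couples two examples at a time:

```python
import numpy as np

def pairwise_hinge_risk(scores, labels, margin=1.0):
    """Average hinge loss over ordered pairs (i, j) with labels[i] > labels[j]:
    penalized whenever score_i fails to exceed score_j by the margin."""
    losses = []
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:
                losses.append(max(0.0, margin - (scores[i] - scores[j])))
    return sum(losses) / max(len(losses), 1)

# A correctly ranked pair incurs zero loss; a reversed pair does not.
good = pairwise_hinge_risk([2.0, 0.0], [1, 0])   # positive scored above negative
bad = pairwise_hinge_risk([0.0, 2.0], [1, 0])    # ranking reversed
```

Because the empirical risk sums over pairs rather than single examples, its terms are statistically dependent, which is what makes the generalization analysis harder than in standard pointwise learning.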

Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme

1 code implementation • NeurIPS 2021 • Shaojie Li, Jie Wu, Xuefeng Xiao, Fei Chao, Xudong Mao, Rongrong Ji

In this work, we revisit the role of discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme for GAN compression, termed GCC.

High Probability Generalization Bounds for Minimax Problems with Fast Rates

no code implementations • ICLR 2022 • Shaojie Li, Yong Liu

In this paper, we provide improved generalization analyses for almost all existing generalization measures of minimax problems, enabling sharper bounds of order $\mathcal{O}\left( 1/n \right)$ to be established, notably with high probability.

Distributed Computing • Generalization Bounds • +1

Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints

no code implementations • 19 Jul 2021 • Shaojie Li, Yong Liu

the sample size $n$ for ERM and SGD with milder assumptions in convex learning and similar high probability rates of order $\mathcal{O} (1/n)$ in nonconvex learning, rather than in expectation.

Learning Theory • Stochastic Optimization • +1

Distilling a Powerful Student Model via Online Knowledge Distillation

1 code implementation • 26 Mar 2021 • Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji

In addition, a self-distillation module is adopted to convert the feature map of deeper layers into a shallower one.

Knowledge Distillation
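For context, a minimal NumPy sketch of the standard distillation loss such schemes build on: KL divergence between temperature-softened teacher and student distributions (the paper's specific online losses may differ):

```python
import numpy as np

def softened(logits, T):
    """Temperature-softened softmax along the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return float((T * T) * (p * (np.log(p) - np.log(q))).sum(axis=-1).mean())

# Matching logits give zero loss; mismatched logits give a positive loss.
zero = distillation_loss(np.array([[1.0, 2.0]]), np.array([[1.0, 2.0]]))
pos = distillation_loss(np.array([[0.0, 1.0]]), np.array([[1.0, 0.0]]), T=1.0)
```

The temperature spreads probability mass over non-target classes, and the $T^2$ factor keeps gradient magnitudes comparable across temperatures.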

Network Pruning using Adaptive Exemplar Filters

1 code implementation • 20 Jan 2021 • Mingbao Lin, Rongrong Ji, Shaojie Li, Yan Wang, Yongjian Wu, Feiyue Huang, Qixiang Ye

Inspired by the face recognition community, we apply the message-passing algorithm Affinity Propagation to the weight matrices to obtain an adaptive number of exemplars, which then act as the preserved filters.

Face Recognition • Network Pruning
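The repository holds the actual implementation; as a self-contained illustration of the exemplar-selection step, a minimal NumPy version of Affinity Propagation (responsibility/availability message passing), applied here to 1-D toy data rather than filter weight matrices:

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.5):
    """Minimal Affinity Propagation. S is an (n, n) similarity matrix whose
    diagonal ('preferences') controls how many exemplars emerge."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        idx = M.argmax(axis=1)
        first = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, np.diag(R))
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0.0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.where(np.diag(A + R) > 0)[0]  # indices chosen as exemplars

# Two well-separated 1-D clusters; AP picks one exemplar per cluster.
x = np.array([0.0, 0.3, 5.0, 5.3])
S = -(x[:, None] - x[None, :]) ** 2           # negative squared distance
np.fill_diagonal(S, np.median(S[~np.eye(4, dtype=bool)]))
exemplars = affinity_propagation(S)
```

In the pruning setting, the "points" are flattened filter weight vectors, and the adaptively chosen exemplars are the filters that survive pruning.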

Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation

1 code implementation • 17 Nov 2020 • Shaojie Li, Mingbao Lin, Yan Wang, Fei Chao, Ling Shao, Rongrong Ji

The latter simultaneously distills informative attention maps from both the generator and discriminator of a pre-trained model to the searched generator, effectively stabilizing the adversarial training of our light-weight model.

Translation
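The repository contains the real implementation; as a toy illustration of the differentiable-mask idea (a learnable soft gate per channel, which training can push toward 0 or 1 to prune channels), a minimal NumPy sketch:

```python
import numpy as np

def channel_mask(scores, tau=1.0):
    """Soft, differentiable per-channel gate: sigmoid(score / tau)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float) / tau))

def apply_mask(feat, mask):
    """feat: (C, H, W) feature map; mask: (C,) gates broadcast over space."""
    return feat * mask[:, None, None]

# A strongly positive score keeps its channel; a strongly negative one prunes it.
mask = channel_mask([10.0, -10.0])
out = apply_mask(np.ones((2, 2, 2)), mask)
```

Because the gate is a smooth function of its score, the scores can be optimized jointly with the network weights, and channels whose gates collapse to zero are removed after training.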

Filter Sketch for Network Pruning

1 code implementation • 23 Jan 2020 • Mingbao Lin, Liujuan Cao, Shaojie Li, Qixiang Ye, Yonghong Tian, Jianzhuang Liu, Qi Tian, Rongrong Ji

Our approach, referred to as FilterSketch, encodes the second-order information of pre-trained weights, which enables the representation capacity of pruned networks to be recovered with a simple fine-tuning procedure.

Network Pruning
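Code is available in the repository; to make "encoding second-order information" concrete, here is a minimal NumPy implementation of Frequent Directions, a streaming sketch whose Gram matrix approximates that of the original weight matrix (the paper's exact sketching procedure may differ):

```python
import numpy as np

def frequent_directions(W, ell):
    """Return B (ell x d) such that B^T B approximates W^T W, by streaming
    the rows of W into a small buffer and shrinking it via SVD when full."""
    _, d = W.shape
    B = np.zeros((ell, d))
    for row in W:
        zero = np.where(~B.any(axis=1))[0]
        if len(zero) == 0:                       # buffer full: shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt                  # smallest direction becomes zero
            zero = np.where(~B.any(axis=1))[0]
        B[zero[0]] = row
    return B

# Sketch a 20 x 8 weight matrix down to 6 rows, preserving second-order structure.
rng = np.random.default_rng(0)
W = rng.standard_normal((20, 8))
B = frequent_directions(W, ell=6)
err = np.linalg.norm(W.T @ W - B.T @ B, 2)
```

The sketch satisfies a worst-case guarantee of the form $\|W^\top W - B^\top B\|_2 \le \|W\|_F^2 / \ell$, which is why a pruned layer built from it can be recovered with only light fine-tuning.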
