Search Results for author: Fei Feng

Found 12 papers, 3 papers with code

Boosting Large-scale Parallel Training Efficiency with C4: A Communication-Driven Approach

no code implementations • 7 Jun 2024 • Jianbo Dong, Bin Luo, Jun Zhang, Pengcheng Zhang, Fei Feng, Yikai Zhu, Ang Liu, Zian Chen, Yi Shi, Hairong Jiao, Gang Lu, Yu Guan, Ennan Zhai, Wencong Xiao, Hanyu Zhao, Man Yuan, Siran Yang, Xiang Li, Jiamang Wang, Rui Men, Jianwei Zhang, Huang Zhong, Dennis Cai, Yuan Xie, Binzhang Fu

By leveraging this feature, C4 can rapidly identify the faulty components, swiftly isolate the anomaly, and restart the task, thereby avoiding resource wastage caused by delays in anomaly detection.

Anomaly Detection
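
As a rough illustration only, the detect-isolate-restart workflow described in the excerpt above might be organized as a control loop like the sketch below; `task`, `monitor`, `isolate`, and `restart_task` are hypothetical placeholders, not C4's actual interfaces.

```python
# Illustrative sketch only: a detect -> isolate -> restart control loop in the
# spirit of the workflow described above. This is NOT C4's implementation;
# `task`, `monitor`, `isolate`, and `restart_task` are hypothetical placeholders.
import time

def run_with_fault_handling(task, monitor, isolate, restart_task, poll_s=5.0):
    """Keep a large-scale training task alive despite faulty components."""
    excluded = set()
    restart_task(task, excluded=excluded)
    while not task.finished():
        faulty = monitor.detect_faulty_components()   # rapid anomaly detection
        if faulty:
            excluded |= set(faulty)
            isolate(faulty)                           # swiftly isolate the anomaly
            restart_task(task, excluded=excluded)     # resume without the faulty parts
        time.sleep(poll_s)
```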

Pelvic floor MRI segmentation based on semi-supervised deep learning

no code implementations • 6 Nov 2023 • Jianwei Zuo, Fei Feng, Zhuhui Wang, James A. Ashton-Miller, John O. L. Delancey, Jiajia Luo

Recently, deep learning-enabled semantic segmentation has facilitated the three-dimensional geometric reconstruction of pelvic floor organs, providing clinicians with accurate and intuitive diagnostic results.

Deep Learning • Diagnostic • +5

Neuro-Dynamic State Estimation for Networked Microgrids

no code implementations • 25 Aug 2022 • Fei Feng, Yifan Zhou, Peng Zhang

We devise neuro-dynamic state estimation (Neuro-DSE), a learning-based dynamic state estimation (DSE) algorithm for networked microgrids (NMs) under unknown subsystems.
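
For orientation only, a learning-based dynamic state estimator can be sketched as a small recurrent network that maps a window of measurements to a state estimate; the architecture and dimensions below are illustrative assumptions, not the Neuro-DSE design from the paper.

```python
# Minimal sketch of a learning-based dynamic state estimator (an illustrative
# stand-in, NOT the Neuro-DSE architecture): a recurrent network maps a window
# of measurements to an estimate of the current dynamic state.
import torch
import torch.nn as nn

class NeuralStateEstimator(nn.Module):
    def __init__(self, n_meas: int, n_state: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_meas, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_state)

    def forward(self, meas_window: torch.Tensor) -> torch.Tensor:
        # meas_window: (batch, T, n_meas) -> state estimate: (batch, n_state)
        out, _ = self.rnn(meas_window)
        return self.head(out[:, -1])

# Training would regress the estimates against simulated ground-truth state
# trajectories, e.g. with a mean-squared-error loss.
```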

Image enhancement in acoustic-resolution photoacoustic microscopy enabled by a novel directional algorithm

no code implementations • 19 Nov 2021 • Fei Feng, Siqi Liang, Sung-Liang Chen

The algorithm consists of a Fourier accumulation SAFT (FA-SAFT) and a directional model-based (D-MB) deconvolution method.

Denoising • Image Enhancement

Provably Correct Optimization and Exploration with Non-linear Policies

1 code implementation • 22 Mar 2021 • Fei Feng, Wotao Yin, Alekh Agarwal, Lin F. Yang

Policy optimization methods remain a powerful workhorse in empirical Reinforcement Learning (RL), with a focus on neural policies that can easily reason over complex and continuous state and/or action spaces.

Reinforcement Learning (RL)

Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning

1 code implementation • NeurIPS 2020 • Fei Feng, Ruosong Wang, Wotao Yin, Simon S. Du, Lin F. Yang

Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems [tang2017exploration, bellemare2016unifying], we investigate when this paradigm is provably efficient.

Efficient Exploration • reinforcement-learning • +2

Adaptive Distraction Context Aware Tracking Based on Correlation Filter

no code implementations • 24 Dec 2019 • Fei Feng, Xiao-Jun Wu, Tianyang Xu, Josef Kittler, Xue-Feng Zhu

In the response map obtained for the previous frame by the CF algorithm, we adaptively find the image blocks that are similar to the target and use them as negative samples.
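
A minimal sketch of this idea, assuming a NumPy response map and grayscale frame, is shown below; the threshold ratio and block budget are illustrative parameters, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact procedure): take the CF response
# map of the previous frame, keep locations whose response is close to the
# target peak but away from it, and crop those image blocks as negative samples.
import numpy as np

def distractor_blocks(response, frame, block_size, ratio=0.5, max_blocks=5):
    """Return image blocks around strong non-target peaks of `response`."""
    peak = response.max()
    ty, tx = np.unravel_index(response.argmax(), response.shape)
    blocks = []
    ys, xs = np.where(response >= ratio * peak)        # candidate distractor locations
    for y, x in zip(ys, xs):
        if abs(y - ty) < block_size and abs(x - tx) < block_size:
            continue                                   # too close to the target peak
        y0, x0 = max(0, y - block_size // 2), max(0, x - block_size // 2)
        blocks.append(frame[y0:y0 + block_size, x0:x0 + block_size])
        if len(blocks) >= max_blocks:
            break
    return blocks   # fed back as negative samples when updating the filter
```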

How Does an Approximate Model Help in Reinforcement Learning?

no code implementations • 6 Dec 2019 • Fei Feng, Wotao Yin, Lin F. Yang

In particular, we provide an algorithm that uses $\widetilde{O}(N/((1-\gamma)^3\varepsilon^2))$ samples in a generative model to learn an $\varepsilon$-optimal policy, where $\gamma$ is the discount factor and $N$ is the number of near-optimal actions in the approximate model.

reinforcement-learning • Reinforcement Learning • +2
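
For readability, the quoted sample complexity can be written as a single fraction, with the symbols glossed as in the sentence above:

```latex
% The quoted sample complexity, written as a single fraction.
% \widetilde{O} hides polylogarithmic factors, \gamma is the discount factor,
% N the number of near-optimal actions in the approximate model, and
% \varepsilon the target accuracy.
\[
  \widetilde{O}\!\left(\frac{N}{(1-\gamma)^{3}\,\varepsilon^{2}}\right)
\]
```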

CSSegNet: Fine-Grained Cardiac Structures Segmentation Using Dilated Pyramid Pooling in U-net

no code implementations • 2 Jul 2019 • Fei Feng, Jiajia Luo

To address this difficult problem, we present a novel network structure that embeds a dilated pyramid pooling block in the skip connections between the network's encoding and decoding stages.

Segmentation
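
A minimal sketch of a dilated pyramid pooling block placed on a U-Net skip connection is given below; the channel counts, dilation rates, and 1x1 fusion are assumptions for illustration, not the exact CSSegNet configuration.

```python
# Illustrative dilated pyramid pooling block for a U-Net skip connection.
# Channel counts, dilation rates, and the 1x1 fusion are assumptions, not the
# exact CSSegNet configuration.
import torch
import torch.nn as nn

class DilatedPyramidPooling(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation capture multi-scale context.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, skip_feat: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(skip_feat) for branch in self.branches], dim=1)
        return self.fuse(multi_scale)

# In the U-Net, an encoder feature map would pass through this block before
# being concatenated with the matching decoder feature map.
```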

A2BCD: Asynchronous Acceleration with Optimal Complexity

no code implementations • ICLR 2019 • Robert Hannah, Fei Feng, Wotao Yin

In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD).

AsyncQVI: Asynchronous-Parallel Q-Value Iteration for Discounted Markov Decision Processes with Near-Optimal Sample Complexity

1 code implementation • 3 Dec 2018 • Yibo Zeng, Fei Feng, Wotao Yin

In this paper, we propose AsyncQVI, an asynchronous-parallel Q-value iteration for discounted Markov decision processes whose transitions and rewards can only be sampled through a generative model.
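
For orientation, the core Q-update that such a method builds on can be sketched as a synchronous, single-worker sampled Q-value iteration; the asynchronous-parallel scheduling that gives AsyncQVI its name, and its sample-complexity guarantee, are not reproduced here, and `sample(s, a)` is a hypothetical generative-model interface.

```python
# Synchronous, single-worker sketch of sampled Q-value iteration with a
# generative model. AsyncQVI performs these updates asynchronously in parallel;
# `sample(s, a) -> (next_state, reward)` is a hypothetical generative-model interface.
import numpy as np

def sampled_qvi(n_states, n_actions, sample, gamma=0.9, sweeps=1000, k=10):
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s in range(n_states):
            for a in range(n_actions):
                # Estimate the Bellman backup from k generative-model samples.
                backup = 0.0
                for _ in range(k):
                    s_next, r = sample(s, a)
                    backup += r + gamma * Q[s_next].max()
                Q[s, a] = backup / k
    return Q   # an epsilon-optimal policy is the greedy policy w.r.t. Q
```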
