Search Results for author: Yuxuan Zhao

Found 5 papers, 2 papers with code

BrainCog: A Spiking Neural Network based Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation

no code implementations18 Jul 2022 Yi Zeng, Dongcheng Zhao, Feifei Zhao, Guobin Shen, Yiting Dong, Enmeng Lu, Qian Zhang, Yinqian Sun, Qian Liang, Yuxuan Zhao, Zhuoya Zhao, Hongjian Fang, Yuwei Wang, Yang Li, Xin Liu, Chengcheng Du, Qingqun Kong, Zizhe Ruan, Weida Bi

These brain-inspired AI models have been validated on various supervised, unsupervised, and reinforcement learning tasks, and they can be used to equip AI models with multiple brain-inspired cognitive functions.

Decision Making
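As context for the kind of building block a spiking-network engine like BrainCog rests on, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron in NumPy. It is a minimal illustration only; the function name, constants, and reset rule are assumptions of this sketch, not BrainCog's API.

```python
import numpy as np

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron (illustrative, not BrainCog's API).

    input_current: 1-D array of injected current per time step.
    Returns the membrane potential trace and a binary spike train.
    """
    v = v_rest
    potentials, spikes = [], []
    for i_t in input_current:
        # Leaky integration: decay toward the resting potential, add input.
        v = v + (dt / tau) * (v_rest - v) + i_t
        if v >= v_thresh:        # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_rest           # hard reset after spiking
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# A constant drive produces a regular spike train.
volts, train = simulate_lif(np.full(100, 0.08))
print(train.sum(), "spikes in 100 steps")
```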

DeFRCN: Decoupled Faster R-CNN for Few-Shot Object Detection

1 code implementation ICCV 2021 Limeng Qiao, Yuxuan Zhao, Zhiyuan Li, Xi Qiu, Jianan Wu, Chi Zhang

Few-shot object detection, which aims at detecting novel objects rapidly from extremely few annotated examples of previously unseen classes, has attracted significant research interest in the community.

Classification, Few-Shot Object Detection +1
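The "decoupled" in the title refers to separating how the detector's components drive the shared backbone. One standard way to implement such decoupling is a layer that acts as the identity in the forward pass but rescales gradients in the backward pass; the PyTorch sketch below shows that generic mechanism under this assumption, and is not the authors' released code.

```python
import torch

class GradientScale(torch.autograd.Function):
    """Identity in the forward pass; scales gradients in the backward pass.

    A layer like this can decouple how strongly different heads of a
    detector (e.g. RPN vs. RCNN in Faster R-CNN) update shared features.
    Generic sketch, not the paper's exact implementation.
    """
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Shrink (or amplify) the gradient flowing back to shared features.
        return grad_output * ctx.scale, None

def decouple(features, scale=0.1):
    return GradientScale.apply(features, scale)

# Features flow forward unchanged, but the upstream tensor
# receives only 10% of this branch's gradient.
x = torch.randn(2, 256, requires_grad=True)
decouple(x, 0.1).sum().backward()
print(x.grad.unique())  # tensor([0.1000])
```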

Matrix Completion with Quantified Uncertainty through Low Rank Gaussian Copula

2 code implementations NeurIPS 2020 Yuxuan Zhao, Madeleine Udell

The time required to fit the model scales linearly with the number of rows and the number of columns in the dataset.

Imputation, Matrix Completion +1
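To make the imputation setting concrete, here is a deliberately simple low-rank imputation loop in NumPy. It is a generic hard-impute-style baseline that only shares the low-rank assumption with the paper; it is not the Gaussian copula model, and the rank, iteration count, and initialization are arbitrary choices of this sketch.

```python
import numpy as np

def lowrank_impute(X, rank=2, n_iter=50):
    """Generic iterative low-rank imputation (hard-impute style).

    A simple stand-in for illustration, NOT the paper's Gaussian-copula
    model; it only shares the low-rank assumption. Each iteration costs
    one truncated SVD of the full matrix.
    """
    X = np.asarray(X, dtype=float)
    mask = ~np.isnan(X)
    # Initialize missing entries with the overall observed mean.
    filled = np.where(mask, X, np.nanmean(X))
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r reconstruction
        filled = np.where(mask, X, approx)             # keep observed entries fixed
    return filled

# Observed entries here are consistent with a rank-1 matrix.
X = np.array([[1.0, 2.0, np.nan],
              [2.0, np.nan, 6.0],
              [np.nan, 6.0, 9.0]])
print(lowrank_impute(X, rank=1).round(2))
```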

Multimodal Emotion Recognition Model using Physiological Signals

no code implementations29 Nov 2019 Yuxuan Zhao, Xinyan Cao, Jinlong Lin, Dunshan Yu, Xixin Cao

Compared with single-modal recognition, the multimodal fusion model improves emotion recognition accuracy by 5%–25%; fusing EEG signals (decomposed into four frequency bands) with peripheral physiological signals achieves accuracies of 95.77% and 97.27% on one dataset and 91.07% and 99.74% on the other.

EEG, Multimodal Emotion Recognition
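A common way to realize this kind of multimodal fusion is feature-level concatenation before a classifier. The sketch below shows that pattern on synthetic random data (so accuracies hover near chance); every array, dimension, and model choice is a fabricated placeholder for illustration, not the paper's pipeline or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for per-trial features (fabricated data;
# the paper uses real EEG and peripheral recordings).
eeg_bands = rng.normal(size=(n, 4 * 8))  # 4 frequency bands x 8 features each
peripheral = rng.normal(size=(n, 6))     # e.g. heart rate, GSR, respiration
labels = rng.integers(0, 2, size=n)      # binary emotion label

# Feature-level fusion: concatenate modalities before classification.
fused = np.hstack([eeg_bands, peripheral])

for name, feats in [("EEG only", eeg_bands),
                    ("peripheral only", peripheral),
                    ("fused", fused)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          feats, labels, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```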
