Search Results for author: Yi Zhong

Found 16 papers, 9 papers with code

SynthTab: Leveraging Synthesized Data for Guitar Tablature Transcription

no code implementations 16 Sep 2023 Yongyi Zang, Yi Zhong, Frank Cwitkowitz, Zhiyao Duan

This dataset is built on tablatures from DadaGP, which offers a vast collection with the degree of specificity we wish to transcribe.

Ariadne's Thread: Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray Images

1 code implementation 8 Jul 2023 Yi Zhong, Mengqiu Xu, Kongming Liang, Kaixin Chen, Ming Wu

Segmentation of the infected areas of the lung is essential for quantifying the severity of lung diseases such as pulmonary infections.

Image Segmentation Medical Image Segmentation +1

EE-TTS: Emphatic Expressive TTS with Linguistic Information

no code implementations 20 May 2023 Yi Zhong, Chen Zhang, Xule Liu, Chenxi Sun, Weishan Deng, Haifeng Hu, Zhongqian Sun

EE-TTS contains an emphasis predictor that can identify appropriate emphasis positions from text and a conditioned acoustic model to synthesize expressive speech with emphasis and linguistic information.

Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs

1 code implementation 7 Feb 2023 Yu Duan, Zhongfan Jia, Qian Li, Yi Zhong, Kaisheng Ma

Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping.

Few-Shot Learning
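The Hebbian rule the abstract refers to is a local, gradient-free weight update driven by the co-activity of pre- and post-synaptic units. A minimal numpy sketch of that idea applied to an RNN's recurrent weights (not the paper's actual framework; the network size, learning rate, and decay term are illustrative assumptions):

```python
import numpy as np

def hebbian_step(W, h_pre, h_post, eta=0.1, decay=0.99):
    """One local Hebbian update: strengthen connections between
    co-active units; the decay term keeps the weights bounded."""
    return decay * W + eta * np.outer(h_post, h_pre)

rng = np.random.default_rng(0)
n = 8
W = 0.1 * rng.standard_normal((n, n))   # recurrent weights
h = np.zeros(n)

for _ in range(50):
    x = rng.standard_normal(n)          # external input at this step
    h_new = np.tanh(W @ h + x)          # recurrent dynamics
    W = hebbian_step(W, h, h_new)       # plasticity, no gradients needed
    h = h_new
```

Because the update uses only locally available activity, it suits associative-memory tasks; the gradient-based plasticity the paper compares against would instead backpropagate through the update itself.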

Interference-Limited Ultra-Reliable and Low-Latency Communications: Graph Neural Networks or Stochastic Geometry?

no code implementations 11 Jul 2022 Yuhong Liu, Changyang She, Yi Zhong, Wibowo Hardjawana, Fu-Chun Zheng, Branka Vucetic

In this paper, we aim to improve the Quality-of-Service (QoS) of Ultra-Reliable and Low-Latency Communications (URLLC) in interference-limited wireless networks.

Memory Replay with Data Compression for Continual Learning

1 code implementation ICLR 2022 Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu

In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase the number of samples that can be stored in the memory buffer.

Autonomous Driving class-incremental learning +6
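The core trade-off MRDC exploits is that compressed samples occupy less of a fixed memory budget, so more of them can be replayed. A minimal sketch of a compressed replay buffer, with lossless zlib standing in for the lossy image compression studied in the paper (buffer API and sizes are illustrative assumptions, not the paper's implementation):

```python
import zlib
import numpy as np

class CompressedReplayBuffer:
    """Replay buffer with a fixed byte budget that stores each sample
    compressed, trading decompression time for extra capacity."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.items = []

    def add(self, x: np.ndarray) -> bool:
        blob = zlib.compress(x.astype(np.uint8).tobytes())
        if self.used + len(blob) > self.budget:
            return False          # budget exhausted
        self.items.append((blob, x.shape))
        self.used += len(blob)
        return True

    def get(self, i) -> np.ndarray:
        blob, shape = self.items[i]
        return np.frombuffer(zlib.decompress(blob),
                             dtype=np.uint8).reshape(shape)

# A low-entropy "image" compresses well, so far more than the
# 10 raw-sized slots fit into the same budget.
img = np.zeros((32, 32, 3), dtype=np.uint8)
buf = CompressedReplayBuffer(budget_bytes=10 * img.nbytes)
n = 0
while buf.add(img):
    n += 1
```

With lossy compression the gain is similar but the replayed samples are approximate, which is the quality/quantity trade-off the paper analyzes.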

AFEC: Active Forgetting of Negative Transfer in Continual Learning

1 code implementation NeurIPS 2021 Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong

Without access to the old training samples, it is difficult to determine whether knowledge transfer from the old tasks to each new task will be positive or negative.

Continual Learning Transfer Learning

Few-shot Continual Learning: a Brain-inspired Approach

no code implementations 19 Apr 2021 Liyuan Wang, Qian Li, Yi Zhong, Jun Zhu

Our solution is based on the observation that continual learning of a task sequence inevitably interferes with few-shot generalization, which makes it highly nontrivial to extend few-shot learning strategies to continual learning scenarios.

Continual Learning Few-Shot Learning

Clothing Status Awareness for Long-Term Person Re-Identification

no code implementations ICCV 2021 Yan Huang, Qiang Wu, Jingsong Xu, Yi Zhong, Zhaoxiang Zhang

This work argues that these approaches are in fact not aware of the clothing status (i.e., change or no-change) of a pedestrian.

Person Re-Identification

Using Deep Convolutional Neural Networks to Diagnose COVID-19 From Chest X-Ray Images

1 code implementation 19 Jul 2020 Yi Zhong

The COVID-19 epidemic has become a major safety and health threat worldwide.

Triple Memory Networks: a Brain-Inspired Method for Continual Learning

1 code implementation 6 Mar 2020 Liyuan Wang, Bo Lei, Qian Li, Hang Su, Jun Zhu, Yi Zhong

Continual acquisition of novel experience without interfering with previously learned knowledge, i.e., continual learning, is critical for artificial neural networks, but is limited by catastrophic forgetting.

class-incremental learning Class Incremental Learning +2

Graph-augmented Convolutional Networks on Drug-Drug Interactions Prediction

no code implementations 8 Dec 2019 Yi Zhong, Xueyu Chen, Yu Zhao, Xiaoming Chen, Tingfang Gao, Zuquan Weng

We propose an end-to-end model to predict drug-drug interactions (DDIs) by employing graph-augmented convolutional networks.

Drug Discovery
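The building block behind graph-augmented convolutional networks is a graph convolution: each node (e.g., an atom in a drug's molecular graph) aggregates its neighbors' features through a normalized adjacency matrix before a learned projection. A generic sketch of one such layer (the normalization follows the common GCN formulation, not necessarily the paper's exact model; the toy graph and dimensions are assumptions):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: aggregate neighbor features through a
    symmetrically normalized adjacency (with self-loops), project
    with W, and apply ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    D = np.diag(d_inv_sqrt)
    return np.maximum(0, D @ A_hat @ D @ H @ W)

# Toy molecular graph: 4 atoms, 3-dim input features, 8 hidden units.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 8))
out = gcn_layer(A, H, W)
```

For DDI prediction, per-node outputs like these would be pooled into a per-drug embedding and a pair of embeddings scored for interaction.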

Sequential Convolutional Recurrent Neural Networks for Fast Automatic Modulation Classification

2 code implementations 9 Sep 2019 Kaisheng Liao, Yaodong Zhao, Jie Gu, Yaping Zhang, Yi Zhong

A representative sequential convolutional recurrent neural network architecture, with a two-layer convolutional neural network followed by a two-layer long short-term memory network, is developed as an option for fast automatic modulation classification.

Classification Dimensionality Reduction +1
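The two-stage idea, convolutional layers extracting local features from the raw I/Q signal and a recurrent layer modeling their temporal evolution, can be sketched in a few lines of numpy. This is a shape-level illustration only: a plain tanh RNN stands in for the paper's LSTM layers, and all sizes (2 I/Q channels, 16 filters, 11 modulation classes) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU: x is (channels, T),
    kernels is (filters, channels, k)."""
    F, C, k = kernels.shape
    T = x.shape[1] - k + 1
    out = np.empty((F, T))
    for f in range(F):
        for t in range(T):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + k])
    return np.maximum(0, out)

def rnn_last_state(seq, Wx, Wh):
    """Plain tanh RNN over seq (T, d_in); returns the final hidden
    state. (Stands in for the LSTM layers in this sketch.)"""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(Wx @ x_t + Wh @ h)
    return h

# Toy modulated signal: 2 I/Q channels, 128 samples.
signal = rng.standard_normal((2, 128))
feats = conv1d(signal, 0.1 * rng.standard_normal((16, 2, 5)))  # (16, 124)
h = rnn_last_state(feats.T,
                   0.1 * rng.standard_normal((32, 16)),
                   0.1 * rng.standard_normal((32, 32)))
logits = rng.standard_normal((11, 32)) @ h   # scores for 11 classes
```

The convolutional front end shortens the sequence the recurrent layer must process, which is where the "fast" in fast automatic modulation classification comes from.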

SBSGAN: Suppression of Inter-Domain Background Shift for Person Re-Identification

no code implementations ICCV 2019 Yan Huang, Qiang Wu, Jingsong Xu, Yi Zhong

We observe that if the backgrounds in the training and testing datasets are very different, extracting robust pedestrian features becomes dramatically harder, which compromises cross-domain person re-ID performance.

Person Re-Identification
