Search Results for author: Duzhen Zhang

Found 16 papers, 8 papers with code

Biologically-Plausible Topology Improved Spiking Actor Network for Efficient Deep Reinforcement Learning

no code implementations29 Mar 2024 Duzhen Zhang, Qingyu Wang, Tielin Zhang, Bo Xu

Diverging from the conventional direct linear weighted sum, the BPT-SAN models the local nonlinearities of dendritic trees within the inter-layer connections.

Decision Making
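
As an illustration of the idea sketched above (not the authors' released code), the following minimal PyTorch layer replaces a direct linear weighted sum with branch-local nonlinearities: inputs are grouped into "dendritic branches", each branch is combined and passed through its own nonlinearity, and the branch outputs are summed. The class name, branch count, and choice of tanh are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DendriticLayer(nn.Module):
    """Illustrative inter-layer connection with branch-local nonlinearities.

    Inputs are split into `n_branches` groups ("dendritic branches"); each
    branch is combined linearly and passed through a local nonlinearity
    before the branch outputs are summed, instead of one direct weighted sum.
    """

    def __init__(self, in_features: int, out_features: int, n_branches: int = 4):
        super().__init__()
        assert in_features % n_branches == 0
        self.n_branches = n_branches
        branch_in = in_features // n_branches
        # One linear map per branch, applied to its slice of the input.
        self.branches = nn.ModuleList(
            nn.Linear(branch_in, out_features) for _ in range(n_branches)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = x.chunk(self.n_branches, dim=-1)
        # Local (branch-wise) nonlinearity, then summation at the "soma".
        return sum(torch.tanh(branch(c)) for branch, c in zip(self.branches, chunks))

x = torch.randn(8, 128)            # batch of pre-synaptic activations
layer = DendriticLayer(128, 64)
print(layer(x).shape)              # torch.Size([8, 64])
```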

Fourier or Wavelet bases as counterpart self-attention in spikformer for efficient visual classification

no code implementations27 Mar 2024 Qingyu Wang, Duzhen Zhang, Tielin Zhang, Bo Xu

Energy-efficient spikformer has been proposed by integrating the biologically plausible spiking neural network (SNN) and artificial Transformer, whereby the Spiking Self-Attention (SSA) is used to achieve both higher accuracy and lower computational cost.
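
Below is a simplified, single-timestep sketch of what Spiking Self-Attention looks like in the Spikformer family: Q, K, and V are converted to binary spike form, so attention can be computed without softmax. The step-function spike generator and the scale value are illustrative stand-ins for a real LIF neuron with surrogate gradients.

```python
import torch
import torch.nn as nn

def spike(x: torch.Tensor) -> torch.Tensor:
    # Illustrative stand-in for a spiking neuron: binarize to {0, 1}.
    # Real SNN implementations use LIF dynamics with surrogate gradients.
    return (x > 0).float()

class SpikingSelfAttention(nn.Module):
    """Simplified single-timestep sketch of SSA: spike-form Q, K, V are
    combined without softmax, since all entries are non-negative spikes."""

    def __init__(self, dim: int, scale: float = 0.125):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = spike(self.q_proj(x)), spike(self.k_proj(x)), spike(self.v_proj(x))
        attn = q @ k.transpose(-2, -1) * self.scale   # sparse, addition-dominated
        return spike(attn @ v)

tokens = torch.randn(2, 16, 64)                  # (batch, sequence, dim)
print(SpikingSelfAttention(64)(tokens).shape)    # torch.Size([2, 16, 64])
```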

MM-LLMs: Recent Advances in MultiModal Large Language Models

no code implementations24 Jan 2024 Duzhen Zhang, Yahan Yu, Chenxing Li, Jiahua Dong, Dan Su, Chenhui Chu, Dong Yu

In the past year, MultiModal Large Language Models (MM-LLMs) have undergone substantial advancements, augmenting off-the-shelf LLMs to support MM inputs or outputs via cost-effective training strategies.

Decision Making

Continual Named Entity Recognition without Catastrophic Forgetting

1 code implementation23 Oct 2023 Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang

This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type.

Continual Named Entity Recognition named-entity-recognition +1

Task Relation Distillation and Prototypical Pseudo Label for Incremental Named Entity Recognition

1 code implementation17 Aug 2023 Duzhen Zhang, Hongliu Li, Wei Cong, Rongtao Xu, Jiahua Dong, Xiuyi Chen

However, INER faces the challenge of catastrophic forgetting specific to incremental learning, further aggravated by background shift (i.e., old and future entity types are labeled as the non-entity type in the current task).

Incremental Learning named-entity-recognition +3
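
The following is a minimal sketch of the prototypical pseudo-label idea for this setting, under assumed interfaces (the function name, threshold, and tensor shapes are hypothetical, not the authors' implementation): non-entity tokens in the current task are re-labelled with an old entity type when their old-model feature lies close to that type's prototype.

```python
import torch
import torch.nn.functional as F

def prototypical_pseudo_labels(features, old_preds, labels, old_type_ids,
                               non_entity_id=0, tau=0.7):
    """Illustrative sketch of prototypical pseudo-labelling for INER.

    features:  (N, D) token features from the previous model.
    old_preds: (N,)   previous model's predicted types for each token.
    labels:    (N,)   current-task labels, where old types collapse to non-entity.
    Non-entity tokens whose feature is close enough (cosine > tau) to an old
    type's prototype are re-labelled with that old type.
    """
    # Prototype of each old type: mean feature of tokens the old model predicts
    # as that type (assumes every old type has at least one predicted token).
    protos = torch.stack([features[old_preds == t].mean(0) for t in old_type_ids])

    feats = F.normalize(features, dim=-1)
    protos = F.normalize(protos, dim=-1)
    sim = feats @ protos.T                       # (N, num_old_types)
    best_sim, best_idx = sim.max(dim=-1)

    pseudo = labels.clone()
    mask = (labels == non_entity_id) & (best_sim > tau)
    pseudo[mask] = torch.as_tensor(old_type_ids)[best_idx[mask]]
    return pseudo
```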

Attention-free Spikformer: Mixing Spike Sequences with Simple Linear Transforms

no code implementations2 Aug 2023 Qingyu Wang, Duzhen Zhang, Tielin Zhang, Bo Xu

The results indicate that compared to the SOTA Spikformer with SSA, Spikformer with LT achieves higher Top-1 accuracy on neuromorphic datasets (i.e., CIFAR10-DVS and DVS128 Gesture) and comparable Top-1 accuracy on static datasets (i.e., CIFAR-10 and CIFAR-100).

Image Classification
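
A minimal sketch of the attention-free mixing described above, assuming a Fourier transform as the fixed linear transform (the class name and re-binarization rule are illustrative): the transform mixes information across tokens and channels in place of the Q/K/V attention.

```python
import torch
import torch.nn as nn

class FourierMixer(nn.Module):
    """Sketch of attention-free mixing: a fixed Fourier transform mixes
    information across the token and channel dimensions, replacing the
    Q/K/V attention while keeping roughly the same interface."""

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, tokens, dim), entries in {0, 1}
        mixed = torch.fft.fft(torch.fft.fft(spikes.float(), dim=-1), dim=-2).real
        return (mixed > 0).float()    # re-binarize; a stand-in for a spiking neuron

x = (torch.rand(2, 16, 64) > 0.8).float()   # random spike tensor
print(FourierMixer()(x).shape)              # torch.Size([2, 16, 64])
```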

Federated Incremental Semantic Segmentation

1 code implementation CVPR 2023 Jiahua Dong, Duzhen Zhang, Yang Cong, Wei Cong, Henghui Ding, Dengxin Dai

Moreover, new clients collecting novel classes may join in the global training of FSS, which further exacerbates catastrophic forgetting.

Federated Learning Relation +2

Crucial Semantic Classifier-based Adversarial Learning for Unsupervised Domain Adaptation

no code implementations3 Feb 2023 Yumin Zhang, Yajun Gao, Hongliu Li, Ating Yin, Duzhen Zhang, Xiuyi Chen

Unsupervised Domain Adaptation (UDA), which aims to explore transferable features from a well-labeled source domain to a related unlabeled target domain, has progressed rapidly.

Unsupervised Domain Adaptation

Tuning Synaptic Connections instead of Weights by Genetic Algorithm in Spiking Policy Network

1 code implementation29 Dec 2022 Duzhen Zhang, Tielin Zhang, Shuncheng Jia, Qingyu Wang, Bo Xu

Learning from interaction is the primary way biological agents come to know their environment and themselves.

HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval

no code implementations24 May 2022 Feilong Chen, Xiuyi Chen, Jiaxin Shi, Duzhen Zhang, Jianlong Chang, Qi Tian

It also achieves about +4.9 AR on COCO and +3.8 AR on Flickr30K over LightningDot, and achieves performance comparable to the state-of-the-art (SOTA) fusion-based model METER.

Cross-Modal Retrieval Retrieval +1

Recent Advances and New Frontiers in Spiking Neural Networks

1 code implementation12 Mar 2022 Duzhen Zhang, Shuncheng Jia, Qingyu Wang

In recent years, spiking neural networks (SNNs) have received extensive attention in brain-inspired intelligence due to their rich spatio-temporal dynamics, various encoding methods, and event-driven characteristics that naturally fit neuromorphic hardware.

Population-coding and Dynamic-neurons improved Spiking Actor Network for Reinforcement Learning

no code implementations15 Jun 2021 Duzhen Zhang, Tielin Zhang, Shuncheng Jia, Xiang Cheng, Bo Xu

Based on a hybrid learning framework, where a spike actor-network infers actions from states and a deep critic network evaluates the actor, we propose a Population-coding and Dynamic-neurons improved Spiking Actor Network (PDSAN) for efficient state representation from two different scales: input coding and neuronal coding.

OpenAI Gym reinforcement-learning +1
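
A minimal NumPy sketch of the input-coding half of this idea, with assumed parameter names and values: each continuous state dimension is encoded by a population of neurons with Gaussian receptive fields, and the resulting intensities could then be sampled into spike trains over simulation timesteps.

```python
import numpy as np

def population_encode(state, neurons_per_dim=10, low=-1.0, high=1.0, sigma=0.15):
    """Illustrative population coding: each state dimension is represented by
    a group of neurons with Gaussian receptive fields tiling [low, high].
    Returns firing intensities in [0, 1], which a spiking actor network could
    sample into spike trains over simulation timesteps."""
    state = np.asarray(state, dtype=np.float64)
    centers = np.linspace(low, high, neurons_per_dim)        # preferred values
    # Distance of each state dimension to each neuron's preferred value.
    dist = state[:, None] - centers[None, :]
    return np.exp(-0.5 * (dist / sigma) ** 2).reshape(-1)    # (dims * neurons,)

obs = [0.3, -0.7, 0.05, 0.9]                 # e.g. a CartPole-like observation
print(population_encode(obs).shape)          # (40,)
```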

Knowledge Aware Emotion Recognition in Textual Conversations via Multi-Task Incremental Transformer

no code implementations COLING 2020 Duzhen Zhang, Xiuyi Chen, Shuang Xu, Bo Xu

For one thing, speakers often rely on context and commonsense knowledge to express emotions; for another, most utterances in conversations carry neutral emotion, so the confusion between the few non-neutral utterances and the far more numerous neutral ones restrains emotion recognition performance.

Emotion Recognition Graph Attention +3