Search Results for author: Yi Zeng

Found 110 papers, 29 papers with code

International AI Safety Report

no code implementations 29 Jan 2025 Yoshua Bengio, Sören Mindermann, Daniel Privitera, Tamay Besiroglu, Rishi Bommasani, Stephen Casper, Yejin Choi, Philip Fox, Ben Garfinkel, Danielle Goldfarb, Hoda Heidari, Anson Ho, Sayash Kapoor, Leila Khalatbari, Shayne Longpre, Sam Manning, Vasilios Mavroudis, Mantas Mazeika, Julian Michael, Jessica Newman, Kwan Yee Ng, Chinasa T. Okolo, Deborah Raji, Girish Sastry, Elizabeth Seger, Theodora Skeadas, Tobin South, Emma Strubell, Florian Tramèr, Lucia Velasco, Nicole Wheeler, Daron Acemoglu, Olubayo Adekanmbi, David Dalrymple, Thomas G. Dietterich, Edward W. Felten, Pascale Fung, Pierre-Olivier Gourinchas, Fredrik Heintz, Geoffrey Hinton, Nick Jennings, Andreas Krause, Susan Leavy, Percy Liang, Teresa Ludermir, Vidushi Marda, Helen Margetts, John McDermid, Jane Munga, Arvind Narayanan, Alondra Nelson, Clara Neppel, Alice Oh, Gopal Ramchurn, Stuart Russell, Marietje Schaake, Bernhard Schölkopf, Dawn Song, Alvaro Soto, Lee Tiedrich, Gaël Varoquaux, Andrew Yao, Ya-Qin Zhang, Fahad Albalawi, Marwan Alserkal, Olubunmi Ajala, Guillaume Avrin, Christian Busch, André Carlos Ponce de Leon Ferreira de Carvalho, Bronwyn Fox, Amandeep Singh Gill, Ahmet Halit Hatip, Juha Heikkilä, Gill Jolly, Ziv Katzir, Hiroaki Kitano, Antonio Krüger, Chris Johnson, Saif M. Khan, Kyoung Mu Lee, Dominic Vincent Ligot, Oleksii Molchanovskyi, Andrea Monti, Nusu Mwamanzi, Mona Nemer, Nuria Oliver, José Ramón López Portillo, Balaraman Ravindran, Raquel Pezoa Rivera, Hammam Riza, Crystal Rugege, Ciarán Seoighe, Jerry Sheehan, Haroon Sheikh, Denise Wong, Yi Zeng

The first International AI Safety Report comprehensively synthesizes the current evidence on the capabilities, risks, and safety of advanced AI systems.

SpikePack: Enhanced Information Flow in Spiking Neural Networks with High Hardware Compatibility

no code implementations 24 Jan 2025 Guobin Shen, Jindong Li, Tenglong Li, Dongcheng Zhao, Yi Zeng

SpikePack achieves constant O(1) time and space complexity, enabling efficient parallel processing on GPUs and also supporting serial inference on existing SNN hardware accelerators.

Computational Efficiency Image Classification
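The snippet above only states the complexity claim; as a generic illustration of the underlying idea (packing a binary spike train across time steps into one integer per neuron so all steps can be handled in a single vectorized pass), here is a minimal sketch. This is an assumption-laden toy, not the paper's actual SpikePack algorithm, and it assumes T ≤ 62 so counts fit in a signed 64-bit integer:

```python
import numpy as np

def pack_spikes(spikes):
    """Pack a (T, N) binary spike train into one integer per neuron.

    Bit t of the packed value encodes whether neuron n fired at step t,
    so T time steps collapse into a single vectorized operation.
    """
    T, _ = spikes.shape
    weights = 2 ** np.arange(T)                      # 2^0, 2^1, ..., 2^(T-1)
    return (spikes.astype(np.int64).T * weights).sum(axis=1)

def spike_counts(packed, T):
    """Recover per-neuron spike counts from the packed representation."""
    bits = (packed[:, None] >> np.arange(T)) & 1
    return bits.sum(axis=1)

spikes = np.array([[1, 0], [0, 1], [1, 1]])  # T=3 steps, N=2 neurons
packed = pack_spikes(spikes)                 # neuron 0 -> 0b101 = 5, neuron 1 -> 0b110 = 6
print(packed, spike_counts(packed, 3))
```

The packed form is what makes GPU-parallel processing of a whole spike window cheap, while the bit-serial view remains available for spike-by-spike hardware.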

TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

no code implementations 14 Jan 2025 Yao Liang, Yuwei Wang, Yi Zeng

We propose Triangular Adaptive Low-Rank Adaptation (TriAdaptLoRA), a novel PEFT framework inspired by neuroscience principles, which dynamically optimizes the allocation of trainable parameters.

Natural Language Understanding parameter-efficient fine-tuning
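TriAdaptLoRA builds on standard low-rank adaptation. As background only (this sketches the generic LoRA reparameterization W + (alpha/r)·BA with frozen W, not the triangular adaptive variant itself; all dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized
                                        # so the adapted layer starts equal to W

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B are updated during fine-tuning
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)   # B = 0 => identical to the base model

full = d_out * d_in              # parameters in a full update of W
lora = r * d_in + d_out * r      # parameters in the low-rank update
print(full, lora)                # 64 vs 32 here; the gap grows with layer size
```

PEFT methods like TriAdaptLoRA then decide how to allocate the low-rank budget (here, r) across layers rather than fixing it globally.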

Autonomous Alignment with Human Value on Altruism through Considerate Self-imagination and Theory of Mind

1 code implementation 31 Dec 2024 Haibo Tong, Enmeng Lu, Yinqian Sun, Zhengqiang Han, Chao Liu, Feifei Zhao, Yi Zeng

With the widespread application of Artificial Intelligence (AI) in human society, enabling AI to autonomously align with human values has become a pressing issue to ensure its sustainable development and benefit to humanity.

Switchable deep beamformer for high-quality and real-time passive acoustic mapping

no code implementations 3 Dec 2024 Yi Zeng, Jinwei Li, Hui Zhu, Shukuan Lu, Jianfeng Li, Xiran Cai

Compared to the data-adaptive beamformers, the deep beamformer reduced the computational cost by three orders of magnitude, achieving a 10.5 ms image reconstruction speed on our data, while the image quality was as good as that of the data-adaptive beamformers.

Generative Adversarial Network Image Reconstruction

Mode-conditioned music learning and composition: a spiking neural network inspired by neuroscience and psychology

no code implementations 22 Nov 2024 Qian Liang, Yi Zeng, Menghaoran Tang

In this paper, we propose a spiking neural network inspired by brain mechanisms and psychological theories to represent musical modes and keys, ultimately generating musical pieces that incorporate tonality features.

BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices

no code implementations 16 Nov 2024 Xudong Lu, Yinghao Chen, Cheng Chen, Hui Tan, Boheng Chen, Yina Xie, Rui Hu, Guanxin Tan, Renshou Wu, Yan Hu, Yi Zeng, Lei Wu, Liuyang Bian, Zhaoxiong Wang, Long Liu, Yanzhou Yang, Han Xiao, Aojun Zhou, Yafei Wen, Xiaoxin Chen, Shuai Ren, Hongsheng Li

To be specific, we redesign the dynamic resolution scheme adopted by mainstream MLLMs and implement system optimization for hardware-aware deployment to optimize model inference on mobile phones.

Quantization

RedCode: Risky Code Execution and Generation Benchmark for Code Agents

1 code implementation 12 Nov 2024 Chengquan Guo, Xun Liu, Chulin Xie, Andy Zhou, Yi Zeng, Zinan Lin, Dawn Song, Bo Li

To provide comprehensive and practical evaluations on the safety of code agents, we propose RedCode, a benchmark for risky code execution and generation: (1) RedCode-Exec provides challenging prompts that could lead to risky code execution, aiming to evaluate code agents' ability to recognize and handle unsafe code.

Evolving Efficient Genetic Encoding for Deep Spiking Neural Networks

no code implementations 11 Nov 2024 Wenxuan Pan, Feifei Zhao, Bing Han, Haibo Tong, Yi Zeng

By exploiting discrete signal processing and simulating brain neuron communication, Spiking Neural Networks (SNNs) offer a low-energy alternative to Artificial Neural Networks (ANNs).

LLM-GLOBE: A Benchmark Evaluating the Cultural Values Embedded in LLM Output

1 code implementation 9 Nov 2024 Elise Karinshak, Amanda Hu, Kewen Kong, Vishwanatha Rao, Jingren Wang, Jindong Wang, Yi Zeng

Immense effort has been dedicated to minimizing the presence of harmful or biased generative content and better aligning AI output to human intention; however, research investigating the cultural values of LLMs is still in very early stages.

Similarity-based context aware continual learning for spiking neural networks

1 code implementation 28 Oct 2024 Bing Han, Feifei Zhao, Yang Li, Qingqun Kong, Xianqi Li, Yi Zeng

Additionally, our algorithm has the capability to adaptively select similar groups of neurons for related tasks, offering a promising approach to enhancing the biological interpretability of efficient continual learning.

Class Incremental Learning +1

Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models

no code implementations 5 Oct 2024 Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng

Existing attack methods are fixed or specifically tailored for certain models and cannot flexibly adjust attack strength, which is critical for generalization when attacking models of various sizes.

Prompt Engineering

Jailbreak Antidote: Runtime Safety-Utility Balance via Sparse Representation Adjustment in Large Language Models

no code implementations 3 Oct 2024 Guobin Shen, Dongcheng Zhao, Yiting Dong, Xiang He, Yi Zeng

In this paper, we introduce Jailbreak Antidote, a method that enables real-time adjustment of LLM safety preferences by manipulating a sparse subset of the model's internal states during inference.

Prompt Engineering
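The snippet describes adjusting a sparse subset of internal states at inference time. A schematic sketch of that general idea (the direction vector, sparsity level, and update rule below are hypothetical illustrations, not the paper's actual procedure):

```python
import numpy as np

def sparse_adjust(hidden, direction, strength=1.0, sparsity=0.05):
    """Shift only the most relevant coordinates of a hidden state.

    `direction` plays the role of a safety-preference direction; only the
    top `sparsity` fraction of coordinates (ranked by |direction|) are
    modified, leaving the rest of the representation untouched.
    """
    k = max(1, int(sparsity * hidden.size))
    idx = np.argsort(np.abs(direction))[-k:]   # indices of the k largest components
    out = hidden.copy()
    out[idx] += strength * direction[idx]
    return out

rng = np.random.default_rng(0)
h = rng.normal(size=100)   # stand-in for one layer's hidden state
d = rng.normal(size=100)   # stand-in for a learned safety direction
h2 = sparse_adjust(h, d, strength=0.5, sparsity=0.05)
print((h2 != h).sum())     # only 5 of 100 coordinates change
```

Because the edit touches few coordinates, the rest of the representation, and hence general utility, is largely preserved, which is the trade-off the title's "safety-utility balance" refers to.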

StressPrompt: Does Stress Impact Large Language Models and Human Performance Similarly?

no code implementations 14 Sep 2024 Guobin Shen, Dongcheng Zhao, Aorigele Bao, Xiang He, Yiting Dong, Yi Zeng

Moreover, this study contributes to the broader AI research community by offering a new perspective on how LLMs handle different scenarios and their similarities to human cognition.

Emotional Intelligence Instruction Following

Brain-Inspired Stepwise Patch Merging for Vision Transformers

no code implementations 11 Sep 2024 Yonghao Yu, Dongcheng Zhao, Guobin Shen, Yiting Dong, Yi Zeng

The hierarchical architecture has become a mainstream design paradigm for Vision Transformers (ViTs), with Patch Merging serving as the pivotal component that transforms a columnar architecture into a hierarchical one.

Object Detection +1

DMRA: An Adaptive Line Spectrum Estimation Method through Dynamical Multi-Resolution of Atoms

no code implementations 1 Sep 2024 Mingguang Han, Yi Zeng, Xiaoguang Li, Tiejun Li

To reduce computational complexity and improve frequency estimation accuracy, a two-stage strategy was further introduced to dynamically adjust the number of the optimized degrees of freedom.

Computational Efficiency Super-Resolution

CACE-Net: Co-guidance Attention and Contrastive Enhancement for Effective Audio-Visual Event Localization

1 code implementation 4 Aug 2024 Xiang He, Xiangxi Liu, Yang Li, Dongcheng Zhao, Guobin Shen, Qingqun Kong, Xin Yang, Yi Zeng

Specifically, we have enhanced the model's ability to discern subtle differences between event and background and improved the accuracy of event classification in our model.

audio-visual event localization

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies

no code implementations 11 Jul 2024 Yi Zeng, Yu Yang, Andy Zhou, Jeffrey Ziwei Tan, Yuheng Tu, Yifan Mai, Kevin Klyman, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li

However, existing public benchmarks often define safety categories based on previous literature, intuitions, or common sense, leading to disjointed sets of categories for risks specified in recent regulations and policies, which makes it challenging to evaluate and compare FMs across these benchmarks.

Common Sense Reasoning

Directly Training Temporal Spiking Neural Network with Sparse Surrogate Gradient

no code implementations 28 Jun 2024 Yang Li, Feifei Zhao, Dongcheng Zhao, Yi Zeng

Brain-inspired Spiking Neural Networks (SNNs) have attracted much attention due to their event-based computing and energy-efficient features.

AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies

no code implementations 25 Jun 2024 Yi Zeng, Kevin Klyman, Andy Zhou, Yu Yang, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li

We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, making a significant step towards establishing a unified language for generative AI safety evaluation.

BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models

1 code implementation 24 Jun 2024 Yi Zeng, Weiyu Sun, Tran Ngoc Huynh, Dawn Song, Bo Li, Ruoxi Jia

Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions.

Code Generation

Fairness-Aware Meta-Learning via Nash Bargaining

no code implementations 11 Jun 2024 Yi Zeng, Xuelin Yang, Li Chen, Cristian Canton Ferrer, Ming Jin, Michael I. Jordan, Ruoxi Jia

To address issues of group-level fairness in machine learning, it is natural to adjust model parameters based on specific fairness objectives over a sensitive-attributed validation set.

Fairness Image Classification +2

Spiking Neural Networks with Consistent Mapping Relations Allow High-Accuracy Inference

no code implementations 8 Jun 2024 Yang Li, Xiang He, Qingqun Kong, Yi Zeng

Spike-based neuromorphic hardware has demonstrated substantial potential in low energy consumption and efficient inference.

Object Detection

JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits

1 code implementation 6 Jun 2024 Minzhou Pan, Yi Zeng, Xue Lin, Ning Yu, Cho-Jui Hsieh, Peter Henderson, Ruoxi Jia

In this study, we investigate the vulnerability of image watermarks to diffusion-model-based image editing, a challenge exacerbated by the computational cost of accessing gradient information and the closed-source nature of many diffusion models.

Contrastive Learning

AI Risk Management Should Incorporate Both Safety and Security

no code implementations 29 May 2024 Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, Luxi He, Kaixuan Huang, Udari Madhushani, Vikash Sehwag, Weijia Shi, Boyi Wei, Tinghao Xie, Danqi Chen, Pin-Yu Chen, Jeffrey Ding, Ruoxi Jia, Jiaqi Ma, Arvind Narayanan, Weijie J Su, Mengdi Wang, Chaowei Xiao, Bo Li, Dawn Song, Peter Henderson, Prateek Mittal

The exposure of security vulnerabilities in safety-aligned language models, e.g., susceptibility to adversarial attacks, has shed light on the intricate interplay between AI safety and AI security.

Management

Time Cell Inspired Temporal Codebook in Spiking Neural Networks for Enhanced Image Generation

no code implementations 23 May 2024 Linghao Feng, Dongcheng Zhao, Sicheng Shen, Yiting Dong, Guobin Shen, Yi Zeng

This paper presents a novel approach leveraging Spiking Neural Networks (SNNs) to construct a Variational Quantized Autoencoder (VQ-VAE) with a temporal codebook inspired by hippocampal time cells.

Image Generation

Neuro-Vision to Language: Enhancing Brain Recording-based Visual Reconstruction and Language Interaction

no code implementations 30 Apr 2024 Guobin Shen, Dongcheng Zhao, Xiang He, Linghao Feng, Yiting Dong, Jihang Wang, Qian Zhang, Yi Zeng

Decoding non-invasive brain recordings is pivotal for advancing our understanding of human cognition but faces challenges due to individual differences and complex neural signal representations.

Brain Decoding Image Reconstruction +1

Introducing v0.5 of the AI Safety Benchmark from MLCommons

1 code implementation 18 Apr 2024 Bertie Vidgen, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Max Bartolo, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Srijan Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Sarah Luger, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, Joaquin Vanschoren

We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark.

Towards a Novel Perspective on Adversarial Examples Driven by Frequency

no code implementations 16 Apr 2024 Zhun Zhang, Yi Zeng, Qihe Liu, Shijie Zhou

In this paper, we seek to demystify this relationship by exploring the characteristics of adversarial perturbations within the frequency domain.

Adversarial Attack

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content

1 code implementation 19 Mar 2024 Zhuowen Yuan, Zidi Xiong, Yi Zeng, Ning Yu, Ruoxi Jia, Dawn Song, Bo Li

The innovative use of constrained optimization and a fusion-based guardrail approach represents a significant step forward in developing more secure and reliable LLMs, setting a new standard for content moderation frameworks in the face of evolving digital threats.

Data Augmentation

Multiple Latent Space Mapping for Compressed Dark Image Enhancement

no code implementations 12 Mar 2024 Yi Zeng, Zhengning Wang, Yuxuan Liu, Tianjiao Zeng, Xuhang Liu, Xinglong Luo, Shuaicheng Liu, Shuyuan Zhu, Bing Zeng

Since texture details intertwine with compression artifacts in compressed dark images, detail enhancement and blocking artifacts suppression contradict each other in image space.

Blocking Image Enhancement

Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning

no code implementations 12 Mar 2024 Yao Liang, Yuwei Wang, Yang Li, Yi Zeng

Inspired by the idea that the brain's functions are shaped by its geometric structure, this paper integrates this principle into LoRA and proposes a new matrix-transformation-based reparameterization method for efficient fine-tuning, named Matrix-Transformation based Low-Rank Adaptation (MTLoRA).

Natural Language Understanding parameter-efficient fine-tuning +1

TIM: An Efficient Temporal Interaction Module for Spiking Transformer

2 code implementations 22 Jan 2024 Sicheng Shen, Dongcheng Zhao, Guobin Shen, Yi Zeng

Spiking Neural Networks (SNNs), as the third generation of neural networks, have gained prominence for their biological plausibility and computational efficiency, especially in processing diverse datasets.

Computational Efficiency Image Classification

A Brain-inspired Computational Model for Human-like Concept Learning

no code implementations 12 Jan 2024 Yuwei Wang, Yi Zeng

Concept learning is a fundamental aspect of human cognition and plays a critical role in mental processes such as categorization, reasoning, memory, and decision-making.

Decision Making

How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs

2 code implementations 12 Jan 2024 Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, Weiyan Shi

This paper introduces a new perspective to jailbreak LLMs as human-like communicators, to explore this overlooked intersection between everyday language interaction and AI safety.

Are Conventional SNNs Really Efficient? A Perspective from Network Quantization

no code implementations CVPR 2024 Guobin Shen, Dongcheng Zhao, Tenglong Li, Jindong Li, Yi Zeng

This paper introduces a unified perspective illustrating that the time steps in SNNs and quantized bit-widths of activation values present analogous representations.

Fairness Quantization
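The stated analogy can be made concrete: accumulating binary spikes over T time steps yields an integer count in {0, ..., T}, the same representational capacity as a ⌈log2(T+1)⌉-bit uniformly quantized activation. A toy sketch under that reading (illustrative only, not code from the paper):

```python
import math
import numpy as np

def snn_rate_code(x, T, seed=0):
    """Encode a real activation in [0, 1] as T Bernoulli spikes; the spike
    count is an integer in {0, ..., T}, mirroring a uniform quantizer."""
    rng = np.random.default_rng(seed)
    return int((rng.random(T) < x).sum())

def quantize(x, bits):
    """Uniformly quantize x in [0, 1] to an integer level in {0, ..., 2^bits - 1}."""
    levels = 2 ** bits - 1
    return round(x * levels)

T = 7
bits = math.ceil(math.log2(T + 1))   # 7 time steps <-> 3-bit activations
print(bits, quantize(0.6, bits))     # 3-bit code; 0.6 maps to level 4 of 7
```

Seen this way, reducing SNN time steps is the counterpart of lowering activation bit-width in a quantized ANN, which is the efficiency comparison the paper formalizes.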

MatchDet: A Collaborative Framework for Image Matching and Object Detection

no code implementations 18 Dec 2023 Jinxiang Lai, Wenlong Wu, Bin-Bin Gao, Jun Liu, Jiawei Zhan, Congchong Nie, Yi Zeng, Chengjie Wang

Image matching and object detection are two fundamental and challenging tasks, while many related applications consider them two individual tasks (i.e., task-individual).

Object Detection

Is Conventional SNN Really Efficient? A Perspective from Network Quantization

no code implementations 17 Nov 2023 Guobin Shen, Dongcheng Zhao, Tenglong Li, Jindong Li, Yi Zeng

This paper introduces a unified perspective, illustrating that the time steps in SNNs and quantized bit-widths of activation values present analogous representations.

Fairness Quantization

STREAM: Social data and knowledge collective intelligence platform for TRaining Ethical AI Models

no code implementations 9 Oct 2023 Yuwei Wang, Enmeng Lu, Zizhe Ruan, Yao Liang, Yi Zeng

This paper presents STREAM, a Social data and knowledge collective intelligence platform for TRaining Ethical AI Models, which addresses the challenge of aligning AI models with human moral values and provides ethics datasets and knowledge bases to help AI models "follow good advice as naturally as a stream follows its course".

Ethics

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

1 code implementation 5 Oct 2023 Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson

Optimizing large language models (LLMs) for downstream use cases often involves the customization of pre-trained LLMs through further fine-tuning.

Red Teaming Safety Alignment

FireFly v2: Advancing Hardware Support for High-Performance Spiking Neural Network with a Spatiotemporal FPGA Accelerator

no code implementations 28 Sep 2023 Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng

As a further step in supporting high-performance SNNs on specialized hardware, we introduce FireFly v2, an FPGA SNN accelerator that can address the issue of non-spike operation in current SOTA SNN algorithms, which presents an obstacle in the end-to-end deployment onto existing SNN hardware.

Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks

no code implementations 18 Sep 2023 Bing Han, Feifei Zhao, Wenxuan Pan, Zhaoya Zhao, Xianqi Li, Qingqun Kong, Yi Zeng

In this paper, we propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways, which employs Self-Organizing Regulation networks to reorganize the single and limited Spiking Neural Network (SOR-SNN) into rich sparse neural pathways to efficiently cope with incremental tasks.

Continual Learning

Brain-inspired Evolutionary Architectures for Spiking Neural Networks

no code implementations 11 Sep 2023 Wenxuan Pan, Feifei Zhao, Zhuoya Zhao, Yi Zeng

This work explores brain-inspired neural architectures suitable for SNNs and also provides preliminary insights into the evolutionary mechanisms of biological neural networks in the human brain.

Learning the Plasticity: Plasticity-Driven Learning Framework in Spiking Neural Networks

no code implementations 23 Aug 2023 Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Feifei Zhao, Yi Zeng

This shift in focus from weight adjustment to mastering the intricacies of synaptic change offers a more flexible and dynamic pathway for neural networks to evolve and adapt.

FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

no code implementations 11 Aug 2023 Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Horst Joachim Mayer, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Isabell Tributsch, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marina Camacho, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, Martijn P A Starmans

This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.

Fairness

Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks

1 code implementation 9 Aug 2023 Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan, Guobin Shen

In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge to new tasks, empowering a single network capable of supporting multiple incremental tasks (without the separate sub-network mask for each task).

Class Incremental Learning +1

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

1 code implementation 4 Jun 2023 Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, Ruoxi Jia, Jiayu Zhou

Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model.

Backdoor Defense for Data-Free Distillation with Poisoned Teachers Data-free Knowledge Distillation

Alteration-free and Model-agnostic Origin Attribution of Generated Images

no code implementations 29 May 2023 Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma

To overcome this problem, we first develop an alteration-free and model-agnostic origin attribution method via input reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image.

Image Generation

Improving Stability and Performance of Spiking Neural Networks through Enhancing Temporal Consistency

no code implementations 23 May 2023 Dongcheng Zhao, Guobin Shen, Yiting Dong, Yang Li, Yi Zeng

Notably, our algorithm has achieved state-of-the-art performance on neuromorphic datasets DVS-CIFAR10 and N-Caltech101, and can achieve superior performance in the test phase with timestep T=1.

Dive into the Power of Neuronal Heterogeneity

no code implementations 19 May 2023 Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Yi Zeng

The biological neural network is a vast and diverse structure with high neural heterogeneity.

Continuous Control

Spiking Generative Adversarial Network with Attention Scoring Decoding

no code implementations 17 May 2023 Linghao Feng, Dongcheng Zhao, Yi Zeng

As it stands, such models are primarily limited to the domain of artificial neural networks.

Generative Adversarial Network

LAVA: Data Valuation without Pre-Specified Learning Algorithms

1 code implementation 28 Apr 2023 Hoang Anh Just, Feiyang Kang, Jiachen T. Wang, Yi Zeng, Myeongseob Ko, Ming Jin, Ruoxi Jia

(1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between training and validation sets.

Data Valuation
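The proxy in (1) can be illustrated with a toy reduction: a per-class 1-D Wasserstein-1 distance on a scalar feature, averaged over classes (the paper uses a non-conventional class-wise Wasserstein distance over full feature distributions; this sketch only conveys the shape of the idea):

```python
import numpy as np

def w1_1d(a, b):
    """Wasserstein-1 distance between two equal-size 1-D empirical samples:
    the mean absolute difference of the sorted values."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

def classwise_w1(train_x, train_y, val_x, val_y):
    """Average per-class W1 between training and validation feature
    distributions; a larger value suggests a worse-matched training set."""
    classes = np.unique(val_y)
    return float(np.mean([w1_1d(train_x[train_y == c], val_x[val_y == c])
                          for c in classes]))

rng = np.random.default_rng(0)
val_x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
val_y = np.repeat([0, 1], 50)
good = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])  # matches validation
bad = np.concatenate([rng.normal(1, 1, 50), rng.normal(5, 1, 50)])   # class-shifted data
y = np.repeat([0, 1], 50)
print(classwise_w1(good, y, val_x, val_y) < classwise_w1(bad, y, val_x, val_y))
```

The distance ranks training sets without ever training a model, which is what "without pre-specified learning algorithms" refers to.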

Multi-scale Evolutionary Neural Architecture Search for Deep Spiking Neural Networks

no code implementations 21 Apr 2023 Wenxuan Pan, Feifei Zhao, Guobin Shen, Yi Zeng

The neural motifs topology, modular regional structure and global cross-brain region connection of the human brain are the product of natural evolution and can serve as a perfect reference for designing brain-inspired SNN architecture.

Neural Architecture Search

Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future

no code implementations 13 Apr 2023 Yiting Dong, Dongcheng Zhao, Yi Zeng

However, SNNs typically grapple with challenges such as extended time steps, low temporal information utilization, and the requirement for consistent time steps between testing and training.

Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks

no code implementations 31 Mar 2023 Wenxuan Pan, Feifei Zhao, Yi Zeng, Bing Han

For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture design of liquid layer with separation property.

Decision Making

MSAT: Biologically Inspired Multi-Stage Adaptive Threshold for Conversion of Spiking Neural Networks

no code implementations 23 Mar 2023 Xiang He, Yang Li, Dongcheng Zhao, Qingqun Kong, Yi Zeng

The self-adaptation to membrane potential and input allows a timely adjustment of the threshold to fire spikes faster and transmit more information.

Sentiment Analysis Sentiment Classification +2

An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain

1 code implementation 23 Mar 2023 Xiang He, Dongcheng Zhao, Yang Li, Guobin Shen, Qingqun Kong, Yi Zeng

In order to improve the generalization ability of SNNs on event-based datasets, we use static images to assist SNN training on event data.

Transfer Learning

Brain-inspired bodily self-perception model for robot rubber hand illusion

no code implementations 22 Mar 2023 Yuxuan Zhao, Enmeng Lu, Yi Zeng

Despite the conceptual descriptions of the mechanisms of bodily self-consciousness and the possible relevant brain areas, the existing theoretical models still lack an explanation of the computational mechanisms by which the brain encodes the perception of one's body and how our subjectively perceived body illusions can be generated by neural networks.

ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms

1 code implementation 22 Feb 2023 Minzhou Pan, Yi Zeng, Lingjuan Lyu, Xue Lin, Ruoxi Jia

However, we lack a thorough understanding of the applicability of existing detection methods across a variety of learning settings.

backdoor defense Self-Supervised Learning +1

Exploiting High Performance Spiking Neural Networks with Efficient Spiking Patterns

no code implementations 29 Jan 2023 Guobin Shen, Dongcheng Zhao, Yi Zeng

Inspired by spike patterns in biological neurons, this paper introduces the dynamic Burst pattern and designs the Leaky Integrate and Fire or Burst (LIFB) neuron that can make a trade-off between short-time performance and dynamic temporal performance from the perspective of network information capacity.

Vocal Bursts Intensity Prediction
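The LIFB neuron extends the standard leaky integrate-and-fire model with a burst mode. As background, a minimal sketch of the plain LIF dynamics it builds on (the burst mechanism itself is paper-specific and omitted; `decay` and `threshold` values are illustrative):

```python
def lif_simulate(inputs, decay=0.9, threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron over T time steps.

    The membrane potential leaks by `decay` each step, accumulates the
    input current, and emits a spike (with a soft reset) whenever it
    crosses `threshold`.
    """
    v, spikes = 0.0, []
    for i in inputs:
        v = decay * v + i          # leak, then integrate the input current
        if v >= threshold:
            spikes.append(1)
            v -= threshold         # soft reset: subtract the threshold
        else:
            spikes.append(0)
    return spikes

print(lif_simulate([0.6, 0.6, 0.6, 0.0, 0.9]))  # -> [0, 1, 0, 0, 1]
```

A burst variant would let the neuron emit more than one spike's worth of output in a single step when strongly driven, which is the short-time/temporal trade-off the abstract describes.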

Multi-compartment Neuron and Population Encoding improved Spiking Neural Network for Deep Distributional Reinforcement Learning

no code implementations 18 Jan 2023 Yinqian Sun, Yi Zeng, Feifei Zhao, Zhuoya Zhao

In this paper, we proposed a brain-inspired SNN-based deep distributional reinforcement learning algorithm with combination of bio-inspired multi-compartment neuron (MCN) model and population coding method.

Atari Games Distributional Reinforcement Learning +3

A Brain-inspired Memory Transformation based Differentiable Neural Computer for Reasoning-based Question Answering

no code implementations 7 Jan 2023 Yao Liang, Hongjian Fang, Yi Zeng, Feifei Zhao

Reasoning and question answering, as basic cognitive functions for humans, are nevertheless a great challenge for current artificial intelligence.

Question Answering

FireFly: A High-Throughput Hardware Accelerator for Spiking Neural Networks with Efficient DSP and Memory Optimization

no code implementations 5 Jan 2023 Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng

To improve memory efficiency, we design a memory system to enable efficient synaptic weights and membrane voltage memory access with reasonable on-chip RAM consumption.

Developmental Plasticity-inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks

no code implementations 23 Nov 2022 Bing Han, Feifei Zhao, Yi Zeng, Guobin Shen

Developmental plasticity plays a prominent role in shaping the brain's structure during ongoing learning in response to dynamically changing environments.

Adaptive Sparse Structure Development with Pruning and Regeneration for Spiking Neural Networks

no code implementations22 Nov 2022 Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan

Experimental results on spatial (MNIST, CIFAR-10) and temporal neuromorphic (N-MNIST, DVS-Gesture) datasets demonstrate that our method can flexibly learn appropriate compression rate for various tasks and effectively achieve superior performance while massively reducing the network energy consumption.

tSF: Transformer-based Semantic Filter for Few-Shot Learning

1 code implementation2 Nov 2022 Jinxiang Lai, Siqian Yang, Wenlong Liu, Yi Zeng, Zhongyi Huang, Wenlong Wu, Jun Liu, Bin-Bin Gao, Chengjie Wang

Few-Shot Learning (FSL) alleviates the data-shortage challenge by embedding discriminative, target-aware features across plentiful seen (base) and few unseen (novel) labeled samples.

Few-Shot Learning object-detection +1

Abutting Grating Illusion: Cognitive Challenge to Neural Network Models

no code implementations8 Aug 2022 Jinyu Fan, Yi Zeng

Even the state-of-the-art deep learning models lack fundamental abilities compared to humans.

Data Augmentation Deep Learning

BrainCog: A Spiking Neural Network based Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation

no code implementations18 Jul 2022 Yi Zeng, Dongcheng Zhao, Feifei Zhao, Guobin Shen, Yiting Dong, Enmeng Lu, Qian Zhang, Yinqian Sun, Qian Liang, Yuxuan Zhao, Zhuoya Zhao, Hongjian Fang, Yuwei Wang, Yang Li, Xin Liu, Chengcheng Du, Qingqun Kong, Zizhe Ruan, Weida Bi

These brain-inspired AI models have been effectively validated on various supervised, unsupervised, and reinforcement learning tasks, and they can endow AI models with multiple brain-inspired cognitive functions.

Decision Making

Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge Representation and Reasoning

no code implementations11 Jul 2022 Hongjian Fang, Yi Zeng, Jianbo Tang, Yuwei Wang, Yao Liang, Xin Liu

For the fields of neuroscience and cognitive science, this work provides a computational-modeling foundation for further exploring how the human brain represents commonsense knowledge.

Spike Calibration: Fast and Accurate Conversion of Spiking Neural Network for Object Detection and Segmentation

no code implementations6 Jul 2022 Yang Li, Xiang He, Yiting Dong, Qingqun Kong, Yi Zeng

Spiking neural networks (SNNs) have attracted great attention due to their high biological plausibility and low energy consumption on neuromorphic hardware.

Bayesian Optimization object-detection +1

An Unsupervised STDP-based Spiking Neural Network Inspired By Biologically Plausible Learning Rules and Connections

no code implementations6 Jul 2022 Yiting Dong, Dongcheng Zhao, Yang Li, Yi Zeng

By integrating the above three adaptive mechanisms and STB-STDP, our model greatly accelerates the training of unsupervised spiking neural networks and improves the performance of unsupervised SNNs on complex tasks.
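The classic pairwise STDP rule underlying such training can be sketched as follows; the constants are illustrative textbook values, not the paper's STB-STDP parameters:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise spike-timing-dependent plasticity weight update for timing
    difference dt = t_post - t_pre (in ms). Pre-before-post (dt > 0)
    potentiates the synapse; post-before-pre (dt <= 0) depresses it, with an
    exponential fall-off as the spikes move further apart in time."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Closely timed spike pairs produce the largest weight changes, which is what makes STDP sensitive to temporal structure in the input.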

Turning a Curse into a Blessing: Enabling In-Distribution-Data-Free Backdoor Removal via Stabilized Model Inversion

no code implementations14 Jun 2022 Si Chen, Yi Zeng, Jiachen T. Wang, Won Park, Xun Chen, Lingjuan Lyu, Zhuoqing Mao, Ruoxi Jia

Our work is the first to provide a thorough understanding of leveraging model inversion for effective backdoor removal by addressing key questions about reconstructed samples' properties, perceptual similarity, and the potential presence of backdoor triggers.

DPSNN: A Differentially Private Spiking Neural Network with Temporal Enhanced Pooling

no code implementations24 May 2022 Jihang Wang, Dongcheng Zhao, Guobin Shen, Qian Zhang, Yi Zeng

Privacy protection is a crucial issue for machine learning algorithms, yet existing privacy-protection methods are built around traditional, real-valued artificial neural networks.

Face Recognition Image Classification +5

EventMix: An Efficient Augmentation Strategy for Event-Based Data

no code implementations24 May 2022 Guobin Shen, Dongcheng Zhao, Yi Zeng

Data augmentation can improve the quantity and quality of training data by deriving additional representations from the original data.

Data Augmentation
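The general mixing idea can be sketched in a mixup-like form: sample a ratio, keep that fraction of events from one stream and the rest from another, and weight the mixed label accordingly. This is a simplified illustration under our own assumptions; EventMix's actual mixing strategy differs:

```python
import random

def mix_event_streams(events_a, events_b, lam=None, rng=random):
    """Mixup-style augmentation for event-based data: keep a fraction lam of
    stream A's events and 1 - lam of stream B's. Returns the mixed event list
    and lam, which also serves as the label-mixing weight."""
    if lam is None:
        lam = rng.random()
    n_a = int(lam * len(events_a))
    n_b = int((1 - lam) * len(events_b))
    mixed = rng.sample(events_a, n_a) + rng.sample(events_b, n_b)
    return mixed, lam
```

With `lam=0.5` and two ten-event streams, the result contains five events from each.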

Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes

1 code implementation28 Apr 2022 Yang Li, Yi Zeng

The spiking neural network (SNN), a brain-inspired, energy-efficient neural network, has attracted the interest of researchers.

Efficient Neural Network

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information

3 code implementations11 Apr 2022 Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia

With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger.

Backdoor Attack Clean-label Backdoor Attack (0.024%) +1
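The two budget figures are consistent: on a balanced 10-class dataset, 0.5% of one class is exactly 0.05% of the whole training set. A small arithmetic sanity check (our illustration, assuming a CIFAR-10-sized dataset, not the paper's code):

```python
def poison_budget(train_size, n_classes, target_class_frac=0.005):
    """For a balanced dataset, poisoning target_class_frac of one class
    amounts to target_class_frac / n_classes of the full training set.
    Returns (number of poisoned samples, fraction of the training set)."""
    per_class = train_size // n_classes
    n_poison = int(per_class * target_class_frac)
    return n_poison, n_poison / train_size
```

For 50,000 training samples in 10 classes, this yields 25 poisoned samples, i.e. 0.05% of the set.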

N-Omniglot, a large-scale neuromorphic dataset for spatio-temporal sparse few-shot learning

1 code implementation25 Dec 2021 Yang Li, Yiting Dong, Dongcheng Zhao, Yi Zeng

Few-shot learning (learning with a few samples) is one of the most important cognitive abilities of the human brain.

Few-Shot Learning

Spiking CapsNet: A Spiking Neural Network With A Biologically Plausible Routing Rule Between Capsules

no code implementations15 Nov 2021 Dongcheng Zhao, Yang Li, Yi Zeng, Jihang Wang, Qian Zhang

Our Spiking CapsNet fully combines the strengths of SNNs and CapsNet, and shows strong robustness to noise and affine transformations.

Backpropagation with Biologically Plausible Spatio-Temporal Adjustment For Training Deep Spiking Neural Networks

no code implementations17 Oct 2021 Guobin Shen, Dongcheng Zhao, Yi Zeng

Secondly, we propose a biologically plausible temporal adjustment that lets the error propagate across spikes in the temporal dimension, overcoming the temporal-dependency problem within a single spike period of traditional spiking neurons.

Adversarial Unlearning of Backdoors via Implicit Hypergradient

2 code implementations ICLR 2022 Yi Zeng, Si Chen, Won Park, Z. Morley Mao, Ming Jin, Ruoxi Jia

In particular, its performance is more robust to variations in triggers, attack settings, poison ratio, and clean-data size.

Towards General Robustness to Bad Training Data

no code implementations29 Sep 2021 Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia

In this paper, we focus on the problem of identifying bad training data when the underlying cause is unknown in advance.

Data Summarization

A Unified Framework for Task-Driven Data Quality Management

no code implementations10 Jun 2021 Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia

High-quality data is critical to train performant Machine Learning (ML) models, highlighting the importance of Data Quality Management (DQM).

Data Summarization Data Valuation +1

BSNN: Towards Faster and Better Conversion of Artificial Neural Networks to Spiking Neural Networks with Bistable Neurons

no code implementations27 May 2021 Yang Li, Yi Zeng, Dongcheng Zhao

Moreover, when ResNet-style ANNs are converted, the information at output neurons is incomplete due to the rapid transmission along the shortcut path.

BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and Balanced Excitatory-Inhibitory Neurons

no code implementations27 May 2021 Dongcheng Zhao, Yi Zeng, Yang Li

With the combination of the two mechanisms, we propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN).

Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

1 code implementation ICCV 2021 Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia

Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability.
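The paper examines triggers in the frequency domain; a rough stand-in for that intuition (our illustration, not the paper's method) is neighboring-pixel-difference energy: a patch-like trigger with sharp edges scores high, while a smoothed, low-frequency trigger scores low:

```python
def high_freq_energy(img):
    """Mean squared difference between horizontally and vertically adjacent
    pixels of a 2D image (list of lists): a crude proxy for high-frequency
    content. Sharp artifacts raise the score; smooth regions score near zero."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:                       # horizontal neighbor
                total += (img[y][x] - img[y][x + 1]) ** 2
                count += 1
            if y + 1 < h:                       # vertical neighbor
                total += (img[y][x] - img[y + 1][x]) ** 2
                count += 1
    return total / count
```

A checkerboard (maximally sharp) scores 1.0 under this measure, while a constant image scores 0.0.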

SelfGait: A Spatiotemporal Representation Learning Method for Self-supervised Gait Recognition

1 code implementation27 Mar 2021 Yiqun Liu, Yi Zeng, Jian Pu, Hongming Shan, Peiyang He, Junping Zhang

In this work, we propose a self-supervised gait recognition method, termed SelfGait, which leverages massive, diverse, unlabeled gait data in a pre-training process to improve the representation abilities of spatiotemporal backbones.

Gait Recognition Representation Learning

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

no code implementations13 Dec 2020 Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham

In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.

Backdoor Attack Data Augmentation

FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques

1 code implementation3 Dec 2020 Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu

As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.

Adversarial Attack Data Augmentation

Quantum Superposition Inspired Spiking Neural Network

no code implementations23 Oct 2020 Yinqian Sun, Yi Zeng, Tielin Zhang

Despite advances in artificial intelligence models, neural networks still cannot achieve human performance, partly due to differences in how information is encoded and processed compared to the human brain.

BIG-bench Machine Learning

A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks

no code implementations30 Jul 2020 Yi Zeng, Han Qiu, Gerard Memmi, Meikang Qiu

Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be vulnerable to Adversarial Examples (AEs), namely imperceptible perturbations added maliciously to cause wrong classification results.

Data Augmentation

Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques

1 code implementation27 May 2020 Han Qiu, Yi Zeng, Qinkai Zheng, Tianwei Zhang, Meikang Qiu, Gerard Memmi

Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years.

Responsible Facial Recognition and Beyond

no code implementations19 Sep 2019 Yi Zeng, Enmeng Lu, Yinqian Sun, Ruochen Tian

Facial recognition is changing the way we live in and interact with our society.

Gait Recognition Iris Recognition

TEST: an End-to-End Network Traffic Examination and Identification Framework Based on Spatio-Temporal Features Extraction

no code implementations26 Aug 2019 Yi Zeng, Zihao Qi, Wen-Cheng Chen, Yanzhe Huang, Xingxin Zheng, Han Qiu

As more encrypted traffic flows through the Internet, effectively identifying network traffic has become a top priority in the field.

Intrusion Detection Traffic Classification

Towards High-Resolution Salient Object Detection

1 code implementation ICCV 2019 Yi Zeng, Pingping Zhang, Jianming Zhang, Zhe Lin, Huchuan Lu

This paper pushes forward high-resolution saliency detection, and contributes a new dataset, named High-Resolution Salient Object Detection (HRSOD).

Ranked #12 on RGB Salient Object Detection on DAVIS-S (using extra training data)

Object object-detection +4

Linking Artificial Intelligence Principles

no code implementations12 Dec 2018 Yi Zeng, Enmeng Lu, Cunqing Huangfu

Artificial Intelligence principles define social and ethical considerations to develop future AI.
