no code implementations • 29 Jan 2025 • Yoshua Bengio, Sören Mindermann, Daniel Privitera, Tamay Besiroglu, Rishi Bommasani, Stephen Casper, Yejin Choi, Philip Fox, Ben Garfinkel, Danielle Goldfarb, Hoda Heidari, Anson Ho, Sayash Kapoor, Leila Khalatbari, Shayne Longpre, Sam Manning, Vasilios Mavroudis, Mantas Mazeika, Julian Michael, Jessica Newman, Kwan Yee Ng, Chinasa T. Okolo, Deborah Raji, Girish Sastry, Elizabeth Seger, Theodora Skeadas, Tobin South, Emma Strubell, Florian Tramèr, Lucia Velasco, Nicole Wheeler, Daron Acemoglu, Olubayo Adekanmbi, David Dalrymple, Thomas G. Dietterich, Edward W. Felten, Pascale Fung, Pierre-Olivier Gourinchas, Fredrik Heintz, Geoffrey Hinton, Nick Jennings, Andreas Krause, Susan Leavy, Percy Liang, Teresa Ludermir, Vidushi Marda, Helen Margetts, John McDermid, Jane Munga, Arvind Narayanan, Alondra Nelson, Clara Neppel, Alice Oh, Gopal Ramchurn, Stuart Russell, Marietje Schaake, Bernhard Schölkopf, Dawn Song, Alvaro Soto, Lee Tiedrich, Gaël Varoquaux, Andrew Yao, Ya-Qin Zhang, Fahad Albalawi, Marwan Alserkal, Olubunmi Ajala, Guillaume Avrin, Christian Busch, André Carlos Ponce de Leon Ferreira de Carvalho, Bronwyn Fox, Amandeep Singh Gill, Ahmet Halit Hatip, Juha Heikkilä, Gill Jolly, Ziv Katzir, Hiroaki Kitano, Antonio Krüger, Chris Johnson, Saif M. Khan, Kyoung Mu Lee, Dominic Vincent Ligot, Oleksii Molchanovskyi, Andrea Monti, Nusu Mwamanzi, Mona Nemer, Nuria Oliver, José Ramón López Portillo, Balaraman Ravindran, Raquel Pezoa Rivera, Hammam Riza, Crystal Rugege, Ciarán Seoighe, Jerry Sheehan, Haroon Sheikh, Denise Wong, Yi Zeng
The first International AI Safety Report comprehensively synthesizes the current evidence on the capabilities, risks, and safety of advanced AI systems.
no code implementations • 24 Jan 2025 • Guobin Shen, Jindong Li, Tenglong Li, Dongcheng Zhao, Yi Zeng
SpikePack achieves constant $\mathcal{O}(1)$ time and space complexity, enabling efficient parallel processing on GPUs and also supporting serial inference on existing SNN hardware accelerators.
no code implementations • 14 Jan 2025 • Yao Liang, Yuwei Wang, Yi Zeng
We propose Triangular Adaptive Low-Rank Adaptation (TriAdaptLoRA), a novel PEFT framework inspired by neuroscience principles, which dynamically optimizes the allocation of trainable parameters.
Natural Language Understanding
parameter-efficient fine-tuning
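To make the parameter-efficient fine-tuning idea behind TriAdaptLoRA concrete, here is a minimal sketch of a plain low-rank adaptation (LoRA) layer. The triangular, neuroscience-inspired rank-allocation rule from the paper is not reproduced; in this sketch the rank is simply a fixed hyperparameter.

```python
# Minimal LoRA sketch: a frozen pretrained linear layer plus a trainable
# low-rank update B @ A. The adaptive allocation of trainable parameters
# described in the paper is NOT shown here; `rank` is a fixed assumption.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weight
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # frozen path + trainable low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # trainable params only
```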
1 code implementation • 31 Dec 2024 • Haibo Tong, Enmeng Lu, Yinqian Sun, Zhengqiang Han, Chao Liu, Feifei Zhao, Yi Zeng
With the widespread application of Artificial Intelligence (AI) in human society, enabling AI to autonomously align with human values has become a pressing issue to ensure its sustainable development and benefit to humanity.
no code implementations • 3 Dec 2024 • Yi Zeng, Jinwei Li, Hui Zhu, Shukuan Lu, Jianfeng Li, Xiran Cai
Compared to the data-adaptive beamformers, the deep beamformer reduced the computational cost by three orders of magnitude, achieving a 10.5 ms image reconstruction speed on our data, while the image quality was as good as that of the data-adaptive beamformers.
no code implementations • 22 Nov 2024 • Qian Liang, Yi Zeng, Menghaoran Tang
In this paper, we propose a spiking neural network inspired by brain mechanisms and psychological theories to represent musical modes and keys, ultimately generating musical pieces that incorporate tonality features.
no code implementations • 16 Nov 2024 • Xudong Lu, Yinghao Chen, Cheng Chen, Hui Tan, Boheng Chen, Yina Xie, Rui Hu, Guanxin Tan, Renshou Wu, Yan Hu, Yi Zeng, Lei Wu, Liuyang Bian, Zhaoxiong Wang, Long Liu, Yanzhou Yang, Han Xiao, Aojun Zhou, Yafei Wen, Xiaoxin Chen, Shuai Ren, Hongsheng Li
To be specific, we redesign the dynamic resolution scheme adopted by mainstream MLLMs and implement system optimization for hardware-aware deployment to optimize model inference on mobile phones.
1 code implementation • 12 Nov 2024 • Chengquan Guo, Xun Liu, Chulin Xie, Andy Zhou, Yi Zeng, Zinan Lin, Dawn Song, Bo Li
To provide comprehensive and practical evaluations on the safety of code agents, we propose RedCode, a benchmark for risky code execution and generation: (1) RedCode-Exec provides challenging prompts that could lead to risky code execution, aiming to evaluate code agents' ability to recognize and handle unsafe code.
no code implementations • 11 Nov 2024 • Wenxuan Pan, Feifei Zhao, Bing Han, Haibo Tong, Yi Zeng
By exploiting discrete signal processing and simulating brain neuron communication, Spiking Neural Networks (SNNs) offer a low-energy alternative to Artificial Neural Networks (ANNs).
no code implementations • 9 Nov 2024 • Yi Zeng, Mingguang Han, Xiaoguang Li, Tiejun Li
Channel estimation and extrapolation are fundamental issues in MIMO communication systems.
1 code implementation • 9 Nov 2024 • Elise Karinshak, Amanda Hu, Kewen Kong, Vishwanatha Rao, Jingren Wang, Jindong Wang, Yi Zeng
Immense effort has been dedicated to minimizing the presence of harmful or biased generative content and better aligning AI output to human intention; however, research investigating the cultural values of LLMs is still in very early stages.
no code implementations • 29 Oct 2024 • Feifei Zhao, Hui Feng, Haibo Tong, Zhengqiang Han, Enmeng Lu, Yinqian Sun, Yi Zeng
In contrast, the intrinsic altruistic motivation based on empathy is more willing, spontaneous, and robust.
1 code implementation • 28 Oct 2024 • Bing Han, Feifei Zhao, Yang Li, Qingqun Kong, Xianqi Li, Yi Zeng
Additionally, our algorithm has the capability to adaptively select similar groups of neurons for related tasks, offering a promising approach to enhancing the biological interpretability of efficient continual learning.
no code implementations • 5 Oct 2024 • Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng
Existing attack methods are fixed or specifically tailored for certain models and cannot flexibly adjust attack strength, which is critical for generalization when attacking models of various sizes.
no code implementations • 3 Oct 2024 • Guobin Shen, Dongcheng Zhao, Yiting Dong, Xiang He, Yi Zeng
In this paper, we introduce Jailbreak Antidote, a method that enables real-time adjustment of LLM safety preferences by manipulating a sparse subset of the model's internal states during inference.
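As an illustration of what adjusting a sparse subset of internal states at inference time can look like, the following is a generic sketch of steering a transformer layer's hidden states with a forward hook. The actual safety direction, sparsity selection, and layer choice used by Jailbreak Antidote are not reproduced; `safety_direction`, `alpha`, `top_k`, and the layer path are placeholder assumptions.

```python
# Illustrative hidden-state steering at inference time (not the paper's method).
import torch

def make_steering_hook(safety_direction: torch.Tensor, alpha: float = 4.0, top_k: int = 64):
    # keep only the top-k largest-magnitude components so the edit stays sparse
    mask = torch.zeros_like(safety_direction)
    idx = safety_direction.abs().topk(top_k).indices
    mask[idx] = 1.0
    sparse_dir = safety_direction * mask

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * sparse_dir        # shift activations along the chosen direction
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

# hypothetical usage on a Hugging Face-style decoder:
# handle = model.model.layers[20].register_forward_hook(make_steering_hook(direction))
# ... run generation ...
# handle.remove()
```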
no code implementations • 14 Sep 2024 • Guobin Shen, Dongcheng Zhao, Aorigele Bao, Xiang He, Yiting Dong, Yi Zeng
Moreover, this study contributes to the broader AI research community by offering a new perspective on how LLMs handle different scenarios and their similarities to human cognition.
no code implementations • 11 Sep 2024 • Yonghao Yu, Dongcheng Zhao, Guobin Shen, Yiting Dong, Yi Zeng
The hierarchical architecture has become a mainstream design paradigm for Vision Transformers (ViTs), with Patch Merging serving as the pivotal component that transforms a columnar architecture into a hierarchical one.
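For readers unfamiliar with the component, here is a minimal sketch of the standard Patch Merging step used in hierarchical ViTs (Swin-style): 2x2 neighbouring patches are concatenated along the channel axis (C to 4C) and linearly projected to 2C, halving spatial resolution. The specific variant studied in this paper may differ; this only shows the common operation.

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(4 * dim)
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x):                       # x: (B, H, W, C), H and W even
        x0 = x[:, 0::2, 0::2, :]                # top-left patch of each 2x2 block
        x1 = x[:, 1::2, 0::2, :]                # bottom-left
        x2 = x[:, 0::2, 1::2, :]                # top-right
        x3 = x[:, 1::2, 1::2, :]                # bottom-right
        x = torch.cat([x0, x1, x2, x3], dim=-1) # (B, H/2, W/2, 4C)
        return self.reduction(self.norm(x))     # (B, H/2, W/2, 2C)

out = PatchMerging(96)(torch.randn(1, 56, 56, 96))
print(out.shape)   # torch.Size([1, 28, 28, 192])
```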
no code implementations • 1 Sep 2024 • Mingguang Han, Yi Zeng, Xiaoguang Li, Tiejun Li
To reduce computational complexity and improve frequency estimation accuracy, a two-stage strategy was further introduced to dynamically adjust the number of optimized degrees of freedom.
1 code implementation • 4 Aug 2024 • Xiang He, Xiangxi Liu, Yang Li, Dongcheng Zhao, Guobin Shen, Qingqun Kong, Xin Yang, Yi Zeng
Specifically, we have enhanced the model's ability to discern subtle differences between event and background and improved the accuracy of event classification in our model.
no code implementations • 11 Jul 2024 • Yi Zeng, Yu Yang, Andy Zhou, Jeffrey Ziwei Tan, Yuheng Tu, Yifan Mai, Kevin Klyman, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li
However, existing public benchmarks often define safety categories based on previous literature, intuitions, or common sense, leading to disjointed sets of categories for risks specified in recent regulations and policies, which makes it challenging to evaluate and compare FMs across these benchmarks.
no code implementations • 28 Jun 2024 • Yang Li, Feifei Zhao, Dongcheng Zhao, Yi Zeng
Brain-inspired Spiking Neural Networks (SNNs) have attracted much attention due to their event-based computing and energy-efficient features.
no code implementations • 25 Jun 2024 • Bowei Yao, Yi Zeng, Haizhao Dai, Qing Wu, Youshen Xiao, Fei Gao, Yuyao Zhang, Jingyi Yu, Xiran Cai
Photoacoustic tomography is a hybrid biomedical imaging technology that combines the advantages of acoustic and optical imaging.
no code implementations • 25 Jun 2024 • Yi Zeng, Kevin Klyman, Andy Zhou, Yu Yang, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li
We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, making a significant step towards establishing a unified language for generative AI safety evaluation.
1 code implementation • 24 Jun 2024 • Yi Zeng, Weiyu Sun, Tran Ngoc Huynh, Dawn Song, Bo Li, Ruoxi Jia
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions.
1 code implementation • 20 Jun 2024 • Tinghao Xie, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani Sehwag, Kaixuan Huang, Luxi He, Boyi Wei, Dacheng Li, Ying Sheng, Ruoxi Jia, Bo Li, Kai Li, Danqi Chen, Peter Henderson, Prateek Mittal
First, existing methods often use coarse-grained taxonomies of unsafe topics and over-represent some fine-grained topics.
no code implementations • 11 Jun 2024 • Yi Zeng, Xuelin Yang, Li Chen, Cristian Canton Ferrer, Ming Jin, Michael I. Jordan, Ruoxi Jia
To address issues of group-level fairness in machine learning, it is natural to adjust model parameters based on specific fairness objectives over a sensitive-attributed validation set.
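As a generic sketch of the kind of group-level fairness objective this refers to, the snippet below computes a demographic-parity gap on a validation set carrying a sensitive attribute and uses it as a penalty. This is not the paper's adjustment procedure; the metric, penalty weight, and variable names are illustrative assumptions.

```python
import torch

def demographic_parity_gap(logits: torch.Tensor, sensitive: torch.Tensor) -> torch.Tensor:
    """Absolute gap in mean positive-prediction probability between two groups."""
    probs = torch.sigmoid(logits)
    rate_g0 = probs[sensitive == 0].mean()
    rate_g1 = probs[sensitive == 1].mean()
    return (rate_g0 - rate_g1).abs()

# hypothetical usage when adjusting parameters against a validation objective:
# loss = task_loss + lam * demographic_parity_gap(model(x_val), s_val)
```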
no code implementations • 8 Jun 2024 • Yang Li, Xiang He, Qingqun Kong, Yi Zeng
Spike-based neuromorphic hardware has demonstrated substantial potential in low energy consumption and efficient inference.
1 code implementation • 6 Jun 2024 • Minzhou Pan, Yi Zeng, Xue Lin, Ning Yu, Cho-Jui Hsieh, Peter Henderson, Ruoxi Jia
In this study, we investigate the vulnerability of image watermarks to diffusion-model-based image editing, a challenge exacerbated by the computational cost of accessing gradient information and the closed-source nature of many diffusion models.
no code implementations • 29 May 2024 • Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, Luxi He, Kaixuan Huang, Udari Madhushani, Vikash Sehwag, Weijia Shi, Boyi Wei, Tinghao Xie, Danqi Chen, Pin-Yu Chen, Jeffrey Ding, Ruoxi Jia, Jiaqi Ma, Arvind Narayanan, Weijie J Su, Mengdi Wang, Chaowei Xiao, Bo Li, Dawn Song, Peter Henderson, Prateek Mittal
The exposure of security vulnerabilities in safety-aligned language models, e.g., susceptibility to adversarial attacks, has shed light on the intricate interplay between AI safety and AI security.
no code implementations • 29 May 2024 • Yiting Dong, Xiang He, Guobin Shen, Dongcheng Zhao, Yang Li, Yi Zeng
However, existing augmentation methods often neglect the preservation of spatial integrity and temporal continuity.
no code implementations • 23 May 2024 • Linghao Feng, Dongcheng Zhao, Sicheng Shen, Yiting Dong, Guobin Shen, Yi Zeng
This paper presents a novel approach leveraging Spiking Neural Networks (SNNs) to construct a Variational Quantized Autoencoder (VQ-VAE) with a temporal codebook inspired by hippocampal time cells.
no code implementations • 30 Apr 2024 • Guobin Shen, Dongcheng Zhao, Xiang He, Linghao Feng, Yiting Dong, Jihang Wang, Qian Zhang, Yi Zeng
Decoding non-invasive brain recordings is pivotal for advancing our understanding of human cognition but faces challenges due to individual differences and complex neural signal representations.
1 code implementation • 18 Apr 2024 • Bertie Vidgen, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Max Bartolo, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Srijan Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Sarah Luger, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, Joaquin Vanschoren
We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark.
no code implementations • 16 Apr 2024 • Zhun Zhang, Yi Zeng, Qihe Liu, Shijie Zhou
In this paper, we seek to demystify this relationship by exploring the characteristics of adversarial perturbations within the frequency domain.
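A simple way to inspect an adversarial perturbation in the frequency domain is to take its 2-D FFT and compare the energy in low- versus high-frequency bands, as sketched below. This is only a generic diagnostic under assumed settings (the radius threshold is arbitrary), not the paper's analysis.

```python
import numpy as np

def frequency_energy_split(perturbation: np.ndarray, radius: int = 8):
    """perturbation: 2-D array (H, W). Returns (low_freq_energy, high_freq_energy)."""
    spectrum = np.fft.fftshift(np.fft.fft2(perturbation))
    power = np.abs(spectrum) ** 2
    h, w = power.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    low = power[dist <= radius].sum()          # energy near the spectrum centre
    high = power[dist > radius].sum()          # energy in the outer (high-frequency) band
    return low, high

delta = np.random.randn(32, 32) * 0.01         # stand-in for x_adv - x
print(frequency_energy_split(delta))
```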
1 code implementation • 19 Mar 2024 • Zhuowen Yuan, Zidi Xiong, Yi Zeng, Ning Yu, Ruoxi Jia, Dawn Song, Bo Li
The innovative use of constrained optimization and a fusion-based guardrail approach represents a significant step forward in developing more secure and reliable LLMs, setting a new standard for content moderation frameworks in the face of evolving digital threats.
no code implementations • 12 Mar 2024 • Yi Zeng, Zhengning Wang, Yuxuan Liu, Tianjiao Zeng, Xuhang Liu, Xinglong Luo, Shuaicheng Liu, Shuyuan Zhu, Bing Zeng
Since texture details intertwine with compression artifacts in compressed dark images, detail enhancement and blocking artifacts suppression contradict each other in image space.
no code implementations • 12 Mar 2024 • Yao Liang, Yuwei Wang, Yang Li, Yi Zeng
In response, inspired by the idea that the functions of the brain are shaped by its geometric structure, this paper integrates that idea into LoRA technology and proposes a new matrix-transformation-based reparameterization method for efficient fine-tuning, named Matrix-Transformation based Low-Rank Adaptation (MTLoRA).
Natural Language Understanding
parameter-efficient fine-tuning
no code implementations • 7 Mar 2024 • Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng-Xin Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson
Independent evaluation and red teaming are critical for identifying the risks posed by generative AI systems.
no code implementations • 29 Feb 2024 • Yi Zeng, Feifei Zhao, Yuxuan Zhao, Dongcheng Zhao, Enmeng Lu, Qian Zhang, Yuwei Wang, Hui Feng, Zhuoya Zhao, Jihang Wang, Qingqun Kong, Yinqian Sun, Yang Li, Guobin Shen, Bing Han, Yiting Dong, Wenxuan Pan, Xiang He, Aorigele Bao, Jin Wang
In this paper, we introduce a Brain-inspired and Self-based Artificial Intelligence (BriSe AI) paradigm.
no code implementations • 1 Feb 2024 • Yang Li, Yinqian Sun, Xiang He, Yiting Dong, Dongcheng Zhao, Yi Zeng
Efficient parallel computing has become a pivotal element in advancing artificial intelligence.
2 code implementations • 22 Jan 2024 • Sicheng Shen, Dongcheng Zhao, Guobin Shen, Yi Zeng
Spiking Neural Networks (SNNs), as the third generation of neural networks, have gained prominence for their biological plausibility and computational efficiency, especially in processing diverse datasets.
no code implementations • 12 Jan 2024 • Yuwei Wang, Yi Zeng
Concept learning is a fundamental aspect of human cognition and plays a critical role in mental processes such as categorization, reasoning, memory, and decision-making.
2 code implementations • 12 Jan 2024 • Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, Weiyan Shi
This paper introduces a new perspective to jailbreak LLMs as human-like communicators, to explore this overlooked intersection between everyday language interaction and AI safety.
no code implementations • CVPR 2024 • Guobin Shen, Dongcheng Zhao, Tenglong Li, Jindong Li, Yi Zeng
This paper introduces a unified perspective illustrating that the time steps in SNNs and quantized bit-widths of activation values present analogous representations.
no code implementations • 18 Dec 2023 • Jinxiang Lai, Wenlong Wu, Bin-Bin Gao, Jun Liu, Jiawei Zhan, Congchong Nie, Yi Zeng, Chengjie Wang
Image matching and object detection are two fundamental and challenging tasks, yet many related applications treat them as two individual tasks (i.e., task-individual).
no code implementations • 12 Dec 2023 • Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Jindong Li, Kang Sun, Yi Zeng
Within the complex neuroarchitecture of the brain, astrocytes play crucial roles in development, structure, and metabolism.
no code implementations • 17 Nov 2023 • Guobin Shen, Dongcheng Zhao, Tenglong Li, Jindong Li, Yi Zeng
This paper introduces a unified perspective, illustrating that the time steps in SNNs and quantized bit-widths of activation values present analogous representations.
no code implementations • 9 Oct 2023 • Yuwei Wang, Enmeng Lu, Zizhe Ruan, Yao Liang, Yi Zeng
This paper presents Social data and knowledge collective intelligence platform for TRaining Ethical AI Models (STREAM) to address the challenge of aligning AI models with human moral values, and to provide ethics datasets and knowledge bases to help promote AI models "follow good advice as naturally as a stream follows its course".
1 code implementation • 5 Oct 2023 • Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson
Optimizing large language models (LLMs) for downstream use cases often involves the customization of pre-trained LLMs through further fine-tuning.
no code implementations • 28 Sep 2023 • Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng
As a further step in supporting high-performance SNNs on specialized hardware, we introduce FireFly v2, an FPGA SNN accelerator that can address the issue of non-spike operation in current SOTA SNN algorithms, which presents an obstacle in the end-to-end deployment onto existing SNN hardware.
no code implementations • 18 Sep 2023 • Bing Han, Feifei Zhao, Wenxuan Pan, Zhaoya Zhao, Xianqi Li, Qingqun Kong, Yi Zeng
In this paper, we propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways, which employs Self-Organizing Regulation networks to reorganize the single and limited Spiking Neural Network (SOR-SNN) into rich sparse neural pathways to efficiently cope with incremental tasks.
no code implementations • 11 Sep 2023 • Wenxuan Pan, Feifei Zhao, Zhuoya Zhao, Yi Zeng
This work explores brain-inspired neural architectures suitable for SNNs and also provides preliminary insights into the evolutionary mechanisms of biological neural networks in the human brain.
no code implementations • 23 Aug 2023 • Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Feifei Zhao, Yi Zeng
This shift in focus from weight adjustment to mastering the intricacies of synaptic change offers a more flexible and dynamic pathway for neural networks to evolve and adapt.
no code implementations • 11 Aug 2023 • Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Horst Joachim Mayer, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Isabell Tributsch, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marina Camacho, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, Martijn P A Starmans
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
1 code implementation • 9 Aug 2023 • Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan, Guobin Shen
In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge to new tasks, empowering a single network capable of supporting multiple incremental tasks (without the separate sub-network mask for each task).
1 code implementation • 4 Jun 2023 • Junyuan Hong, Yi Zeng, Shuyang Yu, Lingjuan Lyu, Ruoxi Jia, Jiayu Zhou
Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model.
Backdoor Defense for Data-Free Distillation with Poisoned Teachers
Data-free Knowledge Distillation
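For context on data-free knowledge distillation, the sketch below shows a single distillation step in which the student matches the teacher's softened outputs on surrogate inputs. Real data-free KD synthesizes inputs with a generator or model inversion; random noise is used here purely as a placeholder, and the temperature and dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, batch_size=64, in_dim=784, T=4.0):
    x = torch.randn(batch_size, in_dim)                 # placeholder surrogate data
    with torch.no_grad():
        t_logits = teacher(x)                           # teacher predictions only, no real data
    s_logits = student(x)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T      # standard temperature-scaled KD loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```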
no code implementations • 29 May 2023 • Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma
To overcome this problem, we first develop an alteration-free and model-agnostic origin attribution method via input reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image.
no code implementations • 23 May 2023 • Dongcheng Zhao, Guobin Shen, Yiting Dong, Yang Li, Yi Zeng
Notably, our algorithm has achieved state-of-the-art performance on neuromorphic datasets DVS-CIFAR10 and N-Caltech101, and can achieve superior performance in the test phase with timestep T=1.
no code implementations • 19 May 2023 • Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Yi Zeng
The biological neural network is a vast and diverse structure with high neural heterogeneity.
no code implementations • 17 May 2023 • Linghao Feng, Dongcheng Zhao, Yi Zeng
As it stands, such models are primarily limited to the domain of artificial neural networks.
1 code implementation • 28 Apr 2023 • Hoang Anh Just, Feiyang Kang, Jiachen T. Wang, Yi Zeng, Myeongseob Ko, Ming Jin, Ruoxi Jia
(1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between training and validation sets.
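A heavily simplified stand-in for such a class-wise distributional proxy is sketched below: a 1-D Wasserstein distance between training and validation features, averaged over classes. The paper's non-conventional class-wise Wasserstein distance is more involved; the random projection here is only an illustrative assumption.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def classwise_w1(train_X, train_y, val_X, val_y, projection=None):
    if projection is None:
        rng = np.random.default_rng(0)
        projection = rng.normal(size=train_X.shape[1])   # random 1-D projection of features
    dists = []
    for c in np.unique(val_y):
        tr = train_X[train_y == c] @ projection
        va = val_X[val_y == c] @ projection
        if len(tr) and len(va):
            dists.append(wasserstein_distance(tr, va))
    return float(np.mean(dists))   # lower distance ~ training set better matches validation
```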
no code implementations • 21 Apr 2023 • Wenxuan Pan, Feifei Zhao, Guobin Shen, Yi Zeng
The neural motifs topology, modular regional structure and global cross-brain region connection of the human brain are the product of natural evolution and can serve as a perfect reference for designing brain-inspired SNN architecture.
no code implementations • 13 Apr 2023 • Yiting Dong, Dongcheng Zhao, Yi Zeng
However, SNNs typically grapple with challenges such as extended time steps, low temporal information utilization, and the requirement for a consistent time step between testing and training.
no code implementations • 31 Mar 2023 • Wenxuan Pan, Feifei Zhao, Yi Zeng, Bing Han
For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture design of the liquid layer with the separation property.
no code implementations • 23 Mar 2023 • Xiang He, Yang Li, Dongcheng Zhao, Qingqun Kong, Yi Zeng
The self-adaptation to membrane potential and input allows a timely adjustment of the threshold to fire spikes faster and transmit more information.
1 code implementation • 23 Mar 2023 • Xiang He, Dongcheng Zhao, Yang Li, Guobin Shen, Qingqun Kong, Yi Zeng
In order to improve the generalization ability of SNNs on event-based datasets, we use static images to assist SNN training on event data.
no code implementations • 22 Mar 2023 • Yuxuan Zhao, Enmeng Lu, Yi Zeng
Despite the conceptual descriptions of the mechanisms of bodily self-consciousness and the possible relevant brain areas, the existing theoretical models still lack an explanation of the computational mechanisms by which the brain encodes the perception of one's body and how our subjectively perceived body illusions can be generated by neural networks.
1 code implementation • 22 Feb 2023 • Minzhou Pan, Yi Zeng, Lingjuan Lyu, Xue Lin, Ruoxi Jia
However, we lack a thorough understanding of the applicability of existing detection methods across a variety of learning settings.
no code implementations • 29 Jan 2023 • Guobin Shen, Dongcheng Zhao, Yi Zeng
Inspired by spike patterns in biological neurons, this paper introduces the dynamic Burst pattern and designs the Leaky Integrate and Fire or Burst (LIFB) neuron that can make a trade-off between short-time performance and dynamic temporal performance from the perspective of network information capacity.
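The following is a discrete-time leaky integrate-and-fire update extended with a crude "burst" output when the membrane potential greatly exceeds threshold, to illustrate the kind of trade-off described. The actual LIFB dynamics and burst rule from the paper are not reproduced; `burst_threshold` and `burst_scale` are placeholder assumptions.

```python
import torch

def lifb_step(v, x, tau=2.0, v_th=1.0, burst_threshold=2.0, burst_scale=2.0):
    """v: membrane potential from the previous step, x: input current."""
    v = v + (x - v) / tau                      # leaky integration toward the input
    spike = (v >= v_th).float()                # regular single spike
    burst = (v >= burst_threshold).float()     # stronger response for large drive
    out = spike + burst * (burst_scale - 1.0)  # emit a heavier "burst" value
    v = v * (1.0 - spike)                      # hard reset after any firing
    return out, v

v = torch.zeros(4)
out, v = lifb_step(v, torch.tensor([0.5, 1.2, 2.5, 0.1]))
print(out, v)
```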
no code implementations • 18 Jan 2023 • Yinqian Sun, Yi Zeng, Feifei Zhao, Zhuoya Zhao
In this paper, we propose a brain-inspired SNN-based deep distributional reinforcement learning algorithm that combines a bio-inspired multi-compartment neuron (MCN) model with a population coding method.
no code implementations • 7 Jan 2023 • Yao Liang, Hongjian Fang, Yi Zeng, Feifei Zhao
Reasoning and question answering, as basic cognitive functions for humans, nevertheless remain a great challenge for current artificial intelligence.
no code implementations • 5 Jan 2023 • Jindong Li, Guobin Shen, Dongcheng Zhao, Qian Zhang, Yi Zeng
To improve memory efficiency, we design a memory system to enable efficient synaptic weights and membrane voltage memory access with reasonable on-chip RAM consumption.
no code implementations • 23 Nov 2022 • Bing Han, Feifei Zhao, Yi Zeng, Guobin Shen
Developmental plasticity plays a prominent role in shaping the brain's structure during ongoing learning in response to dynamically changing environments.
no code implementations • 22 Nov 2022 • Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan
Experimental results on spatial (MNIST, CIFAR-10) and temporal neuromorphic (N-MNIST, DVS-Gesture) datasets demonstrate that our method can flexibly learn appropriate compression rates for various tasks and effectively achieve superior performance while massively reducing network energy consumption.
1 code implementation • 2 Nov 2022 • Jinxiang Lai, Siqian Yang, Wenlong Liu, Yi Zeng, Zhongyi Huang, Wenlong Wu, Jun Liu, Bin-Bin Gao, Chengjie Wang
Few-Shot Learning (FSL) alleviates the data shortage challenge by embedding discriminative target-aware features from plentiful seen (base) and few unseen (novel) labeled samples.
1 code implementation • 12 Oct 2022 • Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, Ruoxi Jia
Most poisoning defenses presume access to a set of clean data (or base set).
no code implementations • 8 Aug 2022 • Jinyu Fan, Yi Zeng
Even state-of-the-art deep learning models lack fundamental abilities compared to humans.
no code implementations • 18 Jul 2022 • Yi Zeng, Dongcheng Zhao, Feifei Zhao, Guobin Shen, Yiting Dong, Enmeng Lu, Qian Zhang, Yinqian Sun, Qian Liang, Yuxuan Zhao, Zhuoya Zhao, Hongjian Fang, Yuwei Wang, Yang Li, Xin Liu, Chengcheng Du, Qingqun Kong, Zizhe Ruan, Weida Bi
These brain-inspired AI models have been effectively validated on various supervised, unsupervised, and reinforcement learning tasks, and they can be used to endow AI models with multiple brain-inspired cognitive functions.
no code implementations • 11 Jul 2022 • Hongjian Fang, Yi Zeng, Jianbo Tang, Yuwei Wang, Yao Liang, Xin Liu
For the fields of neuroscience and cognitive science, the work in this paper provides a computational modeling foundation for further exploration of how the human brain represents commonsense knowledge.
no code implementations • 6 Jul 2022 • Yang Li, Xiang He, Yiting Dong, Qingqun Kong, Yi Zeng
Spiking neural networks (SNNs) have attracted great attention due to their high biological plausibility and low energy consumption on neuromorphic hardware.
no code implementations • 6 Jul 2022 • Yiting Dong, Dongcheng Zhao, Yang Li, Yi Zeng
By integrating the above three adaptive mechanisms and STB-STDP, our model greatly accelerates the training of unsupervised spiking neural networks and improves the performance of unsupervised SNNs on complex tasks.
no code implementations • 14 Jun 2022 • Si Chen, Yi Zeng, Jiachen T. Wang, Won Park, Xun Chen, Lingjuan Lyu, Zhuoqing Mao, Ruoxi Jia
Our work is the first to provide a thorough understanding of leveraging model inversion for effective backdoor removal by addressing key questions about reconstructed samples' properties, perceptual similarity, and the potential presence of backdoor triggers.
no code implementations • 8 Jun 2022 • Yinqian Sun, Yi Zeng, Yang Li
Brain-inspired spiking neural networks (SNNs) have been successfully applied to many pattern recognition domains.
no code implementations • 24 May 2022 • Jihang Wang, Dongcheng Zhao, Guobin Shen, Qian Zhang, Yi Zeng
Privacy protection is a crucial issue in machine learning algorithms, and current privacy-protection methods are combined with traditional artificial neural networks based on real values.
no code implementations • 24 May 2022 • Guobin Shen, Dongcheng Zhao, Yi Zeng
Data augmentation can improve the quantity and quality of the original data by deriving additional representations from it.
1 code implementation • 28 Apr 2022 • Yang Li, Yi Zeng
Spiking neural network (SNN), as a brain-inspired energy-efficient neural network, has attracted the interest of researchers.
3 code implementations • 11 Apr 2022 • Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, Ruoxi Jia
With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger.
Ranked #1 on Clean-label Backdoor Attack (0.05%) on Tiny ImageNet
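To make "patched with a backdoor trigger" concrete, here is an illustrative sketch that stamps a small trigger patch onto a tiny fraction of images. The actual trigger pattern, placement, and the clean-label poisoning procedure from the paper are not reproduced; the white corner square and poison fraction are illustrative choices.

```python
import numpy as np

def patch_trigger(images: np.ndarray, poison_fraction: float = 0.005, size: int = 3, seed: int = 0):
    """images: (N, H, W, C) uint8 array. Returns a patched copy and the poisoned indices."""
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(len(images) * poison_fraction))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    patched = images.copy()
    patched[idx, -size:, -size:, :] = 255      # stamp the trigger in the bottom-right corner
    return patched, idx

imgs = np.zeros((1000, 32, 32, 3), dtype=np.uint8)
patched, idx = patch_trigger(imgs, poison_fraction=0.005)
print(len(idx))   # 5 poisoned images out of 1000
```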
no code implementations • 18 Mar 2022 • Jun Quan, Ze Wei, Qiang Gan, Jingqi Yao, Jingyi Lu, Yuchen Dong, Yiming Liu, Yi Zeng, Chao Zhang, Yongzhi Li, Huang Hu, Yingying He, Yang Yang, Daxin Jiang
The conversational recommender systems (CRSs) have received extensive attention in recent years.
1 code implementation • 25 Dec 2021 • Yang Li, Yiting Dong, Dongcheng Zhao, Yi Zeng
Few-shot learning (learning with a few samples) is one of the most important cognitive abilities of the human brain.
no code implementations • 15 Nov 2021 • Dongcheng Zhao, Yang Li, Yi Zeng, Jihang Wang, Qian Zhang
Our Spiking CapsNet fully combines the strengths of SNNs and CapsNet, and shows strong robustness to noise and affine transformations.
no code implementations • 17 Oct 2021 • Guobin Shen, Dongcheng Zhao, Yi Zeng
Secondly, we propose a biologically plausible temporal adjustment that lets the error propagate across spikes in the temporal dimension, overcoming the temporal dependency within a single spike period of traditional spiking neurons.
2 code implementations • ICLR 2022 • Yi Zeng, Si Chen, Won Park, Z. Morley Mao, Ming Jin, Ruoxi Jia
In particular, its performance is more robust to variations in triggers, attack settings, poison ratio, and clean data size.
no code implementations • 29 Sep 2021 • Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia
In this paper, we focus on the problem of identifying bad training data when the underlying cause is unknown in advance.
no code implementations • 10 Jun 2021 • Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia
High-quality data is critical to train performant Machine Learning (ML) models, highlighting the importance of Data Quality Management (DQM).
no code implementations • 27 May 2021 • Yang Li, Yi Zeng, Dongcheng Zhao
Also, when ResNet-structured ANNs are converted, the information of the output neurons is incomplete due to the rapid transmission along the shortcut path.
no code implementations • 27 May 2021 • Dongcheng Zhao, Yi Zeng, Yang Li
With the combination of the two mechanisms, we propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN).
1 code implementation • ICCV 2021 • Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia
Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability.
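One simple way to obtain a trigger without high-frequency artifacts is to low-pass filter a random pattern in the Fourier domain before blending it into an image, as sketched below. This is not the paper's construction; the cutoff radius and blending strength are illustrative assumptions.

```python
import numpy as np

def smooth_trigger(shape=(32, 32), cutoff=4, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    spectrum = np.fft.fftshift(np.fft.fft2(noise))
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2) <= cutoff ** 2   # keep only low frequencies
    low_pass = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return low_pass / np.abs(low_pass).max()          # normalise to [-1, 1]

def apply_trigger(image, trigger, strength=0.1):
    # image in [0, 1] with shape (H, W, C); the smooth trigger is added faintly
    return np.clip(image + strength * trigger[..., None], 0.0, 1.0)

img = np.random.rand(32, 32, 3)
poisoned = apply_trigger(img, smooth_trigger())
```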
1 code implementation • 27 Mar 2021 • Yiqun Liu, Yi Zeng, Jian Pu, Hongming Shan, Peiyang He, Junping Zhang
In this work, we propose a self-supervised gait recognition method, termed SelfGait, which takes advantage of the massive, diverse, unlabeled gait data as a pre-training process to improve the representation abilities of spatiotemporal backbones.
no code implementations • 13 Dec 2020 • Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, Bhavani Thuraisingham
In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness.
1 code implementation • 3 Dec 2020 • Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu
As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.
no code implementations • 23 Oct 2020 • Yinqian Sun, Yi Zeng, Tielin Zhang
Despite advances in artificial intelligence models, neural networks still cannot achieve human performance, partly due to differences in how information is encoded and processed compared to the human brain.
no code implementations • 18 Sep 2020 • Shangwei Guo, Tianwei Zhang, Han Qiu, Yi Zeng, Tao Xiang, Yang Liu
In this paper, we propose a novel watermark removal attack from a different perspective.
no code implementations • 30 Jul 2020 • Yi Zeng, Han Qiu, Gerard Memmi, Meikang Qiu
Deep Neural Networks (DNNs) in Computer Vision (CV) are well-known to be vulnerable to Adversarial Examples (AEs), namely imperceptible perturbations added maliciously to cause wrong classification results.
no code implementations • 7 Jun 2020 • Qingdong He, Zhengning Wang, Hao Zeng, Yi Zeng, Yijun Liu
Accurate 3D object detection from point clouds has become a crucial component in autonomous driving.
Ranked #1 on 3D Object Detection on KITTI Pedestrians Hard
1 code implementation • 27 May 2020 • Han Qiu, Yi Zeng, Qinkai Zheng, Tianwei Zhang, Meikang Qiu, Gerard Memmi
Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years.
no code implementations • 19 Sep 2019 • Yi Zeng, Enmeng Lu, Yinqian Sun, Ruochen Tian
Facial recognition is changing the way we live and interact within our society.
no code implementations • 26 Aug 2019 • Yi Zeng, Zihao Qi, Wen-Cheng Chen, Yanzhe Huang, Xingxin Zheng, Han Qiu
As more encrypted traffic flows through the Internet, effectively identifying network traffic has become a top priority in the field.
1 code implementation • ICCV 2019 • Yi Zeng, Pingping Zhang, Jianming Zhang, Zhe Lin, Huchuan Lu
This paper pushes forward high-resolution saliency detection, and contributes a new dataset, named High-Resolution Salient Object Detection (HRSOD).
Ranked #12 on RGB Salient Object Detection on DAVIS-S (using extra training data)
no code implementations • 14 Aug 2019 • Hongyin Zhu, Wenpeng Hu, Yi Zeng
Named entity recognition (NER) is a foundational technology for information extraction.
no code implementations • 12 Dec 2018 • Yi Zeng, Enmeng Lu, Cunqing Huangfu
Artificial Intelligence principles define social and ethical considerations to develop future AI.