Search Results for author: Kai Zou

Found 7 papers, 4 papers with code

FTBC: Forward Temporal Bias Correction for Optimizing ANN-SNN Conversion

no code implementations • 27 Mar 2024 • Xiaofeng Wu, Velibor Bojkovic, Bin Gu, Kun Suo, Kai Zou

Spiking Neural Networks (SNNs), which closely mirror biological neural processes, offer a promising avenue for energy-efficient computing compared with Artificial Neural Networks (ANNs).

Inherent limitations of LLMs regarding spatial information

1 code implementation • 5 Dec 2023 • He Yan, Xinyao Hu, Xiangpeng Wan, Chengyu Huang, Kai Zou, Shiqi Xu

Despite the significant advancements in natural language processing capabilities demonstrated by large language models such as ChatGPT, their proficiency in comprehending and processing spatial information, especially within the domains of 2D and 3D route planning, remains notably underdeveloped.

Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks

1 code implementation • 20 Oct 2023 • Andrea Sottana, Bin Liang, Kai Zou, Zheng Yuan

Large Language Model (LLM) evaluation is a patchy and inconsistent landscape, and it is becoming clear that the quality of automatic evaluation metrics is not keeping pace with the development of generative models.

Grammatical Error Correction • Text Simplification

You Are Catching My Attention: Are Vision Transformers Bad Learners Under Backdoor Attacks?

no code implementations • CVPR 2023 • Zenghui Yuan, Pan Zhou, Kai Zou, Yu Cheng

Vision Transformers (ViTs), which made a splash in the field of computer vision (CV), have shaken the dominance of convolutional neural networks (CNNs).

Backdoor Attack

M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design

1 code implementation • 26 Oct 2022 • Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang

However, when deploying MTL onto real-world systems that are often resource-constrained or latency-sensitive, two prominent challenges arise: (i) during training, simultaneously optimizing all tasks is often difficult due to gradient conflicts across tasks; (ii) at inference, current MTL regimes have to activate nearly the entire model even to execute a single task.

Multi-Task Learning

Unsupervised Temporal Video Grounding with Deep Semantic Clustering

no code implementations • 14 Jan 2022 • Daizong Liu, Xiaoye Qu, Yinzhen Wang, Xing Di, Kai Zou, Yu Cheng, Zichuan Xu, Pan Zhou

Temporal video grounding (TVG) aims to localize a target segment in a video according to a given sentence query.

Clustering • Sentence • +1
