1 code implementation • 19 Feb 2024 • Zihan Qiu, Zeyu Huang, Youcheng Huang, Jie Fu
The feed-forward networks (FFNs) in transformers are recognized as a group of key-value neural memories that store abstract, high-level knowledge.
1 code implementation • 17 Oct 2023 • Zihan Qiu, Zeyu Huang, Jie Fu
Despite the benefits of modularity, most Language Models (LMs) are still treated as monolithic models in the pre-train and fine-tune paradigm, with their emergent modularity locked and underutilized.
1 code implementation • 24 Jan 2023 • Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, Zhang Xiong
Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME).
1 code implementation • CVPR 2023 • Yizhi Wang, Zeyu Huang, Ariel Shamir, Hui Huang, Hao Zhang, Ruizhen Hu
We introduce anchored radial observations (ARO), a novel shape encoding for learning implicit field representation of 3D shapes that is category-agnostic and generalizable amid significant shape variations.
no code implementations • 20 Oct 2022 • Zeyu Huang, Juzhan Xu, Sisi Dai, Kai Xu, Hao Zhang, Hui Huang, Ruizhen Hu
Given a few object manipulation demos, NIFT guides the generation of the interaction imitation for a new object instance by matching the Neural Interaction Template (NIT) extracted from the demos in the target Neural Interaction Field (NIF) defined for the new object.
1 code implementation • 11 Oct 2022 • Xiaofeng Zhang, Yikang Shen, Zeyu Huang, Jie Zhou, Wenge Rong, Zhang Xiong
This paper proposes the Mixture of Attention Heads (MoA), a new architecture that combines multi-head attention with the MoE mechanism.
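The core idea of MoA, per-token routing over a pool of attention heads via an MoE-style gate, can be illustrated with a minimal sketch. All names, dimensions, and the NumPy formulation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionHead:
    """A single attention head treated as one expert (illustrative only)."""
    def __init__(self, d_model, d_head, rng):
        s = 1.0 / np.sqrt(d_model)
        self.wq = rng.standard_normal((d_model, d_head)) * s
        self.wk = rng.standard_normal((d_model, d_head)) * s
        self.wv = rng.standard_normal((d_model, d_head)) * s
        self.wo = rng.standard_normal((d_head, d_model)) / np.sqrt(d_head)

    def __call__(self, x):                        # x: (seq, d_model)
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        att = softmax(q @ k.T / np.sqrt(k.shape[-1]))
        return (att @ v) @ self.wo                # (seq, d_model)

def mixture_of_attention_heads(x, heads, w_gate, top_k=2):
    """Per token, mix the outputs of its top-k scoring attention heads."""
    gates = softmax(x @ w_gate)                   # (seq, n_heads) routing scores
    head_outs = [h(x) for h in heads]             # run every head once
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        topk = np.argsort(gates[t])[-top_k:]      # indices of the top-k heads
        g = gates[t, topk] / gates[t, topk].sum() # renormalize selected gates
        for w, h in zip(g, topk):
            out[t] += w * head_outs[h][t]
    return out

rng = np.random.default_rng(0)
d_model, d_head, n_heads, seq = 8, 4, 4, 5
heads = [AttentionHead(d_model, d_head, rng) for _ in range(n_heads)]
w_gate = rng.standard_normal((d_model, n_heads))
y = mixture_of_attention_heads(rng.standard_normal((seq, d_model)), heads, w_gate)
```

Unlike standard multi-head attention, which sums all heads for every token, this gating activates only `top_k` heads per token, which is the sparsity property MoE mechanisms provide.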
no code implementations • 22 Jul 2020 • José Rodríguez-Piñeiro, Tomás Domínguez-Bolaño, Xuesong Cai, Zeyu Huang, Xuefeng Yin
Due to decreases in cost, size, and weight, unmanned aerial vehicles (UAVs) are becoming increasingly popular for general-purpose civil and commercial applications.
no code implementations • 27 Apr 2020 • Ruizhen Hu, Zeyu Huang, Yuhan Tang, Oliver van Kaick, Hao Zhang, Hui Huang
The core component of our learning framework is a deep neural network, Graph2Plan, which converts a layout graph, along with a building boundary, into a floorplan that fulfills both the layout and boundary constraints.
no code implementations • 21 Nov 2019 • Zeyi Wen, Zeyu Huang, Rui Zhang
Entity extraction is an important task in text mining and natural language processing.