no code implementations • 20 Oct 2024 • Xunzhao Yin, Hamza Errahmouni Barkam, Franz Müller, Yuxiao Jiang, Mohsen Imani, Sukhrob Abdulazhanov, Alptekin Vardar, Nellie Laleni, Zijian Zhao, Jiahui Duan, Zhiguo Shi, Siddharth Joshi, Michael Niemier, Xiaobo Sharon Hu, Cheng Zhuo, Thomas Kämpfe, Kai Ni
To address these challenges, and to mitigate the data-transfer bottleneck typical of classical von Neumann architectures, we propose a ferroelectric charge-domain compute-in-memory (CiM) array as the foundational processing element for neuro-symbolic AI.
no code implementations • 6 Sep 2024 • Haolong Chen, Hanzhi Chen, Zijian Zhao, Kaifeng Han, Guangxu Zhu, Yichen Zhao, Ying Du, Wei Xu, Qingjiang Shi
The impressive performance of ChatGPT and other foundation-model-based products in human language understanding has prompted both academia and industry to explore how these models can be tailored for specific industries and application scenarios.
no code implementations • 20 Aug 2024 • Zijian Zhao, TingWei Chen, Zhijie Cai, Xiaoyang Li, Hang Li, Qimei Chen, Guangxu Zhu
Extensive research has been conducted in this field, focusing on areas such as gesture recognition, people identification, and fall detection.
Ranked #1 on Action Classification (zero-shot) on WiGesture
1 code implementation • 11 Jul 2024 • Zijian Zhao
Recently, pre-trained language models have been widely adopted in SMU because symbolic music shares strong similarities with natural language, and pre-training also helps make full use of limited music data.
1 code implementation • 26 Jun 2024 • Xiao Liang, Zijian Zhao, Weichao Zeng, Yutong He, Fupeng He, Yiyi Wang, Chengying Gao
Learning musical structures and composition patterns is necessary for both music generation and understanding, but current methods do not make uniform use of learned features to generate and comprehend music simultaneously.
no code implementations • 13 Jun 2024 • TingWei Chen, Yantao Wang, Hanzhi Chen, Zijian Zhao, Xinhao Li, Nicola Piovesan, Guangxu Zhu, Qingjiang Shi
The introduction of fifth-generation (5G) radio technology has revolutionized communications, bringing unprecedented automation, capacity, connectivity, and ultra-fast, reliable communications.
1 code implementation • 29 May 2024 • Ge Zhang, Scott Qu, Jiaheng Liu, Chenchen Zhang, Chenghua Lin, Chou Leuang Yu, Danny Pan, Esther Cheng, Jie Liu, Qunshu Lin, Raven Yuan, Tuney Zheng, Wei Pang, Xinrun Du, Yiming Liang, Yinghao Ma, Yizhi Li, Ziyang Ma, Bill Lin, Emmanouil Benetos, Huan Yang, Junting Zhou, Kaijing Ma, Minghao Liu, Morry Niu, Noah Wang, Quehry Que, Ruibo Liu, Sine Liu, Shawn Guo, Soren Gao, Wangchunshu Zhou, Xinyue Zhang, Yizhi Zhou, YuBo Wang, Yuelin Bai, Yuhan Zhang, Yuxiang Zhang, Zenith Wang, Zhenzhu Yang, Zijian Zhao, Jiajun Zhang, Wanli Ouyang, Wenhao Huang, Wenhu Chen
To improve the transparency of LLMs, the research community has come together to open-source truly open LLMs (e.g., Pythia, Amber, OLMo), for which more details (e.g., the pre-training corpus and training code) are provided.
1 code implementation • 19 Mar 2024 • Zijian Zhao, TingWei Chen, Fanyi Meng, Hang Li, Xiaoyang Li, Guangxu Zhu
Despite the development of various deep learning methods for Wi-Fi sensing, packet loss often results in discontinuous estimation of the Channel State Information (CSI), which negatively impacts the performance of the learning models.
Ranked #2 on Person Identification on WiGesture
1 code implementation • 24 May 2020 • Chen Liu, Su Zhu, Zijian Zhao, Ruisheng Cao, Lu Chen, Kai Yu
In this paper, a novel BERT-based SLU model (WCN-BERT SLU) is proposed to encode WCNs and the dialogue context jointly.
no code implementations • 22 Mar 2020 • Su Zhu, Zijian Zhao, Rao Ma, Kai Yu
The proposed approaches are evaluated on three datasets.
1 code implementation • IJCNLP 2019 • Zijian Zhao, Su Zhu, Kai Yu
The atomic templates produce exemplars for fine-grained constituents of semantic representations.
1 code implementation • 9 Apr 2019 • Zijian Zhao, Su Zhu, Kai Yu
In this paper, we focus on spoken language understanding from unaligned data whose annotation is a set of act-slot-value triples.
1 code implementation • 30 Apr 2018 • Shuai Shao, Zijian Zhao, Boxun Li, Tete Xiao, Gang Yu, Xiangyu Zhang, Jian Sun
There are a total of 470K human instances in the train and validation subsets, with ~22.6 persons per image and various kinds of occlusions in the dataset.
Ranked #7 on Pedestrian Detection on Caltech (using extra training data)