1 code implementation • 18 Apr 2024 • Yizhuo Wu, Ang Li, Mohammadreza Beikmirza, Gagan Deep Singh, Qinyu Chen, Leo C. N. de Vreede, Morteza Alavi, Chang Gao
Applied to a 160MHz-BW 1024-QAM OFDM signal from a digital RF PA, MP-DPD incurs no performance loss against 32-bit floating-point precision DPDs, while achieving -43.75 (L)/-45.27 (R) dBc in Adjacent Channel Power Ratio (ACPR) and -38.72 dB in Error Vector Magnitude (EVM).
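For readers unfamiliar with the EVM figure reported above, the metric can be sketched in a few lines: it is the RMS error between received and reference constellation symbols, normalized by the RMS reference power and expressed in dB. This is a minimal illustrative implementation of one common EVM convention, not the paper's measurement code.

```python
import numpy as np

def evm_db(received, reference):
    """Error Vector Magnitude in dB: RMS error power relative to RMS
    reference power (one common normalization; conventions vary)."""
    err = received - reference
    ratio = np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(reference) ** 2))
    return 20.0 * np.log10(ratio)

# Toy example: a noisy 4-QAM constellation (hypothetical data)
rng = np.random.default_rng(0)
ref = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1000)
rx = ref + 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM: {evm_db(rx, ref):.1f} dB")
```

With per-component noise of standard deviation 0.01 on unit-power-per-axis symbols, the EVM lands near -40 dB, the same order as the -38.72 dB figure quoted for MP-DPD.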
1 code implementation • 17 Apr 2024 • Zuowen Wang, Chang Gao, Zongwei Wu, Marcos V. Conde, Radu Timofte, Shih-Chii Liu, Qinyu Chen, Zheng-Jun Zha, Wei Zhai, Han Han, Bohao Liao, Yuliang Wu, Zengyu Wan, Zhong Wang, Yang Cao, Ganchao Tan, Jinze Chen, Yan Ru Pei, Sasskia Brüers, Sébastien Crouzet, Douglas McLelland, Oliver Coenen, Baoheng Zhang, Yizhao Gao, Jingyuan Li, Hayden Kwok-Hay So, Philippe Bich, Chiara Boretti, Luciano Prono, Mircea Lică, David Dinucu-Jianu, Cătălin Grîu, Xiaopeng Lin, Hongwei Ren, Bojun Cheng, Xinan Zhang, Valentin Vial, Anthony Yezzi, James Tsai
This survey reviews the AIS 2024 Event-Based Eye Tracking (EET) Challenge.
2 code implementations • 9 Apr 2024 • Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang
The rapid development of large language model (LLM) evaluation methodologies and datasets has led to a profound challenge: integrating state-of-the-art evaluation techniques cost-effectively while ensuring reliability, reproducibility, and efficiency.
no code implementations • 23 Mar 2024 • Rui Xie, Zhengran Zeng, Zhuohao Yu, Chang Gao, Shikun Zhang, Wei Ye
Through this process, we have curated 100 billion high-quality pre-training data from GitHub.
no code implementations • 12 Mar 2024 • Qibing Ren, Chang Gao, Jing Shao, Junchi Yan, Xin Tan, Yu Qiao, Wai Lam, Lizhuang Ma
The rapid advancement of Large Language Models (LLMs) has brought about remarkable generative capabilities but also raised concerns about their potential misuse.
2 code implementations • 23 Feb 2024 • Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Wei Ye, Jindong Wang, Xing Xie, Yue Zhang, Shikun Zhang
Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness.
no code implementations • 21 Jan 2024 • Qinyu Chen, Congyi Sun, Chang Gao, Shih-Chii Liu
Epilepsy is a common disease of the nervous system.
1 code implementation • 16 Jan 2024 • Yizhuo Wu, Gagan Deep Singh, Mohammadreza Beikmirza, Leo C. N. de Vreede, Morteza Alavi, Chang Gao
With the rise in communication capacity, deep neural networks (DNNs) for digital pre-distortion (DPD) to correct non-linearity in wideband power amplifiers (PAs) have become prominent.
no code implementations • 14 Dec 2023 • Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, Tobi Delbruck
Implementing online training of RNNs on the edge calls for optimized algorithms for an efficient deployment on hardware.
no code implementations • 15 Nov 2023 • Chang Gao, Haiyun Jiang, Deng Cai, Shuming Shi, Wai Lam
Most existing chain-of-thought (CoT) prompting methods suffer from the issues of generalizability and consistency, as they often rely on instance-specific solutions that may not be applicable to other cases and lack task-level consistency in their reasoning steps.
no code implementations • 13 Oct 2023 • Chang Gao, Xi Lin, Fang He, Xindi Tang
This study proposes an innovative model-based modular approach (MMA) to dynamically optimize order matching and vehicle relocation in a ride-hailing platform.
1 code implementation • 4 Oct 2023 • Chang Gao, Wenxuan Zhang, Guizhen Chen, Wai Lam
Instruction tuning has emerged as a crucial process for harnessing the capabilities of large language models (LLMs) by providing explicit task instructions, leading to improved performance in various tasks.
1 code implementation • 22 Aug 2023 • Qinyu Chen, Zuowen Wang, Shih-Chii Liu, Chang Gao
This paper presents a sparse Change-Based Convolutional Long Short-Term Memory (CB-ConvLSTM) model for event-based eye tracking, key for next-generation wearable healthcare technology such as AR/VR headsets.
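The "change-based" idea behind CB-ConvLSTM can be illustrated with a simple delta-gating rule: only input changes that exceed a threshold are propagated, so a slowly varying signal yields a sparse update. This is a simplified sketch of the delta principle, not the paper's ConvLSTM cell; the threshold value is arbitrary.

```python
import numpy as np

def delta_update(x_new, x_prev, threshold=0.1):
    """Change-based gating (simplified sketch): propagate only the inputs
    whose change since the previous step exceeds a threshold; the rest
    contribute zero, making the update sparse."""
    delta = x_new - x_prev
    mask = np.abs(delta) >= threshold
    sparse_delta = np.where(mask, delta, 0.0)
    # What the downstream layer sees: previous value plus gated change
    x_mem = x_prev + sparse_delta
    return x_mem, mask

x_prev = np.zeros(8)
x_new = np.array([0.0, 0.05, 0.2, -0.3, 0.01, 0.5, -0.05, 0.0])
x_mem, mask = delta_update(x_new, x_prev)
print("active fraction:", mask.mean())
```

Only 3 of 8 channels fire here; for event-camera eye-tracking data, where most pixels are static between frames, the active fraction is typically far lower, which is what makes the approach attractive for wearables.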
1 code implementation • 27 Jun 2023 • Fabrizio Ottati, Chang Gao, Qinyu Chen, Giovanni Brignone, Mario R. Casu, Jason K. Eshraghian, Luciano Lavagno
The power efficiency of the biological brain outperforms that of any large-scale deep learning (DL) model; neuromorphic computing therefore tries to mimic brain operations, such as spike-based information processing, to improve the efficiency of DL models.
1 code implementation • NeurIPS 2023 • Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, Lidong Bing
M3Exam exhibits three unique characteristics: (1) multilingualism, encompassing questions from multiple countries that require strong multilingual proficiency and cultural knowledge; (2) multimodality, accounting for the multimodal nature of many exam questions to test the model's multimodal understanding capability; and (3) multilevel structure, featuring exams from three critical educational periods to comprehensively assess a model's proficiency at different levels.
1 code implementation • 16 May 2023 • Chang Gao, Wenxuan Zhang, Wai Lam, Lidong Bing
Information extraction (IE) systems aim to automatically extract structured information, such as named entities, relations between entities, and events, from unstructured texts.
1 code implementation • 23 Oct 2022 • Chang Gao, Bowen Li, Wenxuan Zhang, Wai Lam, Binhua Li, Fei Huang, Luo Si, Yongbin Li
Text-to-SQL parsing tackles the problem of mapping natural language questions to executable SQL queries.
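A concrete question/query pair makes the text-to-SQL task tangible. The schema and question below are hypothetical, chosen only to illustrate the mapping a parser must produce; they are not drawn from the paper's benchmark.

```python
import sqlite3

# Illustrative text-to-SQL target: a natural-language question and the
# executable SQL a parser should map it to (hypothetical schema).
question = "How many singers are older than 30?"
sql = "SELECT COUNT(*) FROM singer WHERE age > 30"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO singer VALUES (?, ?)",
                 [("A", 25), ("B", 34), ("C", 41)])
count = conn.execute(sql).fetchone()[0]
print(count)  # two of the three singers are older than 30
```

Executability is the point: unlike free-form generation, a predicted query is judged by whether it runs and returns the right answer over the database.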
1 code implementation • 14 Oct 2022 • Yingxiu Zhao, Yinhe Zheng, Zhiliang Tian, Chang Gao, Bowen Yu, Haiyang Yu, Yongbin Li, Jian Sun, Nevin L. Zhang
Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems.
no code implementations • 15 Jun 2022 • Chang Gao, Shu-Fu Shih, J. Paul Finn, Xiaodong Zhong
However, non-Cartesian trajectories such as the radial trajectory need to be transformed onto a Cartesian grid in each iteration of network training, which slows down training and adds inconvenience and delay.
1 code implementation • ACL 2022 • Chang Gao, Wenxuan Zhang, Wai Lam
Goal-oriented document-grounded dialogue aims to respond to the user query based on the dialogue context and a supporting document.
no code implementations • 14 Mar 2022 • Qinyu Chen, Chang Gao, Xinyuan Fang, Haitao Luan
Spiking Neural Networks (SNNs) have been developed as a promising alternative to Artificial Neural Networks (ANNs) due to their more realistic, brain-inspired computing models.
no code implementations • 14 Feb 2022 • Ilya Kiselev, Chang Gao, Shih-Chii Liu
The bandpass filter gain of a channel is adapted dynamically to the input amplitude so that the average output spike rate stays within a defined range.
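The gain-adaptation rule described above behaves like a simple automatic gain control loop: lower the channel gain when the average spike rate exceeds an upper bound, raise it when the rate falls below a lower bound. The bounds and step factor below are arbitrary illustrative values, not the chip's parameters.

```python
def adapt_gain(gain, spike_rate, low=20.0, high=80.0, step=1.05):
    """One AGC step (illustrative sketch, not the circuit): divide the
    gain when the average spike rate is above the target range,
    multiply it when the rate is below, leave it alone otherwise."""
    if spike_rate > high:
        return gain / step
    if spike_rate < low:
        return gain * step
    return gain

gain = 1.0
# Feed a sequence of measured average spike rates (arbitrary example)
for rate in [120, 110, 95, 60, 15, 18, 50]:
    gain = adapt_gain(gain, rate)
print(f"final gain: {gain:.3f}")
```

Three loud measurements pull the gain down, two quiet ones push it back up, and the in-range readings leave it untouched, so the loop settles wherever the output rate stays inside the target band.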
no code implementations • 4 Aug 2021 • Chang Gao, Tobi Delbruck, Shih-Chii Liu
The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets.
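The 96%/94% sparsity figures come from weight pruning. As a rough illustration, unstructured magnitude pruning zeroes the smallest weights until a target sparsity is reached; note that Spartus itself relies on hardware-friendly structured pruning, so this sketch shows the principle rather than the paper's scheme.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.96):
    """Zero the smallest-magnitude weights until the requested fraction
    is zero. Plain unstructured sketch of magnitude pruning."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64))       # hypothetical weight matrix
w_pruned = magnitude_prune(w, sparsity=0.96)
print("sparsity:", (w_pruned == 0).mean())
```

At 96% sparsity, only 1 in 25 weights survives, which is why an accelerator that skips the zeros can claim large speed and energy savings.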
no code implementations • CUHK Course IERG5350 2020 • Chang Gao
Ranking items in large-scale item search engines such as Amazon and Taobao is a typical multi-step decision-making problem.
no code implementations • 8 Feb 2020 • Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck
Lower-leg prostheses could improve the quality of life of amputees by increasing comfort and reducing the energy required to locomote, but current control methods are limited in modulating behaviors based on the human's experience.
no code implementations • 22 Dec 2019 • Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu
This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.
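The recurrence such an accelerator evaluates every timestep is the standard GRU update. The numpy sketch below shows those equations (biases omitted for brevity); EdgeDRNN's own delta-network optimizations are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One standard GRU update: gate computations followed by a convex
    blend of the old hidden state and the candidate state."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_in, n_h = 4, 8                              # hypothetical sizes
params = [rng.standard_normal((n_h, n_in)) if i % 2 == 0 else
          rng.standard_normal((n_h, n_h)) for i in range(6)]
h = np.zeros(n_h)
for _ in range(3):                            # run a few timesteps
    h = gru_step(rng.standard_normal(n_in), h, *params)
print("hidden state shape:", h.shape)
```

The dominant cost per step is the six matrix-vector products, which is exactly the workload a GRU accelerator targets.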
8 code implementations • 3 Jul 2018 • Chang Gao, Derun Gu, Fangjun Zhang, Yizhou Yu
Image style transfer models based on convolutional neural networks usually suffer from high temporal inconsistency when applied to videos.
Ranked #4 on Semantic Segmentation on FMB Dataset
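Temporal inconsistency between consecutive stylized frames is typically penalized with a flow-warped consistency loss. The function below is a simplified sketch of that common formulation (current stylized frame vs. the previous one warped by optical flow, masked where the flow is invalid), not the paper's exact objective.

```python
import numpy as np

def temporal_loss(stylized_t, warped_prev, visibility_mask):
    """Temporal consistency penalty (simplified sketch): mean squared
    difference between the current stylized frame and the previous
    stylized frame warped by optical flow, restricted by a visibility
    mask to pixels where the flow is reliable."""
    return np.mean(visibility_mask * (stylized_t - warped_prev) ** 2)

# Toy frames: identical where visible, so the penalty is zero
frame_t = np.ones((4, 4))
warped = np.ones((4, 4))
mask = np.ones((4, 4))
print(temporal_loss(frame_t, warped, mask))  # 0.0
```

Any flicker between frames shows up as a nonzero penalty (for example, a uniform 0.5 offset yields 0.25 here), pushing the stylization network toward temporally coherent output.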
no code implementations • 7 Jun 2017 • Xiaoguang Han, Chang Gao, Yizhou Yu
This system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features.