Search Results for author: Chang Gao

Found 12 papers, 4 papers with code

Towards Generalizable and Robust Text-to-SQL Parsing

1 code implementation23 Oct 2022 Chang Gao, Bowen Li, Wenxuan Zhang, Wai Lam, Binhua Li, Fei Huang, Luo Si, Yongbin Li

Text-to-SQL parsing tackles the problem of mapping natural language questions to executable SQL queries.
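
As a rough illustration of the task only (the schema and question–query pairs below are invented, not data from the paper), a text-to-SQL parser maps a natural language question plus a database schema to an executable query:

```python
# Toy illustration of text-to-SQL parsing: given a natural language
# question and a (hypothetical) table schema, the parser must output an
# executable SQL query. These pairs are invented examples, not from the paper.
examples = [
    {
        "schema": "singer(name, country, age)",
        "question": "How many singers are from France?",
        "sql": "SELECT COUNT(*) FROM singer WHERE country = 'France';",
    },
    {
        "schema": "singer(name, country, age)",
        "question": "List the names of singers older than 30.",
        "sql": "SELECT name FROM singer WHERE age > 30;",
    },
]

for ex in examples:
    print(f"Q: {ex['question']}\nSQL: {ex['sql']}\n")
```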

SQL Parsing, Text-to-SQL


A Projection-Based K-space Transformer Network for Undersampled Radial MRI Reconstruction with Limited Training Subjects

no code implementations15 Jun 2022 Chang Gao, Shu-Fu Shih, J. Paul Finn, Xiaodong Zhong

However, non-Cartesian trajectories such as the radial trajectory need to be transformed onto a Cartesian grid at each iteration of network training, which slows down the training process and introduces additional inconvenience and delay.
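
For intuition about that regridding step, the sketch below bins radial k-space samples onto a Cartesian grid with nearest-neighbor gridding; real reconstruction pipelines use a NUFFT with density compensation, and none of the names or sizes here are taken from the paper.

```python
import numpy as np

def grid_radial_to_cartesian(kx, ky, samples, grid_size=256):
    """Toy nearest-neighbor gridding of radial k-space samples onto a
    Cartesian grid. Illustrates the per-iteration regridding step the
    paper identifies as a training bottleneck; not the actual pipeline."""
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    counts = np.zeros((grid_size, grid_size))
    # Map normalized k-space coordinates in [-0.5, 0.5) to grid indices.
    ix = np.clip(((kx + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    iy = np.clip(((ky + 0.5) * grid_size).astype(int), 0, grid_size - 1)
    np.add.at(grid, (iy, ix), samples)
    np.add.at(counts, (iy, ix), 1)
    return np.where(counts > 0, grid / np.maximum(counts, 1), 0)

# One radial spoke of 256 synthetic samples at a random angle.
theta = np.random.uniform(0, np.pi)
r = np.linspace(-0.5, 0.5, 256, endpoint=False)
cart = grid_radial_to_cartesian(r * np.cos(theta), r * np.sin(theta),
                                np.random.randn(256) + 1j * np.random.randn(256))
```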

Data Augmentation, MRI Reconstruction

UniGDD: A Unified Generative Framework for Goal-Oriented Document-Grounded Dialogue

1 code implementation ACL 2022 Chang Gao, Wenxuan Zhang, Wai Lam

Goal-oriented document-grounded dialogue aims to respond to the user query based on the dialogue context and a supporting document.

Multi-Task Learning, Response Generation

Skydiver: A Spiking Neural Network Accelerator Exploiting Spatio-Temporal Workload Balance

no code implementations14 Mar 2022 Qinyu Chen, Chang Gao, Xinyuan Fang, Haitao Luan

Spiking Neural Networks (SNNs) have been developed as a promising alternative to Artificial Neural Networks (ANNs) due to their more realistic, brain-inspired computing model.

Image Segmentation, Semantic Segmentation

Spiking Cochlea with System-level Local Automatic Gain Control

no code implementations14 Feb 2022 Ilya Kiselev, Chang Gao, Shih-Chii Liu

The bandpass filter gain of a channel is adapted dynamically to the input amplitude so that the average output spike rate stays within a defined range.
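
A minimal sketch of the kind of per-channel local automatic gain control described above; all constants and rate targets are chosen for illustration and are not the values used in the spiking cochlea circuit.

```python
def local_agc(spike_rates, gains, target_low=50.0, target_high=200.0, step=1.1):
    """Toy per-channel automatic gain control: nudge each channel's bandpass
    gain up or down so its average output spike rate stays within a defined
    range. Illustrative constants only, not the chip's actual parameters."""
    new_gains = list(gains)
    for ch, rate in enumerate(spike_rates):
        if rate > target_high:        # channel too active: lower the gain
            new_gains[ch] /= step
        elif rate < target_low:       # channel too quiet: raise the gain
            new_gains[ch] *= step
    return new_gains

# Example: three channels with measured average spike rates in Hz.
gains = local_agc([10.0, 120.0, 400.0], [1.0, 1.0, 1.0])
print(gains)  # first gain raised, second unchanged, third lowered
```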

regression

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity

no code implementations4 Aug 2021 Chang Gao, Tobi Delbruck, Shih-Chii Liu

The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and LibriSpeech datasets.

Speech Recognition

Ranking Items in Large-Scale Item Search Engines with Reinforcement Learning

no code implementations CUHK Course IERG5350 2020 Chang Gao

Ranking items in large-scale item search engines such as Amazon and Taobao is a typical multi-step decision-making problem.

Decision Making, Reinforcement Learning

Recurrent Neural Network Control of a Hybrid Dynamic Transfemoral Prosthesis with EdgeDRNN Accelerator

no code implementations8 Feb 2020 Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck

Lower-leg prostheses could improve the quality of life of amputees by increasing comfort and reducing the energy required to locomote, but current control methods are limited in modulating behavior based on the human's experience.

EdgeDRNN: Enabling Low-latency Recurrent Neural Network Edge Inference

no code implementations22 Dec 2019 Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu

This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.
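
For context, the recurrence such an accelerator evaluates is the standard GRU update; the NumPy sketch below is a generic reference GRU cell, not the EdgeDRNN hardware datapath, and its layout conventions are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wx, Wh, b):
    """One step of a standard GRU cell (generic formulation, not the
    EdgeDRNN datapath). Wx: (3H, D), Wh: (3H, H), b: (3H,), with rows
    ordered as [reset; update; candidate]."""
    H = h.shape[0]
    gates_x = Wx @ x + b
    gates_h = Wh @ h
    r = sigmoid(gates_x[:H] + gates_h[:H])            # reset gate
    z = sigmoid(gates_x[H:2*H] + gates_h[H:2*H])      # update gate
    n = np.tanh(gates_x[2*H:] + r * gates_h[2*H:])    # candidate state
    return (1.0 - z) * n + z * h                      # new hidden state

# Tiny example: input size 4, hidden size 3, random weights.
D, Hs = 4, 3
rng = np.random.default_rng(0)
h = gru_step(rng.standard_normal(D), np.zeros(Hs),
             rng.standard_normal((3 * Hs, D)), rng.standard_normal((3 * Hs, Hs)),
             np.zeros(3 * Hs))
```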

Edge-computing

ReCoNet: Real-time Coherent Video Style Transfer Network

7 code implementations3 Jul 2018 Chang Gao, Derun Gu, Fangjun Zhang, Yizhou Yu

Image style transfer models based on convolutional neural networks usually suffer from high temporal inconsistency when applied to videos.

Style Transfer, Video Style Transfer

DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

no code implementations7 Jun 2017 Xiaoguang Han, Chang Gao, Yizhou Yu

This system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features.

Caricature
