Search Results for author: Lu Zeng

Found 5 papers, 1 paper with code

Sub 8-Bit Quantization of Streaming Keyword Spotting Models for Embedded Chipsets

no code implementations13 Jul 2022 Lu Zeng, Sree Hari Krishnan Parthasarathi, Yuzong Liu, Alex Escott, Santosh Kumar Cheekatmalla, Nikko Strom, Shiv Vitaladevuni

We organize our results in two embedded chipset settings: a) with the commodity ARM NEON instruction set and 8-bit containers, we present accuracy, CPU, and memory results using sub 8-bit weights (4, 5, 8-bit) and 8-bit quantization of the rest of the network; b) with off-the-shelf neural network accelerators, for a range of weight bit widths (1 and 5-bit), we present accuracy results and project the reduction in memory utilization.
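The paper's exact quantizer is not described in this snippet; a minimal sketch of uniform symmetric weight quantization, where sub 8-bit codes (e.g. 4- or 5-bit) are stored in 8-bit containers as in setting a) above, might look like this (the function names and the rounding scheme are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_weights(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits.

    The quantized codes fit in `bits` bits but are stored in an int8
    (8-bit) container, mirroring the commodity-hardware setting.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_weights(w, bits=4)
w_hat = dequantize(q, s)
```

With round-to-nearest, the reconstruction error per weight is bounded by half the quantization step, which is the usual trade-off driving the accuracy results the abstract reports across bit widths.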

Keyword Spotting, Quantization

Wakeword Detection under Distribution Shifts

no code implementations13 Jul 2022 Sree Hari Krishnan Parthasarathi, Lu Zeng, Christin Jose, Joseph Wang

To train effectively with a mix of human- and teacher-labeled data, we develop a teacher labeling strategy based on confidence heuristics to reduce the entropy of the label distribution from the teacher model; the data is then sampled to match the marginal distribution over the labels.
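The abstract gives only the outline of this pipeline; a minimal sketch of the two stages — a confidence heuristic that keeps only low-entropy teacher labels, then resampling to a target label marginal — could look like this (the threshold, function names, and resampling scheme are assumptions, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_label(probs, threshold=0.9):
    """Keep only teacher predictions whose top probability clears a
    confidence threshold, lowering the entropy of the kept labels."""
    keep = probs.max(axis=1) >= threshold
    return np.argmax(probs, axis=1)[keep], keep

def sample_to_marginal(labels, target_dist, n):
    """Resample example indices so the empirical label distribution
    matches the target marginal distribution."""
    idx = []
    for c, p in enumerate(target_dist):
        pool = np.flatnonzero(labels == c)
        take = int(round(p * n))
        if len(pool):
            idx.extend(rng.choice(pool, size=take, replace=True))
    return np.array(idx)

# Toy teacher posteriors over two classes for three examples.
probs = np.array([[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]])
labels, keep = teacher_label(probs)              # drops the uncertain middle example
idx = sample_to_marginal(labels, [0.5, 0.5], n=100)
```

Sampling with replacement is one simple way to hit the target marginal exactly; the paper may use a different heuristic.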

Keyword Spotting

N-Best Hypotheses Reranking for Text-To-SQL Systems

no code implementations19 Oct 2022 Lu Zeng, Sree Hari Krishnan Parthasarathi, Dilek Hakkani-Tur

The text-to-SQL task maps natural language utterances to structured queries that can be issued to a database.

Text-To-SQL

Conversational Text-to-SQL: An Odyssey into State-of-the-Art and Challenges Ahead

no code implementations21 Feb 2023 Sree Hari Krishnan Parthasarathi, Lu Zeng, Dilek Hakkani-Tur

Conversational, multi-turn, text-to-SQL (CoSQL) tasks map natural language utterances in a dialogue to SQL queries.

Text-To-SQL

Feature Noise Boosts DNN Generalization under Label Noise

1 code implementation3 Aug 2023 Lu Zeng, Xuan Chen, Xiaoshuang Shi, Heng Tao Shen

In this study, we introduce a simple feature noise method, which directly adds noise to the features of the training data, and theoretically demonstrate that it can enhance the generalization of DNNs under label noise.
