1 code implementation • 3 Aug 2023 • Lu Zeng, Xuan Chen, Xiaoshuang Shi, Heng Tao Shen
In this study, we introduce a simple feature noise method, which directly adds noise to the features of training data, and theoretically demonstrate that it can enhance the generalization of DNNs under label noise.
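The abstract describes adding noise directly to training features. A minimal sketch of that idea, assuming i.i.d. Gaussian perturbations (the noise type and `sigma` value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def add_feature_noise(features, sigma=0.1, rng=None):
    """Perturb training features with i.i.d. Gaussian noise.

    Hypothetical sketch: the noise distribution and scale here are
    assumptions for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    return features + rng.normal(0.0, sigma, size=features.shape)

# Example: noise is applied once to the feature matrix before training.
X = np.ones((4, 3))
X_noisy = add_feature_noise(X, sigma=0.05, rng=np.random.default_rng(0))
```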
no code implementations • 21 Feb 2023 • Sree Hari Krishnan Parthasarathi, Lu Zeng, Dilek Hakkani-Tur
Conversational, multi-turn, text-to-SQL (CoSQL) tasks map natural language utterances in a dialogue to SQL queries.
no code implementations • 19 Oct 2022 • Lu Zeng, Sree Hari Krishnan Parthasarathi, Dilek Hakkani-Tur
The text-to-SQL task maps natural language utterances to structured queries that can be issued to a database.
no code implementations • 13 Jul 2022 • Sree Hari Krishnan Parthasarathi, Lu Zeng, Christin Jose, Joseph Wang
To train effectively with a mix of human- and teacher-labeled data, we develop a teacher labeling strategy based on confidence heuristics to reduce entropy in the label distribution from the teacher model; the data is then sampled to match the marginal distribution over the labels.
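One common confidence heuristic of this kind is to keep only teacher predictions whose maximum class probability clears a threshold and harden them to one-hot labels, which reduces label entropy. A hedged sketch of that filtering step (the threshold value and function name are assumptions, not the paper's actual procedure):

```python
import numpy as np

def select_confident_labels(probs, threshold=0.9):
    """Keep teacher predictions with max probability >= threshold and
    harden them to class indices, reducing entropy of the label set.

    Illustrative sketch; the threshold and hardening rule are assumptions.
    """
    probs = np.asarray(probs)
    conf = probs.max(axis=1)          # confidence = top class probability
    keep = conf >= threshold          # boolean mask of retained examples
    labels = probs.argmax(axis=1)     # hardened (one-hot) labels
    return labels[keep], keep

# Example: the middle prediction is too uncertain and is dropped.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.02, 0.98]])
labels, keep = select_confident_labels(probs, threshold=0.9)
# labels -> [0, 1]; keep -> [True, False, True]
```

The retained examples could then be subsampled per class to match a target marginal label distribution, as the abstract describes.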
no code implementations • 13 Jul 2022 • Lu Zeng, Sree Hari Krishnan Parthasarathi, Yuzong Liu, Alex Escott, Santosh Kumar Cheekatmalla, Nikko Strom, Shiv Vitaladevuni
We organize our results in two embedded chipset settings: a) with the commodity ARM NEON instruction set and 8-bit containers, we present accuracy, CPU, and memory results using sub-8-bit weights (4, 5, 8-bit) and 8-bit quantization of the rest of the network; b) with off-the-shelf neural network accelerators, for a range of weight bit widths (1- and 5-bit), we present accuracy results and project the reduction in memory utilization.
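The sub-8-bit weights mentioned above can be illustrated with standard symmetric uniform quantization, which maps float weights to signed integers in the range a given bit width allows. This is a generic sketch of the technique, not the paper's specific scheme:

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Generic sketch: maps floats to signed integers in [-qmax, qmax],
    where qmax = 2**(bits-1) - 1 (e.g. 7 for 4-bit weights).
    """
    w = np.asarray(w, dtype=np.float64)
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

# Example: 4-bit weights occupy half the memory of 8-bit containers.
w = np.array([-0.7, -0.1, 0.0, 0.35, 0.7])
q, scale = quantize_weights(w, bits=4)
# q values fit in [-7, 7]; w is approximately q * scale
```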