no code implementations • 28 Sep 2022 • Benjamin Doerr, Zhongdi Qu
Due to the more complicated population dynamics of the NSGA-II, none of the existing runtime guarantees for this algorithm is accompanied by a non-trivial lower bound.
no code implementations • 18 Aug 2022 • Benjamin Doerr, Zhongdi Qu
Very recently, the first mathematical runtime analyses for the NSGA-II, the most common multi-objective evolutionary algorithm, have been conducted.
no code implementations • 28 Apr 2022 • Benjamin Doerr, Zhongdi Qu
Very recently, the first mathematical runtime analyses of the multi-objective evolutionary optimizer NSGA-II have been conducted.
no code implementations • Findings (EMNLP) 2021 • Massimo Nicosia, Zhongdi Qu, Yasemin Altun
While multilingual pretrained language models (LMs) fine-tuned on a single language have shown substantial cross-lingual task transfer capabilities, a wide performance gap remains on semantic parsing tasks compared with models trained with target-language supervision.
no code implementations • 24 Sep 2018 • Parisa Haghani, Arun Narayanan, Michiel Bacchiani, Galen Chuang, Neeraj Gaur, Pedro Moreno, Rohit Prabhavalkar, Zhongdi Qu, Austin Waters
Conventional spoken language understanding systems consist of two main components: an automatic speech recognition module that converts audio to a transcript, and a natural language understanding module that transforms the resulting text (or top N hypotheses) into a set of domains, intents, and arguments.
Automatic Speech Recognition (ASR) +4
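The abstract above describes the conventional cascade of an ASR module followed by an NLU module. A minimal Python sketch of that two-stage pipeline is shown below; the function names (`asr_transcribe`, `nlu_parse`, `understand`), the `SemanticFrame` container, and the keyword-based parser are illustrative assumptions, not components from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SemanticFrame:
    """NLU output: a domain, an intent, and slot arguments."""
    domain: str
    intent: str
    arguments: Dict[str, str] = field(default_factory=dict)


def asr_transcribe(audio: bytes) -> List[str]:
    """ASR module: convert audio to a ranked list of transcript hypotheses.

    Stand-in for a real recognizer; returns a canned hypothesis so the
    pipeline runs end to end.
    """
    return ["set an alarm for seven am"]


def nlu_parse(transcript: str) -> SemanticFrame:
    """NLU module: map a transcript to a domain, intent, and arguments.

    A toy keyword matcher standing in for a trained semantic parser.
    """
    if "alarm" in transcript:
        return SemanticFrame(
            domain="productivity",
            intent="create_alarm",
            arguments={"time": "seven am"},
        )
    return SemanticFrame(domain="unknown", intent="unknown")


def understand(audio: bytes) -> SemanticFrame:
    """Conventional two-stage SLU: run ASR, then NLU on the best hypothesis."""
    hypotheses = asr_transcribe(audio)
    return nlu_parse(hypotheses[0])


if __name__ == "__main__":
    print(understand(b"\x00"))  # dummy audio bytes
```

In a real system the ASR stage would pass the top-N hypotheses, not only the 1-best transcript, to the NLU stage, as the abstract notes.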