no code implementations • 11 Nov 2024 • Young-Min Cho, Raphael Shu, Nilaksh Das, Tamer Alkhouli, Yi-An Lai, Jason Cai, Monica Sunkara, Yi Zhang
This study investigates the efficacy of Multi-Agent Systems in eliciting cross-agent communication and enhancing collective intelligence through group decision-making in a decentralized setting.
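As a rough illustration of decentralized group decision-making among agents, the sketch below has each agent broadcast its opinion and shift toward the local majority over a few communication rounds; the `Agent` class, the majority-update rule, and the round count are hypothetical, not the study's protocol.

```python
import random
from collections import Counter

# Hypothetical sketch of decentralized group decision-making:
# each agent broadcasts its opinion and updates toward the majority
# view of its peers over several rounds. The update rule is
# illustrative, not the paper's protocol.

class Agent:
    def __init__(self, name, opinion):
        self.name = name
        self.opinion = opinion

    def update(self, peer_opinions):
        # Adopt the peers' majority opinion (ties keep the current one).
        counts = Counter(peer_opinions)
        top, n = counts.most_common(1)[0]
        if n > counts[self.opinion]:
            self.opinion = top

def decide(agents, rounds=3):
    for _ in range(rounds):
        snapshot = [a.opinion for a in agents]   # synchronous broadcast
        for a in agents:
            peers = [o for b, o in zip(agents, snapshot) if b is not a]
            a.update(peers)
    return Counter(a.opinion for a in agents).most_common(1)[0][0]

agents = [Agent(f"agent{i}", random.choice(["A", "B"])) for i in range(5)]
print(decide(agents))
```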
no code implementations • 5 Mar 2024 • Bryan Li, Tamer Alkhouli, Daniele Bonadiman, Nikolaos Pappas, Saab Mansour
xSTREET exposes a gap in base LLM performance between English and non-English reasoning tasks.
no code implementations • 20 Dec 2022 • Raphael Shu, Elman Mansimov, Tamer Alkhouli, Nikolaos Pappas, Salvatore Romeo, Arshit Gupta, Saab Mansour, Yi Zhang, Dan Roth
The conversational model interacts with the environment by generating and executing programs that trigger a set of pre-defined APIs.
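A minimal sketch of this interaction loop, assuming hypothetical API names (`search_flights`, `book_flight`) and a restricted `exec` as the execution strategy; the paper's actual APIs and executor may differ.

```python
# Illustrative sketch (not the paper's implementation): a dialogue model
# emits a small program as text, which is executed against a whitelist
# of pre-defined API functions.

def search_flights(origin, destination):
    return [{"flight": "XY123", "from": origin, "to": destination}]

def book_flight(flight_id):
    return {"status": "booked", "flight": flight_id}

PREDEFINED_APIS = {"search_flights": search_flights, "book_flight": book_flight}

def execute_program(program: str):
    # Restrict execution so only the pre-defined APIs are callable.
    scope = dict(PREDEFINED_APIS)
    exec(program, {"__builtins__": {}}, scope)
    return scope.get("result")

# A program the model might generate for "book me a flight to Berlin":
generated = (
    "flights = search_flights('JFK', 'BER')\n"
    "result = book_flight(flights[0]['flight'])\n"
)
print(execute_program(generated))
```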
no code implementations • WS 2020 • Parnia Bahar, Patrick Wilken, Tamer Alkhouli, Andreas Guta, Pavel Golik, Evgeny Matusov, Christian Herold
AppTek and RWTH Aachen University teamed up to participate in the offline and simultaneous speech translation tracks of IWSLT 2020.
Tasks: Automatic Speech Recognition (ASR) +5
no code implementations • WS 2020 • Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, Pavel Golik
In simultaneous machine translation, the objective is to determine when to produce a partial translation given a continuous stream of source words, with a trade-off between latency and quality.
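One well-known policy for this read/write trade-off is wait-k decoding, sketched below with a toy translation step; `wait_k_schedule` and `toy_step` are illustrative names, and wait-k is not necessarily the policy proposed in this paper. Larger k lowers quality risk but raises latency.

```python
# Minimal sketch of a wait-k simultaneous translation policy: wait for
# k source words, then alternate READ and WRITE actions, flushing the
# remaining target words once the source stream ends.

def wait_k_schedule(source_stream, translate_step, k=3):
    """Yield a growing partial translation as source words stream in."""
    source, target = [], []
    for word in source_stream:
        source.append(word)                       # READ one source word
        if len(source) >= len(target) + k:
            target.append(translate_step(source, target))  # WRITE one word
            yield " ".join(target)
    while len(target) < len(source):              # flush after stream ends
        target.append(translate_step(source, target))
        yield " ".join(target)

# Toy 'model': echo the next unseen source word in uppercase.
toy_step = lambda src, tgt: src[len(tgt)].upper()
for partial in wait_k_schedule("wer A sagt muss auch B sagen".split(), toy_step, k=2):
    print(partial)
```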
no code implementations • WS 2018 • Tamer Alkhouli, Gabriel Bretschner, Hermann Ney
This work investigates the alignment problem in state-of-the-art multi-head attention models based on the transformer architecture.
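One common way to read alignments off such models, shown here as a sketch rather than the paper's exact method, is to average the cross-attention weights over heads and take the argmax source position for each target word.

```python
import numpy as np

# Sketch (an assumption, not the paper's exact method): derive word
# alignments from transformer cross-attention by averaging heads and
# taking the argmax source position per target position.

def attention_to_alignment(attn):
    """attn: (num_heads, tgt_len, src_len), rows summing to 1."""
    avg = attn.mean(axis=0)          # average over attention heads
    return avg.argmax(axis=-1)       # best source position per target word

rng = np.random.default_rng(0)
heads = rng.random((8, 4, 5))
heads /= heads.sum(axis=-1, keepdims=True)   # normalize like softmax outputs
print(attention_to_alignment(heads))          # one source index per target word
```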
no code implementations • ACL 2018 • Weiyue Wang, Derui Zhu, Tamer Alkhouli, Zixuan Gan, Hermann Ney
Attention-based neural machine translation (NMT) models selectively focus on specific source positions to produce a translation, which brings significant improvements over pure encoder-decoder sequence-to-sequence models.
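The following minimal sketch shows the attention computation this describes: a softmax over source positions yields weights used to form a context vector. Scaled dot-product scoring is assumed here for illustration; the score function in any given NMT model may differ.

```python
import numpy as np

# Sketch of attention for one decoder state: score each source
# position, softmax the scores, and build a context vector as the
# weighted sum of source encodings.

def attend(query, source_states):
    """query: (d,), source_states: (src_len, d) -> context, weights."""
    d = query.shape[-1]
    scores = source_states @ query / np.sqrt(d)   # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over source positions
    context = weights @ source_states             # weighted sum of encodings
    return context, weights

rng = np.random.default_rng(1)
ctx, w = attend(rng.standard_normal(16), rng.standard_normal((6, 16)))
print(w.round(3), ctx.shape)
```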
3 code implementations • ACL 2018 • Albert Zeyer, Tamer Alkhouli, Hermann Ney
We demonstrate the fast training and decoding speed of RETURNN attention models for translation, enabled by fast CUDA LSTM kernels and a fast pure TensorFlow beam search decoder.
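As a generic illustration of beam search decoding (not RETURNN's actual TensorFlow implementation), the sketch below keeps the top `beam_size` hypotheses at each step; `step_log_probs` stands in for one decoder step, and the toy model is made up.

```python
import math

# Toy beam search: expand every unfinished hypothesis with all vocab
# tokens, then keep the beam_size highest-scoring candidates.

def beam_search(step_log_probs, vocab, beam_size=3, max_len=5, eos="</s>"):
    beams = [([], 0.0)]                          # (tokens, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == eos:
                candidates.append((tokens, score))    # finished hypothesis
                continue
            for tok, lp in step_log_probs(tokens, vocab).items():
                candidates.append((tokens + [tok], score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0]

# Toy model: prefers 'hallo', then 'welt', then end-of-sentence.
def toy_step(tokens, vocab):
    order = ["hallo", "welt", "</s>"]
    target = order[min(len(tokens), 2)]
    return {t: math.log(0.7 if t == target else 0.1) for t in vocab}

print(beam_search(toy_step, ["hallo", "welt", "</s>"]))
```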
no code implementations • ACL 2017 • Weiyue Wang, Tamer Alkhouli, Derui Zhu, Hermann Ney
Recently, neural machine translation systems have shown promising performance and surpassed phrase-based systems on most translation tasks.
no code implementations • WS 2016 • Jan-Thorsten Peter, Tamer Alkhouli, Hermann Ney, Matthias Huck, Fabienne Braune, Alexander Fraser, Aleš Tamchyna, Ondřej Bojar, Barry Haddow, Rico Sennrich, Frédéric Blain, Lucia Specia, Jan Niehues, Alex Waibel, Alexandre Allauzen, Lauriane Aufrant, Franck Burlot, Elena Knyazeva, Thomas Lavergne, François Yvon, Mārcis Pinnis, Stella Frank
Ranked #12 on Machine Translation on WMT2016 English-Romanian