Search Results for author: Tom Ouyang

Found 6 papers, 0 papers with code

Handling Compounding in Mobile Keyboard Input

no code implementations • 17 Jan 2022 • Andreas Kabel, Keith Hall, Tom Ouyang, David Rybach, Daan van Esch, Françoise Beaufays

This paper proposes a framework to improve the typing experience of mobile users in morphologically rich languages.
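The abstract above gives no implementation detail; purely as a rough illustration of what handling compounds in a keyboard vocabulary can involve, here is a minimal dictionary-based compound splitter in Python. The toy lexicon, function names, and recursive scheme are assumptions for the sketch, not the paper's method.

    # Minimal, hypothetical sketch of dictionary-based compound splitting
    # (not the paper's method): recursively segment a typed string into
    # in-vocabulary parts so a decoder could score compounds such as the
    # German "hausboot" ("houseboat").
    LEXICON = {"haus", "boot", "hausboot"}  # toy vocabulary

    def splits(word, lexicon=LEXICON):
        """Yield every segmentation of `word` into in-vocabulary parts."""
        if word in lexicon:
            yield [word]
        for i in range(1, len(word)):
            head, tail = word[:i], word[i:]
            if head in lexicon:
                for rest in splits(tail, lexicon):
                    yield [head] + rest

    print(list(splits("hausboot")))  # [['hausboot'], ['haus', 'boot']]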

End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds

no code implementations • 15 Oct 2019 • Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Tom Ouyang, James Guo, Jiquan Ngiam, Vijay Vasudevan

In this paper, we aim to synergize the bird's-eye view and the perspective view and propose a novel end-to-end multi-view fusion (MVF) algorithm, which can effectively learn to utilize the complementary information from both.

3D Object Detection • Object Detection
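As a rough illustration of the per-point fusion idea above (not the paper's architecture), the Python sketch below lets each point of a toy cloud gather one feature from a bird's-eye-view grid and one from a perspective grid, then concatenates them; all shapes, grids, and index mappings are assumed for the example.

    import numpy as np

    # Toy, assumed sketch of per-point multi-view fusion: each LiDAR
    # point gathers one feature from a bird's-eye-view (BEV) grid and
    # one from a perspective (range-image-like) grid, and the two are
    # concatenated into a fused per-point feature.
    rng = np.random.default_rng(0)
    points = rng.random((1000, 3))                 # (N, xyz), toy cloud
    bev_feat = rng.random((64, 64, 16))            # (H, W, C) BEV grid
    persp_feat = rng.random((32, 128, 16))         # (rows, cols, C)

    bev_idx = (points[:, :2] * 63).astype(int)     # map x, y -> BEV cell
    az = np.arctan2(points[:, 1], points[:, 0])    # azimuth of each point
    el = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    col = ((az + np.pi) / (2 * np.pi) * 127).astype(int)
    row = ((el + np.pi / 2) / np.pi * 31).astype(int)

    fused = np.concatenate([bev_feat[bev_idx[:, 0], bev_idx[:, 1]],
                            persp_feat[row, col]], axis=1)
    print(fused.shape)                             # (1000, 32) per point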

Federated Learning Of Out-Of-Vocabulary Words

no code implementations • 26 Mar 2019 • Mingqing Chen, Rajiv Mathews, Tom Ouyang, Françoise Beaufays

We demonstrate that a character-level recurrent neural network is able to learn out-of-vocabulary (OOV) words under federated learning settings, for the purpose of expanding the vocabulary of a virtual keyboard for smartphones without exporting sensitive text to servers.

Federated Learning
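As a hedged illustration of the training setup (not the paper's code), the sketch below runs a few rounds of federated averaging in which a stand-in local_step plays the role of on-device training, so that only model weights, never raw text, reach the server; the vector size, learning rate, and fake gradient are assumptions.

    import numpy as np

    # Minimal, assumed sketch of federated averaging (FedAvg) for a
    # character-level model: the "model" is just a weight vector, and
    # local_step stands in for on-device training on private typed text.
    rng = np.random.default_rng(0)

    def local_step(weights, lr=0.1):
        # Placeholder for on-device SGD over a client's private text;
        # the raw text never leaves the device, only updated weights do.
        fake_grad = rng.normal(size=weights.shape) * 0.01
        return weights - lr * fake_grad

    def fedavg_round(global_w, num_clients=10):
        updates = [local_step(global_w) for _ in range(num_clients)]
        return np.mean(updates, axis=0)  # server averages client models

    w = np.zeros(128)                    # toy character-model parameters
    for _ in range(5):                   # five communication rounds
        w = fedavg_round(w)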

Mobile Keyboard Input Decoding with Finite-State Transducers

no code implementations • 13 Apr 2017 • Tom Ouyang, David Rybach, Françoise Beaufays, Michael Riley

We describe the general framework of what we call, for short, the keyboard "FST decoder," as well as the implementation details that are new compared to a speech FST decoder.

Speech Recognition
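The paper builds on real finite-state transducer machinery; as a loose stand-in, the Python sketch below does lexicon-constrained beam search over noisy taps with a trie, which captures the flavor of pruned decoding but none of the actual FST composition. The lexicon, costs, and beam size are made up.

    # Toy, assumed sketch of lexicon-constrained keyboard decoding in the
    # spirit of an FST decoder. The real system composes transducers
    # (e.g. a key-to-character model with a lexicon and language model);
    # this trie-based beam search is only an illustration.
    LEXICON = ["the", "then", "them", "they"]

    def build_trie(words):
        root = {}
        for w in words:
            node = root
            for ch in w:
                node = node.setdefault(ch, {})
            node["$"] = w                  # end-of-word marker
        return root

    def decode(tap_hypotheses, trie, beam_size=4):
        """tap_hypotheses: per-tap list of (char, cost) from a touch model."""
        beam = [(0.0, trie)]
        for hyps in tap_hypotheses:
            beam = [(cost + c, node[ch])
                    for cost, node in beam
                    for ch, c in hyps if ch in node]
            beam.sort(key=lambda x: x[0])  # keep the cheapest partial paths
            beam = beam[:beam_size]
        return [(node["$"], cost) for cost, node in beam if "$" in node]

    taps = [[("t", 0.1), ("r", 0.9)], [("h", 0.2)], [("e", 0.1), ("w", 0.8)]]
    print(decode(taps, build_trie(LEXICON)))  # -> [('the', 0.4)] (approx.)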

Learning from Neighboring Strokes: Combining Appearance and Context for Multi-Domain Sketch Recognition

no code implementations • NeurIPS 2009 • Tom Ouyang, Randall Davis

We propose a new sketch recognition framework that combines a rich representation of low level visual appearance with a graphical model for capturing high level relationships between symbols.

Sketch Recognition
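As a toy illustration of combining appearance with context (not the paper's model or features), the sketch below scores each stroke with a unary appearance term and each neighboring pair with a compatibility term, then brute-forces the best joint labeling; all labels and scores are invented.

    from itertools import product

    # Toy, assumed sketch of combining appearance and context via a tiny
    # graphical model: each stroke gets a unary appearance score per
    # label, neighboring strokes add a pairwise compatibility score, and
    # we pick the joint labeling with the highest total score.
    LABELS = ["wire", "resistor"]
    appearance = [{"wire": 0.9, "resistor": 0.1},   # stroke 0
                  {"wire": 0.2, "resistor": 0.8}]   # stroke 1
    compat = {("wire", "resistor"): 0.7, ("resistor", "wire"): 0.7,
              ("wire", "wire"): 0.4, ("resistor", "resistor"): 0.2}
    edges = [(0, 1)]                                # neighboring strokes

    def best_labeling():
        def score(assign):
            s = sum(appearance[i][lab] for i, lab in enumerate(assign))
            return s + sum(compat[(assign[i], assign[j])] for i, j in edges)
        return max(product(LABELS, repeat=len(appearance)), key=score)

    print(best_labeling())                          # ('wire', 'resistor')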
