Search Results for author: Shanqing Cai

Found 7 papers, 2 papers with code

Can Capacitive Touch Images Enhance Mobile Keyboard Decoding?

1 code implementation · 3 Oct 2024 · Piyawat Lertvittayakumjorn, Shanqing Cai, Billy Dou, Cedric Ho, Shumin Zhai

Capacitive touch sensors capture the two-dimensional spatial profile (referred to as a touch heatmap) of a finger's contact with a mobile touchscreen.
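A touch heatmap of this kind can be pictured as a small 2D grid of capacitance values; a minimal sketch (with made-up illustration data, not from the paper) of how a touch point might be estimated from such a grid:

```python
# Toy touch heatmap: each cell holds a capacitance reading for one
# sensor element under the finger's contact patch.
heatmap = [
    [0.0, 0.1, 0.0],
    [0.2, 0.9, 0.3],
    [0.1, 0.4, 0.1],
]

def centroid(grid):
    """Return the intensity-weighted (row, col) center of a 2D grid."""
    total = sum(v for row in grid for v in row)
    r = sum(i * v for i, row in enumerate(grid) for v in row) / total
    c = sum(j * v for row in grid for j, v in enumerate(row)) / total
    return r, c
```

The paper's decoder consumes the full spatial profile rather than only a reduced point estimate like this centroid.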

Decoder

Proofread: Fixes All Errors with One Tap

no code implementations · 6 Jun 2024 · Renjie Liu, Yanxiang Zhang, Yun Zhu, Haicheng Sun, Yuanbo Zhang, Michael Xuelin Huang, Shanqing Cai, Lei Meng, Shumin Zhai

To obtain models of sufficient quality, we implement a careful synthetic data pipeline tailored to online use cases, design multifaceted metrics, and employ a two-stage tuning approach to obtain the dedicated LLM for the feature: Supervised Fine-Tuning (SFT) for foundational quality, followed by Reinforcement Learning (RL) tuning for targeted refinement.

Quantization, Reinforcement Learning (RL), +2

Parameter Efficient Tuning Allows Scalable Personalization of LLMs for Text Entry: A Case Study on Abbreviation Expansion

no code implementations · 21 Dec 2023 · Katrin Tomanek, Shanqing Cai, Subhashini Venugopalan

Abbreviation expansion is a strategy used to speed up communication by limiting the amount of typing and using a language model to suggest expansions.

Language Modeling, Language Modelling, +1

Using Large Language Models to Accelerate Communication for Users with Severe Motor Impairments

no code implementations · 3 Dec 2023 · Shanqing Cai, Subhashini Venugopalan, Katie Seaver, Xiang Xiao, Katrin Tomanek, Sri Jalasutram, Meredith Ringel Morris, Shaun Kane, Ajit Narayanan, Robert L. MacDonald, Emily Kornman, Daniel Vance, Blair Casey, Steve M. Gleason, Philip Q. Nelson, Michael P. Brenner

A pilot study with 19 non-AAC participants typing on a mobile device by hand demonstrated gains in motor savings in line with the offline simulation, while introducing relatively small effects on overall typing speed.

Context-Aware Abbreviation Expansion Using Large Language Models

no code implementations · NAACL 2022 · Shanqing Cai, Subhashini Venugopalan, Katrin Tomanek, Ajit Narayanan, Meredith Ringel Morris, Michael P. Brenner

Motivated by the need for accelerating text entry in augmentative and alternative communication (AAC) for people with severe motor impairments, we propose a paradigm in which phrases are abbreviated aggressively as primarily word-initial letters.
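The word-initial abbreviation scheme described here can be sketched in a few lines (the function name and example phrase are my own illustration, not from the paper):

```python
def word_initial_abbreviation(phrase: str) -> str:
    """Abbreviate a phrase to the initial letter of each word."""
    return "".join(word[0].lower() for word in phrase.split())

# A context-aware LLM would then expand the abbreviation back into
# the intended phrase, using the surrounding conversation as context.
```

For example, "would you like to go out" abbreviates to "wyltgo", which the model must expand given the conversational context.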

TensorFlow Eager: A Multi-Stage, Python-Embedded DSL for Machine Learning

1 code implementation · 27 Feb 2019 · Akshay Agrawal, Akshay Naresh Modi, Alexandre Passos, Allen Lavoie, Ashish Agarwal, Asim Shankar, Igor Ganichev, Josh Levenberg, Mingsheng Hong, Rajat Monga, Shanqing Cai

TensorFlow Eager is a multi-stage, Python-embedded domain-specific language for hardware-accelerated machine learning, suitable for both interactive research and production.
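A minimal illustration of the eager/staged duality (my own toy example, assuming TensorFlow is installed; not code from the paper):

```python
import tensorflow as tf

# Eager execution: ops run immediately and return concrete values,
# much like NumPy.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)  # evaluated right away

# Multi-stage: tf.function traces the Python function into a dataflow
# graph on first call; later calls reuse the optimized graph, which
# can be dispatched to hardware accelerators.
@tf.function
def square_sum(a):
    return tf.reduce_sum(a * a)

z = square_sum(x)  # first call traces; subsequent calls skip tracing
```

The same Python code thus serves both interactive (eager) use and staged, production-style execution.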

BIG-bench Machine Learning
