Search Results for author: Tuan Dinh

Found 8 papers, 5 papers with code

Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition

no code implementations • 19 Jan 2024 • Yu Yu, Chao-Han Huck Yang, Tuan Dinh, Sungho Ryu, Jari Kolehmainen, Roger Ren, Denis Filimonov, Prashanth G. Shivakumar, Ankur Gandhe, Ariya Rastow, Jia Xu, Ivan Bulyko, Andreas Stolcke

The use of low-rank adaptation (LoRA) with frozen pretrained language models (PLMs) has become increasingly popular as a mainstream, resource-efficient modeling approach for memory-constrained hardware.

Language Modelling speech-recognition +1
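The core idea behind LoRA, as referenced in the abstract, can be sketched in a few lines: the pretrained weight stays frozen, and only a low-rank update is trained. The dimensions below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for one frozen pretrained weight matrix.
d_out, d_in, rank = 8, 8, 2

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((rank, d_in))    # trainable low-rank factor
B = np.zeros((d_out, rank))              # trainable; zero init leaves W unchanged at start

x = rng.standard_normal(d_in)

# LoRA forward pass: frozen weight plus the low-rank update B @ A.
y = W @ x + B @ (A @ x)

# With B initialized to zero, the adapted model matches the pretrained one.
assert np.allclose(y, W @ x)

# Trainable parameters: rank * (d_in + d_out) instead of d_in * d_out.
print(rank * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

The memory savings come from training only `A` and `B` (32 parameters here) rather than the full 64-parameter matrix; the gap widens dramatically at PLM scale.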

Large Language Models of Code Fail at Completing Code with Potential Bugs

1 code implementation NeurIPS 2023 Tuan Dinh, Jinman Zhao, Samson Tan, Renato Negrinho, Leonard Lausen, Sheng Zha, George Karypis

We find that the presence of potential bugs significantly degrades the generation performance of the high-performing Code-LLMs.

Code Completion

LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks

1 code implementation • 14 Jun 2022 • Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee

LIFT does not make any changes to the model architecture or loss function, and it solely relies on the natural language interface, enabling "no-code machine learning with LMs."

BIG-bench Machine Learning General Classification +2
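The "natural language interface" the abstract refers to amounts to serializing non-language examples as text. A minimal sketch of this kind of serialization (the prompt template and feature names are illustrative assumptions, not the paper's exact format):

```python
# Hypothetical serialization in the spirit of LIFT: a tabular example is
# rendered as a sentence prompt, and its label would be the completion text.
def to_prompt(features, feature_names):
    parts = [f"{name} is {value}" for name, value in zip(feature_names, features)]
    return "Given that " + ", ".join(parts) + ", what is the class?"

feature_names = ["sepal length", "sepal width"]
prompt = to_prompt([5.1, 3.5], feature_names)
print(prompt)
# → Given that sepal length is 5.1, sepal width is 3.5, what is the class?
```

A language model would then be fine-tuned on such (prompt, label-as-text) pairs, with no change to its architecture or loss, which is what makes the approach "no-code" from the modeling side.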

Improved Input Reprogramming for GAN Conditioning

1 code implementation • 7 Jan 2022 • Tuan Dinh, Daewon Seo, Zhixu Du, Liang Shang, Kangwook Lee

Motivated by real-world scenarios with scarce labeled data, we focus on the input reprogramming approach and carefully analyze the existing algorithm.

Coded-InvNet for Resilient Prediction Serving Systems

no code implementations • 11 Jun 2021 • Tuan Dinh, Kangwook Lee

Inspired by a new coded computation algorithm for invertible functions, we propose Coded-InvNet, a new approach to designing resilient prediction serving systems that can gracefully handle stragglers or node failures.

Translation

Constrained Deep Learning using Conditional Gradient and Applications in Computer Vision

1 code implementation • 17 Mar 2018 • Sathya N. Ravi, Tuan Dinh, Vishnu Sai Rao Lokhande, Vikas Singh

We provide convergence guarantees and show a suite of immediate benefits: training ResNets with fewer layers but better accuracy simply by substituting in our version of CG; faster training of GANs, with 50% fewer epochs in image inpainting applications; and provably better generalization guarantees using efficiently implementable forms of recently proposed regularizers.

Image Inpainting
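The conditional gradient (Frank-Wolfe) method named in the title can be sketched on a toy constrained problem. This is a generic textbook version on an L1-ball constraint, not the paper's deep-learning formulation; the target vector and step schedule are illustrative.

```python
import numpy as np

# Minimal conditional-gradient (Frank-Wolfe) sketch:
# minimize ||x - target||^2 subject to ||x||_1 <= 1.
# The linear subproblem over the L1 ball has a closed-form
# solution: a signed coordinate vertex.
target = np.array([0.8, -0.3])
x = np.zeros(2)

for t in range(200):
    grad = 2 * (x - target)
    i = np.argmax(np.abs(grad))      # vertex minimizing <grad, s>
    s = np.zeros(2)
    s[i] = -np.sign(grad[i])
    gamma = 2 / (t + 2)              # standard diminishing step size
    x = (1 - gamma) * x + gamma * s  # convex combination stays feasible

assert np.abs(x).sum() <= 1 + 1e-9   # feasibility holds by construction
print(x)                             # approaches (0.75, -0.25), the L1 projection of target
```

The appeal for constrained deep learning is visible even in this toy: every iterate is a convex combination of feasible points, so no projection step is ever needed.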
