Search Results for author: Hai Pham

Found 12 papers, 5 papers with code

Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities

2 code implementations • 19 Dec 2018 • Hai Pham, Paul Pu Liang, Thomas Manzini, Louis-Philippe Morency, Barnabas Poczos

Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input.
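To make the idea concrete, here is a minimal, dependency-light sketch (not the paper's implementation; all dimensions, the linear maps, and the loss weighting are illustrative assumptions): translate from the source modality to the target and back through a shared encoding, so that the intermediate encoding becomes a joint representation usable from the source modality alone.

```python
import numpy as np

# Illustrative sketch of cyclic translation between two modalities
# using linear maps. The joint representation is the intermediate
# encoding z; at test time only the source modality is required.
rng = np.random.default_rng(0)
d_src, d_tgt, d_joint = 8, 6, 4

W_enc = rng.normal(size=(d_joint, d_src)) * 0.1   # source -> joint
W_fwd = rng.normal(size=(d_tgt, d_joint)) * 0.1   # joint -> target
W_back = rng.normal(size=(d_src, d_joint)) * 0.1  # joint -> source (cycle)

def cyclic_loss(x_src, x_tgt):
    """Forward-translation loss plus cyclic reconstruction loss."""
    z = W_enc @ x_src                 # joint representation
    fwd_err = W_fwd @ z - x_tgt       # translate source -> target
    back_err = W_back @ z - x_src     # translate back to source
    return float(np.mean(fwd_err ** 2) + np.mean(back_err ** 2))

x_src = rng.normal(size=d_src)
x_tgt = rng.normal(size=d_tgt)
loss = cyclic_loss(x_src, x_tgt)
```

Minimizing both terms jointly forces `z` to carry information about both modalities while being computable from the source alone.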

Machine Translation • Multimodal Sentiment Analysis • +1

StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer

2 code implementations • NAACL 2021 • Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency

Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g., positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence.

Benchmarking • Sentence • +2

Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes

1 code implementation • NeurIPS 2020 • Quang Minh Hoang, Trong Nghia Hoang, Hai Pham, David P. Woodruff

We introduce a new scalable approximation for Gaussian processes with provable guarantees which hold simultaneously over its entire parameter space.
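As background for the family of approximations the paper analyzes, here is a hedged sketch of the classic sparse-spectrum (random Fourier feature) approximation of an RBF kernel; the paper's own construction and its guarantees differ in the details, and the sample count `m` here is an illustrative choice.

```python
import numpy as np

# Sparse-spectrum / random Fourier feature approximation of the
# unit-lengthscale RBF kernel k(x, y) = exp(-||x - y||^2 / 2).
rng = np.random.default_rng(1)
n, d, m = 50, 3, 2000            # data points, input dim, spectral samples

X = rng.normal(size=(n, d))
omega = rng.normal(size=(m, d))           # spectral frequencies ~ N(0, I)
b = rng.uniform(0, 2 * np.pi, size=m)     # random phases

Phi = np.sqrt(2.0 / m) * np.cos(X @ omega.T + b)  # explicit feature map
K_approx = Phi @ Phi.T                            # low-rank kernel estimate

K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
max_err = np.abs(K_approx - K_exact).max()
```

The approximation error shrinks roughly as O(1/sqrt(m)); the sample-complexity question is how large `m` must be for guarantees to hold uniformly over the parameter space.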

Gaussian Processes

Policy Optimization In the Face of Uncertainty

no code implementations • 25 Sep 2019 • Tung-Long Vuong, Han Nguyen, Hai Pham, Kenneth Tran

Under this framework, the objective function can be represented end-to-end as a single computational graph, which allows seamless policy gradient computation via backpropagation through the models.
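A toy sketch of that idea (the dynamics, reward, policy, and horizon below are assumed illustrative choices, not the paper's setup): when the learned dynamics and reward models are differentiable, the rollout return is one computational graph in the policy parameter, so its gradient is available by the chain rule. A finite-difference check stands in for autodiff in this dependency-free sketch.

```python
# Differentiable model-based rollout: return as a single graph
# in the policy parameter theta.
A, B = 0.9, 0.5                   # toy linear dynamics s' = A*s + B*a

def rollout_return(theta, s0, horizon=3):
    """Return of linear policy a = theta * s under reward -s^2."""
    s, ret = s0, 0.0
    for _ in range(horizon):
        a = theta * s
        ret += -(s ** 2)          # differentiable reward model
        s = A * s + B * a         # differentiable dynamics model
    return ret

theta, s0, eps = -0.4, 1.0, 1e-6
# Policy gradient through the whole rollout graph (autodiff in
# practice; central finite differences here).
grad = (rollout_return(theta + eps, s0)
        - rollout_return(theta - eps, s0)) / (2 * eps)
```

For this closed-form toy case the return is -(1 + c² + c⁴) with c = A + B·theta, so the gradient can be verified analytically.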

Continuous Control • Model-based Reinforcement Learning

Understanding Long Documents with Different Position-Aware Attentions

no code implementations • 17 Aug 2022 • Hai Pham, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang

Despite several successes in document understanding, the practical task of long document understanding remains largely under-explored, owing to challenges both in computation and in efficiently absorbing long multimodal input.

document understanding • Position

On the Algorithmic Stability and Generalization of Adaptive Optimization Methods

no code implementations • 8 Nov 2022 • Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabás Póczos

Despite their popularity in deep learning and machine learning in general, the theoretical properties of adaptive optimizers such as Adagrad, RMSProp, Adam or AdamW are not yet fully understood.
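For reference, the update rule these adaptive methods share is a per-coordinate step scaled by running moment estimates; below is the standard Adam update (Kingma and Ba) as a self-contained sketch, with illustrative default hyperparameters. The stability and generalization properties of exactly this kind of update are what the paper studies.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on parameters w given gradient g."""
    m = b1 * m + (1 - b1) * g            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
g = np.array([1.0, -2.0, 0.5])
w, m, v = adam_step(w, g, m, v, t=1)     # first step ~ -lr * sign(g)
```

After the first bias-corrected step, the update is approximately `-lr * sign(g)` regardless of gradient magnitude, which is one source of the analysis difficulty relative to plain SGD.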

SciFix: Outperforming GPT3 on Scientific Factual Error Correction

1 code implementation • 24 May 2023 • Dhananjay Ashok, Atharva Kulkarni, Hai Pham, Barnabás Póczos

Our method outperforms the very LLM that was used to generate the annotated dataset: few-shot prompting on GPT-3.5 achieves 58%, 61%, and 64% on the respective datasets, a consistently lower correction accuracy, despite using nearly 800 times as many parameters as our model.

Task-Based MoE for Multitask Multilingual Machine Translation

no code implementations • 30 Aug 2023 • Hai Pham, Young Jin Kim, Subhabrata Mukherjee, David P. Woodruff, Barnabas Poczos, Hany Hassan Awadalla

The mixture-of-experts (MoE) architecture has proven to be a powerful method for training deep models across diverse tasks and applications.
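A minimal sketch of task-based routing (the layer sizes, linear experts, and top-1 routing are assumed illustrative details, not the paper's architecture): the router conditions on the task id rather than on individual tokens, so all examples from one task share the same experts.

```python
import numpy as np

# Task-based top-1 MoE routing sketch with linear experts.
rng = np.random.default_rng(2)
d, n_experts, n_tasks = 4, 3, 2

experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
W_route = rng.normal(size=(n_experts, n_tasks))   # task -> expert logits

def moe_forward(x, task_id):
    """Route input x through the top-1 expert chosen for its task."""
    task_onehot = np.eye(n_tasks)[task_id]
    logits = W_route @ task_onehot
    expert = int(np.argmax(logits))               # task-level routing
    return experts[expert] @ x, expert

x = rng.normal(size=d)
y0, e0 = moe_forward(x, task_id=0)
y1, e1 = moe_forward(x, task_id=1)
```

Because the routing decision depends only on the task, it is deterministic per task at inference time, in contrast to token-level routing where each token may pick a different expert.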

Machine Translation • Translation
