Search Results for author: Da Tang

Found 12 papers, 4 papers with code

Correlated Variational Auto-Encoders

2 code implementations · ICLR Workshop DeepGenStruct 2019 · Da Tang, Dawen Liang, Tony Jebara, Nicholas Ruozzi

Variational Auto-Encoders (VAEs) are capable of learning latent representations for high dimensional data.

Clustering · Link Prediction
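
The snippet gives only the idea; as a concrete anchor, here is a minimal VAE in PyTorch with the standard reparameterization trick and ELBO. Layer sizes are illustrative assumptions, and the correlation structure this paper adds on top is not shown.

```python
# Minimal VAE sketch (PyTorch). Dimensions are illustrative; the
# paper's correlated extension across data points is NOT reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)        # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)    # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(x, logits, mu, logvar):
    # Bernoulli reconstruction term plus analytic KL to a N(0, I) prior.
    rec = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```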

The Variational Predictive Natural Gradient

1 code implementation · 7 Mar 2019 · Da Tang, Rajesh Ranganath

Unlike traditional natural gradients for variational inference, this natural gradient accounts for the relationship between model parameters and variational parameters.

General Classification · Variational Inference
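
For contrast with the paper's contribution, the *traditional* natural gradient it departs from can be sketched in a few lines for a diagonal-Gaussian variational family, where the Fisher matrix is known in closed form. This is a hedged sketch of that baseline only; the variational predictive natural gradient itself is not reproduced here.

```python
# Traditional natural-gradient step for a diagonal-Gaussian
# q(z) = N(mu, exp(log_sigma)^2): precondition ELBO gradients with the
# inverse Fisher of q. Baseline sketch only, not the paper's VPNG.
import numpy as np

def natural_gradient_step(mu, log_sigma, grad_mu, grad_log_sigma, lr=0.1):
    sigma2 = np.exp(2.0 * log_sigma)
    # Closed-form Fisher per dimension: F_mu = 1/sigma^2, F_log_sigma = 2.
    nat_mu = sigma2 * grad_mu            # F_mu^{-1} * grad
    nat_log_sigma = grad_log_sigma / 2.0
    # Gradient ascent on the ELBO.
    return mu + lr * nat_mu, log_sigma + lr * nat_log_sigma
```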

Subgoal Discovery for Hierarchical Dialogue Policy Learning

no code implementations · EMNLP 2018 · Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, Tony Jebara

Experiments with simulated and real users show that our approach performs competitively against a state-of-the-art method that requires human-defined subgoals.

Hierarchical Reinforcement Learning

Initialization and Coordinate Optimization for Multi-way Matching

no code implementations · 2 Nov 2016 · Da Tang, Tony Jebara

We consider the problem of consistently matching multiple sets of elements to each other, which is a common task in fields such as computer vision.
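
To make the consistency issue concrete, here is a hedged sketch: matching three sets pairwise with the Hungarian algorithm and checking whether the matchings compose consistently around the cycle, which independent pairwise matching does not guarantee. Cost matrices are random placeholders.

```python
# Independent pairwise matchings need not be cycle-consistent; this is
# the failure mode consistent multi-way matching addresses. Costs below
# are random placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 5
cost_ab, cost_bc, cost_ac = (rng.random((n, n)) for _ in range(3))

def match(cost):
    # Hungarian algorithm; returns permutation pi with pi[i] = match of i.
    _, col = linear_sum_assignment(cost)
    return col

pi_ab, pi_bc, pi_ac = match(cost_ab), match(cost_bc), match(cost_ac)
# Cycle consistency requires pi_ac == pi_bc o pi_ab.
print("cycle-consistent:", np.array_equal(pi_bc[pi_ab], pi_ac))
```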

Item Recommendation with Variational Autoencoders and Heterogenous Priors

no code implementations · 17 Jul 2018 · Giannis Karamanolakis, Kevin Raji Cherian, Ananth Ravi Narayan, Jie Yuan, Da Tang, Tony Jebara

In recent years, Variational Autoencoders (VAEs) have been shown to be highly effective in both standard collaborative filtering applications and extensions such as incorporation of implicit feedback.

Collaborative Filtering
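
As a hedged sketch of the standard building block this line of work extends (in the spirit of the multinomial-likelihood VAE for implicit feedback), the decoder loss can be written as a multinomial negative log-likelihood over items; the heterogeneous user-specific priors studied in the paper are not shown.

```python
# Multinomial negative log-likelihood over items, the usual VAE-CF
# reconstruction term for implicit feedback. Heterogeneous priors from
# the paper are not shown; shapes are illustrative.
import torch
import torch.nn.functional as F

def multinomial_nll(logits, x):
    # x: (batch, n_items) implicit-feedback counts (e.g. 0/1 clicks);
    # loss is -sum_i x_i * log softmax(logits)_i, averaged over the batch.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(x * log_probs).sum(dim=-1).mean()
```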

Learning Correlated Latent Representations with Adaptive Priors

no code implementations · 14 Jun 2019 · Da Tang, Dawen Liang, Nicholas Ruozzi, Tony Jebara

Variational Auto-Encoders (VAEs) have been widely applied for learning compact, low-dimensional latent representations of high-dimensional data.

Clustering · Link Prediction
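
One ingredient the title points at can be sketched: swapping the i.i.d. standard-normal prior for a correlated Gaussian and computing the ELBO's KL term against it. Note the paper correlates representations across data points; for brevity this placeholder correlates latent dimensions instead, and the covariance is purely illustrative.

```python
# KL term of the ELBO against a *correlated* Gaussian prior. The paper
# correlates representations across data points; this placeholder
# correlates latent dimensions instead, purely for illustration.
import torch
from torch.distributions import MultivariateNormal, kl_divergence

z_dim = 4
posterior = MultivariateNormal(torch.zeros(z_dim), torch.eye(z_dim))
cov = 0.5 * torch.eye(z_dim) + 0.5 * torch.ones(z_dim, z_dim)  # PSD by construction
prior = MultivariateNormal(torch.zeros(z_dim), covariance_matrix=cov)
print(kl_divergence(posterior, prior))
```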

Active Multitask Learning with Committees

no code implementations · 24 Mar 2021 · Jingxi Xu, Da Tang, Tony Jebara

The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.

Transfer Learning
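
The generic strategy behind the title, query-by-committee, is easy to sketch: train a committee on resamples of the labeled pool and request a label for the point the members disagree on most. This hedged sketch omits the paper's multitask and transfer components; all data here is synthetic.

```python
# Query-by-committee sketch: label the pool point with the highest vote
# entropy. Synthetic data; the paper's multitask machinery is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_lab, y_lab, X_pool = X[:40], y[:40], X[40:]

committee = []
while len(committee) < 5:
    idx = rng.integers(0, len(X_lab), len(X_lab))   # bootstrap resample
    if len(np.unique(y_lab[idx])) < 2:              # guard: need both classes
        continue
    committee.append(LogisticRegression().fit(X_lab[idx], y_lab[idx]))

votes = np.stack([clf.predict(X_pool) for clf in committee])  # (C, N)
p1 = votes.mean(axis=0)                                       # fraction voting 1
entropy = -(p1 * np.log(p1 + 1e-12) + (1 - p1) * np.log(1 - p1 + 1e-12))
print("next pool point to label:", int(np.argmax(entropy)))
```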

Balancing Specialized and General Skills in LLMs: The Impact of Modern Tuning and Data Strategy

no code implementations · 7 Oct 2023 · Zheng Zhang, Chen Zheng, Da Tang, Ke Sun, Yukun Ma, Yingtong Bu, Xun Zhou, Liang Zhao

This paper introduces a multifaceted methodology for fine-tuning and evaluating large language models (LLMs) for specialized monetization tasks.

ICE-GRT: Instruction Context Enhancement by Generative Reinforcement based Transformers

no code implementations · 4 Jan 2024 · Chen Zheng, Ke Sun, Da Tang, Yukun Ma, Yuyu Zhang, Chenguang Xi, Xun Zhou

Large Language Models (LLMs) such as ChatGPT and LLaMA encounter limitations in domain-specific tasks: they often lack depth and accuracy in specialized areas and exhibit a decrease in general capabilities when fine-tuned, particularly in the analysis ability of small-sized models.
