no code implementations • 4 Jan 2024 • Chen Zheng, Ke Sun, Da Tang, Yukun Ma, Yuyu Zhang, Chenguang Xi, Xun Zhou
Large Language Models (LLMs) such as ChatGPT and LLaMA encounter limitations in domain-specific tasks: they often lack depth and accuracy in specialized areas, and they exhibit a decrease in general capabilities when fine-tuned, particularly in the analysis ability of small-sized models.
no code implementations • 7 Oct 2023 • Zheng Zhang, Chen Zheng, Da Tang, Ke Sun, Yukun Ma, Yingtong Bu, Xun Zhou, Liang Zhao
This paper introduces a multifaceted methodology for fine-tuning and evaluating large language models (LLMs) for specialized monetization tasks.
1 code implementation • 16 Sep 2022 • Zhuoran Liu, Leqi Zou, Xuan Zou, Caihua Wang, Biao Zhang, Da Tang, Bolin Zhu, Yijie Zhu, Peng Wu, Ke Wang, Youlong Cheng
In this paper, we present Monolith, a system tailored for online training.
1 code implementation • 13 Apr 2022 • Zangwei Zheng, Pengtai Xu, Xuan Zou, Da Tang, Zhen Li, Chenguang Xi, Peng Wu, Leqi Zou, Yijie Zhu, Ming Chen, Xiangzhuo Ding, Fuzhao Xue, Ziheng Qin, Youlong Cheng, Yang You
Our experiments show that previous scaling rules fail in the training of CTR prediction neural networks.
no code implementations • ACL 2022 • Haochen Liu, Joseph Thekinen, Sinem Mollaoglu, Da Tang, Ji Yang, Youlong Cheng, Hui Liu, Jiliang Tang
We conduct experiments on both synthetic and real-world datasets.
no code implementations • 24 Mar 2021 • Jingxi Xu, Da Tang, Tony Jebara
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.
no code implementations • 14 Jun 2019 • Da Tang, Dawen Liang, Nicholas Ruozzi, Tony Jebara
Variational Auto-Encoders (VAEs) have been widely applied for learning compact, low-dimensional latent representations of high-dimensional data.
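The latent representations mentioned above come from an encoder that outputs a Gaussian posterior and a differentiable sampling step. A minimal sketch of that step, assuming the standard VAE setup (this is illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Hypothetical encoder outputs for a 4-dimensional latent space.
mu = np.zeros(4)
log_var = np.zeros(4)

z = reparameterize(mu, log_var, rng)
print(z.shape)                             # (4,)
print(kl_to_standard_normal(mu, log_var))  # 0.0 when q(z|x) equals the prior
```

The KL term regularizes the posterior toward the prior; it is exactly zero here because the sketched encoder outputs the standard normal itself.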
2 code implementations • ICLR Workshop DeepGenStruct 2019 • Da Tang, Dawen Liang, Tony Jebara, Nicholas Ruozzi
Variational Auto-Encoders (VAEs) are capable of learning latent representations for high-dimensional data.
1 code implementation • 7 Mar 2019 • Da Tang, Rajesh Ranganath
Unlike traditional natural gradients for variational inference, this natural gradient accounts for the relationship between model parameters and variational parameters.
no code implementations • 17 Jul 2018 • Giannis Karamanolakis, Kevin Raji Cherian, Ananth Ravi Narayan, Jie Yuan, Da Tang, Tony Jebara
In recent years, Variational Autoencoders (VAEs) have been shown to be highly effective in both standard collaborative filtering applications and extensions such as incorporation of implicit feedback.
no code implementations • EMNLP 2018 • Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, Tony Jebara
Experiments with simulated and real users show that our approach performs competitively against a state-of-the-art method that requires human-defined subgoals.
no code implementations • 2 Nov 2016 • Da Tang, Tony Jebara
We consider the problem of consistently matching multiple sets of elements to each other, which is a common task in fields such as computer vision.
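One natural notion of consistency when matching multiple sets is cycle consistency: composing the matching from A to B with the matching from B to C should agree with the direct matching from A to C. A minimal sketch of that check, with matchings represented as permutation index lists (an assumed illustration, not the paper's algorithm):

```python
def compose(p_ab, p_bc):
    """Compose matchings given as index lists: (p_bc o p_ab)[i] = p_bc[p_ab[i]]."""
    return [p_bc[j] for j in p_ab]

def is_cycle_consistent(p_ab, p_bc, p_ac):
    """True iff matching A->B followed by B->C equals the direct A->C matching."""
    return compose(p_ab, p_bc) == p_ac

# Hypothetical matchings among three 3-element sets.
p_ab = [1, 2, 0]
p_bc = [2, 0, 1]
p_ac = compose(p_ab, p_bc)

print(is_cycle_consistent(p_ab, p_bc, p_ac))   # True
print(is_cycle_consistent(p_ab, p_bc, [1, 0, 2]))  # False: inconsistent direct matching
```

Enforcing this constraint jointly across all pairs of sets, rather than matching each pair independently, is what makes the multi-set problem harder than ordinary bipartite matching.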