1 code implementation • DeeLIO (ACL) 2022 • Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
In this work, we investigate whether there are more effective strategies than random sampling for judiciously selecting in-context examples that better leverage GPT-3's in-context learning capabilities. Inspired by the recent success of leveraging a retrieval module to augment neural networks, we propose to retrieve examples that are semantically similar to a test query sample to formulate its corresponding prompt.
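A minimal sketch of this kNN-style prompt construction: embed the candidate training examples and the test query with a sentence encoder, then use the k most similar examples as in-context demonstrations. The encoder choice (`all-MiniLM-L6-v2`), the example format, and the prompt template are illustrative stand-ins, not the paper's exact retriever.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def build_prompt(test_query, train_examples, k=4):
    """train_examples: list of {'input': str, 'output': str} dicts (assumed format)."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    ex_emb = encoder.encode([ex["input"] for ex in train_examples],
                            normalize_embeddings=True)
    q_emb = encoder.encode([test_query], normalize_embeddings=True)[0]
    top = np.argsort(-(ex_emb @ q_emb))[:k]  # k nearest neighbors by cosine similarity
    demos = [f"Input: {train_examples[i]['input']}\nOutput: {train_examples[i]['output']}"
             for i in top]
    return "\n\n".join(demos + [f"Input: {test_query}\nOutput:"])
```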
1 code implementation • 21 Nov 2023 • Chloe Qinyu Zhu, Muhang Tian, Lesia Semenova, Jiachang Liu, Jack Xu, Joseph Scarpa, Cynthia Rudin
Both of these have disadvantages: black-box models are unacceptable for use in hospitals, whereas manual creation of models (including hand-tuning of logistic regression parameters) relies on humans to perform high-dimensional constrained optimization, which leads to a loss in performance.
no code implementations • 17 Sep 2023 • Xiangrui Su, Qi Zhang, Chongyang Shi, Jiachang Liu, Liang Hu
Existing VQA methods integrate vision modeling and language understanding to explore the deep semantics of the question.
no code implementations • 23 May 2023 • Jiachang Liu, Qi Zhang, Chongyang Shi, Usman Naseem, Shoujin Wang, Ivor Tsang
Abstractive related work generation has attracted increasing attention because it can produce coherent related-work sections that help readers grasp the background of the current research.
1 code implementation • NeurIPS 2023 • Jiachang Liu, Sam Rosen, Chudi Zhong, Cynthia Rudin
We consider an important problem in scientific discovery, namely identifying sparse governing equations for nonlinear dynamical systems.
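For context, this problem is commonly posed as sparse regression over a library of candidate terms: given state trajectories X and estimated derivatives dX, find a few library terms whose combination reproduces the dynamics. The sketch below uses sequentially thresholded least squares (STLSQ), a classic baseline for this setup rather than the authors' method; the library terms and threshold are illustrative choices.

```python
import numpy as np

def library(X):
    """Candidate terms 1, x, y, x^2, xy, y^2 for a 2D system (illustrative)."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def stlsq(theta, dx, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: zero out small coefficients,
    then refit on the surviving terms, and repeat."""
    w = np.linalg.lstsq(theta, dx, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(w) < threshold
        w[small] = 0.0
        for j in range(dx.shape[1]):
            big = ~small[:, j]
            if big.any():
                w[big, j] = np.linalg.lstsq(theta[:, big], dx[:, j], rcond=None)[0]
    return w
```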
1 code implementation • NeurIPS 2023 • Chudi Zhong, Zhi Chen, Jiachang Liu, Margo Seltzer, Cynthia Rudin
In real applications, interaction between machine learning models and domain experts is critical; however, the classical machine learning paradigm that usually produces only a single model does not facilitate such interaction.
1 code implementation • 12 Oct 2022 • Jiachang Liu, Chudi Zhong, Boxuan Li, Margo Seltzer, Cynthia Rudin
Specifically, our approach produces a pool of almost-optimal sparse continuous solutions, each with a different support set, using a beam-search algorithm.
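A simplified sketch of the beam-search idea: treat each beam state as a support set, grow supports one feature at a time, refit a logistic regression on each candidate support, and keep the best B supports at each level. This is a stripped-down illustration, not the paper's algorithm; the unconstrained refit and training-loss scoring below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def beam_search_supports(X, y, k=5, beam_width=10):
    """Return up to beam_width near-optimal supports of size k (illustrative)."""
    beams = [frozenset()]
    for _ in range(k):
        scored = {}
        for S in beams:
            for j in range(X.shape[1]):
                S_new = S | {j}
                if j in S or S_new in scored:
                    continue
                cols = sorted(S_new)
                clf = LogisticRegression(max_iter=200).fit(X[:, cols], y)
                scored[S_new] = log_loss(y, clf.predict_proba(X[:, cols]))
        # keep the beam_width supports with the lowest training loss
        beams = sorted(scored, key=scored.get)[:beam_width]
    return [sorted(S) for S in beams]
```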
2 code implementations • 23 Feb 2022 • Jiachang Liu, Chudi Zhong, Margo Seltzer, Cynthia Rudin
For fast sparse logistic regression, our computational speed-up over other best-subset search techniques stems from linear and quadratic surrogate cuts for the logistic loss, which allow us to efficiently screen features for elimination, and from a priority queue that favors a more uniform exploration of features.
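To illustrate the flavor of a quadratic surrogate on the logistic loss (not the paper's exact cuts): because the per-sample logistic loss has curvature at most 1/4, a one-coordinate step along feature j is guaranteed to decrease the loss by at least 2 g_j^2 / ||x_j||^2, a quantity that is cheap to compute for every feature. The helper below is a hypothetical sketch built on that bound.

```python
import numpy as np

def guaranteed_decreases(X, y, beta):
    """Per-feature lower bounds on the achievable logistic-loss decrease.

    Assumes labels y in {-1, +1}. Uses the quadratic surrogate
    L(beta + d*e_j) <= L(beta) + d*g_j + d^2 * ||x_j||^2 / 8,
    whose minimum over d guarantees a decrease of 2*g_j^2 / ||x_j||^2.
    """
    margins = y * (X @ beta)
    sigma = 1.0 / (1.0 + np.exp(margins))   # sigmoid(-margin_i)
    grad = -X.T @ (y * sigma)               # coordinate-wise gradients g_j
    col_sq = (X ** 2).sum(axis=0)           # squared column norms ||x_j||^2
    return 2.0 * grad ** 2 / np.maximum(col_sq, 1e-12)
```

Features with the largest guaranteed decrease could then be pushed first onto a priority queue, while features whose bound is negligible are candidates for screening out.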
3 code implementations • 17 Jan 2021 • Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
Inspired by the recent success of leveraging a retrieval module to augment large-scale neural network models, we propose to retrieve examples that are semantically similar to a test sample to formulate its corresponding prompt.
2 code implementations • ICML 2020 • Pengyu Cheng, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan, Lawrence Carin
In this paper, we propose a novel Contrastive Log-ratio Upper Bound (CLUB) of mutual information.
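The estimator itself is short: CLUB bounds I(x; y) from above by E_p(x,y)[log q(y|x)] − E_p(x)E_p(y)[log q(y|x)], where q(y|x) is a variational approximation trained by maximum likelihood. Below is a minimal PyTorch sketch with a diagonal-Gaussian q; the network sizes and the shuffled-batch negative sampling are simplifications relative to the authors' released code.

```python
import torch
import torch.nn as nn

class CLUB(nn.Module):
    """Minimal sketch of the CLUB mutual-information upper bound with a
    diagonal-Gaussian variational distribution q(y|x)."""

    def __init__(self, x_dim, y_dim, hidden=64):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, y_dim))

    def log_q(self, x, y):
        # log-density of q(y|x), dropping the additive constant
        mu, logvar = self.mu(x), self.logvar(x)
        return (-0.5 * (y - mu) ** 2 / logvar.exp() - 0.5 * logvar).sum(dim=-1)

    def forward(self, x, y):
        # positive term uses matched pairs; negative term shuffles y in-batch
        positive = self.log_q(x, y).mean()
        negative = self.log_q(x, y[torch.randperm(y.size(0))]).mean()
        return positive - negative  # MI upper-bound estimate

    def learning_loss(self, x, y):
        # maximum-likelihood objective for fitting q(y|x)
        return -self.log_q(x, y).mean()
```

In training, one alternates between minimizing `learning_loss` to fit q and using the `forward` estimate as the quantity to report or minimize (e.g., when MI acts as a regularizer).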