1 code implementation • NAACL 2022 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
We develop AcTune, a new framework that improves the label efficiency of active PLM fine-tuning by unleashing the power of unlabeled data via self-training.
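For intuition, here is a minimal, self-contained sketch of the general recipe that sentence describes: query an oracle for the most uncertain unlabeled points (active learning) while also training on the model's own high-confidence predictions (self-training). The logistic-regression backbone, thresholds, and batch sizes are illustrative assumptions; this is not AcTune's actual algorithm.

# Minimal sketch of mixing active learning with self-training on unlabeled
# data. NOT AcTune itself; backbone, thresholds, and query sizes are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)               # hidden ground truth
labeled = set(rng.choice(len(X), size=20, replace=False))   # small seed set

for rnd in range(5):
    unlabeled = np.array([i for i in range(len(X)) if i not in labeled])
    clf = LogisticRegression().fit(X[sorted(labeled)], y[sorted(labeled)])
    probs = clf.predict_proba(X[unlabeled])
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)

    # Active query: send the most uncertain unlabeled points to the oracle.
    labeled |= set(unlabeled[np.argsort(conf)[:10]].tolist())

    # Self-training: also fit on the model's own labels for confident points.
    confident = conf > 0.95
    X_aug = np.vstack([X[sorted(labeled)], X[unlabeled[confident]]])
    y_aug = np.concatenate([y[sorted(labeled)], pred[confident]])
    clf = LogisticRegression().fit(X_aug, y_aug)
    print(f"round {rnd}: {len(labeled)} oracle labels, acc={clf.score(X, y):.3f}")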
no code implementations • 11 Sep 2024 • Paula Rodriguez-Diaz, Lingkai Kong, Kai Wang, David Alvarez-Melis, Milind Tambe
Comparing datasets is a fundamental task in machine learning, essential for various learning paradigms, from evaluating train and test datasets for model generalization to using dataset similarity to detect data drift.
1 code implementation • 6 Sep 2024 • Chen Chen, Lingkai Kong, Gongjie Li, Molei Tao
Transit timing variation (TTV) provides rich information about the mass and orbital properties of exoplanets, which are often obtained by solving an inverse problem via Markov Chain Monte Carlo (MCMC).
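As a toy illustration of the inverse problem mentioned above, the sketch below fits transit-time parameters with a bare-bones Metropolis-Hastings sampler. The linear-ephemeris-plus-sinusoid forward model and Gaussian noise are stand-in assumptions; the paper's forward model (N-body dynamics) is far more involved.

# Toy Metropolis-Hastings fit of synthetic transit times.
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(40)                                   # transit epochs
true = dict(t0=0.0, period=3.5, amp=0.01, freq=0.11)
t_obs = (true["t0"] + true["period"] * n
         + true["amp"] * np.sin(2 * np.pi * true["freq"] * n)
         + rng.normal(0, 0.002, size=n.size))

def log_post(theta, sigma=0.002):
    # Gaussian likelihood with flat priors over (t0, period, amp, freq).
    t0, period, amp, freq = theta
    model = t0 + period * n + amp * np.sin(2 * np.pi * freq * n)
    return -0.5 * np.sum((t_obs - model) ** 2) / sigma ** 2

theta = np.array([0.1, 3.4, 0.0, 0.1])
step = np.array([1e-3, 1e-4, 1e-3, 1e-3])
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print("posterior mean:", np.mean(samples[5000:], axis=0))   # discard burn-in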
no code implementations • 22 Aug 2024 • Shresth Verma, Niclas Boehmer, Lingkai Kong, Milind Tambe
In the presence of multiple agents, altering the reward function based on human preferences can impact subpopulations very differently, leading to complex tradeoffs and a multi-objective resource allocation problem.
no code implementations • 2 Jul 2024 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodriguez, Chao Zhang, B Aditya Prakash
Recent works model the relations between time series as graphs and have shown that propagating information over the relation graph can improve time series forecasting.
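A minimal sketch of that idea: mix each series' recent window with its neighbors' via a normalized adjacency matrix, then forecast from the combined features. The random graph and shared linear head below are illustrative choices, not the paper's architecture.

# "Propagate over a relation graph, then forecast" in miniature.
import numpy as np

rng = np.random.default_rng(0)
N, T, L = 8, 200, 12                              # series, length, lookback
A = (rng.uniform(size=(N, N)) < 0.3).astype(float)
np.fill_diagonal(A, 1.0)
A_hat = A / A.sum(axis=1, keepdims=True)          # row-normalized adjacency
series = np.cumsum(rng.normal(size=(N, T)), axis=1)

# Build (graph-smoothed lookback window -> next value) regression pairs.
Xs, ys = [], []
for t in range(L, T - 1):
    window = series[:, t - L:t]                   # (N, L) raw lookback
    smoothed = A_hat @ window                     # neighbor aggregation
    Xs.append(np.concatenate([window, smoothed], axis=1))
    ys.append(series[:, t])
Xs, ys = np.concatenate(Xs), np.concatenate(ys)   # pooled across nodes

w, *_ = np.linalg.lstsq(Xs, ys, rcond=None)       # shared linear forecaster
print("train MSE:", np.mean((Xs @ w - ys) ** 2))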
1 code implementation • 23 Jun 2024 • Haorui Wang, Marta Skreta, Cher-Tian Ser, Wenhao Gao, Lingkai Kong, Felix Strieth-Kalthoff, Chenru Duan, Yuchen Zhuang, Yue Yu, Yanqiao Zhu, Yuanqi Du, Alán Aspuru-Guzik, Kirill Neklyudov, Chao Zhang
Molecular discovery, when formulated as an optimization problem, presents significant computational challenges because optimization objectives can be non-differentiable.
1 code implementation • 13 Jun 2024 • Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong, Zhiyuan Zhao, Chao Zhang, B. Aditya Prakash
In this paper, we aim to alleviate the inherent out-of-distribution (OOD) problem in time series forecasting (TSF) via invariant learning.
2 code implementations • 12 Jun 2024 • Haoxin Liu, Shangqing Xu, Zhiyuan Zhao, Lingkai Kong, Harshavardhan Kamarthi, Aditya B. Sasanur, Megha Sharma, Jiaming Cui, Qingsong Wen, Chao Zhang, B. Aditya Prakash
To overcome this obstacle, we introduce Time-MMD, the first multi-domain, multimodal time series dataset covering 9 primary data domains.
1 code implementation • 10 Jun 2024 • Lingkai Kong, Haorui Wang, Wenhao Mu, Yuanqi Du, Yuchen Zhuang, Yifei Zhou, Yue Song, Rongzhi Zhang, Kai Wang, Chao Zhang
To achieve alignment for specific objectives, we introduce external control signals into the state space of this language dynamical system.
no code implementations • 30 May 2024 • Lingkai Kong, Molei Tao
Explicit, momentum-based dynamics that optimize functions defined on Lie groups can be constructed via variational optimization and momentum trivialization.
no code implementations • 25 May 2024 • Yuchen Zhu, Tianrong Chen, Lingkai Kong, Evangelos A. Theodorou, Molei Tao
However, our trivialization technique creates a new momentum variable that stays in a simple fixed vector space.
no code implementations • 18 Mar 2024 • Lingkai Kong, Molei Tao
Explicit, momentum-based dynamics for optimizing functions defined on Lie groups were recently constructed, based on techniques such as variational optimization and left trivialization.
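A heuristic sketch of the flavor of method being described, assuming SO(n) with a bi-invariant metric: the momentum lives in the Lie algebra (a fixed vector space of skew-symmetric matrices) and the group element is updated through the matrix exponential. This is a simplified illustration, not the paper's variational integrator; the step size and damping are arbitrary.

# Momentum descent on SO(n) with a left-trivialized (Lie-algebra) momentum.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
f = lambda X: -np.trace(A.T @ X)           # maximize <A, X> over SO(n)

def skew(M):
    return 0.5 * (M - M.T)

X = np.eye(n)
xi = np.zeros((n, n))                      # momentum in the Lie algebra so(n)
lr, beta = 0.1, 0.9
for _ in range(200):
    G = -A                                 # Euclidean gradient of f at X
    g_triv = skew(X.T @ G)                 # trivialized (Lie-algebra) gradient
    xi = beta * xi - lr * g_triv           # momentum update in a flat space
    X = X @ expm(xi)                       # move along the group
print("f(X) =", f(X),
      " orthogonality error =", np.linalg.norm(X.T @ X - np.eye(n)))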
no code implementations • 28 Feb 2024 • Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Dongxia Wu, Haorui Wang, Aaron Ferber, Yi-An Ma, Carla P. Gomes, Chao Zhang
To constrain the optimization process to the data manifold, we reformulate the original optimization problem as a sampling problem from the product of the Boltzmann distribution defined by the objective function and the data distribution learned by the diffusion model.
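In symbols, the reformulation described above targets the following product density (writing $f$ for the objective, $p_{\text{data}}$ for the distribution learned by the diffusion model, and $\beta$ for an inverse-temperature weight; this notation is ours, not necessarily the paper's):

p^*(x) \;\propto\; e^{-\beta f(x)}\, p_{\text{data}}(x),
\qquad
\nabla_x \log p^*(x) \;=\; -\beta \nabla_x f(x) \;+\; \nabla_x \log p_{\text{data}}(x),

so the diffusion model's learned score can be combined additively with the scaled objective gradient during sampling, which keeps iterates close to the data manifold.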
no code implementations • 24 Jan 2024 • Haorui Wang, Rongzhi Zhang, Yinghao Li, Lingkai Kong, Yuchen Zhuang, Xiusi Chen, Chao Zhang
The teacher LLM generates problem-solving instructions and corrective principles based on the student LLM's errors.
1 code implementation • 17 Oct 2023 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We close both these gaps and propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
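For reference, the generic hierarchical-forecasting setup this refers to, in our own notation: forecasts should be coherent, i.e. each parent series equals the sum of its children, and a fully probabilistic model places a joint distribution over every node of the hierarchy rather than independent marginals:

y_{\text{parent}}(t) \;=\; \sum_{c \,\in\, \text{children}} y_c(t)
\quad\text{(coherence)},
\qquad
p_\theta\big(y_1(t{+}1), \ldots, y_K(t{+}1) \mid \text{history}\big)
\;\;\text{jointly over all $K$ nodes.}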
no code implementations • 11 Aug 2023 • Lingkai Kong, Wenhao Mu, Jiaming Cui, Yuchen Zhuang, B. Aditya Prakash, Bo Dai, Chao Zhang
However, existing end-to-end DFL methods are hindered by three significant bottlenecks: model mismatch error, sample average approximation error, and gradient approximation error.
1 code implementation • 17 Jul 2023 • Lingkai Kong, Jiaming Cui, Haotian Sun, Yuchen Zhuang, B. Aditya Prakash, Chao Zhang
However, existing diffusion-based graph generative models are mostly one-shot generative models that apply Gaussian diffusion in the dequantized adjacency matrix space.
2 code implementations • 14 Jun 2023 • Yinghao Li, Lingkai Kong, Yuanqi Du, Yue Yu, Yuchen Zhuang, Wenhao Mu, Chao Zhang
While some studies have included UQ to improve molecular pre-trained models, the process of selecting suitable backbone and UQ methods for reliable molecular uncertainty estimation remains underexplored.
1 code implementation • 30 May 2023 • Yuchen Zhuang, Yue Yu, Lingkai Kong, Xiang Chen, Chao Zhang
Most existing methods for learning from noisy labels use static input features for denoising, but these methods are limited by the information they can provide on true label distributions and can result in biased or incorrect predictions.
1 code implementation • NeurIPS 2023 • Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, Chao Zhang
We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback.
1 code implementation • 25 Nov 2022 • Lingkai Kong, Jiaming Cui, Yuchen Zhuang, Rui Feng, B. Aditya Prakash, Chao Zhang
Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters.
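Schematically, and in our own notation (not necessarily the paper's), DFL replaces the usual two-stage "predict, then optimize" training objective with the downstream decision cost:

\text{two-stage:}\;\; \min_\theta\, \mathbb{E}_{(x,c)}\!\left[\ell\big(m_\theta(x),\, c\big)\right],
\qquad
\text{DFL:}\;\; \min_\theta\, \mathbb{E}_{(x,c)}\!\left[g\big(z^*(m_\theta(x)),\, c\big)\right],
\quad z^*(\hat c) = \arg\min_{z \in \mathcal{Z}} g(z, \hat c),

where $m_\theta$ predicts the unknown parameter $c$ from features $x$, $z^*(\cdot)$ solves the downstream optimization, and $g$ is the decision cost.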
1 code implementation • 16 Jun 2022 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We close both these gaps and propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
1 code implementation • 27 May 2022 • Lingkai Kong, Yuqing Wang, Molei Tao
The problem of optimization on the Stiefel manifold, i.e., minimizing functions of (not necessarily square) matrices that satisfy orthogonality constraints, has been extensively studied.
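For context, a standard retraction-based baseline on the Stiefel manifold St(n, p) = {X : XᵀX = I}: project the Euclidean gradient onto the tangent space, take a step, and retract with a QR factorization. This is the conventional recipe, not the method proposed in the paper above.

# Projected-gradient descent on the Stiefel manifold with a QR retraction.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5
M = rng.normal(size=(n, n)); M = M + M.T          # symmetric matrix
f = lambda X: -np.trace(X.T @ M @ X)              # maximize a Rayleigh quotient

def proj_tangent(X, G):
    # Project a Euclidean gradient onto the tangent space at X.
    return G - X @ ((X.T @ G + G.T @ X) / 2)

def retract_qr(Y):
    # QR-based retraction back onto the manifold (sign-fixed Q factor).
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

X, _ = np.linalg.qr(rng.normal(size=(n, p)))
lr = 0.01
for _ in range(500):
    G = -2 * M @ X                                 # Euclidean gradient of f
    X = retract_qr(X - lr * proj_tangent(X, G))
print("f(X) =", f(X),
      " feasibility:", np.linalg.norm(X.T @ X - np.eye(p)))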
1 code implementation • 16 Dec 2021 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
We propose AcTune, a new framework that leverages unlabeled data to improve the label efficiency of active PLM fine-tuning.
1 code implementation • 15 Sep 2021 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We use CAMul for multiple domains with varied sources and modalities and show that CAMul outperforms other state-of-the-art probabilistic forecasting models by over 25% in accuracy and calibration.
1 code implementation • NeurIPS 2021 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP, which directly models the probability density of the forecast value.
1 code implementation • EMNLP 2020 • Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang
Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization.
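Calibration in this setting is commonly quantified with the expected calibration error (ECE), sketched below with a standard 10-bin scheme; the binning convention and the synthetic overconfident predictions are assumptions, not the paper's exact protocol.

# Generic expected-calibration-error (ECE) computation.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: (N, C) predicted class probabilities; labels: (N,) true classes."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |average confidence - empirical accuracy|, weighted by bin size
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

rng = np.random.default_rng(0)
logits = rng.normal(scale=3.0, size=(1000, 4))            # overconfident model
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 4, size=1000)                     # synthetic ground truth
print("ECE:", expected_calibration_error(probs, labels))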
2 code implementations • ICML 2020 • Lingkai Kong, Jimeng Sun, Chao Zhang
We propose a new method for quantifying uncertainties of DNNs from a dynamical system perspective.
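One concrete way to read the dynamical-system view, as a hedged sketch: treat the forward pass as a stochastic differential equation integrated with Euler-Maruyama, and use the spread across repeated stochastic passes as an uncertainty estimate. The toy drift and diffusion functions below stand in for learned networks and need not match the paper's parameterization.

# Euler-Maruyama forward pass; output spread serves as an uncertainty proxy.
import numpy as np

rng = np.random.default_rng(0)
drift = lambda h, t: np.tanh(h)                    # stand-in for a learned drift net
diffusion = lambda h, t: 0.3 * np.ones_like(h)     # stand-in learned diffusion net

def sde_forward(x, n_steps=20, dt=0.05):
    h = x.copy()
    for k in range(n_steps):
        t = k * dt
        noise = rng.normal(size=h.shape) * np.sqrt(dt)
        h = h + drift(h, t) * dt + diffusion(h, t) * noise   # Euler-Maruyama step
    return h

x = np.array([0.5, -1.0, 2.0])
outs = np.stack([sde_forward(x) for _ in range(100)])  # repeated stochastic passes
print("predictive mean:", outs.mean(axis=0))
print("predictive std (uncertainty):", outs.std(axis=0))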
no code implementations • NeurIPS 2020 • Lingkai Kong, Molei Tao
This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors.
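A toy illustration (not the paper's analysis): gradient descent on an objective with a small fast-oscillating component, run with a step size that is large relative to the fast scale, produces bounded but irregular iterates that look like noise even though the algorithm is fully deterministic.

# Deterministic GD with a large step size on a multiscale objective.
import numpy as np

f = lambda x: 0.5 * x**2 + 0.05 * np.cos(30 * x)   # slow bowl + fast wiggles
grad = lambda x: x - 1.5 * np.sin(30 * x)

x, lr = 1.0, 0.3                                    # lr large vs the fast scale
trace = []
for _ in range(5000):
    x = x - lr * grad(x)
    trace.append(x)
tail = np.array(trace[1000:])                       # discard transient
print("tail mean %.3f, std %.3f (nonzero spread = no convergence)"
      % (tail.mean(), tail.std()))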
no code implementations • 22 Jul 2018 • Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha
Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.