Search Results for author: Lingkai Kong

Found 20 papers, 14 papers with code

AdaPlanner: Adaptive Planning from Feedback with Language Models

1 code implementation NeurIPS 2023 Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai, Chao Zhang

We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback.

Decision Making · Hallucination
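The closed-loop idea described above can be sketched as a propose–execute–refine loop. The sketch below is purely illustrative, with hypothetical `propose_plan` and `execute` functions standing in for the LLM planner and the environment; it is not the AdaPlanner implementation.

```python
# Minimal sketch of closed-loop plan refinement from environment feedback.
# `propose_plan` and `execute` are toy stand-ins, not AdaPlanner's actual API.

def propose_plan(task, feedback=None):
    # Stand-in for an LLM call: refine the plan when feedback is available.
    if feedback is None:
        return ["step_a", "step_b"]
    return ["step_a", "step_b_fixed"]

def execute(plan):
    # Stand-in for the environment: fails until the plan is refined.
    if "step_b_fixed" in plan:
        return True, None
    return False, "step_b failed"

def closed_loop(task, max_rounds=3):
    feedback = None
    plan = []
    for _ in range(max_rounds):
        plan = propose_plan(task, feedback)
        success, feedback = execute(plan)
        if success:
            break
    return plan
```

The key point the loop illustrates is that the planner is re-invoked with the failure signal, rather than committing to its initial open-loop plan.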

Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data

1 code implementation EMNLP 2020 Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang

Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization.

Language Modelling · Out of Distribution (OOD) Detection +2

When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting

1 code implementation 16 Jun 2022 Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash

We close both these gaps and propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.

Time Series · Time Series Forecasting

CAMul: Calibrated and Accurate Multi-view Time-Series Forecasting

1 code implementation 15 Sep 2021 Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash

We use CAMul for multiple domains with varied sources and modalities and show that CAMul outperforms other state-of-the-art probabilistic forecasting models by over 25% in accuracy and calibration.

Decision Making · Probabilistic Time Series Forecasting +1

End-to-End Stochastic Optimization with Energy-Based Model

1 code implementation 25 Nov 2022 Lingkai Kong, Jiaming Cui, Yuchen Zhuang, Rui Feng, B. Aditya Prakash, Chao Zhang

Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters.

Scheduling · Stochastic Optimization

AcTune: Uncertainty-Based Active Self-Training for Active Fine-Tuning of Pretrained Language Models

1 code implementation NAACL 2022 Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang

We develop AcTune, a new framework that improves the label efficiency of active PLM fine-tuning by unleashing the power of unlabeled data via self-training.

Active Learning · text-classification +1

When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting

1 code implementation NeurIPS 2021 Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash

We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP, which directly models the probability density of the forecast value.

Time Series · Time Series Forecasting +1

Momentum Stiefel Optimizer, with Applications to Suitably-Orthogonal Attention, and Optimal Transport

1 code implementation 27 May 2022 Lingkai Kong, Yuqing Wang, Molei Tao

The problem of optimization on the Stiefel manifold, i.e., minimizing functions of (not necessarily square) matrices that satisfy orthogonality constraints, has been extensively studied.
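To make the constraint concrete: the Stiefel manifold is the set of matrices X with orthonormal columns (X^T X = I). A common way to keep iterates feasible is a QR-based retraction, sketched below. This only illustrates the constraint handling with a standard retraction; the paper's momentum-based optimizer is a different, more involved construction.

```python
import numpy as np

# Keep an iterate on the Stiefel manifold {X : X^T X = I} by retracting
# a perturbed matrix back onto the manifold via a QR decomposition.
# This is a generic textbook retraction, not the paper's momentum method.

def qr_retraction(X):
    Q, R = np.linalg.qr(X)
    # Fix column signs (positive diagonal of R) so the retraction is unique.
    Q = Q * np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(0)
X = qr_retraction(rng.standard_normal((5, 3)))    # random Stiefel point
step = 0.1 * rng.standard_normal((5, 3))          # arbitrary update direction
X_new = qr_retraction(X + step)                   # retract after the step
```

After the retraction, `X_new` again has exactly orthonormal columns, so the orthogonality constraint is preserved at every iteration.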

DyGen: Learning from Noisy Labels via Dynamics-Enhanced Generative Modeling

1 code implementation 30 May 2023 Yuchen Zhuang, Yue Yu, Lingkai Kong, Xiang Chen, Chao Zhang

Most existing methods for learning from noisy labels use static input features for denoising, but these methods are limited by the information they can provide on true label distributions and can result in biased or incorrect predictions.

Denoising

MUBen: Benchmarking the Uncertainty of Molecular Representation Models

2 code implementations 14 Jun 2023 Yinghao Li, Lingkai Kong, Yuanqi Du, Yue Yu, Yuchen Zhuang, Wenhao Mu, Chao Zhang

While some studies have included UQ to improve molecular pre-trained models, the process of selecting a suitable backbone and UQ method for reliable molecular uncertainty estimation remains underexplored.

Benchmarking · Drug Discovery +4

Autoregressive Diffusion Model for Graph Generation

1 code implementation 17 Jul 2023 Lingkai Kong, Jiaming Cui, Haotian Sun, Yuchen Zhuang, B. Aditya Prakash, Chao Zhang

However, existing diffusion-based graph generative models are mostly one-shot generative models that apply Gaussian diffusion in the dequantized adjacency matrix space.

Denoising · Graph Generation

Learning Deep Hidden Nonlinear Dynamics from Aggregate Data

no code implementations 22 Jul 2018 Yisen Wang, Bo Dai, Lingkai Kong, Sarah Monazam Erfani, James Bailey, Hongyuan Zha

Learning nonlinear dynamics from diffusion data is a challenging problem since the individuals observed may be different at different time points, generally following an aggregate behaviour.

Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function

no code implementations NeurIPS 2020 Lingkai Kong, Molei Tao

This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors.

DF2: Distribution-Free Decision-Focused Learning

no code implementations 11 Aug 2023 Lingkai Kong, Wenhao Mu, Jiaming Cui, Yuchen Zhuang, B. Aditya Prakash, Bo Dai, Chao Zhang

However, existing end-to-end DFL methods are hindered by three significant bottlenecks: model mismatch error, sample average approximation error, and gradient approximation error.

TPD: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance

no code implementations 24 Jan 2024 Haorui Wang, Rongzhi Zhang, Yinghao Li, Lingkai Kong, Yuchen Zhuang, Xiusi Chen, Chao Zhang

The teacher LLM generates problem-solving instructions and corrective principles based on the student LLM's errors.

Language Modelling

Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints

no code implementations 28 Feb 2024 Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Haorui Wang, Dongxia Wu, Aaron Ferber, Yi-An Ma, Carla P. Gomes, Chao Zhang

To constrain the optimization process to the data manifold, we reformulate the original optimization problem as a sampling problem from the product of the Boltzmann distribution defined by the objective function and the data distribution learned by the diffusion model.
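The product-of-distributions reformulation above can be illustrated in one dimension: the target density is p(x) ∝ exp(-f(x)/T) · p_data(x), which can be sampled with Langevin dynamics since its log-gradient is the sum of the two components' log-gradients. The toy sketch below uses f(x) = x²/2 and a fixed Gaussian as a stand-in for the learned data distribution; it is an illustration of the sampling idea only, not the paper's diffusion-based method.

```python
import numpy as np

# Toy 1-D sketch: sample from p(x) ∝ exp(-f(x)/T) * p_data(x) with
# unadjusted Langevin dynamics. Here f(x) = x^2/2 and p_data = N(2, 1)
# is a stand-in for a learned data distribution (all hypothetical choices).

def grad_log_target(x, T=0.5, data_mean=2.0):
    grad_obj = -x / T             # d/dx of log exp(-f(x)/T) with f(x) = x^2/2
    grad_data = -(x - data_mean)  # d/dx of log N(x; data_mean, 1)
    return grad_obj + grad_data

def langevin_sample(steps=5000, eps=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for _ in range(steps):
        x = x + eps * grad_log_target(x) + np.sqrt(2 * eps) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)
```

For these Gaussian choices the product is again Gaussian (precision 3, mean 2/3), so the sampler should concentrate between the objective's minimizer (0) and the data mean (2), i.e., low objective value while staying near the data distribution.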

Convergence of Kinetic Langevin Monte Carlo on Lie groups

no code implementations 18 Mar 2024 Lingkai Kong, Molei Tao

Explicit, momentum-based dynamics for optimizing functions defined on Lie groups was recently constructed, based on techniques such as variational optimization and left trivialization.
