Search Results for author: Juncheng Dong

Found 15 papers, 2 papers with code

Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning

no code implementations • 22 May 2025 • Korel Gundem, Juncheng Dong, Dennis Zhang, Vahid Tarokh, Zhengling Qi

While calibration techniques have been proposed to mitigate these biases, we show that, in logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary, without the ability to alter its orientation.

In-Context Learning
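
The geometric claim in the snippet above, that bias-style calibration can only translate the decision boundary while a full linear map can also rotate it, is easy to check directly in logit space. A minimal numpy sketch (the helper names and matrices are hypothetical illustrations, not the paper's method):

```python
import numpy as np

def boundary_normal_after(W):
    """Normal of the binary decision boundary of argmax(W @ z) in 2-D logit space."""
    return W[0] - W[1]

base_normal = np.array([1.0, -1.0])   # boundary of plain argmax(z): z[0] = z[1]

# Bias-only calibration z + b keeps this normal: z[0] + b[0] = z[1] + b[1]
# is a parallel translation of z[0] = z[1]. A full linear map can rotate it.
W_bias_equiv = np.eye(2)              # the linear part of any pure bias shift
W_rotate = np.array([[1.0, 0.3],
                     [0.0, 1.0]])

def parallel(a, b):
    # two 2-D vectors are parallel iff their 2x2 determinant vanishes
    return bool(np.isclose(a[0] * b[1] - a[1] * b[0], 0.0))

print(parallel(base_normal, boundary_normal_after(W_bias_equiv)))  # True: orientation kept
print(parallel(base_normal, boundary_normal_after(W_rotate)))      # False: orientation changed
```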

Teleportation With Null Space Gradient Projection for Optimization Acceleration

no code implementations • 17 Feb 2025 • Zihao Wu, Juncheng Dong, Ahmed Aloui, Vahid Tarokh

Optimization techniques have become increasingly critical due to the ever-growing model complexity and data scale.

S2TX: Cross-Attention Multi-Scale State-Space Transformer for Time Series Forecasting

no code implementations • 17 Feb 2025 • Zihao Wu, Juncheng Dong, Haoming Yang, Vahid Tarokh

Time series forecasting has recently achieved significant progress with multi-scale models that address the heterogeneity between long- and short-range patterns.

Mamba, Time Series +1

Score-Based Metropolis-Hastings Algorithms

no code implementations • 31 Dec 2024 • Ahmed Aloui, Ali Hasan, Juncheng Dong, Zihao Wu, Vahid Tarokh

In this paper, we introduce a new approach for integrating score-based models with the Metropolis-Hastings algorithm.
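
A classical example of a Metropolis-Hastings algorithm driven by a score function is the Metropolis-adjusted Langevin algorithm, where the score steers the proposal and the accept/reject test corrects it. The sketch below uses an analytic Gaussian score as a stand-in for a learned score model; it illustrates the general idea only, not the paper's specific integration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: standard 1-D Gaussian, log p(x) = -x^2/2 + const.
log_p = lambda x: -0.5 * x**2
score = lambda x: -x  # d/dx log p(x); a learned score model in practice

def mala_step(x, eps=0.5):
    """One Metropolis-adjusted Langevin step: the score drifts the proposal
    toward high density; the MH test makes the chain exactly correct."""
    prop = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal()
    # log density of the Gaussian proposal kernel q(b | a), up to a constant
    log_q = lambda b, a: -((b - a - eps * score(a)) ** 2) / (4 * eps)
    log_alpha = log_p(prop) + log_q(x, prop) - log_p(x) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

x, samples = 3.0, []
for _ in range(20000):
    x = mala_step(x)
    samples.append(x)
samples = np.array(samples[2000:])       # drop burn-in
print(samples.mean(), samples.std())     # should be near 0 and 1
```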

Offline Stochastic Optimization of Black-Box Objective Functions

no code implementations • 3 Dec 2024 • Juncheng Dong, Zihao Wu, Hamid Jafarkhani, Ali Pezeshki, Vahid Tarokh

Many challenges in science and engineering, such as drug discovery and communication network design, involve optimizing complex and expensive black-box functions across vast search spaces.

Drug Discovery, Stochastic Optimization
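
A common offline pattern, sketched here under strong simplifying assumptions (the hidden objective, dataset size, and surrogate form are all invented for illustration), is to fit a surrogate to the fixed dataset and optimize the surrogate in place of the black box that can no longer be queried:

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline data only: (design, score) pairs from a black box we cannot re-query.
f = lambda x: -(x - 2.0) ** 2                 # hidden objective, maximum at x = 2
X = rng.uniform(-1.0, 4.0, size=200)
y = f(X) + 0.1 * rng.normal(size=200)         # noisy offline evaluations

# Fit a quadratic surrogate y ≈ a*x^2 + b*x + c by least squares, then
# optimize the surrogate instead of the inaccessible objective.
A = np.stack([X**2, X, np.ones_like(X)], axis=1)
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
x_star = -b / (2 * a)                         # vertex of the fitted quadratic
print(x_star)                                 # should land near 2.0
```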

Robust Reinforcement Learning through Efficient Adversarial Herding

no code implementations • 12 Jun 2023 • Juncheng Dong, Hao-Lun Hsu, Qitong Gao, Vahid Tarokh, Miroslav Pajic

In this work, we extend the two-player game by introducing an adversarial herd, a group of adversaries, in order to address (i) the difficulty of the inner optimization problem and (ii) the potential over-pessimism caused by a candidate adversary set that may include unlikely scenarios.

MuJoCo, Reinforcement Learning +2
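
The trade-off named in the abstract, worst-case robustness versus over-pessimism, can be made concrete with a toy payoff matrix. This sketch contrasts the single worst-case adversary with a softened criterion over the strongest members of a herd; the numbers and the top-k averaging rule are hypothetical, not the paper's formulation:

```python
import numpy as np

# Toy payoff matrix (hypothetical numbers): entry [i, j] is the protagonist's
# reward when it plays action i against adversary j in the herd.
payoff = np.array([[5.0, -2.0, 5.0],
                   [1.0,  0.5, 1.0]])

def single_adversary_value(row):
    """Classical two-player robustness: value against the single worst adversary."""
    return row.min()

def herd_value(row, k=2):
    """Softened criterion: average over the k strongest herd members,
    which discounts a single unlikely worst-case adversary."""
    return np.sort(row)[:k].mean()

minimax_action = int(np.argmax([single_adversary_value(r) for r in payoff]))
herd_action = int(np.argmax([herd_value(r) for r in payoff]))
print(minimax_action, herd_action)  # the two criteria can prefer different actions
```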

Mode-Aware Continual Learning for Conditional Generative Adversarial Networks

no code implementations • 19 May 2023 • Cat P. Le, Juncheng Dong, Ahmed Aloui, Vahid Tarokh

To this end, we introduce a new continual learning approach for conditional generative adversarial networks by leveraging a mode-affinity score specifically designed for generative modeling.

Continual Learning

Domain Adaptation via Rebalanced Sub-domain Alignment

no code implementations • 3 Feb 2023 • Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Hunter Klein, Vahid Tarokh, David Carlson

To address this limitation, we propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.

Unsupervised Domain Adaptation
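
The reweighting idea can be sketched in a few lines: source sub-domains that align poorly with the target contribute less to the error estimate. All numbers and the normalization rule below are hypothetical; the paper's generalization bound is far more precise:

```python
import numpy as np

# Hypothetical per-sub-domain source classification errors and
# alignment scores (similarity of each source sub-domain to the target).
sub_domain_error = np.array([0.10, 0.30, 0.05])
alignment = np.array([0.9, 0.2, 0.8])

# Normalize alignments into weights, so misaligned sub-domains are discounted.
weights = alignment / alignment.sum()
reweighted_error = float(weights @ sub_domain_error)
uniform_error = float(sub_domain_error.mean())
print(reweighted_error, uniform_error)  # reweighting discounts the misaligned sub-domain
```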

Transfer Learning for Individual Treatment Effect Estimation

no code implementations • 1 Oct 2022 • Ahmed Aloui, Juncheng Dong, Cat P. Le, Vahid Tarokh

To this end, we theoretically assess the feasibility of transferring ITE knowledge and present a practical framework for efficient transfer.

Causal Inference, Counterfactual +1

Multi-Agent Adversarial Attacks for Multi-Channel Communications

no code implementations • 22 Jan 2022 • Juncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh

In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS successfully chooses the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender's strategy.

Channel Selection, Reinforcement Learning (RL)

Task Affinity with Maximum Bipartite Matching in Few-Shot Learning

1 code implementation • ICLR 2022 • Cat P. Le, Juncheng Dong, Mohammadreza Soltani, Vahid Tarokh

We propose an asymmetric affinity score for representing the complexity of utilizing the knowledge of one task for learning another one.

Few-Shot Learning
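
The bipartite-matching ingredient can be sketched as a minimum-cost one-to-one pairing of class representations between two tasks. Note that this sketch's Euclidean cost is symmetric, whereas the paper's affinity score is explicitly asymmetric; the centroids below are invented for illustration:

```python
import itertools
import numpy as np

# Hypothetical per-class feature centroids for two tasks (3 classes each).
task_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
task_b = np.array([[0.1, 0.0], [0.0, 1.1], [1.0, 0.1]])

def matching_distance(src, dst):
    """Cost of the best one-to-one pairing of src classes to dst classes,
    found here by brute force (scipy's linear_sum_assignment scales better)."""
    n = len(src)
    cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return min(sum(cost[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

print(matching_distance(task_a, task_b))  # best pairing: each class matched at cost 0.1
```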

Fisher Task Distance and Its Application in Neural Architecture Search

1 code implementation • 23 Mar 2021 • Cat P. Le, Mohammadreza Soltani, Juncheng Dong, Vahid Tarokh

Next, we construct an online neural architecture search framework using the Fisher task distance, in which we have access to previously learned tasks.

Neural Architecture Search, Transfer Learning
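
A rough sketch of a Fisher-based distance between tasks: estimate a diagonal empirical Fisher for each task from per-example gradients, then compare the two. The gradient data and the distance formula below are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(0)

def diagonal_fisher(grads):
    """Diagonal empirical Fisher: mean of squared per-example gradients."""
    return np.mean(np.asarray(grads) ** 2, axis=0)

# Hypothetical per-example loss gradients of two tasks w.r.t. shared weights;
# task B's larger gradient scale stands in for a genuinely different task.
grads_task_a = rng.normal(0.0, 1.0, size=(500, 4))
grads_task_b = rng.normal(0.0, 2.0, size=(500, 4))

F_a, F_b = diagonal_fisher(grads_task_a), diagonal_fisher(grads_task_b)

# One simple comparison: a Fréchet-style distance between the diagonal
# Fisher matrices (assumed here purely for illustration).
distance = float(np.sum((np.sqrt(F_a) - np.sqrt(F_b)) ** 2))
print(distance)  # clearly nonzero: the two tasks' Fishers differ
```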
