Search Results for author: Sen Lin

Found 26 papers, 3 papers with code

Learning from A Single Graph is All You Need for Near-Shortest Path Routing in Wireless Networks

no code implementations18 Aug 2023 Yung-Fu Chen, Sen Lin, Anish Arora

We propose a learning algorithm for local routing policies that needs only a few data samples obtained from a single graph while generalizing to all random graphs in a standard model of wireless networks.

Non-Convex Bilevel Optimization with Time-Varying Objective Functions

no code implementations7 Aug 2023 Sen Lin, Daouda Sow, Kaiyi Ji, Yingbin Liang, Ness Shroff

In this work, we study online bilevel optimization (OBO) where the functions can be time-varying and the agent continuously updates the decisions with online streaming data.

Bilevel Optimization

Doubly Robust Instance-Reweighted Adversarial Training

no code implementations1 Aug 2023 Daouda Sow, Sen Lin, Zhangyang Wang, Yingbin Liang

Experiments on standard classification datasets demonstrate that our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance, and at the same time improves the robustness against attacks on the weakest data points.

Kernelized Offline Contextual Dueling Bandits

no code implementations21 Jul 2023 Viraj Mehta, Ojash Neopane, Vikramjeet Das, Sen Lin, Jeff Schneider, Willie Neiswanger

Preference-based feedback is important for many applications where direct evaluation of a reward function is not feasible.
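The paper's kernelized method is not reproduced here, but the standard way to model preference-based feedback is the Bradley-Terry model, in which the probability of preferring one option over another is a logistic function of their reward difference. A minimal illustrative sketch (the function name is ours, not from the paper):

```python
import math

def pref_prob(r_a, r_b):
    """Bradley-Terry model: probability that option a is preferred over
    option b, given scalar rewards r_a and r_b."""
    return 1.0 / (1.0 + math.exp(-(r_a - r_b)))

print(pref_prob(1.0, 1.0))   # equal rewards -> 0.5
print(pref_prob(2.0, 0.0))   # higher reward -> preferred more often
```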

Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback

no code implementations NeurIPS 2021 Hang Wang, Sen Lin, Junshan Zhang

It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the 'right' ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process.
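The dependence of estimation bias on ensemble size can be seen in a toy numpy experiment (this is an illustration of the generic max-of-ensemble-min target, not the paper's adaptive algorithm): with a single Q-estimate the max operator overestimates, while taking the min over a large ensemble underestimates.

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.zeros(10)      # true Q-values of 10 actions, all zero
noise_std = 1.0
n_trials = 5000

def target_bias(ensemble_size):
    """Average bias of max_a min_k Q_k(a), where each Q_k is an
    independent noisy estimate of true_q."""
    biases = []
    for _ in range(n_trials):
        # ensemble of noisy Q-estimates, shape (K, n_actions)
        q_hat = true_q + noise_std * rng.standard_normal(
            (ensemble_size, len(true_q)))
        biases.append(q_hat.min(axis=0).max())  # max over actions of ensemble-min
    return float(np.mean(biases))

# K=1 recovers plain Q-learning's overestimation; a large K underestimates
print(target_bias(1), target_bias(2), target_bias(10))
```

The bias decreases monotonically with ensemble size and changes sign, which is why a fixed ensemble size cannot be "right" when approximation errors vary over time.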


Warm-Start Actor-Critic: From Approximation Error to Sub-optimality Gap

no code implementations20 Jun 2023 Hang Wang, Sen Lin, Junshan Zhang

To this end, the primary objective of this work is to build a fundamental understanding of whether and when online learning can be significantly accelerated by a warm-start policy from offline RL.

Offline RL · Reinforcement Learning (RL)

Generalization Performance of Transfer Learning: Overparameterized and Underparameterized Regimes

no code implementations8 Jun 2023 Peizhong Ju, Sen Lin, Mark S. Squillante, Yingbin Liang, Ness B. Shroff

For example, when the total number of features in the source task's learning model is fixed, we show that it is more advantageous to allocate a greater number of redundant features to the task-specific part rather than the common part.

Transfer Learning

Efficient Self-supervised Continual Learning with Progressive Task-correlated Layer Freezing

no code implementations13 Mar 2023 Li Yang, Sen Lin, Fan Zhang, Junshan Zhang, Deliang Fan

Inspired by the success of Self-supervised learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of continual learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely self-supervised continual learning (SSCL).

Continual Learning · Self-Supervised Learning

Theory on Forgetting and Generalization of Continual Learning

no code implementations12 Feb 2023 Sen Lin, Peizhong Ju, Yingbin Liang, Ness Shroff

In particular, there is a lack of understanding on what factors are important and how they affect "catastrophic forgetting" and generalization performance.

Continual Learning

CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning

no code implementations9 Feb 2023 Sheng Yue, Guanbo Wang, Wei Shao, Zhaofeng Zhang, Sen Lin, Ju Ren, Junshan Zhang

This work aims to tackle a major challenge in offline Inverse Reinforcement Learning (IRL), namely the reward extrapolation error, where the learned reward function may fail to explain the task correctly and misguide the agent in unseen environments due to the intrinsic covariate shift.

Continuous Control · reinforcement-learning +1

Algorithm Design for Online Meta-Learning with Task Boundary Detection

no code implementations2 Feb 2023 Daouda Sow, Sen Lin, Yingbin Liang, Junshan Zhang

More specifically, we first propose two simple but effective detection mechanisms for task switches and distribution shift, based on empirical observations, which serve as key building blocks for more principled online model updates in our algorithm: the task-switch detection mechanism allows reuse of the best model available for the current task at hand, while the distribution-shift detection mechanism differentiates the meta-model update so as to preserve the knowledge for in-distribution tasks and quickly learn new knowledge for out-of-distribution tasks.

Boundary Detection · Meta-Learning

Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer

no code implementations1 Nov 2022 Sen Lin, Li Yang, Deliang Fan, Junshan Zhang

By learning a sequence of tasks continually, an agent in continual learning (CL) can improve the learning performance of both a new task and 'old' tasks by leveraging the forward knowledge transfer and the backward knowledge transfer, respectively.

Continual Learning · Transfer Learning

TRGP: Trust Region Gradient Projection for Continual Learning

1 code implementation ICLR 2022 Sen Lin, Li Yang, Deliang Fan, Junshan Zhang

To tackle this challenge, we propose Trust Region Gradient Projection (TRGP) for continual learning to facilitate the forward knowledge transfer based on an efficient characterization of task correlation.

Continual Learning · Transfer Learning
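The core gradient-projection idea that methods of this family build on can be sketched in a few lines of numpy (the trust-region and task-correlation machinery specific to TRGP is not reproduced; the basis construction below is a generic assumption): new-task gradients are projected orthogonal to a subspace spanned by old-task inputs, so the update does not interfere with old-task features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal basis (columns) for the subspace spanned by old-task
# activations, e.g. obtained from an SVD of stored representations.
old_activations = rng.standard_normal((64, 20))
basis, _, _ = np.linalg.svd(old_activations, full_matrices=False)
M = basis[:, :5]                 # keep the top-5 directions, shape (64, 5)

g = rng.standard_normal(64)      # gradient for the new task
g_in_span = M @ (M.T @ g)        # component inside the old-task subspace
g_proj = g - g_in_span           # orthogonal component used for the update

# the projected gradient is (numerically) orthogonal to every old direction
print(np.abs(M.T @ g_proj).max())
```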

Model-Based Offline Meta-Reinforcement Learning with Regularization

no code implementations ICLR 2022 Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang

In particular, we devise a new meta-Regularized model-based Actor-Critic (RAC) method for within-task policy optimization, as a key building block of MerPO, using conservative policy evaluation and regularized policy improvement; and the intrinsic tradeoff therein is achieved via striking the right balance between two regularizers, one based on the behavior policy and the other on the meta-policy.

Meta Reinforcement Learning · reinforcement-learning +2

Approximation of Images via Generalized Higher Order Singular Value Decomposition over Finite-dimensional Commutative Semisimple Algebra

1 code implementation1 Feb 2022 Liang Liao, Sen Lin, Lun Li, Xiuwei Zhang, Song Zhao, Yan Wang, Xinqiang Wang, Qi Gao, Jingyu Wang

Higher order singular value decomposition (HOSVD) extends the SVD and can approximate higher order data using sums of a few rank-one components.
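The order-2 special case of this idea is the ordinary truncated SVD: a matrix is approximated by a sum of its top rank-one components, and the error shrinks as more components are kept. A minimal numpy sketch (illustrative only; the paper's generalization over a commutative semisimple algebra is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))  # rank-8 matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def rank_r_approx(r):
    # sum of the top-r rank-one components s_i * u_i v_i^T
    return (U[:, :r] * s[:r]) @ Vt[:r]

errors = [np.linalg.norm(A - rank_r_approx(r)) for r in (1, 4, 8)]
print(errors)   # error shrinks as more rank-one terms are kept
```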

GROWN: GRow Only When Necessary for Continual Learning

no code implementations3 Oct 2021 Li Yang, Sen Lin, Junshan Zhang, Deliang Fan

To address this issue, continual learning has been developed to learn new tasks sequentially and perform knowledge transfer from the old tasks to the new ones without forgetting.

Continual Learning · Transfer Learning

Generalized Image Reconstruction over T-Algebra

1 code implementation17 Jan 2021 Liang Liao, Xuechun Zhang, Xinqiang Wang, Sen Lin, Xin Liu

We also show in our experiments that the performance of TPCA increases when the order of compounded pixels increases.

Data Compression · Dimensionality Reduction +1

Distributed Q-Learning with State Tracking for Multi-agent Networked Control

no code implementations22 Dec 2020 Hang Wang, Sen Lin, Hamid Jafarkhani, Junshan Zhang

Specifically, we assume that agents maintain local estimates of the global state based on their local information and communications with neighbors.
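The state-tracking ingredient here rests on consensus averaging: agents repeatedly mix their local estimates with those of their neighbors and converge to a common value. A toy numpy sketch of that generic mechanism (the mixing weights below are our illustrative choice, not the paper's):

```python
import numpy as np

# 4 agents on a ring; each starts with a different local estimate of the
# global state and repeatedly averages with its two neighbors.
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])  # doubly stochastic mixing matrix

x = np.array([1.0, 5.0, -2.0, 8.0])       # initial local estimates
for _ in range(100):
    x = W @ x                             # one round of neighbor averaging

print(x)   # all estimates converge to the initial average, 3.0
```

Because W is doubly stochastic, the average of the estimates is preserved at every round, so all agents converge to the network-wide mean.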


Inexact-ADMM Based Federated Meta-Learning for Fast and Continual Edge Learning

no code implementations16 Dec 2020 Sheng Yue, Ju Ren, Jiang Xin, Sen Lin, Junshan Zhang

To overcome these challenges, we explore continual edge learning capable of leveraging the knowledge transfer from previous tasks.

Meta-Learning · Transfer Learning

Accelerating Distributed Online Meta-Learning via Multi-Agent Collaboration under Limited Communication

no code implementations15 Dec 2020 Sen Lin, Mehmet Dedeoglu, Junshan Zhang

By characterizing the upper bound of the agent-task-averaged regret, we show that the performance of multi-agent online meta-learning depends heavily on how much an agent can benefit from the distributed network-level OCO for meta-model updates via limited communication, which however is not well understood.


MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning

no code implementations25 Nov 2020 Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang

In this work, we advocate a holistic approach to jointly train the backbone network and the channel gating, which enables dynamic selection of a subset of filters for more efficient local computation given the data input.

Meta-Learning · Quantization

System Identification via Meta-Learning in Linear Time-Varying Environments

no code implementations27 Oct 2020 Sen Lin, Hang Wang, Junshan Zhang

System identification is a fundamental problem in reinforcement learning, control theory and signal processing, and the non-asymptotic analysis of the corresponding sample complexity is challenging and elusive, even for linear time-varying (LTV) systems.


Underwater Image Enhancement Based on Structure-Texture Reconstruction

no code implementations11 Apr 2020 Sen Lin, Kaichen Chi

First, color equalization of the degraded image is achieved via an automatic color enhancement algorithm. Second, relative total variation is introduced to decompose the image into a structure layer and a texture layer. Then, the best background-light point is selected based on brightness, gradient discrimination, and hue judgment; the transmittance of the backscatter component is obtained via the red dark channel prior and substituted into the imaging model to remove the fogging phenomenon in the structure layer.

Image Enhancement
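A dark-channel-style transmission estimate with the red channel inverted can be sketched as follows. This is a toy illustration of the general "red dark channel" idea only: the exact formulation, patch size, and background-light handling in the paper may differ, and all function names and constants below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def patch_min(img, k=3):
    """Minimum over a k x k neighborhood (edges clamped)."""
    h, w = img.shape
    out = np.empty_like(img)
    r = k // 2
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1].min()
    return out

def red_dark_channel_transmission(img, A, omega=0.95):
    """Toy transmission estimate: the standard dark-channel formula with
    the red channel inverted (1 - R), following the red-channel-prior idea."""
    flipped = img.copy()
    flipped[..., 0] = 1.0 - flipped[..., 0]          # invert red
    A_adj = np.array([1.0 - A[0], A[1], A[2]])
    ratio = np.min(flipped / np.maximum(A_adj, 1e-6), axis=2)
    return np.clip(1.0 - omega * patch_min(ratio), 0.0, 1.0)

img = rng.uniform(0.0, 1.0, size=(16, 16, 3))        # synthetic RGB image
A = np.array([0.2, 0.8, 0.9])                        # assumed background light
t = red_dark_channel_transmission(img, A)
print(t.shape, float(t.min()), float(t.max()))
```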

Real-Time Edge Intelligence in the Making: A Collaborative Learning Framework via Federated Meta-Learning

no code implementations9 Jan 2020 Sen Lin, Guang Yang, Junshan Zhang

Further, we investigate the convergence of the proposed federated meta-learning algorithm under mild conditions on node similarity and the adaptation performance at the target edge.


Hyperspectral City V1.0 Dataset and Benchmark

no code implementations24 Jul 2019 Shaodi You, Erqi Huang, Shuaizhe Liang, Yongrong Zheng, Yunxiang Li, Fan Wang, Sen Lin, Qiu Shen, Xun Cao, Diming Zhang, Yuanjiang Li, Yu Li, Ying Fu, Boxin Shi, Feng Lu, Yinqiang Zheng, Robby T. Tan

This document introduces the background and the usage of the Hyperspectral City Dataset and the benchmark.
