no code implementations • 14 Dec 2022 • Lin Chen, Xun Yu Zhou
We study a continuous-time Markowitz mean-variance portfolio selection model in which a naive agent, unaware of the underlying time-inconsistency, continuously reoptimizes over time.
no code implementations • 13 Nov 2022 • Yuhang Yao, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen, Carlee Joe-Wong, Tianqiang Liu
Much of the value that IoT (Internet-of-Things) devices bring to "smart" homes lies in their ability to automatically trigger other devices' actions: for example, a smart camera triggering a smart lock to unlock a door.
no code implementations • 29 Sep 2022 • Mohammadhossein Bateni, Lin Chen, Matthew Fahrbach, Gang Fu, Vahab Mirrokni, Taisuke Yasuda
Feature selection is the problem of selecting a subset of features for a machine learning model that maximizes model quality subject to a resource budget constraint.
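The excerpt does not describe the paper's algorithm; purely as a point of reference, a cost-aware greedy baseline for budgeted feature selection might look like the following sketch, where `quality` is a placeholder black-box estimator of model quality and `costs` assigns each feature its resource cost (both are assumptions of the illustration, not the paper's method).

```python
# Hedged sketch: cost-aware greedy baseline for budgeted feature selection.
# `quality(selected)` is any black-box model-quality estimator (e.g., a
# validation score); it is a placeholder, not the method proposed in the paper.

def greedy_feature_selection(features, costs, budget, quality):
    """Pick features whose marginal quality gain per unit cost is largest,
    until the resource budget is exhausted."""
    selected, spent = [], 0.0
    remaining = set(features)
    while remaining:
        base = quality(selected)
        best, best_ratio = None, 0.0
        for f in remaining:
            if spent + costs[f] > budget:
                continue
            gain = quality(selected + [f]) - base
            ratio = gain / costs[f]
            if ratio > best_ratio:
                best, best_ratio = f, ratio
        if best is None:          # nothing affordable improves quality
            break
        selected.append(best)
        spent += costs[best]
        remaining.remove(best)
    return selected
```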
no code implementations • 20 Sep 2022 • Jiaqi Xue, Lei Xu, Lin Chen, Weidong Shi, Kaidi Xu, Qian Lou
(ii) How to design a robust PNet given the encrypted input without decryption?
1 code implementation • 16 Sep 2022 • Hao Cheng, Mengmeng Liu, Lin Chen, Hellward Broszio, Monika Sester, Michael Ying Yang
Our model achieves performance on par with the state-of-the-art models at a much higher prediction speed, as tested on multiple open datasets.
1 code implementation • 16 Sep 2022 • Lin Chen, Zhixiang Wei, Xin Jin, Huaian Chen, Miao Zheng, Kai Chen, Yi Jin
In this work, we resort to data mixing to establish a deliberated domain bridging (DDB) for DASS, through which the joint distributions of the source and target domains are aligned and interact with each other in the intermediate space.
Ranked #1 on Domain Adaptation on GTAV to Cityscapes+Mapillary
no code implementations • 6 Sep 2022 • Li Ge, Xue Jiang, Lin Chen, Qibo Qin, Xingzhao Liu
With the scale of antenna arrays and the bandwidth increasing, many existing narrowband channel estimation methods that ignore the effect of beam squint may suffer severe performance degradation in wideband millimeter-wave (mmWave) communication systems.
no code implementations • 25 Jul 2022 • Guangjing Huang, Xu Chen, Tao Ouyang, Qian Ma, Lin Chen, Junshan Zhang
To coordinate the selfish and heterogeneous participants, we propose a novel analytic framework for incentivizing effective and efficient collaborations for participant-centric FL.
no code implementations • 24 May 2022 • Hung-Min Hsu, Xinyu Yuan, Baohua Zhu, Zhongwei Cheng, Lin Chen
Package theft detection has been a challenging task, mainly due to the lack of training data and the wide variety of package theft scenarios encountered in reality.
1 code implementation • 17 Apr 2022 • Yuhang He, Lin Chen, Junkun Xie, Long Chen
This motivates us to adopt a "task transfer" paradigm so that 3D semantic segmentation benefits from aggregating 2D semantic cues, even though the 2D image observations contain pose noise.
1 code implementation • CVPR 2022 • Lin Chen, Huaian Chen, Zhixiang Wei, Xin Jin, Xiao Tan, Yi Jin, Enhong Chen
Such an NWD can be coupled with the classifier to serve as a discriminator satisfying the K-Lipschitz constraint, without requiring additional weight clipping or a gradient penalty strategy.
Ranked #1 on Domain Adaptation on ImageCLEF-DA
no code implementations • EMNLP 2020 • Rongsheng Zhang, Xiaoxi Mao, Le Li, Lin Jiang, Lin Chen, Zhiwei Hu, Yadong Xi, Changjie Fan, Minlie Huang
In the lyrics generation process, Youling supports the traditional one-pass full-text generation mode as well as an interactive generation mode, which allows users to select satisfactory sentences from generated candidates conditioned on the preceding context.
no code implementations • 26 Dec 2021 • Longfeng Zhao, Chao Wang, Gang-Jin Wang, H. Eugene Stanley, Lin Chen
Community detection methods can be used to explore the structure of complex systems.
no code implementations • 22 Nov 2021 • Yajie Yang, Longfeng Zhao, Lin Chen, Chao Wang, Jihui Han
The two risks are measured by the idiosyncratic variance and the network clustering coefficient derived from the asset correlation networks, respectively.
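For reference, the local clustering coefficient used in such correlation-network analyses is the standard graph-theoretic quantity below; the idiosyncratic-variance expression shown alongside is one common market-model specification and is only an assumption, since the excerpt does not spell out the paper's exact estimator.

```latex
% Local clustering coefficient of node i (k_i neighbors, e_i edges among them)
C_i = \frac{2 e_i}{k_i (k_i - 1)}, \qquad
% One common idiosyncratic-variance specification: residual variance of a market-model regression
\sigma_{\epsilon,i}^2 = \operatorname{Var}\!\left(r_i - \alpha_i - \beta_i r_m\right).
```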
1 code implementation • 19 Oct 2021 • Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen
Despite this advancement in different techniques for distilling the knowledge, the aggregation of different paths for distillation has not been studied comprehensively.
Ranked #17 on Knowledge Distillation on ImageNet
no code implementations • 29 Sep 2021 • Lin Chen, Song Mei
Moreover, we theoretically show that the ridge estimator with optimal regularization can result in a monotone generalization risk curve and thereby eliminate multiple descent under some assumptions.
no code implementations • 5 Jul 2021 • Lin Chen, Hossein Esfandiari, Gang Fu, Vahab S. Mirrokni, Qian Yu
First, we show that it is not possible to provide an $n^{1/\log\log n}$-approximation algorithm for this problem unless the exponential time hypothesis fails.
no code implementations • 26 Mar 2021 • Zhongjie Yu, Gaoang Wang, Lin Chen, Sebastian Raschka, Jiebo Luo
We employ a transfer-learning framework to effectively train the video object detector on a large number of base-class objects and a few video clips of novel-class objects.
no code implementations • 17 Mar 2021 • Lin Chen, Bruno Scherrer, Peter L. Bartlett
In this regime, for any $q\in[\gamma^{2}, 1]$, we can construct a hard instance such that the smallest eigenvalue of its feature covariance matrix is $q/d$ and it requires $\Omega\left(\frac{d}{\gamma^{2}\left(q-\gamma^{2}\right)\varepsilon^{2}}\exp\left(\Theta\left(d\gamma^{2}\right)\right)\right)$ samples to approximate the value function up to an additive error $\varepsilon$.
1 code implementation • 7 Mar 2021 • Linghan Meng, Yanhui Li, Lin Chen, Zhi Wang, Di Wu, Yuming Zhou, Baowen Xu
To tackle this problem, we propose Sample Discrimination based Selection (SDS) to select efficient samples that could discriminate multiple models, i.e., the prediction behaviors (right/wrong) of these samples would be helpful to indicate the trend of model performance.
no code implementations • 24 Feb 2021 • Lin Chen, Xirong Liu, Ling-Yan Hung
In fact, we demonstrate that a unique (up to normalization) emergent graph Einstein equation is satisfied by the geometric data encoded in the tensor network, and that the graph Einstein tensor automatically recovers the known proposal in the mathematics literature, at least perturbatively, order by order in the deformation away from the pure Bruhat-Tits tree geometry dual to pure CFTs.
High Energy Physics - Theory • Strongly Correlated Electrons • General Relativity and Quantum Cosmology • Mathematical Physics • Quantum Physics
no code implementations • 24 Feb 2021 • Lin Chen, Xirong Liu, Ling-Yan Hung
We take the tensor network describing explicit p-adic CFT partition functions proposed in [1], and consider boundary conditions of the network that describe a deformed Bruhat-Tits (BT) tree geometry.
Tensor Networks • High Energy Physics - Theory • Strongly Correlated Electrons • General Relativity and Quantum Cosmology • Mathematical Physics • Quantum Physics
no code implementations • 24 Feb 2021 • Lin Chen, Xirong Liu, Ling-Yan Hung
In this sequel to [1], we take up a second approach in bending the Bruhat-Tits tree.
High Energy Physics - Theory • Strongly Correlated Electrons • General Relativity and Quantum Cosmology • Mathematical Physics • Quantum Physics
no code implementations • 1 Feb 2021 • Mengyao Hu, Lin Chen
Genuineness and distillability of entanglement play a key role in quantum information tasks, and they are easily disturbed by noise.
Quantum Physics
no code implementations • 14 Jan 2021 • Mudabbir Kaleem, Keshav Kasichainula, Rabimba Karanjai, Lei Xu, Zhimin Gao, Lin Chen, Weidong Shi
This paper presents EDSC, a novel smart contract platform design based on the event-driven execution model as opposed to the traditionally employed transaction-driven execution model.
Distributed, Parallel, and Cluster Computing
no code implementations • 14 Jan 2021 • Gaoang Wang, Lin Chen, Tianqiang Liu, Mingwei He, Jiebo Luo
To solve the first issue of identity overlapping, we propose a dataset-aware loss for multi-dataset training by reducing the penalty when the same person appears in multiple datasets.
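A minimal sketch of the dataset-aware idea, down-weighting the softmax penalty on negative classes that come from other datasets (where the labeled identity could silently reappear); the specific masking rule and `down_weight` value are illustrative assumptions, not necessarily the paper's exact loss.

```python
import numpy as np

def dataset_aware_softmax_loss(logits, label, class_dataset, sample_dataset, down_weight=0.1):
    """Cross-entropy in which negative classes belonging to *other* datasets are
    down-weighted, since the labeled identity may silently reappear there.
    `class_dataset[c]` gives the dataset id of class c. Illustrative sketch only."""
    logits = logits - logits.max()                 # numerical stability
    weights = np.where(np.asarray(class_dataset) == sample_dataset, 1.0, down_weight)
    weights[label] = 1.0                           # the true class keeps full weight
    exp = np.exp(logits) * weights                 # soften competition from other datasets
    return -np.log(exp[label] / exp.sum())
```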
no code implementations • 7 Dec 2020 • Lin Chen, Anastasios Giovanidis, Wei Wang, Lin Shan
We formulate and analyze a generic sequential resource access problem arising in a variety of engineering fields, in which a user disposes of a number of heterogeneous computing, communication, or storage resources, each characterized by the probability of successfully executing the user's task and by its access delay and cost, and seeks an optimal access strategy that maximizes her utility within a given time horizon, defined as the expected reward minus the access cost (a small dynamic-programming illustration of this setup is sketched below).
Networking and Internet Architecture
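The following brute-force dynamic program only illustrates the problem setup described above (resources with success probabilities, delays, and costs, a finite horizon, and utility equal to expected reward minus access cost); it is not the optimal-strategy characterization derived in the paper.

```python
from functools import lru_cache

def optimal_utility(p, d, c, horizon, reward):
    """Each resource i succeeds with probability p[i], takes d[i] time slots and
    costs c[i]; the user earns `reward` on the first success and stops.
    Brute-force DP over (untried-resource set, remaining time). Sketch only."""
    n = len(p)

    @lru_cache(maxsize=None)
    def value(remaining, t):
        best = 0.0                                  # option: stop accessing resources
        for i in range(n):
            if not (remaining >> i) & 1 or d[i] > t:
                continue
            cont = value(remaining & ~(1 << i), t - d[i])
            best = max(best, -c[i] + p[i] * reward + (1 - p[i]) * cont)
        return best

    return value((1 << n) - 1, horizon)

# Example: three resources, ten time slots, unit reward on success
print(optimal_utility(p=(0.9, 0.5, 0.3), d=(4, 2, 1), c=(0.3, 0.1, 0.05),
                      horizon=10, reward=1.0))
```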
no code implementations • ICLR 2021 • Lin Chen, Sheng Xu
We prove that the reproducing kernel Hilbert spaces (RKHS) of a deep neural tangent kernel and the Laplace kernel include the same set of functions, when both kernels are restricted to the sphere $\mathbb{S}^{d-1}$.
1 code implementation • 8 Sep 2020 • Zhiyu Xue, Lixin Duan, Wen Li, Lin Chen, Jiebo Luo
For that, in this work, we propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works in a neural network, as well as to identify specific regions that are related to each other in images from the query and support sets.
Ranked #32 on Few-Shot Image Classification on CIFAR-FS 5-way (5-shot)
no code implementations • NeurIPS 2021 • Lin Chen, Yifei Min, Mikhail Belkin, Amin Karbasi
This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized.
no code implementations • 15 Jul 2020 • Xin Tang, Xu Chen, Liekang Zeng, Shuai Yu, Lin Chen
With the assistance of edge servers, user equipments (UEs) are able to run deep neural network (DNN) based AI applications, which are generally so resource-hungry and compute-intensive that an individual UE can hardly afford to run them by itself in real time.
no code implementations • 11 Jul 2020 • Jinhai Yang, Hua Yang, Lin Chen
Few-shot learning aims at rapidly adapting to novel categories with only a handful of samples at test time, which has been predominantly tackled with the idea of meta-learning.
no code implementations • 19 Jun 2020 • Ruitu Xu, Lin Chen, Amin Karbasi
In this paper, we establish the ordinary differential equation (ODE) that underlies the training dynamics of Model-Agnostic Meta-Learning (MAML).
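For context, the discrete MAML meta-update and the gradient flow obtained by letting the outer step size shrink are written below; $\mathcal{L}_\tau$ is the loss of task $\tau$ and $\alpha, \beta$ are the inner and outer step sizes. This is only the standard starting point, not the specific ODE analyzed in the paper.

```latex
\theta_{t+1} = \theta_t - \beta\, \nabla_\theta\, \mathbb{E}_{\tau}\!\left[\mathcal{L}_\tau\!\bigl(\theta_t - \alpha \nabla_\theta \mathcal{L}_\tau(\theta_t)\bigr)\right],
\qquad
\frac{d\theta}{dt} = -\,\nabla_\theta\, \mathbb{E}_{\tau}\!\left[\mathcal{L}_\tau\!\bigl(\theta - \alpha \nabla_\theta \mathcal{L}_\tau(\theta)\bigr)\right].
```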
no code implementations • 14 Mar 2020 • Hadi Mansourifar, Lin Chen, Weidong Shi
In this paper, we propose a novel hybrid pump and dump detection method based on distance and density metrics.
no code implementations • 25 Feb 2020 • Yifei Min, Lin Chen, Amin Karbasi
In the medium adversary regime, with more training data, the generalization loss exhibits a double descent curve, which implies the existence of an intermediate stage where more training data hurts the generalization.
1 code implementation • 20 Feb 2020 • Qing Zhu, Lin Chen, Han Hu, Binzhi Xu, Yeting Zhang, Haifeng Li
The second uses a scale attention mechanism to guide the up-sampling of features from the coarse level by a learned weight map.
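A minimal PyTorch-style sketch of the general idea, in which a learned weight map gates bilinearly up-sampled coarse features against fine features; the layer names and fusion rule are assumptions of the illustration, not the paper's exact block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttentionUpsample(nn.Module):
    """Up-sample coarse features and fuse them with fine features using a
    learned per-pixel weight map. Illustrative sketch only."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 conv on the concatenated features predicts a [0, 1] attention map
        self.attn = nn.Sequential(nn.Conv2d(2 * channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, coarse, fine):
        up = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear", align_corners=False)
        w = self.attn(torch.cat([up, fine], dim=1))   # learned weight map
        return w * up + (1 - w) * fine                # attention-guided fusion

# usage: ScaleAttentionUpsample(64)(torch.randn(1, 64, 16, 16), torch.randn(1, 64, 32, 32))
```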
no code implementations • ICML 2020 • Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
Despite remarkable success in practice, modern machine learning models have been found to be susceptible to adversarial attacks that make human-imperceptible perturbations to the data, but result in serious and potentially dangerous prediction errors.
no code implementations • 19 Dec 2019 • Zhiying Xu, Shuyu Shi, Alex X. Liu, Jun Zhao, Lin Chen
ADADP significantly reduces the privacy cost by improving the convergence speed with an adaptive learning rate and mitigates the negative effect of differential privacy upon the model accuracy by introducing adaptive noise.
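The excerpt names two ingredients, an adaptive learning rate and adaptive noise; the sketch below only marks where they would enter a standard clipped-gradient (DP-SGD-style) update. The decay schedules are placeholder assumptions, and decaying the noise would of course change the privacy accounting.

```python
import numpy as np

def adaptive_private_step(params, per_sample_grads, step, clip=1.0,
                          base_lr=0.1, base_sigma=1.0, rng=np.random.default_rng()):
    """One clipped, noised gradient step with step-dependent learning rate and
    noise scale. Placeholder schedules -- not the paper's exact rules."""
    # per-sample clipping (standard DP-SGD ingredient)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in per_sample_grads]
    grad = np.mean(clipped, axis=0)

    # assumed adaptive schedules: both decay with the step count
    lr = base_lr / np.sqrt(step + 1)
    sigma = base_sigma / np.sqrt(step + 1)

    noise = rng.normal(0.0, sigma * clip / len(per_sample_grads), size=grad.shape)
    return params - lr * (grad + noise)
```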
no code implementations • CVPR 2020 • Zhongjie Yu, Lin Chen, Zhongwei Cheng, Jiebo Luo
Under the proposed framework, we develop a novel method for semi-supervised few-shot learning called TransMatch by instantiating the three components with Imprinting and MixMatch.
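The excerpt instantiates the framework with Imprinting and MixMatch; as a reminder of what the imprinting component alone does (not the full TransMatch pipeline), the classifier weights of novel classes can be set to L2-normalized class-mean embeddings.

```python
import numpy as np

def imprint_novel_weights(embeddings, labels, num_novel):
    """Set each novel class's classifier weight to its L2-normalized mean embedding.
    `embeddings`: (n, d) features of the few labeled novel-class samples."""
    d = embeddings.shape[1]
    weights = np.zeros((num_novel, d))
    for c in range(num_novel):
        mean = embeddings[np.asarray(labels) == c].mean(axis=0)
        weights[c] = mean / (np.linalg.norm(mean) + 1e-12)
    return weights  # rows can be appended to the base classifier's weight matrix
```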
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Xu, Lin Chen, Zhongwei Cheng, Lixin Duan, Jiebo Luo
A straightforward solution is to fine-tune a pre-trained source model by using those limited labeled target data, but it usually cannot work well due to the considerable difference between the data distributions of the source and target domains.
no code implementations • NeurIPS 2019 • Mingrui Zhang, Lin Chen, Hamed Hassani, Amin Karbasi
In this paper, we propose three online algorithms for submodular maximisation.
no code implementations • NeurIPS 2019 • Lin Chen, Hossein Esfandiari, Thomas Fu, Vahab S. Mirrokni
In this paper, we aim to develop LSH schemes for distance functions that measure the distance between two probability distributions, particularly for f-divergences as well as a generalization to capture mutual information loss.
no code implementations • NeurIPS 2020 • Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
To establish the dimension-independent upper bound, we next show that a mini-batching algorithm provides an $O(\frac{T}{\sqrt{K}})$ upper bound, and therefore conclude that the minimax regret of switching-constrained OCO is $\Theta(\frac{T}{\sqrt{K}})$ for any $K$.
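A minimal sketch of the mini-batching idea behind the upper bound: split the $T$ rounds into $K$ blocks, keep the decision fixed inside each block (so at most $K$ switches occur), and perform one projected-gradient update per block with the block's aggregated gradient. The feasible set and step size below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def minibatched_ogd(grad_oracle, T, K, dim, radius=1.0):
    """Play at most K distinct points over T rounds: freeze the decision inside
    each of K blocks and do one projected-gradient update per block.
    `grad_oracle(t, x)` returns the adversary's gradient at round t for decision x."""
    x = np.zeros(dim)
    eta = radius * np.sqrt(K) / T           # illustrative step size
    block = int(np.ceil(T / K))
    for start in range(0, T, block):
        g = np.zeros(dim)
        for t in range(start, min(start + block, T)):
            g += grad_oracle(t, x)          # decision x is frozen within the block
        x = x - eta * g                     # one update => at most one switch per block
        norm = np.linalg.norm(x)
        if norm > radius:                   # project back onto the Euclidean ball
            x *= radius / norm
    return x
```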
no code implementations • 23 May 2019 • Lin Chen, Haizhao Yang
This paper introduces a novel generative encoder (GE) model for generative imaging and image processing with applications in compressed sensing and imaging, image compression, denoising, inpainting, deblurring, and super-resolution.
1 code implementation • 3 May 2019 • Qing Lian, Wen Li, Lin Chen, Lixin Duan
Particularly, in open set domain adaptation, we allow the classes from the source and target domains to be partially overlapped.
no code implementations • 30 Apr 2019 • Mohammadhossein Bateni, Lin Chen, Hossein Esfandiari, Thomas Fu, Vahab S. Mirrokni, Afshin Rostamizadeh
To achieve this, we introduce a novel re-parametrization of the mutual information objective, which we prove is submodular, and design a data structure to query the submodular function in amortized $O(\log n)$ time (where $n$ is the input vocabulary size).
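The paper's amortized $O(\log n)$ data structure is not described in the excerpt; the sketch below shows the related, standard lazy-greedy (CELF-style) pattern, in which a max-heap of stale marginal gains exploits submodularity so that each greedy step re-evaluates only a few elements.

```python
import heapq

def lazy_greedy(ground_set, f, k):
    """Select k elements maximizing a monotone submodular set function f,
    re-evaluating marginal gains lazily via a max-heap. Standard sketch only."""
    selected, current = [], f([])
    # heap of (negated stale gain, element); Python's heapq is a min-heap
    heap = [(-(f([e]) - current), e) for e in ground_set]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        neg_gain, e = heapq.heappop(heap)
        fresh = f(selected + [e]) - current          # recompute this element's gain
        if not heap or fresh >= -heap[0][0]:         # still the best -> take it
            selected.append(e)
            current += fresh
        else:
            heapq.heappush(heap, (-fresh, e))        # otherwise push back the fresh gain
    return selected
```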
no code implementations • 19 Feb 2019 • Chen Change Loy, Dahua Lin, Wanli Ouyang, Yuanjun Xiong, Shuo Yang, Qingqiu Huang, Dongzhan Zhou, Wei Xia, Quanquan Li, Ping Luo, Junjie Yan, Jian-Feng Wang, Zuoxin Li, Ye Yuan, Boxun Li, Shuai Shao, Gang Yu, Fangyun Wei, Xiang Ming, Dong Chen, Shifeng Zhang, Cheng Chi, Zhen Lei, Stan Z. Li, Hongkai Zhang, Bingpeng Ma, Hong Chang, Shiguang Shan, Xilin Chen, Wu Liu, Boyan Zhou, Huaxiong Li, Peng Cheng, Tao Mei, Artem Kukharenko, Artem Vasenin, Nikolay Sergievskiy, Hua Yang, Liangqi Li, Qiling Xu, Yuan Hong, Lin Chen, Mingjun Sun, Yirong Mao, Shiying Luo, Yongjun Li, Ruiping Wang, Qiaokang Xie, Ziyang Wu, Lei Lu, Yiheng Liu, Wengang Zhou
This paper presents a review of the 2018 WIDER Challenge on Face and Pedestrian.
no code implementations • 17 Feb 2019 • Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
How can we efficiently mitigate the overhead of gradient communications in distributed optimization?
no code implementations • 28 Jan 2019 • Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi
In this paper, we consider the problem of black box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided.
no code implementations • 15 Nov 2018 • Lin Chen, Moran Feldman, Amin Karbasi
In this paper, we consider the unconstrained submodular maximization problem.
no code implementations • 7 Nov 2018 • Lin Chen, Lei Xu, Shouhuai Xu, Zhimin Gao, Weidong Shi
In this paper, we introduce a novel variant of the bribery problem, "Election with Bribed Voter Uncertainty" or BVU for short, accommodating the uncertainty that the vote of a bribed voter may or may not be counted.
no code implementations • 18 May 2018 • Lin Chen, Mingrui Zhang, Amin Karbasi
In this paper, we propose the first computationally efficient projection-free algorithm for bandit convex optimization (BCO).
no code implementations • ICML 2018 • Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi
We also propose One-Shot Frank-Wolfe, a simpler algorithm which requires only a single stochastic gradient estimate in each round and achieves an $O(T^{2/3})$ stochastic regret bound for convex and continuous submodular optimization.
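For intuition only, a generic momentum-averaged Frank-Wolfe loop over the probability simplex that uses a single stochastic gradient per round; the averaging weight and step sizes are illustrative, and this is a sketch of the general pattern (written here for convex minimization) rather than the algorithm's exact parameters.

```python
import numpy as np

def one_shot_frank_wolfe(stoch_grad, T, dim, rho=0.5):
    """Projection-free updates over the probability simplex using a single
    stochastic gradient estimate per round, smoothed by momentum averaging."""
    x = np.full(dim, 1.0 / dim)     # start at the simplex center
    d = np.zeros(dim)               # running gradient estimate
    for t in range(1, T + 1):
        g = stoch_grad(x)                       # one stochastic gradient per round
        d = (1 - rho) * d + rho * g             # variance-reducing average
        v = np.zeros(dim)
        v[np.argmin(d)] = 1.0                   # linear minimization oracle on the simplex
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v         # Frank-Wolfe (projection-free) step
    return x
```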
no code implementations • 20 Feb 2018 • Ehsan Kazemi, Lin Chen, Sanjoy Dasgupta, Amin Karbasi
More specifically, we aim at devising efficient algorithms to locate a target object in a database equipped with a dissimilarity metric via invocation of the weak comparison oracle.
no code implementations • 16 Feb 2018 • Lin Chen, Hamed Hassani, Amin Karbasi
For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves an $O(\sqrt{T})$ regret bound, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight.
no code implementations • NeurIPS 2017 • Lin Chen, Andreas Krause, Amin Karbasi
We then receive a noisy feedback about the utility of the action (e.g., ratings) which we model as a submodular function over the context-action space.
no code implementations • ICML 2018 • Lin Chen, Moran Feldman, Amin Karbasi
In this paper, we prove that a randomized version of the greedy algorithm (previously used by Buchbinder et al. (2014) for a different problem) achieves an approximation ratio of $(1 + 1/\gamma)^{-2}$ for the maximization of a weakly submodular function subject to a general matroid constraint, where $\gamma$ is a parameter measuring the distance of the function from submodularity.
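As a hedged illustration of the randomized-greedy pattern attributed to Buchbinder et al. (2014), specialized to the simplest matroid (a cardinality constraint): at every step, sample the next element uniformly among the $k$ remaining elements with the largest marginal gains.

```python
import random

def randomized_greedy(ground_set, f, k, rng=random.Random(0)):
    """Randomized greedy for (weakly) submodular maximization under a
    cardinality constraint: pick uniformly among the top-k marginal gains.
    Cardinality is the simplest matroid; the general matroid case is analogous."""
    selected = []
    remaining = set(ground_set)
    for _ in range(k):
        if not remaining:
            break
        base = f(selected)
        ranked = sorted(remaining, key=lambda e: f(selected + [e]) - base, reverse=True)
        choice = rng.choice(ranked[:k])          # uniform over the k best candidates
        selected.append(choice)
        remaining.remove(choice)
    return selected
```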
no code implementations • NeurIPS 2016 • Lin Chen, Amin Karbasi, Forrest W. Crawford
In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks.
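For readers unfamiliar with the model, a compact sampler for an SBM graph G = (V, E) with K blocks; the block sizes and the connection-probability matrix are free parameters of the illustration.

```python
import numpy as np

def sample_sbm(block_sizes, P, rng=np.random.default_rng(0)):
    """Sample an undirected stochastic block model graph.
    block_sizes: list of K community sizes; P: K x K matrix of edge probabilities."""
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)   # node -> block
    n = labels.size
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < P[labels[i], labels[j]]:
                adj[i, j] = adj[j, i] = 1
    return labels, adj

# e.g., two communities of 50 nodes each, dense within blocks, sparse across
labels, adj = sample_sbm([50, 50], np.array([[0.3, 0.05], [0.05, 0.3]]))
```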
no code implementations • 29 Mar 2016 • Lin Chen, Forrest W. Crawford, Amin Karbasi
In real-world and online social networks, individuals receive and transmit information in real time.
no code implementations • 11 Mar 2016 • Lin Chen, Hamed Hassani, Amin Karbasi
This problem has recently gained a lot of interest in automated science and adversarial reverse engineering for which only heuristic algorithms are known.
no code implementations • 13 Nov 2015 • Lin Chen, Forrest W. Crawford, Amin Karbasi
Learning about the social structure of hidden and hard-to-reach populations, such as drug users and sex workers, is a major goal of epidemiological and public health research on risk behaviors and disease prevention.
no code implementations • CVPR 2014 • Lin Chen, Wen Li, Dong Xu
In this work, we propose a new framework for recognizing RGB images captured by the conventional cameras by leveraging a set of labeled RGB-D data, in which the depth features can be additionally extracted from the depth images.
no code implementations • CVPR 2014 • Lin Chen, Qiang Zhang, Baoxin Li
Relative attributes learning aims to learn ranking functions describing the relative strength of attributes.
no code implementations • CVPR 2013 • Lin Chen, Lixin Duan, Dong Xu
In this work, we propose to leverage a large number of loosely labeled web videos (e.g., from YouTube) and web images (e.g., from Google/Bing image search) for visual event recognition in consumer videos without requiring any labeled consumer videos.