no code implementations • 15 Oct 2024 • Jingyang Li, Jiachun Pan, Vincent Y. F. Tan, Kim-Chuan Toh, Pan Zhou

Semi-supervised learning (SSL), exemplified by FixMatch (Sohn et al., 2020), has shown significant generalization advantages over supervised learning (SL), particularly in the context of deep neural networks (DNNs).

1 code implementation • 10 Oct 2024 • Yunlong Hou, Vincent Y. F. Tan, Zixin Zhong

When PS$\varepsilon$BAI and N$\varepsilon$BAI are utilized judiciously in parallel, PS$\varepsilon$BAI$^+$ is shown to have a finite expected sample complexity.

no code implementations • 10 Oct 2024 • Yihang Gao, Vincent Y. F. Tan

Kolmogorov--Arnold Networks (KANs), a recently proposed neural network architecture, have gained significant attention in the deep learning community, due to their potential as a viable alternative to multi-layer perceptrons (MLPs) and their broad applicability to various scientific tasks.

no code implementations • 8 Oct 2024 • Eugene Lim, Vincent Y. F. Tan, Harold Soh

Subsequently, each user obtains a reward drawn from the unknown reward distribution associated with its assigned arm.

no code implementations • 27 Sep 2024 • Junwen Yang, Vincent Y. F. Tan, Tianyuan Jin

Motivated by real-world applications that necessitate responsible experimentation, we introduce the problem of best arm identification (BAI) with minimal regret.

no code implementations • 8 Sep 2024 • Recep Can Yavas, Yuqi Huang, Vincent Y. F. Tan, Jonathan Scarlett

At each time step, the decision maker pulls an arm and observes its outcome from the random variable associated with that arm.

no code implementations • 7 Sep 2024 • Adarsh Barik, Anand Krishna, Vincent Y. F. Tan

In this work, we study the robust phase retrieval problem where the task is to recover an unknown signal $\theta^* \in \mathbb{R}^d$ in the presence of potentially arbitrarily corrupted magnitude-only linear measurements.

no code implementations • 12 Aug 2024 • Adarsh Barik, Anand Krishna, Vincent Y. F. Tan

We study a robust online convex optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number of rounds $k$ that is unknown to the learner.

no code implementations • 19 Jul 2024 • Shuche Wang, Vincent Y. F. Tan

Distributed gradient descent algorithms have come to the fore in modern machine learning, especially in parallelizing the handling of large datasets that are distributed across several workers.
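The parallel pattern the snippet refers to can be sketched as a toy parameter server: each worker computes a gradient on its own data shard and a central server averages the worker gradients before every update. This is a minimal illustrative sketch under assumed names and settings (`distributed_gd`, equal shards, a fixed learning rate), not the paper's algorithm.

```python
import numpy as np

def distributed_gd(X, y, n_workers=4, lr=0.1, n_iters=200):
    """Toy parameter-server gradient descent for least squares.

    Each worker holds one shard of (X, y) and computes a local
    gradient; the server averages the gradients before each update.
    (Hypothetical sketch; real systems also handle communication
    costs, stragglers, and faulty workers.)
    """
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # Each worker's local gradient of 0.5*||X_s w - y_s||^2 / n_s.
        grads = [xs.T @ (xs @ w - ys) / len(ys) for xs, ys in shards]
        w -= lr * np.mean(grads, axis=0)  # server averages, then steps
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                 # noiseless targets, so GD can recover w_true
w_hat = distributed_gd(X, y)
```

With equal shards and noiseless targets, averaging the shard gradients reproduces the full-batch gradient, so the iterates converge to the true weights.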

1 code implementation • 18 Jun 2024 • Yuting Feng, Vincent Y. F. Tan, Bogdan Cautis

We consider a ubiquitous scenario in the study of Influence Maximization (IM), in which there is limited knowledge about the topology of the diffusion network.

no code implementations • 18 Jun 2024 • Zhirui Chen, Vincent Y. F. Tan

Given an offline dataset, our objective is to ascertain the optimal action for each state, with the ultimate goal of minimizing the {\em simple regret}.

no code implementations • 24 May 2024 • Jie Bian, Vincent Y. F. Tan

The Indexed Minimum Empirical Divergence (IMED) algorithm is a highly effective approach that offers a stronger theoretical guarantee of asymptotic optimality than the Kullback--Leibler Upper Confidence Bound (KL-UCB) algorithm for the multi-armed bandit problem.

1 code implementation • 22 May 2024 • Yujun Shi, Jun Hao Liew, Hanshu Yan, Vincent Y. F. Tan, Jiashi Feng

Accuracy and speed are critical in image editing tasks.

no code implementations • 2 Apr 2024 • Yanyan Dong, Vincent Y. F. Tan

The lower bound for bandit feedback is $ \tilde{\Omega}\big( (\lambda K)^{\frac{1}{3}} (TI)^{\frac{2}{3}}\big)$ while that for semi-bandit feedback is $ \tilde{\Omega}\big( (\lambda K I)^{\frac{1}{3}} T^{\frac{2}{3}}\big)$ where $I$ is the number of base arms in the combinatorial arm played in each round.

no code implementations • 23 Feb 2024 • Junwen Yang, Tianyuan Jin, Vincent Y. F. Tan

Our results offer valuable quantitative insights into the benefits of the abstention option, laying the groundwork for further exploration in other online decision-making problems with such an option.

no code implementations • 17 Jan 2024 • Zhirui Chen, P. N. Karthik, Yeow Meng Chee, Vincent Y. F. Tan

We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints, when the arm rewards are supported on the unit interval.

1 code implementation • 19 Dec 2023 • Jiachun Pan, Hanshu Yan, Jun Hao Liew, Jiashi Feng, Vincent Y. F. Tan

However, since the off-the-shelf pre-trained networks are trained on clean images, the one-step estimation procedure of the clean image may be inaccurate, especially in the early stages of the generation process in diffusion models.

no code implementations • 1 Nov 2023 • Recep Can Yavas, Vincent Y. F. Tan

For fixed sparsity $s$ and budget $T$, the exponent in the error probability of Lasso-OD depends on $s$ but not on the dimension $d$, yielding a significant performance improvement for sparse and high-dimensional linear bandits.

no code implementations • 26 Oct 2023 • Fengzhuo Zhang, Vincent Y. F. Tan, Zhaoran Wang, Zhuoran Yang

Second, using kernel embedding of distributions, we design efficient algorithms to estimate the transition kernels, reward functions, and graphons from sampled agents.

no code implementations • 20 Oct 2023 • P. N. Karthik, Vincent Y. F. Tan, Arpan Mukherjee, Ali Tajer

It is shown that under every policy, the state-action visitation proportions satisfy a specific approximate flow conservation constraint and that these proportions match the optimal proportions dictated by the lower bound under any asymptotically optimal policy.

1 code implementation • 6 Sep 2023 • Xiaochen Zhu, Vincent Y. F. Tan, Xiaokui Xiao

Graph neural networks (GNNs) have gained an increasing amount of popularity due to their superior capability in learning node embeddings for various graph inference tasks, but training them can raise privacy concerns.

1 code implementation • 20 Jul 2023 • Jiachun Pan, Jun Hao Liew, Vincent Y. F. Tan, Jiashi Feng, Hanshu Yan

Existing customization methods require access to multiple reference examples to align pre-trained diffusion probabilistic models (DPMs) with user-provided concepts.

no code implementations • 12 Jul 2023 • Elizabeth Z. C. Tan, Caroline Chaux, Emmanuel Soubies, Vincent Y. F. Tan

We design algorithms for Robust Principal Component Analysis (RPCA) which consists in decomposing a matrix into the sum of a low rank matrix and a sparse matrix.
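The decomposition $M \approx L + S$ (low rank plus sparse) can be illustrated with a naive alternating scheme: singular-value thresholding keeps $L$ low rank, entrywise soft-thresholding keeps $S$ sparse. The function names and thresholds below are assumptions for illustration; the paper's RPCA algorithms and their guarantees differ.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage toward zero by tau (promotes sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_alternating(M, rank_tau, sparse_lam, n_iters=100):
    """Naive alternating sketch of RPCA, M ~ L + S.

    L is kept low rank by soft-thresholding singular values of M - S;
    S is kept sparse by entrywise soft-thresholding of M - L.
    (Illustrative only, with hypothetical threshold parameters.)
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iters):
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, rank_tau)) @ Vt
        S = soft_threshold(M - L, sparse_lam)
    return L, S

rng = np.random.default_rng(0)
L0 = np.outer(rng.normal(size=20), rng.normal(size=20))  # rank-1 part
S0 = np.zeros((20, 20))
S0.flat[rng.choice(400, size=12, replace=False)] = 5.0   # sparse corruptions
M = L0 + S0
L, S = rpca_alternating(M, rank_tau=1.0, sparse_lam=0.5)
```

By construction of the final soft-thresholding step, the residual $M - L - S$ is bounded entrywise by `sparse_lam`.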

4 code implementations • CVPR 2024 • Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai

In this work, we extend this editing framework to diffusion models and propose a novel approach, DragDiffusion.

no code implementations • 25 Apr 2023 • Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan

We study a distributed stochastic multi-armed bandit where a client supplies the learner with communication-constrained feedback based on the rewards for the corresponding arm pulls.

1 code implementation • 31 Jan 2023 • Yunlong Hou, Vincent Y. F. Tan, Zixin Zhong

Under this constraint, we design and analyze an algorithm {\sc PASCombUCB} that minimizes the regret over the horizon of time $T$.

3 code implementations • CVPR 2023 • Jiawei Du, Yidi Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li

To mitigate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.

1 code implementation • 23 Oct 2022 • Yi Wei, Zixin Zhong, Vincent Y. F. Tan

The beam alignment (BA) problem consists in accurately aligning the transmitter and receiver beams to establish a reliable communication link in wireless communication systems.

no code implementations • 15 Oct 2022 • Haiyun He, Gholamali Aminian, Yuheng Bu, Miguel Rodrigues, Vincent Y. F. Tan

Our findings offer new insights that the generalization performance of SSL with pseudo-labeling is affected not only by the information between the output hypothesis and input training data but also by the information {\em shared} between the {\em labeled} and {\em pseudo-labeled} data samples.

no code implementations • 14 Oct 2022 • Zhirui Chen, P. N. Karthik, Vincent Y. F. Tan, Yeow Meng Chee

Furthermore, we show that for any algorithm whose upper bound on the expected stopping time matches with the lower bound up to a multiplicative constant ({\em almost-optimal} algorithm), the ratio of any two consecutive communication time instants must be {\em bounded}, a result that is of independent interest.

2 code implementations • 1 Oct 2022 • Yujun Shi, Jian Liang, Wenqing Zhang, Vincent Y. F. Tan, Song Bai

To remedy this problem caused by the data heterogeneity, we propose {\sc FedDecorr}, a novel method that can effectively mitigate dimensional collapse in federated learning.

no code implementations • 20 Sep 2022 • Fengzhuo Zhang, Boyi Liu, Kaixin Wang, Vincent Y. F. Tan, Zhuoran Yang, Zhaoran Wang

The cooperative Multi-Agent Reinforcement Learning (MARL) framework with permutation-invariant agents has achieved tremendous empirical success in real-world applications.

no code implementations • 19 Aug 2022 • Kota Srinivas Reddy, P. N. Karthik, Vincent Y. F. Tan

The local best arm at a client is the arm with the largest mean among the arms local to the client, whereas the global best arm is the arm with the largest average mean across all the clients.

1 code implementation • 27 May 2022 • Jiawei Du, Daquan Zhou, Jiashi Feng, Vincent Y. F. Tan, Joey Tianyi Zhou

Intuitively, SAF achieves this by avoiding sudden drops in the loss in the sharp local minima throughout the trajectory of the updates of the weights.

no code implementations • 12 May 2022 • Vincent Y. F. Tan, Prashanth L. A., Krishna Jagannathan

In several applications such as clinical trials and financial portfolio optimization, the expected value (or the average reward) does not satisfactorily capture the merits of a drug or a portfolio.

no code implementations • 29 Mar 2022 • P. N. Karthik, Kota Srinivas Reddy, Vincent Y. F. Tan

For this problem, we derive the first-known problem instance-dependent asymptotic lower bound on the growth rate of the expected time required to find the index of the best arm, where the asymptotics is as the error probability vanishes.

no code implementations • 9 Feb 2022 • Junwen Yang, Zixin Zhong, Vincent Y. F. Tan

This paper considers the problem of online clustering with bandit feedback.

1 code implementation • 25 Jan 2022 • Yunlong Hou, Vincent Y. F. Tan, Zixin Zhong

We design and analyze VA-LUCB, a parameter-free algorithm, for identifying the best arm under the fixed-confidence setup and under a stringent constraint that the variance of the chosen arm is strictly smaller than a given threshold.

no code implementations • 12 Jan 2022 • Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan

Secondly, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training ({\sc HAT}), that jointly trains DIDs with adversarial and non-adversarial noisy data to ensure that the reconstruction quality is high and the denoisers around non-adversarial data are locally smooth.

1 code implementation • CVPR 2022 • Yujun Shi, Kuangqi Zhou, Jian Liang, Zihang Jiang, Jiashi Feng, Philip Torr, Song Bai, Vincent Y. F. Tan

Specifically, we experimentally show that directly encouraging the CIL learner at the initial phase to output representations similar to those of the model jointly trained on all classes can greatly boost the CIL performance.

1 code implementation • 27 Oct 2021 • Fengzhuo Zhang, Anshoo Tandon, Vincent Y. F. Tan

We design and analyze the Active Learning Algorithm for Trees with Homogeneous Edge (Active-LATHE), which surprisingly boosts the error exponent by at least 40\% when $\rho$ is at least $0.8$.

no code implementations • 16 Oct 2021 • Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan

We study the Pareto frontier of two archetypal objectives in multi-armed bandits, namely, regret minimization (RM) and best arm identification (BAI) with a fixed horizon.

1 code implementation • ICLR 2022 • Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan

Recently, the relation between the sharpness of the loss landscape and the generalization error was established by Foret et al. (2020), who proposed the Sharpness Aware Minimizer (SAM) to mitigate the degradation of generalization.

1 code implementation • 3 Oct 2021 • Haiyun He, Hanshu Yan, Vincent Y. F. Tan

Using information-theoretic principles, we consider the generalization error (gen-error) of iterative semi-supervised learning (SSL) algorithms that iteratively generate pseudo-labels for a large amount of unlabelled data to progressively refine the model parameters.

1 code implementation • 25 Aug 2021 • Joel Q. L. Chang, Vincent Y. F. Tan

This paper unifies the design and the analysis of risk-averse Thompson sampling algorithms for the multi-armed bandit problem for a class of risk functionals $\rho$ that are continuous and dominant.

no code implementations • NeurIPS 2021 • Fengzhuo Zhang, Vincent Y. F. Tan

The optimality of the robust versions of CLRG and NJ is verified by comparing their sample complexities with the impossibility result.

no code implementations • 27 May 2021 • Junwen Yang, Vincent Y. F. Tan

We study the problem of best arm identification in linear bandits in the fixed-budget setting.

no code implementations • 11 May 2021 • Qiaosheng Zhang, Vincent Y. F. Tan

This paper investigates fundamental limits of exact recovery in the general $d$-uniform hypergraph stochastic block model ($d$-HSBM), wherein $n$ nodes are partitioned into $k$ disjoint communities with relative sizes $(p_1, \ldots, p_k)$.

1 code implementation • 10 Apr 2021 • Ting Cai, Vincent Y. F. Tan, Cédric Févotte

We consider an adversarially-trained version of the nonnegative matrix factorization, a popular latent dimensionality reduction technique.

2 code implementations • 10 Feb 2021 • Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama

By comparing \textit{non-robust} (normally trained) and \textit{robustified} (adversarially trained) models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.

no code implementations • 22 Jan 2021 • Anshoo Tandon, Aldric H. J. Yuan, Vincent Y. F. Tan

We provide error exponent analyses and extensive numerical results on a variety of trees to show that the sample complexity of SGA is significantly better than the algorithm of Katiyar et al. (2020).

no code implementations • 16 Nov 2020 • Joel Q. L. Chang, Qiuyu Zhu, Vincent Y. F. Tan

The multi-armed bandit (MAB) problem is a ubiquitous decision-making problem that exemplifies the exploration-exploitation tradeoff.
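The exploration-exploitation tradeoff mentioned here is commonly illustrated with the classic UCB1 index: pull the arm maximizing the empirical mean plus a confidence bonus that shrinks as the arm is pulled more often. This is a generic textbook sketch on Bernoulli arms, not the algorithm of this paper.

```python
import math
import random

def ucb1(means, horizon=5000, seed=0):
    """UCB1 on Bernoulli arms with the given true means.

    Index: empirical mean + sqrt(2 ln t / n_pulls). Rarely pulled
    arms get a large bonus (exploration); arms with high empirical
    means are favored (exploitation). Returns pull counts per arm.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                       # pull each arm once to initialize
            arm = t - 1
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8])   # arm 2 is best
```

Over a long horizon, the suboptimal arms are pulled only $O(\log T)$ times, so the best arm accumulates the bulk of the pulls.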

1 code implementation • 15 Oct 2020 • Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan

When the amount of corruptions per step (CPS) is below a threshold, PSS($u$) identifies the best arm or item with probability tending to $1$ as $T\rightarrow \infty$.

1 code implementation • 24 Jul 2020 • Dana Lahat, Yanbin Lang, Vincent Y. F. Tan, Cédric Févotte

In this work, we provide a collection of tools for PSDMF, by showing that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.

no code implementations • 8 Jun 2020 • Qiaosheng Zhang, Geewon Suh, Changho Suh, Vincent Y. F. Tan

In this paper, we design and analyze MC2G (Matrix Completion with 2 Graphs), an algorithm that performs matrix completion in the presence of social and item similarity graphs.

no code implementations • 9 May 2020 • Anshoo Tandon, Vincent Y. F. Tan, Shiyao Zhu

In this case, we show that they strictly improve on the recent results of Nikolakakis, Kalogerias, and Sarwate [Proc.

1 code implementation • 24 Apr 2020 • Jiawei Du, Hanshu Yan, Vincent Y. F. Tan, Joey Tianyi Zhou, Rick Siow Mong Goh, Jiashi Feng

However, similar to existing preprocessing-based methods, the randomized process will degrade the prediction accuracy.

no code implementations • 13 Mar 2020 • Haiyun He, Qiaosheng Zhang, Vincent Y. F. Tan

This paper investigates a novel offline change-point detection problem from an information-theoretic perspective.

2 code implementations • ICML 2020 • Qiuyu Zhu, Vincent Y. F. Tan

The multi-armed bandit (MAB) problem is a classical learning task that exemplifies the exploration-exploitation tradeoff.

no code implementations • 25 Jan 2020 • Zexin Wang, Vincent Y. F. Tan, Jonathan Scarlett

We consider the problem of Bayesian optimization of a one-dimensional Brownian motion in which the $T$ adaptively chosen observations are corrupted by Gaussian noise.

no code implementations • ICML 2020 • Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan

Finally, extensive numerical simulations corroborate the efficacy of CascadeBAI as well as the tightness of our upper bound on its time complexity.

no code implementations • 6 Dec 2019 • Qiaosheng Zhang, Vincent Y. F. Tan, Changho Suh

We consider the problem of recovering a binary rating matrix as well as clusters of users and items based on a partially observed matrix together with side-information in the form of social and item similarity graphs.

no code implementations • 3 Dec 2019 • Mahdi Haghifam, Vincent Y. F. Tan, Ashish Khisti

Motivated by real-world machine learning applications, we consider a statistical classification task in a sequential setting where test samples arrive sequentially.

1 code implementation • ICLR 2020 • Saurabh Khanna, Vincent Y. F. Tan

We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes' time series measurements.

2 code implementations • ICLR 2020 • Hanshu Yan, Jiawei Du, Vincent Y. F. Tan, Jiashi Feng

We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting.

no code implementations • 13 Jun 2019 • Hanshu Yan, Xuan Chen, Vincent Y. F. Tan, Wenhan Yang, Joe Wu, Jiashi Feng

They jointly facilitate unsupervised learning of a noise model for various noise types.

1 code implementation • 11 Jun 2019 • Sandra S. Y. Tan, Antonios Varvitsiotis, Vincent Y. F. Tan

Program., 145(1):451--482, 2014], a powerful framework for determining convergence rates of first-order optimization algorithms.

no code implementations • 15 Mar 2019 • Rui Xia, Vincent Y. F. Tan, Louis Filstroff, Cédric Févotte

We propose a novel ranking model that combines the Bradley-Terry-Luce probability model with a nonnegative matrix factorization framework to model and uncover the presence of latent variables that influence the performance of top tennis players.

no code implementations • 30 Jan 2019 • Nicolas Gillis, Le Thi Khanh Hien, Valentin Leplat, Vincent Y. F. Tan

We propose to use Lagrange duality to judiciously optimize for a set of weights to be used within the framework of the weighted-sum approach, that is, we minimize a single objective function which is a weighted sum of all the objective functions.

no code implementations • 2 Oct 2018 • Zixin Zhong, Wang Chi Cheung, Vincent Y. F. Tan

While Thompson sampling (TS) algorithms have been shown to be empirically superior to Upper Confidence Bound (UCB) algorithms for cascading bandits, theoretical guarantees are only known for the latter.
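Thompson sampling itself is easy to state in the basic Bernoulli setting: maintain a Beta posterior per arm, draw one sample from each posterior, and pull the argmax. The sketch below shows this generic Beta-Bernoulli form only; the paper's cascading-bandit variant and its analysis are more involved.

```python
import random

def thompson_bernoulli(means, horizon=5000, seed=1):
    """Beta-Bernoulli Thompson sampling on arms with the given means.

    Each arm keeps a Beta(a, b) posterior; at every step one sample is
    drawn per posterior and the arm with the largest sample is pulled.
    Returns pull counts per arm. (Generic illustration only.)
    """
    rng = random.Random(seed)
    k = len(means)
    a = [1.0] * k    # posterior successes + 1 (uniform prior)
    b = [1.0] * k    # posterior failures + 1
    counts = [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(a[i], b[i]) for i in range(k)]
        arm = samples.index(max(samples))
        reward = 1 if rng.random() < means[arm] else 0
        a[arm] += reward
        b[arm] += 1 - reward
        counts[arm] += 1
    return counts

counts = thompson_bernoulli([0.3, 0.6, 0.9])   # arm 2 is best
```

As posteriors concentrate, samples from clearly suboptimal arms rarely win the argmax, so exploration tapers off automatically.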

no code implementations • 3 Jun 2018 • Lin Zhou, Vincent Y. F. Tan, Mehul Motani

Motivated by real-world machine learning applications, we analyze approximations to the non-asymptotic fundamental limits of statistical classification.

no code implementations • 27 Sep 2017 • Mine Alsan, Ranjitha Prasad, Vincent Y. F. Tan

In particular, we employ the Bayesian BTL model, which allows for meaningful prior assumptions and copes with situations where the number of objects is large and the number of comparisons between some objects is small or even zero.

no code implementations • 1 Apr 2017 • Renbo Zhao, William B. Haskell, Vincent Y. F. Tan

We revisit the stochastic limited-memory BFGS (L-BFGS) algorithm.

no code implementations • 30 Mar 2017 • Zhaoqiang Liu, Vincent Y. F. Tan

These results provide intuition for the informativeness of $k$-means (with and without dimensionality reduction) as an algorithm for learning mixture models.

1 code implementation • 27 Dec 2016 • Zhaoqiang Liu, Vincent Y. F. Tan

We propose a geometric assumption on nonnegative data matrices such that under this assumption, we are able to provide upper bounds (both deterministic and probabilistic) on the relative error of nonnegative matrix factorization (NMF).

no code implementations • 4 Sep 2016 • Renbo Zhao, Vincent Y. F. Tan

The multiplicative update (MU) algorithm has been extensively used to estimate the basis and coefficient matrices in nonnegative matrix factorization (NMF) problems under a wide range of divergences and regularizers.
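For the Frobenius-norm case, the MU iterations of Lee and Seung take a simple closed form: $H \leftarrow H \odot \frac{W^\top V}{W^\top W H}$ and $W \leftarrow W \odot \frac{V H^\top}{W H H^\top}$, which preserve nonnegativity because every factor is nonnegative. A minimal sketch of this Frobenius case (the paper treats a much wider range of divergences and regularizers):

```python
import numpy as np

def nmf_mu(V, rank, n_iters=500, seed=0, eps=1e-9):
    """Multiplicative updates for NMF under the Frobenius norm.

    Elementwise updates keep W, H >= 0 automatically; eps guards
    against division by zero. (Frobenius-only illustration.)
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, size=(m, rank))
    H = rng.uniform(0.1, 1.0, size=(rank, n))
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.uniform(0, 1, size=(30, 4)) @ rng.uniform(0, 1, size=(4, 20))
W, H = nmf_mu(V, rank=4)          # V is exactly rank 4 and nonnegative
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

On exactly factorizable nonnegative data with the correct rank, the relative reconstruction error typically becomes small after a few hundred iterations.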

no code implementations • 30 Jul 2016 • Renbo Zhao, Vincent Y. F. Tan, Huan Xu

We develop a unified and systematic framework for performing online nonnegative matrix factorization under a wide variety of important divergences.

no code implementations • 10 Apr 2016 • Renbo Zhao, Vincent Y. F. Tan

We propose a unified and systematic framework for performing online nonnegative matrix factorization in the presence of outliers.

no code implementations • 15 Feb 2016 • Changho Suh, Vincent Y. F. Tan, Renbo Zhao

We study the top-$K$ ranking problem where the goal is to recover the set of top-$K$ ranked items out of a large collection of items based on partially revealed preferences.
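Under the Bradley-Terry-Luce (BTL) model, item $i$ beats item $j$ with probability $w_i/(w_i + w_j)$, and the top-$K$ set can be read off the fitted scores. The sketch below fits BTL scores with the standard minorize-maximize (Zermelo-style) iteration as a generic illustration; it is not this paper's estimator, and the helper names are hypothetical.

```python
import random

def btl_mm(n_items, wins, n_iters=200):
    """Minorize-maximize iterations for the BTL maximum likelihood scores.

    wins[i][j] counts wins of item i over item j. Each sweep updates
    w_i <- (total wins of i) / sum_j n_ij / (w_i + w_j), then
    normalizes the scores for identifiability.
    """
    w = [1.0] * n_items
    for _ in range(n_iters):
        new_w = []
        for i in range(n_items):
            total_wins = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                        for j in range(n_items) if j != i)
            new_w.append(total_wins / denom if denom > 0 else w[i])
        s = sum(new_w)
        w = [x / s for x in new_w]
    return w

# Simulate comparisons from true BTL scores and recover the top-2 set.
rng = random.Random(0)
true_w = [0.1, 0.2, 0.3, 0.4]
n = len(true_w)
wins = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        for _ in range(300):             # 300 comparisons per pair
            if rng.random() < true_w[i] / (true_w[i] + true_w[j]):
                wins[i][j] += 1
            else:
                wins[j][i] += 1
w_hat = btl_mm(n, wins)
top2 = sorted(range(n), key=lambda i: -w_hat[i])[:2]
```

With enough comparisons per pair, the fitted ordering matches the true ordering, so `top2` recovers the two highest-scoring items.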

3 code implementations • 25 Nov 2011 • Vincent Y. F. Tan, Cédric Févotte

This paper addresses the estimation of the latent dimensionality in nonnegative matrix factorization (NMF) with the $\beta$-divergence.
