Search Results for author: Rong Zhu

Found 24 papers, 7 papers with code

Lero: A Learning-to-Rank Query Optimizer

1 code implementation 14 Feb 2023 Rong Zhu, Wei Chen, Bolin Ding, Xingguang Chen, Andreas Pfadler, Ziniu Wu, Jingren Zhou

In this paper, we introduce a learning-to-rank query optimizer, called Lero, which builds on top of a native query optimizer and continuously learns to improve the optimization performance.

Binary Classification, Learning-To-Rank
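
The learning-to-rank idea can be made concrete with a small sketch: a binary classifier scores pairs of candidate plans and the optimizer keeps the pairwise winner. The `featurize` and `model` objects below are hypothetical placeholders (any scikit-learn-style classifier would do), not Lero's actual implementation.

```python
# Hypothetical sketch of pairwise plan selection in the learning-to-rank
# spirit: `model` is a binary classifier trained to predict whether the
# first plan of a pair is cheaper; `featurize` is a placeholder plan encoder.
import numpy as np

def featurize(plan):
    # Placeholder encoding, e.g. operator counts and estimated cardinalities.
    return np.asarray(plan["features"], dtype=float)

def prefers_first(model, plan_a, plan_b):
    # Binary classification on the concatenated pair of plan encodings.
    x = np.concatenate([featurize(plan_a), featurize(plan_b)])
    return model.predict_proba(x.reshape(1, -1))[0, 1] > 0.5

def pick_plan(model, candidate_plans):
    # Keep the pairwise winner; O(n) comparisons over the candidate list.
    best = candidate_plans[0]
    for plan in candidate_plans[1:]:
        if not prefers_first(model, best, plan):
            best = plan
    return best
```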

Robust Contextual Linear Bandits

no code implementations 26 Oct 2022 Rong Zhu, Branislav Kveton

Our experiments show that RoLinTS is comparably statistically efficient to the classic methods when the misspecification is low, more robust when the misspecification is high, and significantly more computationally efficient than its naive implementation.

Multi-Armed Bandits

Baihe: SysML Framework for AI-driven Databases

no code implementations 29 Dec 2021 Andreas Pfadler, Rong Zhu, Wei Chen, Botong Huang, Tianjing Zeng, Bolin Ding, Jingren Zhou

Based on the high level architecture, we then describe a concrete implementation of Baihe for PostgreSQL and present example use cases for learned query optimizers.

Glue: Adaptively Merging Single Table Cardinality to Estimate Join Query Size

no code implementations 7 Dec 2021 Rong Zhu, Tianjing Zeng, Andreas Pfadler, Wei Chen, Bolin Ding, Jingren Zhou

Cardinality estimation (CardEst), a central component of the query optimizer, plays a significant role in generating high-quality query plans in DBMS.

$\delta^2$-exploration for Reinforcement Learning

no code implementations 29 Sep 2021 Rong Zhu, Mattia Rigotti

Effectively tackling the exploration-exploitation dilemma is still a major challenge in reinforcement learning.

General Reinforcement Learning, Q-Learning, +2

Cardinality Estimation in DBMS: A Comprehensive Benchmark Evaluation

1 code implementation 13 Sep 2021 Yuxing Han, Ziniu Wu, Peizhi Wu, Rong Zhu, Jingyi Yang, Liang Wei Tan, Kai Zeng, Gao Cong, Yanzhao Qin, Andreas Pfadler, Zhengping Qian, Jingren Zhou, Jiangneng Li, Bin Cui

Therefore, we propose a new metric P-Error to evaluate the performance of CardEst methods, which overcomes the limitation of Q-Error and is able to reflect the overall end-to-end performance of CardEst methods.
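
For reference, the widely used Q-Error is the pointwise ratio shown in the sketch below; P-Error, as motivated above, additionally runs the estimates through the optimizer's cost model so that it reflects end-to-end plan quality, which is why that part is only described in a comment rather than implemented. This is a minimal sketch, not the paper's evaluation code.

```python
# Q-Error for a single (true, estimated) cardinality pair.
def q_error(true_card, est_card, eps=1.0):
    true_card = max(true_card, eps)
    est_card = max(est_card, eps)
    return max(true_card / est_card, est_card / true_card)

# P-Error (informally): compare the cost of the plan the optimizer chooses
# when fed the estimated cardinalities with the cost of the plan it would
# choose under the true cardinalities; this needs a DBMS cost model.

print(q_error(1000, 10))    # 100.0
print(q_error(1000, 1000))  # 1.0
```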

Random Effect Bandits

no code implementations 23 Jun 2021 Rong Zhu, Branislav Kveton

It is well known that side information, such as the prior distribution of arm means in Thompson sampling, can improve the statistical efficiency of the bandit algorithm.

Multi-Armed Bandits, Thompson Sampling
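
A minimal Gaussian Thompson sampling sketch illustrates how a prior over arm means acts as side information; it shows the classic baseline the sentence above refers to, not the random-effect algorithm proposed in the paper.

```python
# Gaussian Thompson sampling with a shared prior over arm means (sketch).
import numpy as np

def gaussian_thompson_sampling(true_means, prior_mean=0.0, prior_var=1.0,
                               noise_var=1.0, horizon=1000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    post_mean = np.full(k, prior_mean)
    post_var = np.full(k, prior_var)
    total_reward = 0.0
    for _ in range(horizon):
        # Sample a mean for each arm from its posterior and play the argmax.
        arm = int(np.argmax(rng.normal(post_mean, np.sqrt(post_var))))
        reward = rng.normal(true_means[arm], np.sqrt(noise_var))
        total_reward += reward
        # Conjugate Gaussian update for the played arm.
        precision = 1.0 / post_var[arm] + 1.0 / noise_var
        post_mean[arm] = (post_mean[arm] / post_var[arm] + reward / noise_var) / precision
        post_var[arm] = 1.0 / precision
    return total_reward

print(gaussian_thompson_sampling([0.1, 0.5, 0.9]))
```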

Guidance and Teaching Network for Video Salient Object Detection

no code implementations 21 May 2021 Yingxia Jiao, Xiao Wang, Yu-Cheng Chou, Shouyuan Yang, Ge-Peng Ji, Rong Zhu, Ge Gao

Owing to the difficulties of mining spatial-temporal cues, the existing approaches for video salient object detection (VSOD) are limited in understanding complex and noisy scenarios, and often fail in inferring prominent objects.

Object, object-detection, +2

Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks

1 code implementation NeurIPS 2021 Rong Zhu, Mattia Rigotti

Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment.

Efficient Exploration, Multi-Armed Bandits, +1
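
The Bayesian-exploration idea described above can be sketched with linear Thompson sampling: keep a Gaussian posterior over the parameters of a linear action-value model and act on a posterior sample. This is a generic illustration of the baseline, not the estimator proposed in the paper.

```python
# Generic linear Thompson sampling over the parameters of a linear
# action-value model (sketch).
import numpy as np

class LinearTS:
    def __init__(self, dim, noise_var=1.0, prior_var=1.0):
        self.A = np.eye(dim) / prior_var   # posterior precision matrix
        self.b = np.zeros(dim)             # precision-weighted posterior mean
        self.noise_var = noise_var

    def act(self, contexts, rng):
        # contexts: (n_actions, dim) feature matrix, one row per action.
        cov = np.linalg.inv(self.A)
        theta = rng.multivariate_normal(cov @ self.b, cov)
        return int(np.argmax(contexts @ theta))

    def update(self, x, reward):
        # Bayesian linear-regression update with known noise variance.
        self.A += np.outer(x, x) / self.noise_var
        self.b += reward * x / self.noise_var
```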

A Unified Transferable Model for ML-Enhanced DBMS

1 code implementation 6 May 2021 Ziniu Wu, Pei Yu, Peilun Yang, Rong Zhu, Yuxing Han, Yaliang Li, Defu Lian, Kai Zeng, Jingren Zhou

We propose to explore the transferabilities of the ML methods both across tasks and across DBs to tackle these fundamental drawbacks.

Management

Gradient descent temporal difference-difference learning

no code implementations 1 Jan 2021 Rong Zhu, James Murray

Off-policy learning algorithms, in which an agent updates the value function of the optimal policy while selecting actions using an independent exploration policy, provide an effective solution to the explore-exploit tradeoff and have proven to be of great practical value in reinforcement learning.
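
Plain off-policy TD(0) with per-step importance ratios makes the setting concrete; the paper studies gradient-TD-style updates, which this vanilla rule does not implement.

```python
# Vanilla off-policy TD(0) with importance-sampling ratios (sketch).
def off_policy_td0(V, transitions, alpha=0.1, gamma=0.99):
    # transitions: iterable of (s, r, s_next, pi_prob, behavior_prob), where
    # the two probabilities refer to the action actually taken.
    for s, r, s_next, pi_prob, behavior_prob in transitions:
        rho = pi_prob / behavior_prob            # importance ratio
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * rho * td_error
    return V
```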

BayesCard: Revitilizing Bayesian Frameworks for Cardinality Estimation

1 code implementation 29 Dec 2020 Ziniu Wu, Amir Shaikhha, Rong Zhu, Kai Zeng, Yuxing Han, Jingren Zhou

Recently proposed deep learning based methods largely improve the estimation accuracy, but their performance can be greatly affected by the data and they are often difficult to deploy in systems.

Probabilistic Programming

Efficient and Scalable Structure Learning for Bayesian Networks: Algorithms and Applications

no code implementations 7 Dec 2020 Rong Zhu, Andreas Pfadler, Ziniu Wu, Yuxing Han, Xiaoke Yang, Feng Ye, Zhengping Qian, Jingren Zhou, Bin Cui

To resolve this, we propose a new structure learning algorithm LEAST, which comprehensively fulfills our business requirements as it attains high accuracy, efficiency and scalability at the same time.

Anomaly Detection, Explainable Recommendation

Self-correcting Q-Learning

no code implementations 2 Dec 2020 Rong Zhu, Mattia Rigotti

The Q-learning algorithm is known to be affected by the maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention.

Q-Learning
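
The maximization bias is easy to reproduce numerically: even when every true action value is zero, the max over noisy estimates is clearly positive on average, as in this small sketch.

```python
# Tiny numerical illustration of the maximization bias.
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_trials = 10, 10_000
noisy_estimates = rng.normal(0.0, 1.0, size=(n_trials, n_actions))
print(noisy_estimates.max(axis=1).mean())   # roughly 1.54 instead of 0.0
```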

FSPN: A New Class of Probabilistic Graphical Model

no code implementations 18 Nov 2020 Ziniu Wu, Rong Zhu, Andreas Pfadler, Yuxing Han, Jiangneng Li, Zhengping Qian, Kai Zeng, Jingren Zhou

We introduce factorize sum split product networks (FSPNs), a new class of probabilistic graphical models (PGMs).

FLAT: Fast, Lightweight and Accurate Method for Cardinality Estimation

1 code implementation 18 Nov 2020 Rong Zhu, Ziniu Wu, Yuxing Han, Kai Zeng, Andreas Pfadler, Zhengping Qian, Jingren Zhou, Bin Cui

Despite decades of research, existing methods either oversimplify the models by using only independent factorization, which leads to inaccurate estimates, or overcomplicate them with lossless conditional factorization without any independence assumption, which results in slow probability computation.
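
A two-column toy example contrasts the two factorizations mentioned above: the independence assumption P(a, b) ≈ P(a)·P(b) is cheap but can be far off when columns are correlated, while the exact conditional factorization P(a)·P(b | a) is accurate but more expensive to store and evaluate.

```python
# Toy contrast of the two factorizations on a table with perfectly
# correlated columns.
import numpy as np

rows = np.array([(0, 0), (0, 0), (1, 1), (1, 1)])

true_sel = np.mean((rows[:, 0] == 0) & (rows[:, 1] == 0))        # 0.5
p_a = np.mean(rows[:, 0] == 0)                                    # 0.5
p_b = np.mean(rows[:, 1] == 0)                                    # 0.5
indep_sel = p_a * p_b                                             # 0.25, off by 2x
cond_sel = p_a * np.mean(rows[rows[:, 0] == 0, 1] == 0)           # 0.5, exact

print(true_sel, indep_sel, cond_sel)
```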

Learning Efficient Parameter Server Synchronization Policies for Distributed SGD

no code implementations ICLR 2020 Rong Zhu, Sheng Yang, Andreas Pfadler, Zhengping Qian, Jingren Zhou

We apply a reinforcement learning (RL) based approach to learning optimal synchronization policies used for Parameter Server-based distributed training of machine learning models with Stochastic Gradient Descent (SGD).

Q-Learning, Reinforcement Learning (RL)
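
As a rough, hypothetical sketch of the RL framing, a tabular Q-learning agent could decide at each step whether the workers synchronize now or keep running asynchronously; the states, actions, and rewards below are placeholders, not the paper's actual formulation.

```python
# Hypothetical tabular Q-learning loop over a two-action synchronization
# decision.
import random

ACTIONS = ("sync_now", "stay_async")

def choose_action(Q, state, eps=0.1):
    # Epsilon-greedy choice over the Q-table (missing entries default to 0).
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```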

AliGraph: A Comprehensive Graph Neural Network Platform

no code implementations 23 Feb 2019 Rong Zhu, Kun Zhao, Hongxia Yang, Wei Lin, Chang Zhou, Baole Ai, Yong Li, Jingren Zhou

An increasing number of machine learning tasks require dealing with large graph datasets, which capture rich and complex relationships among potentially billions of elements.

Distributed, Parallel, and Cluster Computing

Subsampled Optimization: Statistical Guarantees, Mean Squared Error Approximation, and Sampling Method

no code implementations 10 Apr 2018 Rong Zhu, Jiming Jiang

For optimization on large-scale data, exactly calculating the solution may be computationally difficult because of the large size of the data.

Gradient-based Sampling: An Adaptive Importance Sampling for Least-squares

no code implementations NeurIPS 2016 Rong Zhu

In modern data analysis, random sampling is an efficient and widely-used strategy to overcome the computational difficulties brought by large sample size.
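
A sketch of gradient-based importance sampling for least squares: draw points with probability proportional to the norm of their per-point gradient at a pilot estimate, then fit inverse-probability-weighted OLS on the subsample. The pilot step and weighting below follow common practice and are assumptions, not code from the paper.

```python
# Gradient-based importance sampling for least squares (sketch).
import numpy as np

def gradient_based_subsample_ols(X, y, m, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    # Pilot estimate from a small uniform subsample (an assumption here).
    pilot = rng.choice(n, size=min(n, 1000), replace=False)
    beta_pilot, *_ = np.linalg.lstsq(X[pilot], y[pilot], rcond=None)
    # Per-point gradient norm: |residual_i| * ||x_i||.
    resid = y - X @ beta_pilot
    grad_norm = np.abs(resid) * np.linalg.norm(X, axis=1)
    probs = grad_norm / grad_norm.sum()
    idx = rng.choice(n, size=m, replace=True, p=probs)
    # Inverse-probability-weighted least squares on the subsample.
    w = np.sqrt(1.0 / (probs[idx] * m))
    beta, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)
    return beta
```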

Optimal Subsampling for Large Sample Logistic Regression

no code implementations 3 Feb 2017 HaiYing Wang, Rong Zhu, Ping Ma

In this paper, we propose fast subsampling algorithms to efficiently approximate the maximum likelihood estimate in logistic regression.

regression
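
The general recipe can be sketched as: fit a pilot model on a uniform subsample, assign informative sampling probabilities (here the commonly used |y - p|·||x|| scores), and maximize the inverse-probability-weighted log-likelihood on the informative subsample with a few Newton steps. The exact probabilities and estimator in the paper may differ; this is an illustrative sketch only.

```python
# Pilot fit, informative subsampling, and weighted logistic MLE (sketch).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def weighted_logistic_mle(X, y, w, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        grad = X.T @ (w * (y - p))                 # weighted score
        H = -(X.T * (w * p * (1 - p))) @ X         # weighted Hessian
        beta -= np.linalg.solve(H, grad)           # Newton step
    return beta

def subsample_logistic_fit(X, y, m, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    pilot = rng.choice(n, size=min(n, m), replace=False)
    beta0 = weighted_logistic_mle(X[pilot], y[pilot], np.ones(len(pilot)))
    # Commonly used informative scores; the paper derives its own.
    scores = np.abs(y - sigmoid(X @ beta0)) * np.linalg.norm(X, axis=1)
    probs = scores / scores.sum()
    idx = rng.choice(n, size=m, replace=True, p=probs)
    return weighted_logistic_mle(X[idx], y[idx], 1.0 / (probs[idx] * n))
```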

Optimal Subsampling Approaches for Large Sample Linear Regression

no code implementations 17 Sep 2015 Rong Zhu, Ping Ma, Michael W. Mahoney, Bin Yu

For the unweighted estimation algorithm, we show that its resulting subsample estimator is not consistent with the full-sample OLS estimator.

regression
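
The two estimators contrasted above can be written out for given sampling probabilities: the unweighted estimator is plain OLS on the subsample, while the weighted estimator rescales each sampled row by its inverse inclusion probability. The sketch only spells out the estimators; the consistency analysis is in the paper.

```python
# Weighted vs. unweighted subsample OLS for given probabilities `probs`
# and sampled row indices `idx` (sketch).
import numpy as np

def subsample_ols(X, y, idx, probs=None):
    Xs, ys = X[idx], y[idx]
    if probs is not None:
        # Weighted estimator: rescale rows by inverse inclusion probabilities.
        w = np.sqrt(1.0 / probs[idx])
        Xs, ys = Xs * w[:, None], ys * w
    # With probs=None this is the unweighted estimator: plain OLS on the subsample.
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta
```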
