Search Results for author: Bin Shi

Found 19 papers, 4 papers with code

Teaching MLP More Graph Information: A Three-stage Multitask Knowledge Distillation Framework

no code implementations • 2 Mar 2024 • Junxian Li, Bin Shi, Erfei Cui, Hua Wei, Qinghua Zheng

To the best of our knowledge, this is the first work to apply hidden-layer distillation to a student MLP on graphs and to combine graph positional encoding with an MLP.

Knowledge Distillation
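The hidden-layer distillation idea above can be sketched as a loss that combines Hinton-style soft-label matching on the logits with an alignment term on hidden features. This is an illustrative sketch only, not the paper's implementation; the function names, the MSE alignment choice, and the `T`/`alpha` defaults are assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      T=2.0, alpha=0.5):
    """Soft-label KD (temperature-scaled cross-entropy) plus an
    MSE alignment term between student and teacher hidden features."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    # Cross-entropy of teacher soft labels against student predictions,
    # rescaled by T^2 as is conventional for distillation gradients.
    kd = -(p_t * np.log(p_s + 1e-12)).sum(axis=1).mean() * T * T
    # Hidden-layer alignment: penalize feature mismatch directly.
    hidden = ((student_hidden - teacher_hidden) ** 2).mean()
    return alpha * kd + (1 - alpha) * hidden
```

In a three-stage setup, a term like `hidden` would let the student MLP imitate intermediate GNN representations, not just final predictions.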

GraphPub: Generation of Differential Privacy Graph with High Availability

no code implementations • 28 Feb 2024 • Wanghan Xu, Bin Shi, Ao Liu, Jiqiang Zhang, Bo Dong

In recent years, with the rapid development of graph neural networks (GNN), more and more graph datasets have been published for GNN tasks.

Interpretable Geoscience Artificial Intelligence (XGeoS-AI): Application to Demystify Image Recognition

no code implementations • 8 Nov 2023 • Jin-Jian Xu, Hao Zhang, Chao-Sheng Tang, Lin Li, Bin Shi

Experimental results demonstrate the effectiveness and versatility of the proposed framework, which shows great potential for solving geoscience image recognition problems.

Computed Tomography (CT)

Uncertainty-aware Traffic Prediction under Missing Data

1 code implementation • 13 Sep 2023 • Hao Mei, Junxian Li, Zhiming Liang, Guanjie Zheng, Bin Shi, Hua Wei

However, most studies assume that prediction locations have complete, or at least partial, historical records, and their methods cannot be extended to locations with no historical records at all.

Decision Making Traffic Prediction +1

Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity

no code implementations • 16 Jun 2023 • Bowen Li, Bin Shi, Ya-xiang Yuan

A significant milestone in modern gradient-based optimization was achieved with the development of Nesterov's accelerated gradient descent (NAG) method.

On Underdamped Nesterov's Acceleration

no code implementations • 28 Apr 2023 • Shuo Chen, Bin Shi, Ya-xiang Yuan

In this paper, based on the high-resolution differential equation framework, we construct new Lyapunov functions for the underdamped case, motivated by the power of the time $t^{\gamma}$ or the iteration $k^{\gamma}$ in the mixed term.

Reinforcement Learning Approaches for Traffic Signal Control under Missing Data

1 code implementation • 21 Apr 2023 • Hao Mei, Junxian Li, Bin Shi, Hua Wei

In this work, we aim to control traffic signals in a real-world setting where some intersections in the road network have no sensors installed and thus provide no direct observations of their surroundings.

reinforcement-learning Reinforcement Learning (RL)

Linear Convergence of ISTA and FISTA

no code implementations • 13 Dec 2022 • Bowen Li, Bin Shi, Ya-xiang Yuan

Specifically, assuming that the smooth part is strongly convex is more reasonable for the least-squares model, even though the image matrix is likely ill-conditioned.
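ISTA itself is compact enough to state in a few lines. Below is a standard textbook sketch (not code from the paper) for the $\ell_1$-regularized least-squares model the abstract refers to; the function names and the step size $1/L$ are conventional choices, not taken from this work:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, steps=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Uses the fixed step 1/L with L = ||A||_2^2 (Lipschitz constant
    of the gradient of the smooth part)."""
    L = np.linalg.norm(A, 2) ** 2      # largest singular value, squared
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)       # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

FISTA adds a Nesterov-style momentum step on top of this same proximal-gradient update; the strong-convexity assumption discussed in the abstract is what upgrades the sublinear rate to a linear one.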

LibSignal: An Open Library for Traffic Signal Control

2 code implementations • 19 Nov 2022 • Hao Mei, Xiaoliang Lei, Longchao Da, Bin Shi, Hua Wei

This paper introduces a library for cross-simulator comparison of reinforcement learning models in traffic signal control tasks.

reinforcement-learning Reinforcement Learning (RL)

Proximal Subgradient Norm Minimization of ISTA and FISTA

no code implementations • 3 Nov 2022 • Bowen Li, Bin Shi, Ya-xiang Yuan

We apply the tighter inequality discovered for the well-constructed Lyapunov function and then obtain proximal subgradient norm minimization via the phase-space representation, whether the scheme is formulated with gradient correction or with implicit velocity.

Gradient Norm Minimization of Nesterov Acceleration: $o(1/k^3)$

no code implementations • 19 Sep 2022 • Shuo Chen, Bin Shi, Ya-xiang Yuan

In the history of first-order algorithms, Nesterov's accelerated gradient descent (NAG) is one of the milestones.

Open-Ended Question Answering
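For reference, the standard NAG iteration for smooth convex objectives, with the usual $(k-1)/(k+2)$ momentum weight, can be sketched as follows. This is a generic textbook version, not the paper's code; the function signature is mine:

```python
import numpy as np

def nag(grad, x0, step, iters=200):
    """Nesterov's accelerated gradient descent for smooth convex f.

    x_{k+1} = y_k - s * grad(y_k)
    y_{k+1} = x_{k+1} + (k-1)/(k+2) * (x_{k+1} - x_k)
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    for k in range(1, iters + 1):
        x_next = y - step * grad(y)                    # gradient step at y
        y = x_next + (k - 1) / (k + 2) * (x_next - x)  # momentum extrapolation
        x = x_next
    return x
```

The $o(1/k^3)$ result in the title concerns the decay of the (minimal) gradient norm along exactly this kind of trajectory, beyond the classical $O(1/k^2)$ rate for function values.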

Modeling Network-level Traffic Flow Transitions on Sparse Data

1 code implementation • 13 Aug 2022 • Xiaoliang Lei, Hao Mei, Bin Shi, Hua Wei

DTIGNN models the traffic system as a dynamic graph influenced by traffic signals, learns the transition models grounded by fundamental transition equations from transportation, and predicts future traffic states with imputation in the process.

Decision Making Imputation

An adjoint-free algorithm for conditional nonlinear optimal perturbations (CNOPs) via sampling

no code implementations • 1 Aug 2022 • Bin Shi, Guodong Sun

In this paper, we propose a sampling algorithm based on state-of-the-art statistical machine learning techniques to obtain conditional nonlinear optimal perturbations (CNOPs), which differs from traditional (deterministic) optimization methods. Specifically, the traditional approach is often impractical because it requires numerically computing the gradient (first-order information), which is computationally expensive: it demands a large number of runs of the numerical model.
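A sampling-based, adjoint-free gradient estimate of the general kind the abstract alludes to can be illustrated with a generic antithetic Monte-Carlo estimator. This is an evolution-strategies-style sketch under my own naming, not the paper's algorithm, and each evaluation of `f` stands in for one run of the numerical model:

```python
import numpy as np

def sampled_gradient(f, x, sigma=0.1, n=2000, rng=None):
    """Antithetic Monte-Carlo gradient estimate: for u ~ N(0, I),
    E[(f(x + sigma*u) - f(x - sigma*u)) * u] / (2*sigma) ~ grad f(x).

    Only function evaluations are needed -- no adjoint model."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.standard_normal((n, x.size))
    df = np.array([f(x + sigma * ui) - f(x - sigma * ui) for ui in u])
    return (df[:, None] * u).mean(axis=0) / (2 * sigma)
```

The trade-off is the usual one: the estimator needs many samples, but each sample is an independent forward run, so the cost parallelizes trivially.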

On the Hyperparameters in Stochastic Gradient Descent with Momentum

no code implementations • 9 Aug 2021 • Bin Shi

By comparison, we demonstrate how the optimal linear convergence rate and the final gap, which for plain SGD depend only on the learning rate, vary as the momentum coefficient increases from zero to one once momentum is introduced.
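The heavy-ball SGD update being analyzed can be sketched generically as follows (an illustrative sketch; the parameter names and the additive-noise model are my assumptions, not the paper's setup):

```python
import numpy as np

def sgd_momentum(grad, x0, lr=0.1, beta=0.9, iters=100, noise=0.0, rng=None):
    """Heavy-ball SGD:  v <- beta*v + g(x),  x <- x - lr*v,
    where g(x) = grad(x) plus optional Gaussian noise (stochastic gradient)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x) + noise * rng.standard_normal(x.shape)
        v = beta * v + g        # accumulate momentum
        x = x - lr * v          # descend along the velocity
    return x
```

With `beta = 0` this reduces to plain SGD, whose rate and final gap are governed by `lr` alone; sweeping `beta` from 0 to 1 traces exactly the dependence the abstract describes.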

On Learning Rates and Schrödinger Operators

no code implementations • 15 Apr 2020 • Bin Shi, Weijie J. Su, Michael I. Jordan

In this paper, we present a general theoretical analysis of the effect of the learning rate in stochastic gradient descent (SGD).

Acceleration via Symplectic Discretization of High-Resolution Differential Equations

no code implementations • NeurIPS 2019 • Bin Shi, Simon S. Du, Weijie J. Su, Michael I. Jordan

We study first-order optimization methods obtained by discretizing ordinary differential equations (ODEs) corresponding to Nesterov's accelerated gradient methods (NAGs) and Polyak's heavy-ball method.

Vocal Bursts Intensity Prediction
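Symplectic discretization can be illustrated on the simplest case: semi-implicit (symplectic) Euler for a harmonic oscillator, which keeps the energy bounded over long horizons where explicit Euler would drift. This is a generic numerical-analysis sketch, not the scheme from the paper:

```python
def symplectic_euler(grad, x0, v0, h, steps):
    """Symplectic (semi-implicit) Euler for the second-order ODE
    x'' = -grad f(x), written as v' = -grad f(x), x' = v.

    The key detail: update the velocity first, then advance the
    position using the *new* velocity. This one-line asymmetry is
    what makes the map symplectic (area-preserving)."""
    x, v = float(x0), float(v0)
    for _ in range(steps):
        v = v - h * grad(x)
        x = x + h * v
    return x, v
```

For f(x) = x²/2 (harmonic oscillator), the scheme exactly conserves a modified energy, so 0.5·(x² + v²) oscillates within an O(h) band around its initial value instead of growing; this long-time stability is why symplectic integrators are natural companions for the ODEs underlying NAG and heavy-ball.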

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations

no code implementations • 21 Oct 2018 • Bin Shi, Simon S. Du, Michael I. Jordan, Weijie J. Su

We also show that these ODEs are more accurate surrogates for the underlying algorithms; in particular, they not only distinguish between NAG-SC and Polyak's heavy-ball method, but they allow the identification of a term that we refer to as "gradient correction" that is present in NAG-SC but not in the heavy-ball method and is responsible for the qualitative difference in convergence of the two methods.

Vocal Bursts Intensity Prediction

A Conservation Law Method in Optimization

no code implementations • 27 Aug 2017 • Bin Shi

We propose algorithms to find local minima in nonconvex optimization and, to some degree, to obtain global minima, based on Newton's second law without friction.

Friction
