Search Results for author: James Zhang

Found 22 papers, 9 papers with code

Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook

5 code implementations • 16 Oct 2023 • Ming Jin, Qingsong Wen, Yuxuan Liang, Chaoli Zhang, Siqiao Xue, Xue Wang, James Zhang, Yi Wang, Haifeng Chen, XiaoLi Li, Shirui Pan, Vincent S. Tseng, Yu Zheng, Lei Chen, Hui Xiong

In this survey, we offer a comprehensive and up-to-date review of large models tailored (or adapted) for time series and spatio-temporal data, spanning four key facets: data types, model categories, model scopes, and application areas/tasks.

Time Series · Time Series Analysis

Continuous Invariance Learning

no code implementations • 9 Oct 2023 • Yong Lin, Fan Zhou, Lu Tan, Lintao Ma, Jiameng Liu, Yansu He, Yuan Yuan, Yu Liu, James Zhang, Yujiu Yang, Hao Wang

To address this challenge, we then propose Continuous Invariance Learning (CIL), which extracts invariant features across continuously indexed domains.

Cloud Computing

Deep Optimal Timing Strategies for Time Series

1 code implementation • 9 Oct 2023 • Chen Pan, Fan Zhou, Xuanwei Hu, Xinxin Zhu, Wenxin Ning, Zi Zhuang, Siqiao Xue, James Zhang, Yunhua Hu

Deciding the best future execution time is a critical task in many business activities that involve evolving time series forecasting, and an optimal timing strategy, driven by observed data, provides such a solution.

Probabilistic Time Series Forecasting · Time Series
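To make the optimal-timing idea above concrete, here is a minimal, hypothetical sketch: given sampled probabilistic forecasts of a cost over a horizon, pick the execution time with the lowest expected cost. The gamma-distributed cost forecasts and the expected-cost criterion are assumptions for illustration, not the authors' method.

```python
# Illustration only: choose the execution time with the lowest expected cost under a
# sampled probabilistic forecast. The "cost" forecast distribution is made up.
import numpy as np

rng = np.random.default_rng(1)
horizon, n_samples = 24, 200
forecast_samples = rng.gamma(shape=2.0,
                             scale=1.0 + 0.1 * np.arange(horizon),
                             size=(n_samples, horizon))   # samples[i, t] = i-th cost draw at time t

expected_cost = forecast_samples.mean(axis=0)   # E[cost_t] for each candidate execution time t
best_t = int(expected_cost.argmin())
print(f"execute at step {best_t}, expected cost {expected_cost[best_t]:.2f}")
```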

Continual Learning in Predictive Autoscaling

no code implementations • 29 Jul 2023 • Hongyan Hao, Zhixuan Chu, Shiyi Zhu, Gangwei Jiang, Yan Wang, Caigao Jiang, James Zhang, Wei Jiang, Siqiao Xue, Jun Zhou

To surmount this challenge and effectively integrate the new sample distribution, we propose a density-based sample selection strategy that uses kernel density estimation to compute each sample's density, derives sample weights from those densities, and employs weighted sampling to construct a new memory set.

Continual Learning · Density Estimation
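A minimal sketch of the density-based memory selection described above, assuming an inverse-density weighting (the paper's exact weighting may differ); `select_memory` and its parameters are hypothetical names chosen for illustration, not the authors' implementation.

```python
# Hypothetical sketch: estimate each sample's density with a KDE, weight samples
# (here: inversely to density, an assumption), and draw a weighted subsample as the
# new memory set.
import numpy as np
from scipy.stats import gaussian_kde

def select_memory(samples: np.ndarray, memory_size: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    density = gaussian_kde(samples.T)(samples.T)      # density estimate per sample
    weights = 1.0 / (density + 1e-12)                 # rarer samples get larger weight
    weights /= weights.sum()
    idx = rng.choice(len(samples), size=memory_size, replace=False, p=weights)
    return samples[idx]

memory = select_memory(np.random.randn(1000, 2), memory_size=100)
print(memory.shape)   # (100, 2)
```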

Automatic Deduction Path Learning via Reinforcement Learning with Environmental Correction

no code implementations • 16 Jun 2023 • Shuai Xiao, Chen Pan, Min Wang, Xinxin Zhu, Siqiao Xue, Jing Wang, Yunhua Hu, James Zhang, Jinghua Feng

To this end, we formulate the problem as a partially observable Markov decision process (POMDP) and employ an environment correction algorithm based on the characteristics of the business.

Hierarchical Reinforcement Learning · reinforcement-learning

Full Scaling Automation for Sustainable Development of Green Data Centers

1 code implementation • 1 May 2023 • Shiyu Wang, Yinbo Sun, Xiaoming Shi, Shiyi Zhu, Lin-Tao Ma, James Zhang, Yifei Zheng, Jian Liu

The rapid rise in cloud computing has resulted in an alarming increase in data centers' carbon emissions, which now account for >3% of global greenhouse gas emissions, necessitating immediate steps to combat their mounting strain on the global climate.

Cloud Computing · Representation Learning

SLOTH: Structured Learning and Task-based Optimization for Time Series Forecasting on Hierarchies

no code implementations • 11 Feb 2023 • Fan Zhou, Chen Pan, Lintao Ma, Yu Liu, Shiyu Wang, James Zhang, Xinxin Zhu, Xuanwei Hu, Yunhua Hu, Yangfei Zheng, Lei Lei, Yun Hu

Moreover, unlike most previous reconciliation methods, which either rely on strong assumptions or focus only on coherency constraints, we utilize deep neural optimization networks, which not only achieve coherency without any assumptions but also allow more flexible and realistic constraints to achieve task-based targets, e.g., a lower under-estimation penalty and a meaningful decision-making loss that facilitate the subsequent downstream tasks.

Decision Making · Multivariate Time Series Forecasting · +1
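As a point of reference for the reconciliation discussed above, one simple way to impose coherency without distributional assumptions is to project arbitrary forecasts onto the subspace spanned by the summing matrix. The two-leaf hierarchy below is made up, and this generic projection trick is not necessarily the optimization network used in SLOTH.

```python
# Illustration only: make forecasts coherent by orthogonally projecting them onto
# span(S), where S is the summing matrix of a (hypothetical) two-leaf hierarchy.
import numpy as np

S = np.array([[1., 1.],    # total = A + B
              [1., 0.],    # region A
              [0., 1.]])   # region B
P = S @ np.linalg.inv(S.T @ S) @ S.T   # orthogonal projection onto span(S)

raw = np.array([28.0, 10.0, 15.0])     # incoherent: 10 + 15 != 28
coherent = P @ raw
print(coherent, np.isclose(coherent[0], coherent[1] + coherent[2]))   # coherent now
```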

End-to-End Modeling Hierarchical Time Series Using Autoregressive Transformer and Conditional Normalizing Flow based Reconciliation

1 code implementation • 28 Dec 2022 • Shiyu Wang, Fan Zhou, Yinbo Sun, Lintao Ma, James Zhang, Yangfei Zheng, Bo Zheng, Lei Lei, Yun Hu

Multivariate time series forecasting with hierarchical structure is pervasive in real-world applications, demanding not only predicting each level of the hierarchy, but also reconciling all forecasts to ensure coherency, i.e., the forecasts should satisfy the hierarchical aggregation constraints.

Multivariate Time Series Forecasting · Time Series
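The hierarchical aggregation constraint mentioned above can be written as y = S b, where S is the summing matrix and b the bottom-level forecasts. The toy hierarchy below (total = region A + region B) is purely illustrative and is not taken from the paper.

```python
# Toy illustration of the hierarchical aggregation constraint y = S b.
import numpy as np

S = np.array([[1, 1],     # total
              [1, 0],     # region_A
              [0, 1]])    # region_B
bottom = np.array([10.0, 15.0])
coherent = S @ bottom                       # [25., 10., 15.] is coherent by construction

candidate = np.array([27.0, 10.0, 15.0])    # violates the constraint: 10 + 15 != 27
print(np.allclose(candidate, S @ candidate[1:]))   # False -> needs reconciliation
print(np.allclose(coherent, S @ coherent[1:]))     # True
```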

A Graph Regularized Point Process Model For Event Propagation Sequence

no code implementations • 21 Nov 2022 • Siqiao Xue, Xiaoming Shi, Hongyan Hao, Lintao Ma, Shiyu Wang, Shijun Wang, James Zhang

Point processes are the dominant paradigm for modeling event sequences that occur at irregular intervals.

Digital Human Interactive Recommendation Decision-Making Based on Reinforcement Learning

no code implementations • 6 Oct 2022 • Xiong Junwu, Xiaoyun Feng, Yunzhou Shi, James Zhang, Zhongzhou Zhao, Wei Zhou

Our proposed framework learns dynamically from real-time interactions between the digital human and customers, using state-of-the-art RL algorithms combined with multimodal embedding and graph embedding, to improve the accuracy of personalization and thus enable the digital human agent to catch the customer's attention in a timely manner.

Decision Making · Graph Embedding · +2

Learning Large-scale Universal User Representation with Sparse Mixture of Experts

no code implementations • 11 Jul 2022 • Caigao Jiang, Siqiao Xue, James Zhang, Lingyue Liu, Zhibo Zhu, Hongyan Hao

However, unlike in natural language processing (NLP) tasks, the parameters of a user behaviour model come mostly from the user embedding layer, which causes most existing works to fail at training a universal user embedding at large scale.

A Meta Reinforcement Learning Approach for Predictive Autoscaling in the Cloud

1 code implementation • 31 May 2022 • Siqiao Xue, Chao Qu, Xiaoming Shi, Cong Liao, Shiyi Zhu, Xiaoyu Tan, Lintao Ma, Shiyu Wang, Shijun Wang, Yun Hu, Lei Lei, Yangfei Zheng, Jianguo Li, James Zhang

Predictive autoscaling (autoscaling with workload forecasting) is an important mechanism that supports autonomous adjustment of computing resources in accordance with fluctuating workload demands in the Cloud.

Decision Making · Management · +3
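For contrast with the learned meta-RL policy described above, a naive rule-based predictive-autoscaling baseline simply converts a workload forecast into a replica count. The per-replica capacity and headroom values below are assumptions, not figures from the paper.

```python
# A naive rule-based baseline for predictive autoscaling (illustration only).
import math

def target_replicas(forecast_qps: float, qps_per_replica: float = 500.0,
                    headroom: float = 1.2, min_replicas: int = 1) -> int:
    """Provision enough replicas for the forecast workload plus a safety headroom."""
    return max(min_replicas, math.ceil(forecast_qps * headroom / qps_per_replica))

print(target_replicas(4200.0))   # -> 11 replicas for a forecast of 4200 QPS
```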

Bellman Meets Hawkes: Model-Based Reinforcement Learning via Temporal Point Processes

1 code implementation • 29 Jan 2022 • Chao Qu, Xiaoyu Tan, Siqiao Xue, Xiaoming Shi, James Zhang, Hongyuan Mei

We consider a sequential decision-making problem in which the agent faces an environment characterized by stochastic discrete events and seeks an optimal intervention policy that maximizes its long-term reward.

Decision Making · Model-based Reinforcement Learning · +3
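The stochastic discrete events referenced above are typically modeled with a temporal point process; the exponential-kernel Hawkes intensity below is the standard textbook form, with made-up parameters, and is not taken from the paper's code.

```python
# Standard exponential-kernel Hawkes intensity:
# lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
import numpy as np

def hawkes_intensity(t: float, history: np.ndarray,
                     mu: float = 0.2, alpha: float = 0.8, beta: float = 1.0) -> float:
    past = history[history < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([0.5, 1.2, 1.3])
print(hawkes_intensity(2.0, events))   # intensity is elevated right after a burst of events
```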

Unit Ball Model for Embedding Hierarchical Structures in the Complex Hyperbolic Space

1 code implementation • NeurIPS 2021 • Huiru Xiao, Caigao Jiang, Yangqiu Song, James Zhang, Junwu Xiong

Specifically, we propose to learn the embeddings of hierarchically structured data in the unit ball model of the complex hyperbolic space.

Representation Learning
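For intuition about why hyperbolic balls suit hierarchies, here is the distance in the real Poincaré ball, where distances blow up near the boundary so tree-like data embeds with low distortion. The paper itself works in the unit ball model of the complex hyperbolic space, which this sketch does not implement.

```python
# For intuition only: distance in the *real* Poincare ball.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    diff = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * diff / ((1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v))))

root, leaf = np.array([0.0, 0.0]), np.array([0.0, 0.9])
print(poincare_distance(root, leaf))   # much larger than the Euclidean distance of 0.9
```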

Model Embedding Model-Based Reinforcement Learning

no code implementations • 16 Jun 2020 • Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang

Model-based reinforcement learning (MBRL) has shown its advantages in sample-efficiency over model-free reinforcement learning (MFRL).

Model-based Reinforcement Learning · reinforcement-learning · +1

Neural Physicist: Learning Physical Dynamics from Image Sequences

no code implementations • 9 Jun 2020 • Baocheng Zhu, Shijun Wang, James Zhang

In this paper, by leveraging recent progress in representation learning and state space models (SSMs), we propose NeurPhy, which uses a variational auto-encoder (VAE) to extract the underlying Markovian dynamic state at each time step, a neural process (NP) to extract the global system parameters, and a non-linear, non-recurrent stochastic state space model to learn the physical dynamic transition.

Representation Learning

Riemannian Proximal Policy Optimization

no code implementations • 19 May 2020 • Shijun Wang, Baocheng Zhu, Chen Li, Mingzhe Wu, James Zhang, Wei Chu, Yuan Qi

In this paper, we propose a general Riemannian proximal optimization algorithm with guaranteed convergence to solve Markov decision process (MDP) problems.

Variational Policy Propagation for Multi-agent Reinforcement Learning

no code implementations • 19 Apr 2020 • Chao Qu, Hui Li, Chang Liu, Junwu Xiong, James Zhang, Wei Chu, Weiqiang Wang, Yuan Qi, Le Song

We propose a collaborative multi-agent reinforcement learning algorithm named variational policy propagation (VPP) to learn a joint policy through interactions among agents.

Multi-agent Reinforcement Learning · reinforcement-learning · +2

S2VG: Soft Stochastic Value Gradient method

no code implementations • 25 Sep 2019 • Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang

In this paper, we propose a simple and elegant model-based reinforcement learning algorithm called soft stochastic value gradient method (S2VG).

Model-based Reinforcement Learning · reinforcement-learning · +1

Anomaly detection in wide area network mesh using two machine learning anomaly detection algorithms

no code implementations • 30 Jan 2018 • James Zhang, Ilija Vukotic, Robert Gardner

Anomaly detection is the practice of identifying items or events that do not conform to an expected behavior or do not correlate with other items in a dataset.

Anomaly Detection · BIG-bench Machine Learning · +2
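As a toy illustration of the task defined above (not one of the two algorithms the paper evaluates), a simple z-score rule flags measurements that deviate strongly from the rest of a latency series:

```python
# Toy z-score detector for a latency series, shown only to make the task concrete.
import numpy as np

def zscore_anomalies(series: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    z = (series - series.mean()) / (series.std() + 1e-12)
    return np.flatnonzero(np.abs(z) > threshold)

rng = np.random.default_rng(0)
latency_ms = np.concatenate([rng.normal(50.0, 2.0, 500), [120.0]])   # one injected spike
print(zscore_anomalies(latency_ms))   # index of the anomalous measurement
```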
