Search Results for author: Zijian Gao

Found 9 papers, 0 papers with code

Random forest model identifies serve strength as a key predictor of tennis match outcome

no code implementations · 8 Oct 2019 · Zijian Gao, Amanda Kowalczyk

We compiled, cleaned, and used the largest database of tennis match information to date to predict match outcome using fairly simple machine learning methods.

BIG-bench Machine Learning · Sports Analytics

KnowRU: Knowledge Reusing via Knowledge Distillation in Multi-agent Reinforcement Learning

no code implementations · 27 Mar 2021 · Zijian Gao, Kele Xu, Bo Ding, Huaimin Wang, Yiying Li, Hongda Jia

In this paper, we propose a method named "KnowRU" for knowledge reuse, which can be easily deployed in the majority of multi-agent reinforcement learning algorithms without complicated hand-coded design.

Knowledge Distillation · Multi-agent Reinforcement Learning · +2
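The knowledge-reuse idea above can be sketched as a distillation loss: a student agent is penalized for diverging from a teacher agent's (temperature-softened) action distribution. This is a minimal illustration under assumed details — the snippet does not specify KnowRU's actual loss or architecture:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete action distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Soften both policies with a temperature, then measure how far the
    # student's action distribution is from the teacher's.
    p = softmax(np.asarray(teacher_logits, dtype=float) / temperature)
    q = softmax(np.asarray(student_logits, dtype=float) / temperature)
    return kl_divergence(p, q)

# Hypothetical logits: a student whose policy matches the teacher incurs
# zero distillation loss; a mismatched one incurs a positive loss.
aligned = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In practice such a term would be added as an auxiliary loss alongside each agent's ordinary RL objective, which is what makes the scheme easy to bolt onto most MARL algorithms.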

KnowSR: Knowledge Sharing among Homogeneous Agents in Multi-agent Reinforcement Learning

no code implementations · 25 May 2021 · Zijian Gao, Kele Xu, Bo Ding, Huaimin Wang, Yiying Li, Hongda Jia

In this paper, we present KnowSR, an adaptation method for the majority of multi-agent reinforcement learning (MARL) algorithms that takes advantage of the differences in learning between agents.

Knowledge Distillation · Multi-agent Reinforcement Learning · +2

Coarse to Fine: Video Retrieval before Moment Localization

no code implementations · 14 Oct 2021 · Zijian Gao, Huanyu Liu, Jingyu Liu

The current state-of-the-art methods for video corpus moment retrieval (VCMR) often use a similarity-based feature alignment approach for the sake of convenience and speed.

Moment Retrieval · Retrieval · +2

CLIP2TV: Align, Match and Distill for Video-Text Retrieval

no code implementations · 10 Nov 2021 · Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao

Modern video-text retrieval frameworks generally consist of three parts: a video encoder, a text encoder, and a similarity head.

Ranked #12 on Video Retrieval on MSR-VTT-1kA (using extra training data)

Representation Learning · Retrieval · +2
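The simplest form a similarity head can take is plain cosine similarity between the two encoders' outputs. The sketch below assumes pre-computed embeddings and is an illustration of the general framework, not CLIP2TV's actual head:

```python
import numpy as np

def cosine_similarity_matrix(video_emb, text_emb):
    # L2-normalize each row, then take all pairwise dot products.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return v @ t.T  # shape: (n_videos, n_texts)

# Hypothetical 2-D embeddings where each video aligns with one caption.
videos = np.array([[1.0, 0.0], [0.0, 1.0]])
texts = np.array([[2.0, 0.1], [0.1, 3.0]])
sim = cosine_similarity_matrix(videos, texts)
# Text-to-video retrieval then reduces to an argmax over this matrix.
best_text_per_video = sim.argmax(axis=1)
```

Aligning, matching, and distilling in the paper's title all operate on top of such a video-text similarity matrix.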

Nuclear Norm Maximization Based Curiosity-Driven Learning

no code implementations · 21 May 2022 · Chao Chen, Zijian Gao, Kele Xu, Sen yang, Yiying Li, Bo Ding, Dawei Feng, Huaimin Wang

To handle the sparsity of extrinsic rewards in reinforcement learning, researchers have proposed intrinsic rewards, which enable the agent to learn skills that might come in handy for pursuing rewards in the future, such as encouraging the agent to visit novel states.

Atari Games

Self-Supervised Exploration via Temporal Inconsistency in Reinforcement Learning

no code implementations · 24 Aug 2022 · Zijian Gao, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, XinJun Mao, Huaimin Wang

Our method involves training a self-supervised prediction model, saving snapshots of the model parameters, and using the nuclear norm to evaluate the temporal inconsistency between the predictions of different snapshots as intrinsic rewards.

reinforcement-learning · Reinforcement Learning (RL)
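The reward described above can be sketched directly: stack the predictions of two parameter snapshots for a batch of states and score their disagreement with the nuclear norm (the sum of singular values). The snapshot shapes and toy data are assumptions for illustration:

```python
import numpy as np

def nuclear_norm(m):
    # Nuclear norm = sum of singular values of the matrix.
    return float(np.linalg.svd(m, compute_uv=False).sum())

def intrinsic_reward(pred_old, pred_new):
    # Temporal inconsistency between two snapshots' predictions,
    # measured on the same batch of states.
    return nuclear_norm(np.asarray(pred_new) - np.asarray(pred_old))

rng = np.random.default_rng(0)
old = rng.normal(size=(8, 4))                       # snapshot t-1 predictions
new_similar = old + 1e-3 * rng.normal(size=(8, 4))  # model barely changed
new_changed = rng.normal(size=(8, 4))               # model changed a lot

r_small = intrinsic_reward(old, new_similar)
r_large = intrinsic_reward(old, new_changed)
```

Regions of the state space where the model's predictions still drift between snapshots yield larger rewards, steering exploration toward poorly learned states.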

Dynamic Memory-based Curiosity: A Bootstrap Approach for Exploration

no code implementations · 24 Aug 2022 · Zijian Gao, Yiying Li, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, XinJun Mao, Huaimin Wang

Curiosity is aroused when the memorized information cannot handle the current state; the information gap between the dual learners is formulated as the intrinsic reward for agents, and the state information is then consolidated into the dynamic memory.

Reinforcement Learning (RL)
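A minimal sketch of the dual-learner idea, under assumed details (the update rules and the EMA-as-memory choice below are illustrative, not the paper's exact design): a fast learner adapts to each visited state, a slow learner tracking it via an exponential moving average plays the role of the dynamic memory, and the gap between their outputs on the current state is the intrinsic reward.

```python
import numpy as np

class DualLearnerCuriosity:
    def __init__(self, dim, lr=0.5, momentum=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.fast = rng.normal(size=(dim, dim)) * 0.1  # fast learner's weights
        self.slow = self.fast.copy()                   # memorized (slow) copy
        self.lr, self.momentum = lr, momentum

    def step(self, state):
        state = np.asarray(state, dtype=float)
        # Intrinsic reward: disagreement between the two learners' outputs.
        gap = float(np.linalg.norm((self.fast - self.slow) @ state))
        # Fast learner adapts toward reproducing the state (toy update).
        err = state - self.fast @ state
        self.fast += self.lr * np.outer(err, state) / (state @ state + 1e-8)
        # Consolidate into the dynamic memory: slow EMA toward the fast learner.
        self.slow = self.momentum * self.slow + (1 - self.momentum) * self.fast
        return gap

curiosity = DualLearnerCuriosity(dim=4)
s = np.array([1.0, 0.5, -0.3, 2.0])
rewards = [curiosity.step(s) for _ in range(50)]
```

Repeated visits to the same state shrink the gap, so the intrinsic reward fades for familiar states while staying high for ones the memory cannot yet handle.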
