Search Results for author: Yuan-Fang Wang

Found 19 papers, 8 papers with code

FACTS: Fine-Grained Action Classification for Tactical Sports

no code implementations 21 Dec 2024 Christopher Lai, Jason Mo, Haotian Xia, Yuan-Fang Wang

Classifying fine-grained actions in fast-paced, close-combat sports such as fencing and boxing presents unique challenges due to the complexity, speed, and nuance of movements.

Action Classification, Classification +3

Language and Multimodal Models in Sports: A Survey of Datasets and Applications

no code implementations 18 Jun 2024 Haotian Xia, Zhengbang Yang, Yun Zhao, Yuqing Wang, Jingxi Li, Rhys Tracy, Zhuangdi Zhu, Yuan-Fang Wang, Hanjie Chen, Weining Shen

This survey provides a foundational resource for researchers and practitioners aiming to leverage NLP and multimodal models in sports, offering insights into current trends and future opportunities in the field.

Sports Analytics, Survey

SportQA: A Benchmark for Sports Understanding in Large Language Models

1 code implementation 24 Feb 2024 Haotian Xia, Zhengbang Yang, Yuqing Wang, Rhys Tracy, Yun Zhao, Dongdong Huang, Zezhi Chen, Yan Zhu, Yuan-Fang Wang, Weining Shen

A deep understanding of sports, a field rich in strategic and dynamic content, is crucial for advancing Natural Language Processing (NLP).

Few-Shot Learning, Multiple-choice +1

Advanced Volleyball Stats for All Levels: Automatic Setting Tactic Detection and Classification with a Single Camera

1 code implementation 26 Sep 2023 Haotian Xia, Rhys Tracy, Yun Zhao, Yuqing Wang, Yuan-Fang Wang, Weining Shen

Our frameworks combine setting ball trajectory recognition with a novel set trajectory classifier to generate comprehensive and advanced statistical data.

Computational Efficiency +1

Graph Encoding and Neural Network Approaches for Volleyball Analytics: From Game Outcome to Individual Play Predictions

no code implementations 22 Aug 2023 Rhys Tracy, Haotian Xia, Alex Rasla, Yuan-Fang Wang, Ambuj Singh

Our results show that the use of GNNs with our graph encoding yields a much more advanced analysis of the data, which noticeably improves prediction results overall.

Prediction, Type prediction

VREN: Volleyball Rally Dataset with Expression Notation Language

no code implementations 28 Sep 2022 Haotian Xia, Rhys Tracy, Yun Zhao, Erwan Fraisse, Yuan-Fang Wang, Linda Petzold

The second goal is to introduce a volleyball descriptive language to fully describe the rally processes in the games and apply the language to our dataset.

Decision Making, Descriptive +1

Auto-Encoder based Co-Training Multi-View Representation Learning

no code implementations 9 Jan 2022 Run-kun Lu, Jian-wei Liu, Yuan-Fang Wang, Hao-jie Xie, Xin Zuo

An auto-encoder is a deep learning method that learns latent features of raw data by reconstructing its input. Building on this, we propose a novel algorithm called Auto-encoder based Co-training Multi-View Learning (ACMVL), which exploits both complementarity and consistency to find a joint latent feature representation of multiple views.

Multi-View Learning, Representation Learning
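
The ACMVL entry above describes the core mechanism: per-view auto-encoders that reconstruct their own input while a joint latent representation is kept consistent across views. Below is a minimal, hypothetical sketch of that general idea, not the authors' implementation; the module names, layer sizes, loss weights, and the use of PyTorch are assumptions.

```python
# Minimal sketch of auto-encoder based multi-view representation learning.
# Not the authors' ACMVL code; the architecture, dimensions, and loss weights
# are illustrative assumptions. Requires PyTorch.
import torch
import torch.nn as nn

class ViewAutoEncoder(nn.Module):
    """One auto-encoder per view: encode to a latent code, decode to reconstruct."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def multi_view_loss(models, views, consistency_weight=0.1):
    """Reconstruction loss per view plus a consistency term pulling latent codes together."""
    mse = nn.MSELoss()
    latents, recon_loss = [], 0.0
    for model, x in zip(models, views):
        z, x_hat = model(x)
        latents.append(z)
        recon_loss = recon_loss + mse(x_hat, x)        # complementarity: each view reconstructs itself
    joint = torch.stack(latents).mean(dim=0)           # simple joint representation
    consistency = sum(mse(z, joint) for z in latents)  # consistency: latents agree across views
    return recon_loss + consistency_weight * consistency, joint

# Toy usage: two views of the same 8 samples.
views = [torch.randn(8, 20), torch.randn(8, 15)]
models = [ViewAutoEncoder(20), ViewAutoEncoder(15)]
loss, joint_repr = multi_view_loss(models, views)
loss.backward()
```

The consistency weight trades off view-specific reconstruction against cross-view agreement; how the joint representation is formed and weighted is an assumption here, not taken from the paper.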

VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research

4 code implementations ICCV 2019 Xin Wang, Jiawei Wu, Junkun Chen, Lei LI, Yuan-Fang Wang, William Yang Wang

We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context.

Machine Translation, Translation +3

MAN: Moment Alignment Network for Natural Language Moment Retrieval via Iterative Graph Adjustment

no code implementations CVPR 2019 Da Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang, Larry S. Davis

In this paper, we present Moment Alignment Network (MAN), a novel framework that unifies the candidate moment encoding and temporal structural reasoning in a single-shot feed-forward network.

Moment Retrieval, Natural Language Moment Retrieval +1

Dynamic Temporal Pyramid Network: A Closer Look at Multi-Scale Modeling for Activity Detection

no code implementations 7 Aug 2018 Da Zhang, Xiyang Dai, Yuan-Fang Wang

(3) We further exploit the temporal context of activities by appropriately fusing multi-scale feature maps, and demonstrate that both local and global temporal contexts are important.

Action Detection, Activity Detection +2

S3D: Single Shot multi-Span Detector via Fully 3D Convolutional Networks

1 code implementation 21 Jul 2018 Da Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang

In this paper, we present a novel Single Shot multi-Span Detector for temporal activity detection in long, untrimmed videos using a simple end-to-end fully three-dimensional convolutional (Conv3D) network.

Action Detection, Activity Detection
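
The S3D entry above outlines a single-shot, fully 3D convolutional approach to temporal activity detection in untrimmed video. The sketch below illustrates that general shape: a small Conv3D backbone that keeps a temporal feature map, followed by a per-position head predicting class scores and span offsets for a few anchor spans, SSD-style along the time axis. It is not the paper's architecture; the channel sizes, anchor count, head design, and use of PyTorch are assumptions.

```python
# Minimal sketch of a single-shot Conv3D temporal span detector in the spirit of S3D.
# Not the paper's architecture; layer sizes, anchor spans, and the prediction head
# are illustrative assumptions. Requires PyTorch.
import torch
import torch.nn as nn

class TinySpanDetector(nn.Module):
    def __init__(self, num_classes=20, num_anchors=3):
        super().__init__()
        # 3D conv backbone: collapses spatial dims, keeps a temporal feature map.
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=(2, 2, 2), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # -> (B, 32, T', 1, 1)
        )
        # Per temporal position, predict class scores and (center, length) offsets
        # for each anchor span, analogous to SSD but along the time axis only.
        self.head = nn.Conv1d(32, num_anchors * (num_classes + 2), kernel_size=3, padding=1)

    def forward(self, clip):                                 # clip: (B, 3, T, H, W)
        feat = self.backbone(clip).squeeze(-1).squeeze(-1)   # (B, 32, T')
        return self.head(feat)                               # (B, num_anchors*(num_classes+2), T')

# Toy usage: one 16-frame 64x64 clip.
out = TinySpanDetector()(torch.randn(1, 3, 16, 64, 64))
print(out.shape)   # torch.Size([1, 66, 8])
```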

No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling

2 code implementations ACL 2018 Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang

Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem.

Image Captioning, Reinforcement Learning +1

Deep Reinforcement Learning for Visual Object Tracking in Videos

no code implementations 31 Jan 2017 Da Zhang, Hamid Maei, Xin Wang, Yuan-Fang Wang

In this paper we introduce a fully end-to-end approach for visual tracking in videos that learns to predict the bounding box locations of a target object at every frame.

Decision Making, Deep Reinforcement Learning +6

Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer

2 code implementations CVPR 2017 Xin Wang, Geoffrey Oxholm, Da Zhang, Yuan-Fang Wang

That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.

Style Transfer
