Search Results for author: Liuyu Xiang

Found 11 papers, 6 papers with code

Dynamic Generation of Personalities with Large Language Models

1 code implementation • 10 Apr 2024 • Jianzhi Liu, Hexiang Gu, Tianyu Zheng, Liuyu Xiang, Huijia Wu, Jie Fu, Zhaofeng He

We propose a new metric to assess personality generation capability based on this evaluation method.

Tasks: Personality Generation

Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction

1 code implementation • 6 Feb 2024 • Yonggang Jin, Ge Zhang, Hao Zhao, Tianyu Zheng, Jiawei Guo, Liuyu Xiang, Shawn Yue, Stephen W. Huang, Zhaofeng He, Jie Fu

Drawing inspiration from the success of multimodal instruction tuning in visual tasks, we treat the visual-based RL task as a long-horizon vision task and construct a set of multimodal game instructions to incorporate instruction tuning into a decision transformer.
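The snippet above describes the idea only at a high level. Below is a minimal PyTorch sketch of the general pattern it names: prepending multimodal instruction embeddings to the trajectory tokens of a decision-transformer-style policy. All module names, dimensions, and the single-linear action head are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class InstructionConditionedDT(nn.Module):
    """Toy decision transformer that prepends multimodal instruction
    tokens (e.g. pooled text + image embeddings) to trajectory tokens."""

    def __init__(self, state_dim, act_dim, instr_dim, d_model=128, n_layer=2):
        super().__init__()
        self.embed_instr = nn.Linear(instr_dim, d_model)   # instruction tokens
        self.embed_state = nn.Linear(state_dim, d_model)   # per-step states
        self.embed_rtg = nn.Linear(1, d_model)             # returns-to-go
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layer)
        self.act_head = nn.Linear(d_model, act_dim)

    def forward(self, instr, states, rtg):
        # instr: (B, K, instr_dim), states: (B, T, state_dim), rtg: (B, T, 1)
        tokens = torch.cat(
            [self.embed_instr(instr),
             self.embed_rtg(rtg) + self.embed_state(states)], dim=1)
        # causal mask: each position attends only to itself and earlier tokens
        L = tokens.size(1)
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        h = self.backbone(tokens, mask=mask)
        # predict an action at each trajectory step (skip instruction tokens)
        return self.act_head(h[:, instr.size(1):])

# usage with random tensors: 4 instruction tokens, 10-step trajectory
model = InstructionConditionedDT(state_dim=16, act_dim=6, instr_dim=32)
acts = model(torch.randn(2, 4, 32), torch.randn(2, 10, 16), torch.randn(2, 10, 1))
print(acts.shape)  # torch.Size([2, 10, 6])
```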

Deep Reinforcement Learning with Task-Adaptive Retrieval via Hypernetwork

1 code implementation • 19 Jun 2023 • Yonggang Jin, Chenxu Wang, Tianyu Zheng, Liuyu Xiang, Yaodong Yang, Junge Zhang, Jie Fu, Zhaofeng He

Deep reinforcement learning algorithms are typically hampered by sample inefficiency, relying heavily on repeated interactions with the environment to acquire accurate decision-making capabilities.

Tasks: Decision Making · Hippocampus · +2
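The snippet above gives only the motivation; the mechanism named in the title, a hypernetwork generating task-specific retrieval parameters, can be sketched as follows. This is a toy illustration of the generic hypernetwork-plus-retrieval pattern under assumed names and shapes (HyperRetriever, memory_keys, memory_values), not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperRetriever(nn.Module):
    """Toy hypernetwork: a task embedding generates the weights of a
    linear query head used to retrieve stored transitions for that task."""

    def __init__(self, task_dim, state_dim, key_dim):
        super().__init__()
        self.state_dim, self.key_dim = state_dim, key_dim
        # hypernetwork emits weight + bias of the task-specific query head
        self.hyper = nn.Linear(task_dim, key_dim * state_dim + key_dim)

    def forward(self, task_emb, state, memory_keys, memory_values):
        # task_emb: (task_dim,), state: (B, state_dim)
        # memory_keys: (N, key_dim), memory_values: (N, v_dim)
        params = self.hyper(task_emb)
        W = params[: self.key_dim * self.state_dim].view(self.key_dim, self.state_dim)
        b = params[self.key_dim * self.state_dim:]
        query = F.linear(state, W, b)                        # task-adapted query
        attn = torch.softmax(query @ memory_keys.T, dim=-1)  # soft retrieval
        return attn @ memory_values                          # weighted recall

# usage: retrieve from a memory of 50 stored transitions
net = HyperRetriever(task_dim=8, state_dim=16, key_dim=12)
out = net(torch.randn(8), torch.randn(4, 16), torch.randn(50, 12), torch.randn(50, 32))
print(out.shape)  # torch.Size([4, 32])
```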

Online Open-set Semi-supervised Object Detection with Dual Competing Head

no code implementations • 23 May 2023 • Zerun Wang, Ling Xiao, Liuyu Xiang, Zhaotian Weng, Toshihiko Yamasaki

To alleviate these issues, this paper proposes an end-to-end online OSSOD framework that improves both performance and efficiency: 1) We propose a semi-supervised outlier filtering method that more effectively filters OOD instances using both labeled and unlabeled data.

Tasks: Object Detection · +1
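As a rough illustration of the filtering step in point 1) above, here is a hedged sketch of pseudo-label filtering by an out-of-distribution score. The scalar ood_scores input and both thresholds are assumptions for illustration; the paper's dual competing head is not reproduced here.

```python
import torch

def filter_ood_pseudo_labels(boxes, cls_scores, ood_scores,
                             ood_thresh=0.5, conf_thresh=0.7):
    """Keep pseudo-labeled boxes that are confident and in-distribution.

    boxes: (N, 4) proposals, cls_scores: (N,) max class confidence,
    ood_scores: (N,) probability each instance is out-of-distribution
    (e.g. from an outlier head trained on labeled + unlabeled data).
    """
    keep = (ood_scores < ood_thresh) & (cls_scores > conf_thresh)
    return boxes[keep], cls_scores[keep]

# usage with random proposals
boxes = torch.rand(8, 4)
kept, scores = filter_ood_pseudo_labels(boxes, torch.rand(8), torch.rand(8))
print(len(kept), "of 8 proposals kept as in-distribution pseudo-labels")
```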

Box-Level Active Detection

1 code implementation • CVPR 2023 • Mengyao Lyu, Jundong Zhou, Hui Chen, YiJie Huang, Dongdong Yu, Yaqian Li, Yandong Guo, Yuchen Guo, Liuyu Xiang, Guiguang Ding

Active learning selects informative samples for annotation within a budget, an approach that has recently proven efficient for object detection.

Tasks: Active Learning · Object Detection · +1
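To make the budgeted selection above concrete, here is a minimal sketch of box-level acquisition that ranks candidate boxes by predictive entropy. Entropy is a generic informativeness score used as a stand-in; it is not the paper's box-level criterion.

```python
import torch

def select_boxes_for_annotation(cls_probs, budget):
    """Rank candidate boxes by predictive entropy and pick the
    `budget` most informative ones for human annotation.

    cls_probs: (N, C) per-box class probabilities from the detector.
    """
    entropy = -(cls_probs * cls_probs.clamp_min(1e-8).log()).sum(dim=-1)
    return torch.topk(entropy, k=min(budget, len(entropy))).indices

# usage: pick 3 of 10 boxes under a box-level annotation budget
probs = torch.softmax(torch.randn(10, 5), dim=-1)
print(select_boxes_for_annotation(probs, budget=3))
```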

LODE: Deep Local Deblurring and A New Benchmark

1 code implementation • 19 Sep 2021 • Zerun Wang, Liuyu Xiang, Fan Yang, Jinzhao Qian, Jie Hu, Haidong Huang, Jungong Han, Yuchen Guo, Guiguang Ding

While recent deep deblurring algorithms have achieved remarkable progress, most existing methods focus on the global deblurring problem, where the image blur mostly arises from severe camera shake.

Tasks: Deblurring
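The local/global distinction above can be illustrated with a toy model that predicts a per-pixel blur mask and restores only the masked regions, leaving already-sharp pixels untouched. This is an assumed architecture for illustration, not the LODE method.

```python
import torch
import torch.nn as nn

class LocalDeblur(nn.Module):
    """Toy local deblurring: predict a per-pixel blur mask and blend
    restored content into only the locally blurred regions."""

    def __init__(self, ch=3):
        super().__init__()
        self.restore = nn.Sequential(nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, ch, 3, padding=1))
        self.mask = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        m = self.mask(x)                          # ~1 where the image is blurred
        return m * self.restore(x) + (1 - m) * x  # keep sharp regions as-is

out = LocalDeblur()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```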

PANDA: A Gigapixel-level Human-centric Video Dataset

no code implementations • CVPR 2020 • Xueyang Wang, Xiya Zhang, Yinheng Zhu, Yuchen Guo, Xiaoyun Yuan, Liuyu Xiang, Zerun Wang, Guiguang Ding, David J. Brady, Qionghai Dai, Lu Fang

We believe PANDA will contribute to the artificial intelligence and praxeology communities by enabling the study of human behaviors and interactions in large-scale real-world scenes.

Tasks: 4k · Attribute · +1

Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification

1 code implementation • ECCV 2020 • Liuyu Xiang, Guiguang Ding, Jungong Han

We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.

Tasks: General Classification · Knowledge Distillation · +1
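A minimal sketch of multi-expert distillation in the spirit described above: the student fits ground-truth labels while matching the softened outputs of several experts. The fixed expert weights and temperature are simplifying assumptions; the paper's self-paced weighting scheme is not modeled.

```python
import torch
import torch.nn.functional as F

def multi_expert_distill_loss(student_logits, expert_logits_list, labels,
                              weights=None, T=2.0):
    """Cross-entropy on ground truth plus KL distillation towards the
    temperature-softened outputs of several expert models."""
    if weights is None:
        weights = [1.0 / len(expert_logits_list)] * len(expert_logits_list)
    ce = F.cross_entropy(student_logits, labels)
    log_p = F.log_softmax(student_logits / T, dim=-1)
    kd = sum(w * F.kl_div(log_p, F.softmax(e / T, dim=-1), reduction="batchmean")
             for w, e in zip(weights, expert_logits_list))
    return ce + (T * T) * kd  # T^2 rescales the softened-gradient magnitude

# usage: a student distilled from three long-tail experts
s = torch.randn(8, 10)
experts = [torch.randn(8, 10) for _ in range(3)]
print(multi_expert_distill_loss(s, experts, torch.randint(0, 10, (8,))))
```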

Incremental Few-Shot Learning for Pedestrian Attribute Recognition

no code implementations • 2 Jun 2019 • Liuyu Xiang, Xiaoming Jin, Guiguang Ding, Jungong Han, Leida Li

Pedestrian attribute recognition has received increasing attention due to its important role in video surveillance applications.

Tasks: Attribute · Few-Shot Learning · +1

Adaptive Region Embedding for Text Classification

no code implementations • 28 May 2019 • Liuyu Xiang, Xiaoming Jin, Lan Yi, Guiguang Ding

Deep learning models such as convolutional neural networks and recurrent networks are widely applied in text classification.

Tasks: General Classification · Text Classification · +1
