Search Results for author: Wei Liang

Found 23 papers, 10 papers with code

Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance

1 code implementation · 26 Mar 2024 · Zan Wang, Yixin Chen, Baoxiong Jia, Puhao Li, Jinlu Zhang, Jingze Zhang, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang

Despite significant advancements in text-to-motion synthesis, generating language-guided human motion within 3D environments poses substantial challenges.

Motion Synthesis

Language-driven All-in-one Adverse Weather Removal

no code implementations · 3 Dec 2023 · Hao Yang, Liyuan Pan, Yan Yang, Wei Liang

Then, with the guidance of the degradation prior, we dynamically and sparsely select restoration experts from a candidate list based on a Mixture-of-Experts (MoE) structure.
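A minimal sketch of this kind of sparse expert selection, assuming a PyTorch setup in which a gating network scores the candidate experts from the degradation prior and only the top-k experts are run; the class and argument names are illustrative, not the paper's implementation:

```python
import torch
import torch.nn as nn

class SparseExpertSelector(nn.Module):
    """Minimal sparse MoE gating: route each input to the top-k restoration
    experts from a candidate list, weighted by a softmax over the gate
    scores of the selected experts only."""

    def __init__(self, prior_dim, experts, k=2):
        super().__init__()
        self.experts = nn.ModuleList(experts)          # candidate restoration experts
        self.gate = nn.Linear(prior_dim, len(experts)) # scores experts from the prior
        self.k = k

    def forward(self, x, degradation_prior):
        scores = self.gate(degradation_prior)              # (B, num_experts)
        topk_vals, topk_idx = scores.topk(self.k, dim=-1)  # keep only k experts
        weights = torch.softmax(topk_vals, dim=-1)          # (B, k)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):
            for j in range(self.k):
                expert = self.experts[topk_idx[b, j]]
                out[b] = out[b] + weights[b, j] * expert(x[b:b + 1]).squeeze(0)
        return out
```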

DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation

no code implementations · ICCV 2023 · Hanqing Wang, Wei Liang, Luc van Gool, Wenguan Wang

VLN-CE is a recently released embodied task, where AI agents need to navigate a freely traversable environment to reach a distant target location, given language instructions.

Decision Making · Navigate · +1

MEWL: Few-shot multimodal word learning with referential uncertainty

1 code implementation · 1 Jun 2023 · Guangyuan Jiang, Manjie Xu, Shiji Xin, Wei Liang, Yujia Peng, Chi Zhang, Yixin Zhu

To fill in this gap, we introduce the MachinE Word Learning (MEWL) benchmark to assess how machines learn word meaning in grounded visual scenes.

Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding

no code implementations · 6 Apr 2023 · Yuke Hu, Wei Liang, Ruofan Wu, Kai Xiao, Weiqiang Wang, Xiaochen Li, Jinfei Liu, Zhan Qin

Knowledge Graph Embedding (KGE) is a fundamental technique that extracts expressive representation from knowledge graph (KG) to facilitate diverse downstream tasks.
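For readers unfamiliar with KGE, here is a minimal sketch of one common embedding scheme (TransE-style translation scoring) to illustrate what such a representation looks like; this is a generic example, not the specific model studied in the paper:

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Minimal knowledge graph embedding: entities and relations are vectors,
    and a triple (h, r, t) is scored by how well h + r approximates t."""

    def __init__(self, num_entities, num_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def score(self, h, r, t):
        # Lower distance means a more plausible triple.
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)
```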

Knowledge Graph Embedding

Towards Versatile Embodied Navigation

1 code implementation · 30 Oct 2022 · Hanqing Wang, Wei Liang, Luc van Gool, Wenguan Wang

With the emergence of varied visual navigation tasks (e.g., image-/object-/audio-goal and vision-language navigation) that specify the target in different ways, the community has made appealing advances in training specialized agents capable of handling individual navigation tasks well.

Decision Making · Vision-Language Navigation · +1

HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes

1 code implementation · 18 Oct 2022 · Zan Wang, Yixin Chen, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang

Learning to generate diverse scene-aware and goal-oriented human motions in 3D scenes remains challenging due to the limitations of existing Human-Scene Interaction (HSI) datasets, which have limited scale and quality and lack semantics.

LGC-Net: A Lightweight Gyroscope Calibration Network for Efficient Attitude Estimation

no code implementations · 19 Sep 2022 · Yaohua Liu, Wei Liang, Jinqiang Cui

This paper presents a lightweight, efficient calibration neural network model for denoising low-cost microelectromechanical system (MEMS) gyroscope and estimating the attitude of a robot in real-time.
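As an illustration of how such a calibrated, denoised gyroscope feeds into attitude estimation, the sketch below shows one standard strapdown update step that integrates an angular rate into a quaternion; this is generic textbook integration, not the paper's network:

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """One attitude-update step: propagate the unit quaternion q = [w, x, y, z]
    by the calibrated angular rate omega (rad/s, body frame) over time step dt."""
    wx, wy, wz = omega
    # Quaternion kinematics: q_dot = 0.5 * Omega(omega) @ q
    Omega = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q_new = q + 0.5 * dt * Omega @ q
    return q_new / np.linalg.norm(q_new)  # renormalize to unit length
```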

Denoising

Confidence Band Estimation for Survival Random Forests

1 code implementation · 26 Apr 2022 · Sarah Elizabeth Formentini, Wei Liang, Ruoqing Zhu

The idea is to estimate the variance-covariance matrix of the cumulative hazard function prediction on a grid of time points.
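A minimal sketch of how a confidence band could be read off such a variance-covariance estimate, assuming a pointwise normal approximation; the paper's actual band construction may differ (e.g., simultaneous rather than pointwise):

```python
import numpy as np
from scipy.stats import norm

def pointwise_band(cum_hazard, cov, alpha=0.05):
    """Given a cumulative hazard prediction on a grid of time points and the
    estimated variance-covariance matrix of that prediction, form a simple
    pointwise normal-approximation confidence band."""
    se = np.sqrt(np.diag(cov))                     # standard error per time point
    z = norm.ppf(1 - alpha / 2)                    # two-sided normal quantile
    lower = np.clip(cum_hazard - z * se, 0, None)  # cumulative hazard is nonnegative
    upper = cum_hazard + z * se
    return lower, upper
```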


Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation

1 code implementation · CVPR 2022 · Hanqing Wang, Wei Liang, Jianbing Shen, Luc van Gool, Wenguan Wang

Since the rise of vision-language navigation (VLN), great progress has been made in instruction following -- building a follower to navigate environments under the guidance of instructions.

counterfactual · Data Augmentation · +3

Structured Scene Memory for Vision-Language Navigation

1 code implementation · CVPR 2021 · Hanqing Wang, Wenguan Wang, Wei Liang, Caiming Xiong, Jianbing Shen

Recently, numerous algorithms have been developed to tackle the problem of vision-language navigation (VLN), i.e., requiring an agent to navigate 3D environments by following linguistic instructions.

Decision Making · Navigate · +1

Active Visual Information Gathering for Vision-Language Navigation

1 code implementation · ECCV 2020 · Hanqing Wang, Wenguan Wang, Tianmin Shu, Wei Liang, Jianbing Shen

Vision-language navigation (VLN) is the task of requiring an agent to carry out navigational instructions inside photo-realistic environments.

Vision-Language Navigation

Robust Encoder-Decoder Learning Framework towards Offline Handwritten Mathematical Expression Recognition Based on Multi-Scale Deep Neural Network

no code implementations · 8 Feb 2019 · Guangcun Shan, Hongyu Wang, Wei Liang

Offline handwritten mathematical expression recognition is a challenging task because handwritten mathematical expressions present two main problems during recognition.

3D Face Synthesis Driven by Personality Impression

no code implementations · 27 Sep 2018 · Yining Lang, Wei Liang, Yujia Wang, Lap-Fai Yu

In this paper, we propose a novel approach to synthesize 3D faces based on personality impression for creating virtual characters.

Graphics

Deep Single-View 3D Object Reconstruction with Visual Hull Embedding

1 code implementation · 10 Sep 2018 · Hanqing Wang, Jiaolong Yang, Wei Liang, Xin Tong

The key idea of our method is to leverage object mask and pose estimation from CNNs to assist the 3D shape learning by constructing a probabilistic single-view visual hull inside of the network.
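A rough sketch of what a single-view probabilistic visual hull computation can look like, assuming a pinhole camera and a predicted silhouette probability map; the paper constructs this differentiably inside the network, so the standalone function below is only illustrative:

```python
import numpy as np

def probabilistic_visual_hull(voxel_centers, mask_prob, K, R, t):
    """Sketch of a single-view probabilistic visual hull: project each voxel
    center into the image with camera intrinsics K and pose (R, t), and read
    the predicted silhouette probability at that pixel. Voxels projecting
    outside the image (or behind the camera) get probability 0."""
    H, W = mask_prob.shape
    cam = voxel_centers @ R.T + t                   # world -> camera frame
    uv = cam @ K.T                                  # camera -> homogeneous pixels
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam[:, 2] > 0)
    hull = np.zeros(len(voxel_centers))
    hull[inside] = mask_prob[v[inside], u[inside]]  # occupancy evidence per voxel
    return hull
```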

3D Object Reconstruction · Object · +1

Transferring Objects: Joint Inference of Container and Human Pose

no code implementations · ICCV 2017 · Hanqing Wang, Wei Liang, Lap-Fai Yu

In the inference phase, given a scanned 3D scene with different object candidates and a dictionary of human poses, our approach infers the best object as a container together with human pose for transferring a given object.
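A minimal sketch of that joint inference step, assuming a hypothetical compatibility score over (container candidate, human pose) pairs; the scoring function itself is where the paper's model lives and is only a placeholder here:

```python
def infer_container_and_pose(object_candidates, pose_dictionary, score):
    """Exhaustively score every (container candidate, human pose) pair and
    keep the best combination. `score` is a hypothetical compatibility
    function, e.g. combining container affordance and pose plausibility
    in the scanned scene."""
    best, best_score = None, float("-inf")
    for obj in object_candidates:
        for pose in pose_dictionary:
            s = score(obj, pose)
            if s > best_score:
                best, best_score = (obj, pose), s
    return best, best_score
```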

Object
