Search Results for author: Zixu Wang

Found 14 papers, 5 papers with code

Instruction Multi-Constraint Molecular Generation Using a Teacher-Student Large Language Model

1 code implementation • 20 Mar 2024 • Peng Zhou, Jianmin Wang, Chunyan Li, Zixu Wang, Yiping Liu, Siqi Sun, Jianxin Lin, Longyue Wang, Xiangxiang Zeng

While various models and computational tools have been proposed for structure and property analysis of molecules, generating molecules that conform to all desired structures and properties remains a challenge.

Drug Discovery Knowledge Distillation +2

nuScenes Knowledge Graph -- A comprehensive semantic representation of traffic scenes for trajectory prediction

1 code implementation • 15 Dec 2023 • Leon Mlodzian, Zhigang Sun, Hendrik Berkemeyer, Sebastian Monka, Zixu Wang, Stefan Dietze, Lavdim Halilaj, Juergen Luettin

Further, we present the nuScenes Knowledge Graph (nSKG), a knowledge graph for the nuScenes dataset, which explicitly models all scene participants and road elements, as well as their semantic and spatial relationships.

Knowledge Graphs Trajectory Prediction

Neuromorphic-P2M: Processing-in-Pixel-in-Memory Paradigm for Neuromorphic Image Sensors

no code implementations • 22 Jan 2023 • Md Abdullah-Al Kaiser, Gourav Datta, Zixu Wang, Ajey P. Jacob, Peter A. Beerel, Akhilesh R. Jaiswal

Edge devices equipped with computer vision must deal with vast amounts of sensory data with limited computing resources.

Scene Text Recognition with Semantics

no code implementations • 19 Oct 2022 • Joshua Cesare Placidi, Yishu Miao, Zixu Wang, Lucia Specia

Scene Text Recognition (STR) models have achieved high performance in recent years on benchmark datasets where text images are presented with minimal noise.

Scene Text Recognition

Contrastive Video-Language Learning with Fine-grained Frame Sampling

no code implementations • 10 Oct 2022 • Zixu Wang, Yujie Zhong, Yishu Miao, Lin Ma, Lucia Specia

However, even in paired video-text segments, only a subset of the frames are semantically relevant to the corresponding text, with the remainder representing noise, and the ratio of noisy frames is higher for longer videos.

Question Answering Representation Learning +3

Guiding Visual Question Generation

no code implementations • NAACL 2022 • Nihir Vedd, Zixu Wang, Marek Rei, Yishu Miao, Lucia Specia

In traditional Visual Question Generation (VQG), most images have multiple concepts (e.g., objects and categories) for which a question could be generated, but models are trained to mimic an arbitrary choice of concept as given in their training data.

Question Generation Question-Generation +2

Cross-Modal Generative Augmentation for Visual Question Answering

no code implementations • 11 May 2021 • Zixu Wang, Yishu Miao, Lucia Specia

Experiments on Visual Question Answering as a downstream task demonstrate the effectiveness of the proposed generative model, which improves strong UpDn-based models to achieve state-of-the-art performance.

Data Augmentation Question Answering +1

Exploring Supervised and Unsupervised Rewards in Machine Translation

1 code implementation • EACL 2021 • Julia Ive, Zixu Wang, Marina Fomicheva, Lucia Specia

Reinforcement Learning (RL) is a powerful framework for addressing the discrepancy between the loss functions used during training and the final evaluation metrics used at test time.

Machine Translation Reinforcement Learning (RL) +2

Latent Variable Models for Visual Question Answering

no code implementations • 16 Jan 2021 • Zixu Wang, Yishu Miao, Lucia Specia

Current work on Visual Question Answering (VQA) explores deterministic approaches conditioned on various types of image and question features.

Benchmarking Question Answering +1

Actions as Moving Points

2 code implementations • ECCV 2020 • Yixuan Li, Zixu Wang, Li-Min Wang, Gangshan Wu

Existing action tubelet detectors often depend on heuristic anchor design and placement, which can be computationally expensive and sub-optimal for precise localization.

Action Detection Action Recognition