Search Results for author: Jianping Wu

Found 13 papers, 6 papers with code

All in One: Exploring Unified Video-Language Pre-training

1 code implementation • CVPR 2023 • Alex Jinpeng Wang, Yixiao Ge, Rui Yan, Yuying Ge, Xudong Lin, Guanyu Cai, Jianping Wu, Ying Shan, XiaoHu Qie, Mike Zheng Shou

In this work, we introduce, for the first time, an end-to-end video-language model, the all-in-one Transformer, that embeds raw video and textual signals into joint representations using a unified backbone architecture.

Ranked #6 on TGIF-Transition on TGIF-QA (using extra training data)

Language Modelling • Multiple-choice • +10
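The abstract above describes a single shared backbone encoding video and text jointly. A minimal NumPy sketch of that idea follows; the dimensions, random weights, and mean-pooling step are illustrative assumptions, not the all-in-one Transformer's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the real model is far larger.
d_model = 8
video_tokens = rng.normal(size=(4, d_model))  # toy video patch embeddings
text_tokens = rng.normal(size=(3, d_model))   # toy text token embeddings

# One shared weight matrix stands in for the unified backbone:
# both modalities flow through the *same* parameters.
W_shared = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

def unified_encode(video, text):
    # Concatenate the modalities into one sequence and encode jointly.
    joint = np.concatenate([video, text], axis=0)
    hidden = np.tanh(joint @ W_shared)
    return hidden.mean(axis=0)  # pooled joint video-text representation

joint_repr = unified_encode(video_tokens, text_tokens)
print(joint_repr.shape)  # (8,)
```

The point of the sketch is the single `W_shared`: unlike dual-encoder designs, there is no modality-specific encoder, only one set of parameters for the fused sequence.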

MILES: Visual BERT Pre-training with Injected Language Semantics for Video-text Retrieval

1 code implementation • 26 Apr 2022 • Yuying Ge, Yixiao Ge, Xihui Liu, Alex Jinpeng Wang, Jianping Wu, Ying Shan, XiaoHu Qie, Ping Luo

Dominant pre-training work for video-text retrieval mainly adopts "dual-encoder" architectures to enable efficient retrieval, where two separate encoders are used to contrast global video and text representations but ignore detailed local semantics.

Action Recognition • Retrieval • +6
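The "dual-encoder" design mentioned above can be sketched in a few lines of NumPy: two separate projection matrices stand in for the video and text encoders, and retrieval reduces to one similarity-matrix product. The sizes and random weights are invented for illustration, not MILES's actual encoders:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_embed, n_pairs = 6, 4, 5

# Two *separate* encoders, one per modality.
W_video = rng.normal(size=(d_in, d_embed))
W_text = rng.normal(size=(d_in, d_embed))

videos = rng.normal(size=(n_pairs, d_in))  # toy video features
texts = rng.normal(size=(n_pairs, d_in))   # toy paired captions

def encode(x, W):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # L2-normalize

v, t = encode(videos, W_video), encode(texts, W_text)

# One global similarity matrix makes retrieval efficient, but only
# *global* representations are contrasted -- local semantics are ignored,
# which is exactly the limitation the abstract points out.
sim = v @ t.T
print(sim.shape)  # (5, 5)
```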

Masked Image Modeling with Denoising Contrast

1 code implementation • 19 May 2022 • Kun Yi, Yixiao Ge, Xiaotong Li, Shusheng Yang, Dian Li, Jianping Wu, Ying Shan, XiaoHu Qie

Across the development of self-supervised visual representation learning, from contrastive learning to masked image modeling (MIM), there is no significant difference in essence: both must design proper pretext tasks for vision dictionary look-up.

Contrastive Learning • Denoising • +6
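A toy illustration of the masked-image-modeling pretext task referenced above, in pure NumPy; the mask ratio, the linear "encoder", and the plain reconstruction loss are simplifications invented for the sketch, not this paper's denoising-contrast objective:

```python
import numpy as np

rng = np.random.default_rng(3)

n_patches, d = 8, 4
patches = rng.normal(size=(n_patches, d))  # toy image patch embeddings

# Pretext task: hide half of the patches.
mask = np.zeros(n_patches, dtype=bool)
mask[rng.choice(n_patches, size=4, replace=False)] = True
corrupted = np.where(mask[:, None], 0.0, patches)

# A single linear map stands in for the vision backbone.
W = rng.normal(size=(d, d)) / np.sqrt(d)
pred = corrupted @ W

# The loss is computed only on the masked positions, so the model
# must infer hidden patches from visible context.
loss = ((pred[mask] - patches[mask]) ** 2).mean()
print(loss >= 0.0)  # True
```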

Revitalize Region Feature for Democratizing Video-Language Pre-training of Retrieval

2 code implementations • 15 Mar 2022 • Guanyu Cai, Yixiao Ge, Binjie Zhang, Alex Jinpeng Wang, Rui Yan, Xudong Lin, Ying Shan, Lianghua He, XiaoHu Qie, Jianping Wu, Mike Zheng Shou

Recent dominant methods for video-language pre-training (VLP) learn transferable representations from the raw pixels in an end-to-end manner to achieve advanced performance on downstream video-language retrieval.

Question Answering • Retrieval • +4

Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed

1 code implementation • 17 Mar 2024 • Jinzhu Yan, Haotian Xu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu, Jianping Wu

Many types of NNs designed to work with sequential data (such as recurrent neural networks (RNNs) and Transformers) have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly.
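The advantage described above, consuming raw sequential data without hand-crafted flow statistics, can be illustrated with a minimal Elman-style RNN in NumPy. The packet fields, sizes, and weights here are invented for the sketch and are unrelated to the Brain-on-Switch models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy raw per-packet fields for one flow (e.g. size, direction, inter-arrival).
packets = rng.normal(size=(10, 3))

# Minimal Elman RNN: the recurrence consumes packets one by one as they
# arrive, so no flow-level features need to be computed on the fly.
hidden_dim = 5
W_in = rng.normal(size=(3, hidden_dim)) * 0.5
W_rec = rng.normal(size=(hidden_dim, hidden_dim)) * 0.5

h = np.zeros(hidden_dim)
for pkt in packets:
    h = np.tanh(pkt @ W_in + h @ W_rec)

print(h.shape)  # (5,) -- the final state summarizes the raw packet sequence
```

A tree-based model would instead require aggregate features (mean packet size, flow duration, and so on) to be computed before classification, which is the per-flow work the sentence above contrasts against.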

Discovery of Important Crossroads in Road Network using Massive Taxi Trajectories

no code implementations • 9 Jul 2014 • Ming Xu, Jianping Wu, Yiman Du, Haohan Wang, Geqi Qi, Kezhen Hu, Yun-Peng Xiao

However, none of the existing approaches addresses the problem of identifying network-wide important crossroads in a real road network.

Real-Time Vanishing Point Detector Integrating Under-Parameterized RANSAC and Hough Transform

no code implementations • ICCV 2021 • Jianping Wu, Liang Zhang, Ye Liu, Ke Chen

We propose a novel approach that integrates under-parameterized RANSAC (UPRANSAC) with Hough Transform to detect vanishing points (VPs) from un-calibrated monocular images.

Cyclic Graph Attentive Match Encoder (CGAME): A Novel Neural Network For OD Estimation

no code implementations • 26 Nov 2021 • Guanzhou Li, Yujing He, Jianping Wu, Duowei Li

As a powerful nonlinear approximator, deep learning is an ideal data-driven method to provide a novel perspective for OD estimation.

Management

Modeling Adaptive Platoon and Reservation Based Autonomous Intersection Control: A Deep Reinforcement Learning Approach

no code implementations • 24 Jun 2022 • Duowei Li, Jianping Wu, Feng Zhu, Tianyi Chen, Yiik Diew Wong

As a strategy to reduce travel delay and enhance energy efficiency, platooning of connected and autonomous vehicles (CAVs) at non-signalized intersections has become increasingly popular in academia.

Autonomous Vehicles • Reinforcement Learning (RL)

COOR-PLT: A hierarchical control model for coordinating adaptive platoons of connected and autonomous vehicles at signal-free intersections based on deep reinforcement learning

no code implementations • 1 Jul 2022 • Duowei Li, Jianping Wu, Feng Zhu, Tianyi Chen, Yiik Diew Wong

The simulation results demonstrate that the model is able to: (1) achieve satisfactory convergence performances; (2) adaptively determine platoon size in response to varying traffic conditions; and (3) completely avoid deadlocks at the intersection.

Autonomous Vehicles • Fairness

ViLEM: Visual-Language Error Modeling for Image-Text Retrieval

no code implementations • CVPR 2023 • Yuxin Chen, Zongyang Ma, Ziqi Zhang, Zhongang Qi, Chunfeng Yuan, Ying Shan, Bing Li, Weiming Hu, XiaoHu Qie, Jianping Wu

ViLEM then enforces the model to discriminate the correctness of each word in the plausible negative texts and to correct the wrong words by resorting to image information.

Contrastive Learning • Retrieval • +3

StaPep: an open-source tool for the structure prediction and feature extraction of hydrocarbon-stapled peptides

1 code implementation • 28 Feb 2024 • Zhe Wang, Jianping Wu, Mengjun Zheng, Chenchen Geng, Borui Zhen, Wei zhang, Hui Wu, Zhengyang Xu, Gang Xu, Si Chen, Xiang Li

Many tools exist for extracting structural and physicochemical descriptors from linear peptides to predict their properties, but similar tools for hydrocarbon-stapled peptides are lacking. Here, we present StaPep, a Python-based toolkit designed for generating 2D/3D structures and calculating 21 distinct features for hydrocarbon-stapled peptides. The current version supports hydrocarbon-stapled peptides containing 2 non-standard amino acids (norleucine and 2-aminoisobutyric acid) and 6 non-natural anchoring residues (S3, S5, S8, R3, R5 and R8). We then established a hand-curated dataset of 201 hydrocarbon-stapled peptides and 384 linear peptides with sequence information and experimental membrane permeability, to showcase StaPep's application in artificial intelligence projects. A machine-learning-based predictor utilizing the calculated features was developed, with an AUC of 0.85, for identifying cell-penetrating hydrocarbon-stapled peptides. StaPep's pipeline spans data retrieval, cleaning, structure generation, molecular feature calculation, and machine learning model construction for hydrocarbon-stapled peptides. The source codes and dataset are freely available on GitHub: https://github.com/dahuilangda/stapep_package.

Retrieval
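The StaPep abstract reports a predictor with an AUC of 0.85. As a self-contained refresher, AUC is the probability that a randomly chosen positive is scored above a randomly chosen negative, which a rank-based count computes directly. The labels and scores below are synthetic, chosen only so the toy result happens to match 0.85; they are not StaPep's data or model:

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC: chance a random positive outranks a random negative."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()         # strict wins
    ties = (pos[:, None] == neg[None, :]).sum() * 0.5  # ties count half
    return (wins + ties) / (len(pos) * len(neg))

# Synthetic labels (1 = cell-penetrating) and classifier scores.
y = [1, 1, 1, 1, 0, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.25, 0.1, 0.2, 0.3, 0.4, 0.5]
print(auc_score(y, s))  # 0.85
```

Here 17 of the 20 positive/negative pairs are ranked correctly (the positive scored 0.25 loses to three negatives), giving 17/20 = 0.85.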
