no code implementations • 9 Jul 2014 • Ming Xu, Jianping Wu, Yiman Du, Haohan Wang, Geqi Qi, Kezhen Hu, Yun-Peng Xiao
However, none of the existing approaches addresses the problem of identifying network-wide important crossroads in a real road network.
no code implementations • NeurIPS 2018 • Songtao Wang, Dan Li, Yang Cheng, Jinkun Geng, Yanshu Wang, Shuai Wang, Shu-Tao Xia, Jianping Wu
In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training.
no code implementations • ICCV 2021 • Jianping Wu, Liang Zhang, Ye Liu, Ke Chen
We propose a novel approach that integrates under-parameterized RANSAC (UPRANSAC) with Hough Transform to detect vanishing points (VPs) from un-calibrated monocular images.
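The paper's under-parameterized RANSAC variant is not reproduced here, but the core hypothesize-and-verify idea, voting for a vanishing point among pairwise intersections of detected line segments, can be sketched with a plain RANSAC loop. All names, the scoring rule, and the threshold below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def line_through(p1, p2):
    # Homogeneous line coefficients for the line through two image points.
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def ransac_vanishing_point(lines, iters=500, thresh=0.02, seed=0):
    """Plain RANSAC over pairwise line intersections.

    `lines` is an (N, 3) array of homogeneous line coefficients.
    A candidate VP is the intersection of two sampled lines; its score
    is the number of lines whose normalized point-line residual falls
    below `thresh`.
    """
    rng = np.random.default_rng(seed)
    n = len(lines)
    # Unit-normalize line coefficients once, for a scale-free residual.
    ln = lines / np.linalg.norm(lines, axis=1, keepdims=True)
    best_vp, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        vp = np.cross(lines[i], lines[j])  # intersection in homogeneous coords
        norm = np.linalg.norm(vp)
        if norm < 1e-9:
            continue  # the two sampled lines were (numerically) identical
        vp = vp / norm
        residuals = np.abs(ln @ vp)        # |l . p| for unit l and unit p
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:
            best_vp, best_inliers = vp, inliers
    return best_vp, best_inliers
```

In the paper, a Hough-transform stage complements this hypothesize-and-verify scheme; the sketch above covers only the generic verification loop.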
no code implementations • 26 Nov 2021 • Guanzhou Li, Yujing He, Jianping Wu, Duowei Li
As a powerful nonlinear approximator, deep learning is an ideal data-driven method to provide a novel perspective for OD estimation.
no code implementations • 24 Jun 2022 • Duowei Li, Jianping Wu, Feng Zhu, Tianyi Chen, Yiik Diew Wong
As a strategy to reduce travel delay and enhance energy efficiency, platooning of connected and autonomous vehicles (CAVs) at non-signalized intersections has become increasingly popular in academia.
no code implementations • 1 Jul 2022 • Duowei Li, Jianping Wu, Feng Zhu, Tianyi Chen, Yiik Diew Wong
The simulation results demonstrate that the model is able to: (1) achieve satisfactory convergence performances; (2) adaptively determine platoon size in response to varying traffic conditions; and (3) completely avoid deadlocks at the intersection.
no code implementations • CVPR 2023 • Yuxin Chen, Zongyang Ma, Ziqi Zhang, Zhongang Qi, Chunfeng Yuan, Ying Shan, Bing Li, Weiming Hu, XiaoHu Qie, Jianping Wu
ViLEM then forces the model to discriminate the correctness of each word in the plausible negative texts and further correct the wrong words by resorting to image information.
Ranked #45 on Visual Reasoning on Winoground
1 code implementation • 28 Feb 2024 • Zhe Wang, Jianping Wu, Mengjun Zheng, Chenchen Geng, Borui Zhen, Wei Zhang, Hui Wu, Zhengyang Xu, Gang Xu, Si Chen, Xiang Li
Many tools exist for extracting structural and physicochemical descriptors from linear peptides to predict their properties, but similar tools for hydrocarbon-stapled peptides are lacking. Here, we present StaPep, a Python-based toolkit designed for generating 2D/3D structures and calculating 21 distinct features for hydrocarbon-stapled peptides. The current version supports hydrocarbon-stapled peptides containing 2 non-standard amino acids (norleucine and 2-aminoisobutyric acid) and 6 nonnatural anchoring residues (S3, S5, S8, R3, R5 and R8). We then established a hand-curated dataset of 201 hydrocarbon-stapled peptides and 384 linear peptides with sequence information and experimental membrane permeability, to showcase StaPep's application in artificial intelligence projects. A machine learning-based predictor utilizing the above calculated features was developed, with an AUC of 0.85, for identifying cell-penetrating hydrocarbon-stapled peptides. StaPep's pipeline spans data retrieval, cleaning, structure generation, molecular feature calculation, and machine learning model construction for hydrocarbon-stapled peptides. The source code and dataset are freely available on GitHub: https://github.com/dahuilangda/stapep_package.
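The abstract reports an AUC of 0.85 for the permeability classifier. AUC has a simple probabilistic reading, the chance that a randomly chosen positive outscores a randomly chosen negative, which a short NumPy function makes concrete. This is a generic metric sketch, not part of the StaPep API:

```python
import numpy as np

def roc_auc(y_true, scores):
    # AUC equals the probability that a randomly chosen positive
    # is scored above a randomly chosen negative (Mann-Whitney U),
    # with ties counted as half a win.
    y_true = np.asarray(y_true, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[y_true], scores[~y_true]
    # Explicit pairwise comparison; fine for small evaluation sets.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

For example, labels `[0, 0, 1, 1]` with scores `[0.1, 0.4, 0.35, 0.8]` give an AUC of 0.75: three of the four positive-negative pairs are ranked correctly.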
1 code implementation • 17 Mar 2024 • Jinzhu Yan, Haotian Xu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu, Jianping Wu
Many types of NNs (such as recurrent neural networks (RNNs) and transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without computing complex features on the fly.
2 code implementations • 15 Mar 2022 • Guanyu Cai, Yixiao Ge, Binjie Zhang, Alex Jinpeng Wang, Rui Yan, Xudong Lin, Ying Shan, Lianghua He, XiaoHu Qie, Jianping Wu, Mike Zheng Shou
Recent dominant methods for video-language pre-training (VLP) learn transferable representations from the raw pixels in an end-to-end manner to achieve advanced performance on downstream video-language retrieval.
1 code implementation • 19 May 2022 • Kun Yi, Yixiao Ge, Xiaotong Li, Shusheng Yang, Dian Li, Jianping Wu, Ying Shan, XiaoHu Qie
Throughout the development of self-supervised visual representation learning, from contrastive learning to masked image modeling (MIM), there has been no significant difference in essence: both concern how to design proper pretext tasks for vision dictionary look-up.
1 code implementation • 26 Apr 2022 • Yuying Ge, Yixiao Ge, Xihui Liu, Alex Jinpeng Wang, Jianping Wu, Ying Shan, XiaoHu Qie, Ping Luo
Dominant pre-training work for video-text retrieval mainly adopts the "dual-encoder" architecture to enable efficient retrieval, where two separate encoders are used to contrast global video and text representations, but detailed local semantics are ignored.
Ranked #7 on Zero-Shot Video Retrieval on MSVD
1 code implementation • CVPR 2023 • Alex Jinpeng Wang, Yixiao Ge, Rui Yan, Yuying Ge, Xudong Lin, Guanyu Cai, Jianping Wu, Ying Shan, XiaoHu Qie, Mike Zheng Shou
In this work, we introduce, for the first time, an end-to-end video-language model, namely the all-in-one Transformer, that embeds raw video and textual signals into joint representations using a unified backbone architecture.
Ranked #6 on TGIF-Transition on TGIF-QA (using extra training data)