Search Results for author: Jiang Wang

Found 23 papers, 6 papers with code

TransMOT: Spatial-Temporal Graph Transformer for Multiple Object Tracking

no code implementations 1 Apr 2021 Peng Chu, Jiang Wang, Quanzeng You, Haibin Ling, Zicheng Liu

TransMOT effectively models the interactions of a large number of objects by arranging the trajectories of the tracked objects as a set of sparse weighted graphs, and constructing a spatial graph transformer encoder layer, a temporal transformer encoder layer, and a spatial graph transformer decoder layer based on the graphs.
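
The arrangement of tracked objects into sparse weighted graphs can be sketched in a few lines. The center-distance threshold, the exponential edge weighting, and the `build_sparse_graph` helper below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def build_sparse_graph(boxes, radius=50.0):
    """Connect tracked objects whose box centers lie within `radius` pixels.

    boxes: (N, 4) array of [x, y, w, h].  Returns a sparse weighted
    adjacency matrix whose weights decay with center distance.
    (Illustrative sketch -- not the paper's exact edge weighting.)
    """
    centers = boxes[:, :2] + boxes[:, 2:] / 2.0
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = np.where(dist < radius, np.exp(-dist / radius), 0.0)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj

boxes = np.array([[0, 0, 10, 10], [20, 20, 10, 10], [500, 500, 10, 10]], float)
A = build_sparse_graph(boxes)   # objects 0 and 1 connected; object 2 isolated
```

Such an adjacency matrix would then feed the spatial graph transformer encoder/decoder layers the snippet describes.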

Ranked #1 on Multi-Object Tracking on MOT16 (using extra training data)

Multi-Object Tracking Multiple Object Tracking +1

A Learning-Based Computational Impact Time Guidance

no code implementations 9 Mar 2021 Zichao Liu, Jiang Wang, Shaoming He, Hyo-Sang Shin, Antonios Tsourdos

This paper investigates the impact-time-control problem and proposes a learning-based computational guidance algorithm to solve it.

Coarse Graining Molecular Dynamics with Graph Neural Networks

1 code implementation 22 Jul 2020 Brooke E. Husic, Nicholas E. Charron, Dominik Lemm, Jiang Wang, Adrià Pérez, Maciej Majewski, Andreas Krämer, Yaoyi Chen, Simon Olsson, Gianni de Fabritiis, Frank Noé, Cecilia Clementi

Prior work demonstrated that the existence of such a variational limit enables the use of a supervised machine learning framework to generate a coarse-grained force field, which can then be used for simulation in the coarse-grained space.

Ensemble Learning of Coarse-Grained Molecular Dynamics Force Fields with a Kernel Approach

no code implementations 4 May 2020 Jiang Wang, Stefan Chmiela, Klaus-Robert Müller, Frank Noé, Cecilia Clementi

Using ensemble learning and stratified sampling, we propose a 2-layer training scheme that enables GDML to learn an effective coarse-grained model.
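
The 2-layer scheme can be sketched generically. Kernel ridge regression stands in for GDML here, and the strata, subsample sizes, and `fit_krr` helper are all illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def stratified_indices(strata, n_per_stratum, rng):
    """Draw an equal number of sample indices from each stratum label."""
    idx = []
    for s in np.unique(strata):
        pool = np.where(strata == s)[0]
        idx.extend(rng.choice(pool, size=n_per_stratum, replace=False))
    return np.array(idx)

def fit_krr(X, y, gamma=1.0, lam=1e-3):
    """Kernel ridge regression with an RBF kernel (a stand-in for GDML)."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: np.exp(-gamma * (Xq[:, None] - X[None, :]) ** 2) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 200)               # toy 1-D "configuration space"
y = np.sin(X)                             # toy target
strata = np.digitize(X, bins=[-1.0, 1.0])  # three strata over the space

# Layer 1: an ensemble of models, each fit on a stratified subsample.
models = []
for _ in range(5):
    i = stratified_indices(strata, 20, rng)
    models.append(fit_krr(X[i], y[i]))

# Layer 2: average the ensemble's predictions.
predict = lambda Xq: np.mean([m(Xq) for m in models], axis=0)
```

Stratifying by region of configuration space keeps each subsampled model from concentrating on the most populated states.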

Ensemble Learning

RC-DARTS: Resource Constrained Differentiable Architecture Search

no code implementations 30 Dec 2019 Xiaojie Jin, Jiang Wang, Joshua Slocum, Ming-Hsuan Yang, Shengyang Dai, Shuicheng Yan, Jiashi Feng

In this paper, we propose the resource constrained differentiable architecture search (RC-DARTS) method to learn architectures that are significantly smaller and faster while achieving comparable accuracy.

Image Classification One-Shot Learning

Adversarial Examples Improve Image Recognition

6 code implementations CVPR 2020 Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, Quoc V. Le

We show that AdvProp improves a wide range of models on various image recognition tasks and performs better when the models are bigger.

Image Classification

Machine Learning of coarse-grained Molecular Dynamics Force Fields

no code implementations 4 Dec 2018 Jiang Wang, Simon Olsson, Christoph Wehmeyer, Adrià Pérez, Nicholas E. Charron, Gianni de Fabritiis, Frank Noé, Cecilia Clementi

We show that CGnets can capture all-atom explicit-solvent free energy surfaces with models using only a few coarse-grained beads and no solvent, while classical coarse-graining methods fail to capture crucial features of the free energy surface.

Dimensionality Reduction Learning Theory

NOTE-RCNN: NOise Tolerant Ensemble RCNN for Semi-Supervised Object Detection

no code implementations ICCV 2019 Jiyang Gao, Jiang Wang, Shengyang Dai, Li-Jia Li, Ram Nevatia

Compared to standard Faster RCNN, it contains three highlights: an ensemble of two classification heads and a distillation head to avoid overfitting on noisy labels and improve the mining precision; masking the negative sample loss in the box predictor to avoid the harm of false negative labels; and training the box regression head only on seed annotations to eliminate the harm from inaccurate boundaries of mined bounding boxes.
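
The negative-loss masking can be sketched as follows. The binary cross-entropy form, the shapes, and the `masked_bce` helper are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def masked_bce(scores, labels, is_seed):
    """Binary cross-entropy where negative (label == 0) terms are kept only
    for seed annotations; negatives from mined boxes are masked out, since
    a "negative" mined label may actually be a false negative.
    (Illustrative sketch of the masking idea, not the paper's exact loss.)
    """
    p = 1.0 / (1.0 + np.exp(-scores))
    per_box = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    mask = (labels == 1) | is_seed          # drop non-seed negatives
    return (per_box * mask).sum() / max(mask.sum(), 1)

scores = np.array([2.0, -1.0, -3.0])
labels = np.array([1.0, 0.0, 0.0])
is_seed = np.array([True, True, False])      # third box was mined
loss = masked_bce(scores, labels, is_seed)   # third box contributes nothing
```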

Semi-Supervised Object Detection Weakly Supervised Object Detection

Training Generative Adversarial Networks via Primal-Dual Subgradient Methods: A Lagrangian Perspective on GAN

no code implementations ICLR 2018 Xu Chen, Jiang Wang, Hao Ge

This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization.

Kernel Pooling for Convolutional Neural Networks

no code implementations CVPR 2017 Yin Cui, Feng Zhou, Jiang Wang, Xiao Liu, Yuanqing Lin, Serge Belongie

We demonstrate how to approximate kernels such as Gaussian RBF up to a given order using compact explicit feature maps in a parameter-free manner.
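
The "up to a given order" approximation rests on the Taylor expansion of the RBF kernel. Below is a minimal sketch of that expansion; the compact feature-map step the paper adds on top is omitted here:

```python
import numpy as np
from math import factorial

def rbf_taylor(x, y, gamma=0.5, order=6):
    """Truncated Taylor approximation of the Gaussian RBF kernel.

    For unit-norm x and y:
        k(x, y) = exp(-gamma * ||x - y||^2)
                = exp(-2*gamma) * sum_p (2*gamma)**p * (x.y)**p / p!
    Each (x.y)**p term is a polynomial kernel, so it admits an explicit
    feature map; compacting those maps is the part this sketch omits.
    """
    s = float(np.dot(x, y))
    series = sum((2 * gamma) ** p * s ** p / factorial(p)
                 for p in range(order + 1))
    return np.exp(-2 * gamma) * series

rng = np.random.default_rng(1)
x = rng.normal(size=8); x /= np.linalg.norm(x)
y = rng.normal(size=8); y /= np.linalg.norm(y)
exact = np.exp(-0.5 * np.sum((x - y) ** 2))   # gamma = 0.5
approx = rbf_taylor(x, y, gamma=0.5, order=6)
```

For unit-normalized inputs, |x·y| ≤ 1, so a low truncation order already gives a tight approximation.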

Face Recognition Fine-Grained Visual Categorization +2

Localizing by Describing: Attribute-Guided Attention Localization for Fine-Grained Recognition

no code implementations 20 May 2016 Xiao Liu, Jiang Wang, Shilei Wen, Errui Ding, Yuanqing Lin

By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with a reinforcement learning algorithm.

CNN-RNN: A Unified Framework for Multi-label Image Classification

1 code implementation CVPR 2016 Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, Wei Xu

While deep convolutional neural networks (CNNs) have shown great success in single-label image classification, it is important to note that real-world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image.

Classification General Classification +2

Fully Convolutional Attention Networks for Fine-Grained Recognition

no code implementations 22 Mar 2016 Xiao Liu, Tian Xia, Jiang Wang, Yi Yang, Feng Zhou, Yuanqing Lin

Fine-grained recognition is challenging due to its subtle local inter-class differences versus large intra-class variations such as poses.

Look and Think Twice: Capturing Top-Down Visual Attention With Feedback Convolutional Neural Networks

no code implementations ICCV 2015 Chunshui Cao, Xian-Ming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, Liang Wang, Chang Huang, Wei Xu, Deva Ramanan, Thomas S. Huang

While feedforward deep convolutional neural networks (CNNs) have been a great success in computer vision, it is important to remember that the human visual cortex generally contains more feedback connections than feedforward connections.

ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering

no code implementations 18 Nov 2015 Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, Ram Nevatia

ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics.
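
The question-configured convolution can be sketched as follows. The 1x1 kernel, the random projection, and all shapes here are illustrative assumptions rather than the model's actual configuration:

```python
import numpy as np

def question_configured_attention(feat, q_emb, rng):
    """Derive a conv kernel from a question embedding, convolve it with the
    image feature map, and softmax the result into an attention map.
    The linear projection and shapes are illustrative assumptions.
    feat: (C, H, W) image feature map; q_emb: (D,) question embedding.
    """
    C, H, W = feat.shape
    W_proj = rng.normal(scale=0.1, size=(C, q_emb.size))  # hypothetical projection
    kernel = W_proj @ q_emb                    # (C,): a 1x1 conv kernel
    scores = np.tensordot(kernel, feat, axes=([0], [0]))  # (H, W)
    e = np.exp(scores - scores.max())
    return e / e.sum()                         # attention map sums to 1

rng = np.random.default_rng(0)
att = question_configured_attention(rng.normal(size=(16, 7, 7)),
                                    rng.normal(size=32), rng)
```

The key point is that the kernel weights come from the question, so the same image yields different attention maps for different questions.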

Question Answering Visual Question Answering

Attention to Scale: Scale-aware Semantic Image Segmentation

no code implementations CVPR 2016 Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, Alan L. Yuille

We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model.
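
The scale-attention merge can be sketched as a per-pixel softmax over scales. The shapes and the `merge_scales` helper below are illustrative assumptions:

```python
import numpy as np

def merge_scales(score_maps, attn_logits):
    """Merge per-scale score maps with soft attention weights.

    score_maps:  (S, C, H, W) class scores from S scales, resized to a
                 common resolution; attn_logits: (S, H, W).
    The weights form a per-pixel softmax over scales, so the output is a
    convex combination of the scales' predictions at every position.
    """
    w = np.exp(attn_logits - attn_logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)       # (S, H, W), sums to 1 over S
    return (w[:, None, :, :] * score_maps).sum(axis=0)   # (C, H, W)

rng = np.random.default_rng(0)
merged = merge_scales(rng.normal(size=(2, 5, 4, 4)), rng.normal(size=(2, 4, 4)))
```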

Semantic Segmentation

Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks

no code implementations CVPR 2016 Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, Wei Xu

The sentence generator produces one simple short sentence that describes a specific short video interval.

Video Captioning

Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

1 code implementation ICCV 2015 Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille

In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task.

Image Captioning

Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)

2 code implementations 20 Dec 2014 Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille

In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions.

Image Captioning

Explain Images with Multimodal Recurrent Neural Networks

no code implementations 4 Oct 2014 Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille

In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images.

Cross-view Action Modeling, Learning and Recognition

no code implementations CVPR 2014 Jiang Wang, Xiaohan Nie, Yin Xia, Ying Wu, Song-Chun Zhu

We present a novel multiview spatio-temporal AND-OR graph (MST-AOG) representation for cross-view action recognition, i.e., the recognition is performed on the video from an unknown and unseen view.

Action Recognition

Learning Fine-grained Image Similarity with Deep Ranking

6 code implementations CVPR 2014 Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jinbin Wang, James Philbin, Bo Chen, Ying Wu

This paper proposes a deep ranking model that employs deep learning techniques to learn a similarity metric directly from images; it has a higher learning capability than models based on hand-crafted features.
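
A ranking model of this kind is typically trained with a triplet hinge loss. The sketch below assumes squared-L2 distances on embeddings and a unit margin, which may differ from the paper's exact setup:

```python
import numpy as np

def triplet_hinge_loss(q, pos, neg, margin=1.0):
    """Hinge loss on image triplets: the query embedding should be closer
    to the positive than to the negative by at least `margin`, measured
    with squared L2 distance (an illustrative choice).
    q, pos, neg: (B, D) batches of embeddings.
    """
    d_pos = np.sum((q - pos) ** 2, axis=-1)
    d_neg = np.sum((q - neg) ** 2, axis=-1)
    return np.maximum(0.0, margin + d_pos - d_neg).mean()

q = np.array([[0.0, 0.0]])
pos = np.array([[0.1, 0.0]])
neg = np.array([[2.0, 0.0]])
loss = triplet_hinge_loss(q, pos, neg)   # well-separated triplet: zero loss
```

Swapping the positive and negative in the call above violates the margin and produces a large positive loss, which is what drives the embedding to order images by similarity.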

General Classification

Visual Tracking via Locality Sensitive Histograms

no code implementations CVPR 2013 Shengfeng He, Qingxiong Yang, Rynson W. H. Lau, Jiang Wang, Ming-Hsuan Yang

A robust tracking framework based on the locality sensitive histograms is proposed, which consists of two main components: a new feature for tracking that is robust to illumination changes and a novel multi-region tracking algorithm that runs in real time even with hundreds of regions.
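
A locality sensitive histogram weights each pixel's contribution by an exponentially decaying factor of its distance, which admits a cheap two-pass recursion. The sketch below shows the idea in 1-D; the one-hot bin assignment is an illustrative simplification:

```python
import numpy as np

def locality_sensitive_histogram(pixels, n_bins, alpha=0.9):
    """1-D locality sensitive histograms: at each pixel p, a histogram in
    which pixel q contributes with weight alpha**|p - q|.  Computed in
    O(n_pixels * n_bins) with one left-to-right and one right-to-left
    recursive pass (shown here in 1-D for clarity).
    """
    n = len(pixels)
    Q = np.zeros((n, n_bins))
    Q[np.arange(n), pixels] = 1.0            # per-pixel one-hot bin counts
    left = np.zeros_like(Q)
    right = np.zeros_like(Q)
    for p in range(n):                       # left-to-right pass
        left[p] = Q[p] + (alpha * left[p - 1] if p else 0.0)
    for p in range(n - 1, -1, -1):           # right-to-left pass
        right[p] = Q[p] + (alpha * right[p + 1] if p < n - 1 else 0.0)
    return left + right - Q                  # pixel p counted once, not twice

H = locality_sensitive_histogram(np.array([0, 1, 1, 0]), n_bins=2, alpha=0.5)
```

The recursion is what makes the feature cheap enough for the real-time multi-region tracking the snippet describes.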

Visual Tracking
