Search Results for author: Ming Wu

Found 16 papers, 9 papers with code

Lawin Transformer: Improving Semantic Segmentation Transformer with Multi-Scale Representations via Large Window Attention

2 code implementations • 5 Jan 2022 • Haotian Yan, Chuang Zhang, Ming Wu

In this paper, we introduce multi-scale representations into the semantic segmentation ViT via a window attention mechanism, further improving performance and efficiency.

Image Classification • Semantic Segmentation
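As a rough illustration of the large-window idea described above, here is a minimal sketch in which queries come from a small local window while keys and values come from a larger surrounding context window. The module name, sizes, and the cross-attention layout are illustrative assumptions, not the paper's Lawin module.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeWindowAttention(nn.Module):
    """Toy large-window attention: local query windows cross-attend to
    larger context windows (illustrative only, not the Lawin module)."""

    def __init__(self, dim, context_ratio=2, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ratio = context_ratio  # how much larger the context window is

    def forward(self, x, window=8):
        # x: (B, C, H, W) feature map
        B, C, H, W = x.shape
        # local query windows -> (B * num_windows, window*window, C)
        q = F.unfold(x, kernel_size=window, stride=window)
        N = q.shape[-1]
        q = q.transpose(1, 2).reshape(B * N, C, window * window).transpose(1, 2)
        # larger context windows centered on each query window
        cw = window * self.ratio
        ctx = F.unfold(x, kernel_size=cw, stride=window, padding=(cw - window) // 2)
        ctx = ctx.transpose(1, 2).reshape(B * N, C, cw * cw).transpose(1, 2)
        out, _ = self.attn(q, ctx, ctx)  # cross-attention to the large window
        return out.reshape(B, N, window * window, C)

x = torch.randn(1, 32, 32, 32)
print(LargeWindowAttention(32)(x).shape)  # (1, 16, 64, 32)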

SamplingAug: On the Importance of Patch Sampling Augmentation for Single Image Super-Resolution

1 code implementation • 30 Nov 2021 • Shizun Wang, Ming Lu, Kaixin Chen, Jiaming Liu, Xiaoqi Li, Chuang Zhang, Ming Wu

However, existing methods mostly train DNNs on uniformly sampled LR-HR patch pairs, which prevents them from fully exploiting the informative patches within an image.

Data Augmentation • Image Super-Resolution • +1
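A hedged sketch of the idea above: score candidate HR patches with a simple informativeness measure and keep only the most informative LR-HR pairs for training. The gradient-energy score and the strided downsampling used here are illustrative stand-ins, not necessarily the sampling metric or degradation used in SamplingAug.

import numpy as np

def informativeness(patch):
    """Illustrative score: mean gradient magnitude (edge/texture energy)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

def sample_informative_patches(hr, patch=48, scale=2, n_candidates=64, n_keep=16):
    """Draw random candidate HR patches, keep the most informative ones,
    and return matching LR/HR pairs (LR obtained by simple striding here)."""
    h, w = hr.shape[:2]
    cands = []
    for _ in range(n_candidates):
        y = np.random.randint(0, h - patch + 1)
        x = np.random.randint(0, w - patch + 1)
        p = hr[y:y + patch, x:x + patch]
        cands.append((informativeness(p), p))
    cands.sort(key=lambda t: t[0], reverse=True)
    return [(p[::scale, ::scale], p) for _, p in cands[:n_keep]]  # (LR, HR) pairs

hr_image = np.random.rand(256, 256)
lr, hr = sample_informative_patches(hr_image)[0]
print(lr.shape, hr.shape)  # (24, 24) (48, 48)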

ConTNet: Why not use convolution and transformer at the same time?

1 code implementation • 27 Apr 2021 • Haotian Yan, Zhe Li, Weijian Li, Changhu Wang, Ming Wu, Chuang Zhang

It is also worth pointing out that, given identical strong data augmentations, the performance improvement of ConTNet is more pronounced than that of ResNet.

Computer Vision • Image Classification • +2
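For context, a minimal sketch of the convolution-plus-transformer idea in the title: interleave a standard convolution with a transformer encoder layer over the flattened spatial tokens. The block layout and sizes are assumptions for illustration, not ConTNet's actual architecture.

import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    """Illustrative block interleaving convolution and self-attention."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.transformer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)

    def forward(self, x):                            # x: (B, C, H, W)
        x = self.conv(x)                             # local features via convolution
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        tokens = self.transformer(tokens)            # global self-attention
        return tokens.transpose(1, 2).reshape(B, C, H, W)

print(ConvTransformerBlock(64)(torch.randn(2, 64, 16, 16)).shape)  # (2, 64, 16, 16)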

Contextual Graph Reasoning Networks

no code implementations • 1 Jan 2021 • Zhaoqing Wang, Jiaming Liu, Yangyuxuan Kang, Mingming Gong, Chuang Zhang, Ming Lu, Ming Wu

Graph Reasoning has shown great potential recently in modeling long-range dependencies, which are crucial for various computer vision tasks.

Computer Vision • Instance Segmentation • +2
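As a rough sketch of graph reasoning for long-range dependencies, the toy module below projects pixels onto a handful of graph nodes, propagates information among the nodes, and projects back to the feature map. Node count, adjacency, and the residual layout are illustrative assumptions, not the paper's module.

import torch
import torch.nn as nn

class GraphReasoning(nn.Module):
    """Illustrative global reasoning: project pixels to a few graph nodes,
    propagate among nodes, project back (not the paper's exact module)."""
    def __init__(self, channels, num_nodes=16):
        super().__init__()
        self.assign = nn.Conv2d(channels, num_nodes, 1)   # soft pixel-to-node assignment
        self.gcn = nn.Linear(channels, channels)          # one-step node interaction
        self.adj = nn.Parameter(torch.eye(num_nodes))     # learnable node adjacency

    def forward(self, x):                                 # x: (B, C, H, W)
        B, C, H, W = x.shape
        a = self.assign(x).flatten(2).softmax(dim=-1)     # (B, K, H*W)
        feats = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        nodes = a @ feats                                 # (B, K, C) pooled node features
        nodes = torch.relu(self.gcn(self.adj @ nodes))    # propagate along the graph
        out = a.transpose(1, 2) @ nodes                   # redistribute to pixels
        return x + out.transpose(1, 2).reshape(B, C, H, W)

print(GraphReasoning(64)(torch.randn(1, 64, 8, 8)).shape)  # (1, 64, 8, 8)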

Adaptive Self-training for Neural Sequence Labeling with Few Labels

no code implementations • 1 Jan 2021 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah

Neural sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing.

Meta-Learning • named-entity-recognition • +3

Adaptive Self-training for Few-shot Neural Sequence Labeling

no code implementations • 7 Oct 2020 • Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, Ahmed Hassan Awadallah

While self-training serves as an effective mechanism to learn from large amounts of unlabeled data, meta-learning helps with adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels.

Meta-Learning • named-entity-recognition • +3
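A schematic sketch of the self-training loop with sample re-weighting described above: a teacher pseudo-labels unlabeled token sequences and per-token weights down-weight likely-noisy labels. The paper learns these weights via meta-learning; the fixed confidence heuristic below is only a stand-in for that component, and all names are illustrative.

import torch
import torch.nn.functional as F

def self_training_step(student, teacher, optimizer, unlabeled_batch, threshold=0.9):
    """One illustrative self-training step with confidence-based re-weighting
    (a stand-in for the paper's meta-learned weights)."""
    with torch.no_grad():
        probs = teacher(unlabeled_batch).softmax(dim=-1)   # (B, T, num_labels)
        conf, pseudo = probs.max(dim=-1)                   # confidence and pseudo-labels
    logits = student(unlabeled_batch)                      # (B, T, num_labels)
    loss_tok = F.cross_entropy(
        logits.transpose(1, 2), pseudo, reduction="none")  # per-token loss, (B, T)
    weights = (conf > threshold).float() * conf            # down-weight noisy pseudo-labels
    loss = (weights * loss_tok).sum() / weights.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# hypothetical usage with a toy token classifier acting as both teacher and student
model = torch.nn.Linear(16, 5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = torch.randn(4, 12, 16)  # (batch, tokens, features)
print(self_training_step(model, model, opt, batch, threshold=0.2))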

GINet: Graph Interaction Network for Scene Parsing

1 code implementation • ECCV 2020 • Tianyi Wu, Yu Lu, Yu Zhu, Chuang Zhang, Ming Wu, Zhanyu Ma, Guodong Guo

The GI unit is further improved by the SC-loss to enhance the semantic representations over the exemplar-based semantic graph.

Scene Parsing

FGSD: A Dataset for Fine-Grained Ship Detection in High Resolution Satellite Images

no code implementations • 15 Mar 2020 • Kaiyan Chen, Ming Wu, Jiaming Liu, Chuang Zhang

To further promote research on ship detection, we introduce a new fine-grained ship detection dataset named FGSD.

C-DLinkNet: considering multi-level semantic features for human parsing

no code implementations • 31 Jan 2020 • Yu Lu, Muyan Feng, Ming Wu, Chuang Zhang

Human parsing is an essential branch of semantic segmentation: a fine-grained segmentation task that identifies the constituent parts of the human body.

Human Parsing • Semantic Segmentation

Learning Feature Interactions with Lorentzian Factorization Machine

1 code implementation • 22 Nov 2019 • Canran Xu, Ming Wu

Learning representations for feature interactions to model user behaviors is critical for recommendation systems and click-through rate (CTR) prediction.

Click-Through Rate Prediction
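The entry above embeds features in Lorentz (hyperbolic) space. For reference, a minimal sketch of the underlying geometric primitive, the Lorentzian inner product between points on the hyperboloid, is given below; the exact interaction score used by LorentzFM differs in detail, so treat this only as the basic building block.

import numpy as np

def lorentz_inner(u, v):
    """Lorentzian inner product <u, v>_L = -u0*v0 + sum_i ui*vi."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def lift_to_hyperboloid(x):
    """Map a Euclidean vector x onto the hyperboloid by solving
    -x0^2 + ||x||^2 = -1 for the time-like coordinate x0."""
    x0 = np.sqrt(1.0 + np.dot(x, x))
    return np.concatenate(([x0], x))

u = lift_to_hyperboloid(np.array([0.1, -0.2, 0.3]))
v = lift_to_hyperboloid(np.array([0.0, 0.4, -0.1]))
print(lorentz_inner(u, u))  # approximately -1 for points on the hyperboloid
print(lorentz_inner(u, v))  # pairwise interaction primitive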

Towards Efficient Large-Scale Graph Neural Network Computing

no code implementations • 19 Oct 2018 • Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, Yafei Dai

This evolution has led to large graph-based irregular and sparse models that go beyond what existing deep learning frameworks are designed for.

graph partitioning • Knowledge Graphs
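To make the "irregular and sparse" computation pattern mentioned above concrete, here is a tiny edge-list-driven neighbor aggregation, the kind of scatter-style operation that dense-tensor frameworks are not built around. This is purely illustrative and says nothing about the paper's actual system design.

import numpy as np

num_nodes, dim = 5, 4
feats = np.random.rand(num_nodes, dim)
edges = np.array([[0, 1], [1, 2], [3, 2], [4, 0], [2, 4]])  # (src, dst) pairs

agg = np.zeros_like(feats)
np.add.at(agg, edges[:, 1], feats[edges[:, 0]])   # scatter-add messages to dst nodes
deg = np.bincount(edges[:, 1], minlength=num_nodes).reshape(-1, 1)
out = agg / np.maximum(deg, 1)                    # mean aggregation per node
print(out.shape)  # (5, 4)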

RPC Considered Harmful: Fast Distributed Deep Learning on RDMA

no code implementations • 22 May 2018 • Jilong Xue, Youshan Miao, Cheng Chen, Ming Wu, Lintao Zhang, Lidong Zhou

Its computation is typically characterized by a simple tensor data abstraction to model multi-dimensional matrices, a data-flow graph to model computation, and iterative executions with relatively frequent synchronizations, thereby making it substantially different from Map/Reduce style distributed big data computation.

Computer Vision • Natural Language Processing
