Search Results for author: Hai Wang

Found 21 papers, 3 papers with code

ESOD: Edge-based Task Scheduling for Object Detection

no code implementations 20 Oct 2021 Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao

In detail, we train a DNN model (termed the pre-model) to predict, from physical characteristics of the image task (e.g., brightness, saturation), which object detection model to use for the incoming task and which edge server to offload it to.

Object Detection

Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning

no code implementations27 Jul 2021 Hoifung Poon, Hai Wang, Hunter Lang

We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning.

Active Learning, Language Modelling +4

Constrained Radar Waveform Design for Range Profiling

no code implementations 18 Mar 2021 Bo Tang, Jun Liu, Hai Wang, Yihua Hu

Range profiling refers to the measurement of target response along the radar slant range.

Contextual Heterogeneous Graph Network for Human-Object Interaction Detection

no code implementations ECCV 2020 Hai Wang, Wei-Shi Zheng, Ling Yingbiao

However, previous graph models treat human and object as the same kind of node and do not account for the fact that messages passed between different kinds of entities are not equivalent.

Graph Attention, Human-Object Interaction Detection

Knowledge Efficient Deep Learning for Natural Language Processing

no code implementations 28 Aug 2020 Hai Wang

Second, we apply a KRDL model to assist machine reading models in finding the correct evidence sentences that can support their decisions.

Language Modelling, Multi-Task Learning +1

On-The-Fly Information Retrieval Augmentation for Language Models

no code implementations WS 2020 Hai Wang, David McAllester

Here we experiment with the use of information retrieval as an augmentation for pre-trained language models.

Information Retrieval

MixPUL: Consistency-based Augmentation for Positive and Unlabeled Learning

no code implementations 20 Apr 2020 Tong Wei, Feng Shi, Hai Wang, Wei-Wei Tu, Yu-Feng Li

To facilitate supervised consistency, reliable negative examples are mined from the unlabeled data, since labeled negative samples are absent.

Data Augmentation
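The negative-mining step described above can be sketched generically (a centroid-distance heuristic on synthetic data; this is an illustration of the idea, not the MixPUL implementation):

```python
import numpy as np

# Sketch of reliable-negative mining for positive-unlabeled (PU) learning:
# with only positive labels available, treat the unlabeled points least
# similar to the positive class as reliable negatives.
rng = np.random.default_rng(0)
pos = rng.normal(loc=2.0, size=(50, 2))                      # labeled positives
unlabeled = np.vstack([rng.normal(loc=2.0, size=(30, 2)),    # hidden positives
                       rng.normal(loc=-2.0, size=(70, 2))])  # hidden negatives

centroid = pos.mean(axis=0)
dist = np.linalg.norm(unlabeled - centroid, axis=1)
k = 40
reliable_neg_idx = np.argsort(dist)[-k:]   # the k points farthest from the positives
```

In this synthetic setup, indices 30 and above are the hidden negatives, so the mined set should consist almost entirely of true negatives.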

Improving Pre-Trained Multilingual Model with Vocabulary Expansion

no code implementations CoNLL 2019 Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu

However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.

Language Modelling, Machine Reading Comprehension +3

Improving Pre-Trained Multilingual Models with Vocabulary Expansion

no code implementations 26 Sep 2019 Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu

However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.

Language Modelling, Machine Reading Comprehension +3

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

no code implementations 21 Oct 2018 Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang

We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and the implications of compression for model storage size, inference time, energy consumption, and performance metrics.

Image Classification, Model Compression +1
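The quantization half of the storage/accuracy trade-off described above can be illustrated with a minimal sketch (a generic symmetric int8 scheme, not the paper's implementation):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a float32 weight tensor to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the int8 codes back to approximate float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# Storage shrinks 4x (float32 -> int8); the per-weight reconstruction
# error is bounded by half the quantization step.
storage_ratio = w.nbytes / q.nbytes
err = np.abs(w - dequantize(q, scale)).max()
```

Pruning works on the complementary axis: instead of shrinking each weight's representation, it zeroes out low-magnitude weights so the tensor can be stored sparsely.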

Learning to Globally Edit Images with Textual Description

no code implementations 13 Oct 2018 Hai Wang, Jason D. Williams, Sing Bing Kang

The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end.

Deep Probabilistic Logic: A Unifying Framework for Indirect Supervision

no code implementations EMNLP 2018 Hai Wang, Hoifung Poon

In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, by composing probabilistic logic with deep learning.

Reading Comprehension, Representation Learning

Emergent Predication Structure in Hidden State Vectors of Neural Readers

no code implementations WS 2017 Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester

A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets.

Reading Comprehension

Broad Context Language Modeling as Reading Comprehension

no code implementations EACL 2017 Zewei Chu, Hai Wang, Kevin Gimpel, David McAllester

Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015).

Coreference Resolution, Language Modelling +1

Who did What: A Large-Scale Person-Centered Cloze Dataset

no code implementations EMNLP 2016 Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, David McAllester

We have constructed a new "Who-did-What" dataset of over 200,000 fill-in-the-gap (cloze) multiple-choice reading comprehension problems drawn from the LDC English Gigaword newswire corpus.

Reading Comprehension

Reducing Runtime by Recycling Samples

no code implementations 5 Feb 2016 Jialei Wang, Hai Wang, Nathan Srebro

Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available.
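The variance-reduced methods the abstract refers to share the structure below (a minimal SVRG loop on a synthetic least-squares problem; the step size and epoch counts are illustrative, and this sketch does not implement the paper's sample-recycling proposal):

```python
import numpy as np

def svrg(X, y, lr=0.05, epochs=20, seed=0):
    """Minimal SVRG for noiseless least squares: each stochastic gradient is
    corrected by the same sample's gradient at a snapshot point plus the
    snapshot's full gradient, which shrinks the estimator's variance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = X.T @ (X @ w_snap - y) / n           # full gradient at the snapshot
        for i in rng.integers(0, n, size=n):      # one stochastic pass per epoch
            gi = X[i] * (X[i] @ w - y[i])         # gradient of sample i at w
            gi_snap = X[i] * (X[i] @ w_snap - y[i])
            w -= lr * (gi - gi_snap + mu)         # variance-reduced update
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true                                    # noiseless targets
w_hat = svrg(X, y)
```

Because the correction terms reuse per-sample gradients evaluated at the snapshot, the inner loop revisits samples rather than requiring a fresh draw each step, which is the setting in which the paper argues recycling can outperform fresh sampling.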
