1 code implementation • 28 Oct 2023 • Hai Wang, Xiaoyu Xiang, Yuchen Fan, Jing-Hao Xue
To address this issue, we propose a method called StitchDiffusion.
1 code implementation • 10 Oct 2023 • Caizhen He, Hai Wang, Long Chen, Tong Luo, Yingfeng Cai
This study shows that V2X-AHD effectively improves the accuracy of 3D object detection while reducing the number of network parameters, and serves as a benchmark for cooperative perception.
Ranked #2 on 3D Object Detection on V2XSet
1 code implementation • 31 Jul 2023 • Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin
We find that our proposed method is highly effective in steering the LLM.
no code implementations • 20 Jul 2023 • Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay Srinivasan, Hongxia Jin
We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them.
2 code implementations • 17 Jul 2023 • Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin
Large language models (LLMs) strengthen instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data.
1 code implementation • 14 Mar 2022 • Hai Wang, Xiaoyu Xiang, Yapeng Tian, Wenming Yang, Qingmin Liao
Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction.
no code implementations • 20 Oct 2021 • Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao
Specifically, we train a DNN model (termed the pre-model) to predict which object detection model to use for an incoming task, and which edge server to offload it to, based on physical characteristics of the image task (e.g., brightness, saturation).
no code implementations • 27 Jul 2021 • Hoifung Poon, Hai Wang, Hunter Lang
We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning.
no code implementations • 7 May 2021 • Jinjin Gu, Haoming Cai, Chao Dong, Jimmy S. Ren, Yu Qiao, Shuhang Gu, Radu Timofte, Manri Cheon, SungJun Yoon, Byungyeon Kang, Junwoo Lee, Qing Zhang, Haiyang Guo, Yi Bin, Yuqing Hou, Hengliang Luo, Jingyu Guo, ZiRui Wang, Hai Wang, Wenming Yang, Qingyan Bai, Shuwei Shi, Weihao Xia, Mingdeng Cao, Jiahao Wang, Yifan Chen, Yujiu Yang, Yang Li, Tao Zhang, Longtao Feng, Yiting Liao, Junlin Li, William Thong, Jose Costa Pereira, Ales Leonardis, Steven McDonagh, Kele Xu, Lehan Yang, Hengxing Cai, Pengfei Sun, Seyed Mehdi Ayyoubzadeh, Ali Royat, Sid Ahmed Fezza, Dounia Hammou, Wassim Hamidouche, Sewoong Ahn, Gwangjin Yoon, Koki Tsubota, Hiroaki Akutsu, Kiyoharu Aizawa
This paper reports on the NTIRE 2021 challenge on perceptual image quality assessment (IQA), held in conjunction with the New Trends in Image Restoration and Enhancement workshop (NTIRE) workshop at CVPR 2021.
1 code implementation • 21 Apr 2021 • Ren Yang, Radu Timofte, Jing Liu, Yi Xu, Xinjian Zhang, Minyi Zhao, Shuigeng Zhou, Kelvin C. K. Chan, Shangchen Zhou, Xiangyu Xu, Chen Change Loy, Xin Li, Fanglong Liu, He Zheng, Lielin Jiang, Qi Zhang, Dongliang He, Fu Li, Qingqing Dang, Yibin Huang, Matteo Maggioni, Zhongqian Fu, Shuai Xiao, Cheng Li, Thomas Tanay, Fenglong Song, Wentao Chao, Qiang Guo, Yan Liu, Jiang Li, Xiaochao Qu, Dewang Hou, Jiayu Yang, Lyn Jiang, Di You, Zhenyu Zhang, Chong Mou, Iaroslav Koshelev, Pavel Ostyakov, Andrey Somov, Jia Hao, Xueyi Zou, Shijie Zhao, Xiaopeng Sun, Yiting Liao, Yuanzhi Zhang, Qing Wang, Gen Zhan, Mengxi Guo, Junlin Li, Ming Lu, Zhan Ma, Pablo Navarrete Michelini, Hai Wang, Yiyun Chen, Jingyu Guo, Liliang Zhang, Wenming Yang, Sijung Kim, Syehoon Oh, Yucong Wang, Minjie Cai, Wei Hao, Kangdi Shi, Liangyan Li, Jun Chen, Wei Gao, Wang Liu, XiaoYu Zhang, Linjie Zhou, Sixin Lin, Ru Wang
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results.
no code implementations • 18 Mar 2021 • Bo Tang, Jun Liu, Hai Wang, Yihua Hu
Range profiling refers to the measurement of target response along the radar slant range.
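As a minimal illustration of the range-profiling concept described above (not the paper's method), pulse compression correlates the received signal with the transmitted waveform; peaks in the output magnitude mark target responses along the slant range. The Barker-13 waveform below is a standard textbook choice, used here only for demonstration.

```python
import numpy as np

def range_profile(rx, tx):
    """Basic pulse-compression sketch: matched-filter the received
    signal with the transmitted pulse. Peaks in the returned magnitude
    indicate target responses along the slant range (illustrative only)."""
    # Matched filtering = convolution with the conjugated, time-reversed pulse
    mf = np.conj(tx[::-1])
    return np.abs(np.convolve(rx, mf, mode="valid"))

# Example: a Barker-13 pulse echoed by a single target at range bin 10.
tx = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
rx = np.zeros(50)
rx[10:23] = tx                      # delayed copy of the pulse
profile = range_profile(rx, tx)     # sharp peak at the target's range bin
```

The peak of `profile` falls at index 10, the target's delay, with low sidelobes thanks to the Barker code's autocorrelation properties.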
no code implementations • ECCV 2020 • Hai Wang, Wei-Shi Zheng, Ling Yingbiao
However, previous graph models treat human and object nodes as the same kind of node, and do not account for the fact that messages passed between different types of entities are not equivalent.
no code implementations • 28 Aug 2020 • Hai Wang
Second, we apply a KRDL model to assist the machine reading models to find the correct evidence sentences that can support their decision.
no code implementations • WS 2020 • Hai Wang, David Mcallester
Here we experiment with the use of information retrieval as an augmentation for pre-trained language models.
no code implementations • 20 Apr 2020 • Tong Wei, Feng Shi, Hai Wang, Wei-Wei Tu, Yu-Feng Li
To facilitate supervised consistency, reliable negative examples are mined from the unlabeled data, since no labeled negative samples are available.
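One common heuristic for mining reliable negatives from unlabeled data, sketched below, treats the unlabeled points farthest from the positive class as likely negatives. This is a generic positive-unlabeled-learning illustration under that assumption, not necessarily the procedure used in the paper.

```python
import numpy as np

def mine_reliable_negatives(X_pos, X_unlabeled, k):
    """Return the k unlabeled points farthest from the positive centroid,
    treated as 'reliable negatives' (a simple PU-learning heuristic)."""
    centroid = X_pos.mean(axis=0)
    dists = np.linalg.norm(X_unlabeled - centroid, axis=1)
    idx = np.argsort(dists)[-k:]          # farthest from the positives
    return X_unlabeled[idx], idx

# Example: positives cluster near the origin; two unlabeled outliers
# far from that cluster are mined as reliable negatives.
X_pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
X_unl = np.array([[0.0, 0.2], [5.0, 5.0], [0.1, 0.1], [6.0, 6.0]])
negatives, neg_idx = mine_reliable_negatives(X_pos, X_unl, k=2)
```

In practice the mined negatives would then be paired with the labeled positives to train an ordinary supervised classifier.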
no code implementations • CONLL 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu
However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.
no code implementations • 26 Sep 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu
However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.
1 code implementation • CONLL 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David Mcallester, Dan Roth
Remarkable success has been achieved in the last few years on some limited machine reading comprehension (MRC) tasks.
no code implementations • 21 Oct 2018 • Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang
We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and examine the implications of compression for model storage size, inference time, energy consumption, and performance metrics.
no code implementations • 13 Oct 2018 • Hai Wang, Jason D. Williams, SingBing Kang
The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end.
no code implementations • EMNLP 2018 • Hai Wang, Hoifung Poon
In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, by composing probabilistic logic with deep learning.
no code implementations • WS 2017 • Hai Wang, Takeshi Onishi, Kevin Gimpel, David Mcallester
A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets.
no code implementations • EACL 2017 • Zewei Chu, Hai Wang, Kevin Gimpel, David Mcallester
Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015).
Ranked #32 on Language Modelling on LAMBADA
no code implementations • EMNLP 2016 • Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, David Mcallester
We have constructed a new "Who-did-What" dataset of over 200,000 fill-in-the-gap (cloze) multiple-choice reading comprehension problems, derived from the LDC English Gigaword newswire corpus.
no code implementations • 5 Feb 2016 • Jialei Wang, Hai Wang, Nathan Srebro
Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available.
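The variance-reduced methods named above (SDCA, SAG, SVRG) all correct each stochastic gradient with a periodically recomputed reference. The sketch below shows the SVRG update on a finite sum, with the inner loop drawing component indices with replacement so that previously used samples are reused, which is the regime the paper argues can be beneficial. This is an illustrative sketch, not the paper's code; the quadratic example data are assumptions for demonstration.

```python
import numpy as np

def svrg(grad_i, w0, n, lr=0.1, epochs=5, m=None, rng=None):
    """Minimal SVRG sketch for f(w) = (1/n) * sum_i f_i(w).
    grad_i(w, i) returns the gradient of the i-th component."""
    rng = rng or np.random.default_rng(0)
    m = m or 2 * n                       # inner-loop length per snapshot
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot point (the variance-reduction anchor).
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)          # samples are reused across epochs
            # Variance-reduced stochastic gradient.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g
    return w

# Example (assumed data): 1-D least squares with minimizer w = 2.
a = np.array([1.0, 0.5, -1.0, 2.0])
b = 2.0 * a
def grad_i(w, i):
    # Gradient of f_i(w) = 0.5 * (a_i * w - b_i)^2.
    return a[i] * (a[i] * w[0] - b[i]) * np.ones(1)

w = svrg(grad_i, np.zeros(1), n=4, lr=0.1, epochs=20)
```

Because each inner step reuses components already visited at the snapshot point, the correction term `grad_i(w, i) - grad_i(w_snap, i)` shrinks as `w` approaches `w_snap`, driving the gradient variance toward zero.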
1 code implementation • ACM Transactions on Graphics 2015 • Qixing Huang, Hai Wang, Vladlen Koltun
We present an approach to automatic 3D reconstruction of objects depicted in Web images.