no code implementations • 20 Feb 2025 • Hai Wang, Xiaoyu Xiang, Weihao Xia, Jing-Hao Xue
The advent of text-driven 360-degree panorama generation, enabling the synthesis of 360-degree panoramic images directly from textual descriptions, marks a transformative advancement in immersive visual content creation.
1 code implementation • 19 Nov 2024 • Haiping Ma, Aoqing Xia, Changqian Wang, Hai Wang, Xingyi Zhang
Redundant and extraneous cognitive states can lead to limited transfer and negative transfer effects.
3 code implementations • 4 Nov 2024 • Xingwu Sun, Yanfeng Chen, Yiqing Huang, Ruobing Xie, Jiaqi Zhu, Kai Zhang, Shuaipeng Li, Zhen Yang, Jonny Han, Xiaobo Shu, Jiahao Bu, Zhongzhi Chen, Xuemeng Huang, Fengzong Lian, Saiyong Yang, Jianfeng Yan, Yuyuan Zeng, Xiaoqin Ren, Chao Yu, Lulu Wu, Yue Mao, Jun Xia, Tao Yang, Suncong Zheng, Kan Wu, Dian Jiao, Jinbao Xue, Xipeng Zhang, Decheng Wu, Kai Liu, Dengpeng Wu, Guanghui Xu, Shaohua Chen, Shuang Chen, Xiao Feng, Yigeng Hong, Junqiang Zheng, Chengcheng Xu, Zongwei Li, Xiong Kuang, Jianglu Hu, Yiqi Chen, Yuchi Deng, Guiyang Li, Ao Liu, Chenchen Zhang, Shihui Hu, Zilong Zhao, Zifan Wu, Yao Ding, Weichao Wang, Han Liu, Roberts Wang, Hao Fei, Peijie Yu, Ze Zhao, Xun Cao, Hai Wang, Fusheng Xiang, Mengyuan Huang, Zhiyuan Xiong, Bin Hu, Xuebin Hou, Lei Jiang, Jianqiang Ma, Jiajia Wu, Yaping Deng, Yi Shen, Qian Wang, Weijie Liu, Jie Liu, Meng Chen, Liang Dong, Weiwen Jia, Hu Chen, Feifei Liu, Rui Yuan, Huilin Xu, Zhenxiang Yan, Tengfei Cao, Zhichao Hu, Xinhua Feng, Dong Du, TingHao Yu, Yangyu Tao, Feng Zhang, Jianchen Zhu, Chengzhong Xu, Xirui Li, Chong Zha, Wen Ouyang, Yinben Xia, Xiang Li, Zekun He, Rongpeng Chen, Jiawei Song, Ruibin Chen, Fan Jiang, Chongqing Zhao, Bo Wang, Hao Gong, Rong Gan, Winston Hu, Zhanhui Kang, Yong Yang, Yuhong Liu, Di Wang, Jie Jiang
In this paper, we introduce Hunyuan-Large, currently the largest open-source Transformer-based mixture-of-experts model, with a total of 389 billion parameters and 52 billion activated parameters, capable of handling up to 256K tokens.
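The gap between total and activated parameters comes from sparse expert routing: each token is processed by only its top-k experts, so only a fraction of the weights run per token. The snippet below is a toy NumPy illustration of that routing idea, not the Hunyuan-Large architecture; all shapes, names, and the top-k choice are invented for illustration.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Route a token to its top-k experts and mix their outputs.

    x: (d,) token embedding; gate_w: (d, n_experts) router weights;
    expert_ws: list of (d, d) per-expert weight matrices.
    """
    logits = x @ gate_w                    # router score for every expert
    top = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts
    # Only the selected experts run, so activated parameters << total parameters.
    return sum(w * (x @ expert_ws[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
out = moe_forward(rng.normal(size=d),
                  rng.normal(size=(d, n_experts)),
                  [rng.normal(size=(d, d)) for _ in range(n_experts)])
print(out.shape)  # (8,)
```

With top_k=2 of 4 experts, only half of the expert weights participate in any single forward pass, which is the same accounting that yields 52B activated out of 389B total.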
no code implementations • 25 Oct 2024 • Taicheng Guo, Chaochun Liu, Hai Wang, Varun Mannam, Fang Wang, Xin Chen, Xiangliang Zhang, Chandan K. Reddy
Our key insight is that the paths in a KG can capture complex relationships between users and items, eliciting the underlying reasons for user preferences and enriching user profiles.
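The idea that KG paths expose the reasons behind a preference can be made concrete with a simple path enumerator between a user and an item. The toy graph, relation names, and BFS helper below are invented for illustration and are not from the paper.

```python
from collections import deque

def kg_paths(edges, start, goal, max_len=3):
    """Enumerate relation paths of up to max_len hops between two entities."""
    adj = {}
    for head, rel, tail in edges:
        adj.setdefault(head, []).append((rel, tail))
    paths, queue = [], deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal and path:
            paths.append(path)
        if len(path) < max_len:
            for rel, nxt in adj.get(node, []):
                queue.append((nxt, path + [rel]))
    return paths

# Toy graph: the user bought an item whose brand also makes another item.
edges = [("user", "bought", "item1"),
         ("item1", "brand", "acme"),
         ("acme", "makes", "item2")]
print(kg_paths(edges, "user", "item2"))  # [['bought', 'brand', 'makes']]
```

The recovered path ("bought" → "brand" → "makes") reads as an explanation: the user may like item2 because they bought another product of the same brand.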
1 code implementation • 12 Sep 2024 • Hai Wang, Jing-Hao Xue
Second, leveraging this embedded noisy latent representation and guided by a target prompt, the seamless tiling translation with spatial control generates a translated image with identical left and right halves while adhering to the extended input's structure and semantic layout.
no code implementations • 1 Sep 2024 • Jinming Wang, Hai Wang, Hongkai Wen, Geyong Min, Man Luo
With the proliferation of location-aware devices, large amounts of trajectory data have been generated as agents such as people, vehicles, and goods move around the urban environment.
1 code implementation • 24 Dec 2023 • Lezhi Li, Ting-Yu Chang, Hai Wang
This report outlines a transformative initiative in the financial investment industry, where the conventional decision-making process, laden with labor-intensive tasks such as sifting through voluminous documents, is being reimagined.
1 code implementation • 28 Oct 2023 • Hai Wang, Xiaoyu Xiang, Yuchen Fan, Jing-Hao Xue
To address this issue, we propose a method called StitchDiffusion.
1 code implementation • 10 Oct 2023 • Caizhen He, Hai Wang, Long Chen, Tong Luo, Yingfeng Cai
According to this study, V2X-AHD effectively improves the accuracy of 3D object detection while reducing the number of network parameters, and it serves as a benchmark for cooperative perception.
Ranked #2 on 3D Object Detection on V2XSet
1 code implementation • 31 Jul 2023 • Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin
To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data, which proves highly effective in steering the LLM.
no code implementations • 20 Jul 2023 • Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay Srinivasan, Hongxia Jin
We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them.
3 code implementations • 17 Jul 2023 • Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin
Large language models (LLMs) strengthen instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data.
1 code implementation • 14 Mar 2022 • Hai Wang, Xiaoyu Xiang, Yapeng Tian, Wenming Yang, Qingmin Liao
Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction.
no code implementations • 20 Oct 2021 • Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao
Specifically, we train a DNN model (termed the pre-model) that uses physical characteristics of the image task (e.g., brightness, saturation) to predict which object detection model to use for the incoming task and which edge server to offload it to.
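As a rough illustration of the pre-model idea, the hypothetical rule below selects a detector from cheap per-image statistics. The real system learns this mapping with a DNN; the features, thresholds, and model names here are made up for the sketch.

```python
import numpy as np

def image_features(img):
    """Cheap per-image features (assumed: brightness = mean intensity,
    saturation = average channel spread) used to route the task."""
    brightness = img.mean()
    saturation = (img.max(axis=-1) - img.min(axis=-1)).mean()
    return brightness, saturation

def choose_detector(img, threshold=0.5):
    """Toy stand-in for the learned pre-model: easy (bright, vivid)
    images go to a fast edge model, hard ones to an accurate model."""
    b, s = image_features(img)
    return "fast-edge-model" if b > threshold and s > 0.2 else "accurate-cloud-model"

dark = np.zeros((4, 4, 3))  # a dark, flat image is treated as "hard"
print(choose_detector(dark))  # accurate-cloud-model
```

The learned pre-model replaces this hand-written rule, but the input/output contract is the same: image statistics in, a (model, server) decision out, at negligible cost compared with running detection itself.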
no code implementations • 27 Jul 2021 • Hoifung Poon, Hai Wang, Hunter Lang
We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning.
no code implementations • 7 May 2021 • Jinjin Gu, Haoming Cai, Chao Dong, Jimmy S. Ren, Yu Qiao, Shuhang Gu, Radu Timofte, Manri Cheon, SungJun Yoon, Byungyeon Kang, Junwoo Lee, Qing Zhang, Haiyang Guo, Yi Bin, Yuqing Hou, Hengliang Luo, Jingyu Guo, ZiRui Wang, Hai Wang, Wenming Yang, Qingyan Bai, Shuwei Shi, Weihao Xia, Mingdeng Cao, Jiahao Wang, Yifan Chen, Yujiu Yang, Yang Li, Tao Zhang, Longtao Feng, Yiting Liao, Junlin Li, William Thong, Jose Costa Pereira, Ales Leonardis, Steven McDonagh, Kele Xu, Lehan Yang, Hengxing Cai, Pengfei Sun, Seyed Mehdi Ayyoubzadeh, Ali Royat, Sid Ahmed Fezza, Dounia Hammou, Wassim Hamidouche, Sewoong Ahn, Gwangjin Yoon, Koki Tsubota, Hiroaki Akutsu, Kiyoharu Aizawa
This paper reports on the NTIRE 2021 challenge on perceptual image quality assessment (IQA), held in conjunction with the New Trends in Image Restoration and Enhancement (NTIRE) workshop at CVPR 2021.
1 code implementation • 21 Apr 2021 • Ren Yang, Radu Timofte, Jing Liu, Yi Xu, Xinjian Zhang, Minyi Zhao, Shuigeng Zhou, Kelvin C. K. Chan, Shangchen Zhou, Xiangyu Xu, Chen Change Loy, Xin Li, Fanglong Liu, He Zheng, Lielin Jiang, Qi Zhang, Dongliang He, Fu Li, Qingqing Dang, Yibin Huang, Matteo Maggioni, Zhongqian Fu, Shuai Xiao, Cheng Li, Thomas Tanay, Fenglong Song, Wentao Chao, Qiang Guo, Yan Liu, Jiang Li, Xiaochao Qu, Dewang Hou, Jiayu Yang, Lyn Jiang, Di You, Zhenyu Zhang, Chong Mou, Iaroslav Koshelev, Pavel Ostyakov, Andrey Somov, Jia Hao, Xueyi Zou, Shijie Zhao, Xiaopeng Sun, Yiting Liao, Yuanzhi Zhang, Qing Wang, Gen Zhan, Mengxi Guo, Junlin Li, Ming Lu, Zhan Ma, Pablo Navarrete Michelini, Hai Wang, Yiyun Chen, Jingyu Guo, Liliang Zhang, Wenming Yang, Sijung Kim, Syehoon Oh, Yucong Wang, Minjie Cai, Wei Hao, Kangdi Shi, Liangyan Li, Jun Chen, Wei Gao, Wang Liu, XiaoYu Zhang, Linjie Zhou, Sixin Lin, Ru Wang
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results.
no code implementations • 18 Mar 2021 • Bo Tang, Jun Liu, Hai Wang, Yihua Hu
Range profiling refers to the measurement of target response along the radar slant range.
no code implementations • ECCV 2020 • Hai Wang, Wei-Shi Zheng, Ling Yingbiao
However, previous graph models regard humans and objects as the same kind of node and do not account for the fact that the messages passed between different kinds of entities are not equivalent.
no code implementations • 28 Aug 2020 • Hai Wang
Second, we apply a KRDL model to help the machine reading models find the correct evidence sentences that can support their decisions.
no code implementations • WS 2020 • Hai Wang, David McAllester
Here we experiment with the use of information retrieval as an augmentation for pre-trained language models.
no code implementations • 20 Apr 2020 • Tong Wei, Feng Shi, Hai Wang, Wei-Wei Tu, Yu-Feng Li
To facilitate supervised consistency, reliable negative examples are mined from unlabeled data due to the absence of negative samples.
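When no labeled negatives exist, one simple way to mine reliable negatives is to take the unlabeled points least similar to the positive class. The centroid-distance heuristic below is a generic positive-unlabeled (PU) learning sketch, not necessarily the selection rule used in the paper; the fraction parameter is illustrative.

```python
import numpy as np

def mine_reliable_negatives(pos, unlabeled, frac=0.3):
    """Select the unlabeled points farthest from the positive centroid
    as reliable negatives (a common PU-learning heuristic)."""
    centroid = pos.mean(axis=0)
    dists = np.linalg.norm(unlabeled - centroid, axis=1)
    k = max(1, int(frac * len(unlabeled)))
    idx = np.argsort(dists)[-k:]   # indices farthest from the positives
    return unlabeled[idx]

pos = np.array([[0.0, 0.0], [0.2, 0.1]])
unl = np.array([[0.1, 0.0], [5.0, 5.0], [4.8, 5.2], [0.05, 0.1]])
neg = mine_reliable_negatives(pos, unl, frac=0.5)
print(neg)  # the two points far from the positive cluster
```

The mined points then serve as pseudo-labeled negatives for a standard supervised consistency loss, while ambiguous unlabeled points near the positive cluster are left out.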
no code implementations • CONLL 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu
However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.
no code implementations • 26 Sep 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu
However, in a multilingual setting, it is extremely resource-consuming to pre-train a deep language model over large-scale corpora for each language.
1 code implementation • CONLL 2019 • Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David McAllester, Dan Roth
Remarkable success has been achieved in the last few years on some limited machine reading comprehension (MRC) tasks.
no code implementations • 21 Oct 2018 • Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang
We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and what the implications of compression are for model storage size, inference time, energy consumption, and performance metrics.
no code implementations • 13 Oct 2018 • Hai Wang, Jason D. Williams, Sing Bing Kang
The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end.
no code implementations • EMNLP 2018 • Hai Wang, Hoifung Poon
In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, by composing probabilistic logic with deep learning.
no code implementations • WS 2017 • Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester
A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets.
no code implementations • EACL 2017 • Zewei Chu, Hai Wang, Kevin Gimpel, David McAllester
Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015).
Ranked #32 on Language Modelling on LAMBADA
no code implementations • EMNLP 2016 • Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, David McAllester
We have constructed a new "Who-did-What" dataset of over 200,000 fill-in-the-gap (cloze) multiple choice reading comprehension problems constructed from the LDC English Gigaword newswire corpus.
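A cloze question of this kind is built by blanking an answer entity in one sentence while keeping surrounding text as context. The snippet below is a minimal, generic illustration of that construction; the actual Who-did-What pipeline pairs two news articles and additionally generates distractor answer choices.

```python
import re

def make_cloze(context, question_sentence, answer):
    """Blank the answer entity in one sentence to form a
    fill-in-the-gap (cloze) question over a news context."""
    blank = re.sub(re.escape(answer), "XXX", question_sentence)
    return {"context": context, "question": blank, "answer": answer}

q = make_cloze(
    context="Alice met Bob in Paris. They discussed the treaty.",
    question_sentence="Alice met Bob in Paris.",
    answer="Bob")
print(q["question"])  # Alice met XXX in Paris.
```

A reader (human or model) must use the context to recover the blanked entity, which is what makes the format a reading comprehension test rather than pure language modeling.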
no code implementations • 5 Feb 2016 • Jialei Wang, Hai Wang, Nathan Srebro
Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available.
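The sample-reuse claim can be made concrete with a minimal SVRG loop, which draws indices with replacement so earlier samples are deliberately revisited between full-gradient snapshots. This is a generic sketch of SVRG on a least-squares problem with illustrative hyperparameters, not the paper's exact setup.

```python
import numpy as np

def svrg(grad_i, w0, n, lr=0.02, epochs=50, seed=0):
    """Minimal SVRG: each epoch takes a full-gradient snapshot, then runs
    variance-reduced steps on samples drawn with replacement (reused)."""
    rng = np.random.default_rng(seed)
    w = w0.astype(float)
    for _ in range(epochs):
        snap = w.copy()
        full = np.mean([grad_i(snap, i) for i in range(n)], axis=0)
        for _ in range(n):
            i = rng.integers(n)  # samples are freely reused across epochs
            w = w - lr * (grad_i(w, i) - grad_i(snap, i) + full)
    return w

# Least squares: f(w) = mean_i (a_i . w - b_i)^2 with known optimum w* = [1, -2].
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 2))
b = A @ np.array([1.0, -2.0])
grad_i = lambda w, i: 2 * A[i] * (A[i] @ w - b[i])
w = svrg(grad_i, np.zeros(2), n=50)
print(np.round(w, 3))  # converges toward [1., -2.]
```

Because the correction term grad_i(w, i) - grad_i(snap, i) + full keeps the stochastic gradient unbiased with shrinking variance, revisiting old samples costs nothing in bias, which is the intuition behind preferring reuse over fresh samples in the variance-reduced regime.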
1 code implementation • ACM Transactions on Graphics 2015 • Qixing Huang, Hai Wang, Vladlen Koltun
We present an approach to automatic 3D reconstruction of objects depicted in Web images.