no code implementations • EMNLP 2021 • Xiaobao Guo, Adams Kong, Huan Zhou, Xianfeng Wang, Min Wang
Specifically, to improve unimodal representations, a unimodal refinement module is designed to refine modality-specific learning by iteratively updating the distribution with transformer-based attention layers.
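A minimal sketch of how such an iterative, transformer-attention-based refinement of unimodal features could look (module structure, dimensions, and iteration count are assumptions for illustration, not the authors' code):

```python
import torch
import torch.nn as nn

class UnimodalRefiner(nn.Module):
    """Iteratively refine modality-specific features with transformer-style self-attention.

    Hypothetical sketch: the same attention block is applied `num_iters` times so the
    unimodal representation is progressively updated.
    """
    def __init__(self, dim=256, num_heads=4, num_iters=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.num_iters = num_iters

    def forward(self, x):                          # x: (batch, seq_len, dim)
        for _ in range(self.num_iters):
            attn_out, _ = self.attn(x, x, x)       # self-attention over the modality sequence
            x = self.norm1(x + attn_out)           # residual update
            x = self.norm2(x + self.ffn(x))
        return x

refined = UnimodalRefiner()(torch.randn(2, 20, 256))   # e.g. a batch of frame-level features
```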
2 code implementations • 9 May 2022 • Wei Dai, Rui Liu, Tianyi Wu, Min Wang, Jianqin Yin, Jun Liu
The new lightweight HierAttn network has the potential to promote the use of deep learning in clinics and to enable patients to obtain early diagnosis of skin disorders on personal devices.
1 code implementation • 4 Mar 2022 • Zhengyang Feng, Shaohua Guo, Xin Tan, Ke Xu, Min Wang, Lizhuang Ma
This paper presents a novel parametric curve-based method for lane detection in RGB images.
Ranked #4 on Lane Detection on LLAMAS
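As an illustration of the parametric-curve idea, a lane can be represented by a handful of polynomial coefficients and rendered by sampling image rows; the cubic parameterization below is a hypothetical stand-in for whatever curve family the paper actually uses:

```python
import numpy as np

def sample_lane_points(coeffs, y_samples):
    """Evaluate a cubic lane curve x = a*y^3 + b*y^2 + c*y + d at given image rows.

    Illustrative only: the paper's actual curve parameterization may differ.
    """
    a, b, c, d = coeffs
    return a * y_samples**3 + b * y_samples**2 + c * y_samples + d

ys = np.linspace(300, 710, 40)                              # image rows (pixels) to evaluate
xs = sample_lane_points((1e-7, -2e-4, 0.3, 400.0), ys)      # hypothetical coefficients
lane = np.stack([xs, ys], axis=1)                           # (N, 2) lane points for drawing/metrics
```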
no code implementations • 13 Feb 2022 • Wanglong Lu, Hanli Zhao, Xianta Jiang, Xiaogang Jin, Min Wang, Jiankai Lyu, Kaijie Shi
Facial image inpainting is the task of filling in visually realistic and semantically meaningful content for missing or masked pixels in a face image.
no code implementations • 1 Jan 2022 • Hao Yang, Min Wang, Zhengfei Yu, Yun Zhou
Extensive experiments on well-known white- and black-box attacks show that MFDV-SNN achieves a significant improvement over existing methods, which indicates that it is a simple but effective method to improve model robustness.
1 code implementation • 12 Dec 2021 • Hui Wu, Min Wang, Wengang Zhou, Yang Hu, Houqiang Li
Next, a refinement block is introduced to enhance the visual tokens with self-attention and cross-attention.
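A minimal sketch of a token-refinement block that combines self-attention with cross-attention to another feature stream (names, shapes, and the choice of context are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class TokenRefineBlock(nn.Module):
    """Refine visual tokens with self-attention, then cross-attention to a context sequence."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens, context):
        x, _ = self.self_attn(tokens, tokens, tokens)       # relate visual tokens to each other
        tokens = self.norm1(tokens + x)
        x, _ = self.cross_attn(tokens, context, context)    # attend to the other branch
        return self.norm2(tokens + x)

vis = torch.randn(2, 49, 256)      # visual tokens (e.g. 7x7 feature map flattened)
ctx = torch.randn(2, 10, 256)      # hypothetical context tokens from another branch
out = TokenRefineBlock()(vis, ctx)
```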
1 code implementation • 25 Nov 2021 • Jiachen Xu, Min Wang, Jingyu Gong, Wentao Liu, Chen Qian, Yuan Xie, Lizhuang Ma
Priors play an important role in providing plausible constraints on human motion.
no code implementations • 15 Nov 2021 • Xiang Huang, Zhanhong Ye, Hongsheng Liu, Beiji Shi, Zidong Wang, Kang Yang, Yang Li, Bingya Weng, Min Wang, Haotian Chu, Jing Zhou, Fan Yu, Bei Hua, Lei Chen, Bin Dong
In these applications, our goal is to solve parametric PDEs rather than one instance of them.
no code implementations • 2 Nov 2021 • Xiang Huang, Hongsheng Liu, Beiji Shi, Zidong Wang, Kang Yang, Yang Li, Bingya Weng, Min Wang, Haotian Chu, Jing Zhou, Fan Yu, Bei Hua, Lei Chen, Bin Dong
In recent years, deep learning technology has been used to solve partial differential equations (PDEs), among which physics-informed neural networks (PINNs) have emerged as a promising method for solving both forward and inverse PDE problems.
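For readers unfamiliar with PINNs, the sketch below shows the core mechanism on a toy 1D Poisson problem: the network is trained to minimize the PDE residual plus a boundary-condition penalty. It is a generic illustration, not the solver proposed in the paper:

```python
import math
import torch
import torch.nn as nn

# Toy PINN for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 (exact solution sin(pi*x)).
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: (math.pi ** 2) * torch.sin(math.pi * x)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)                  # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_loss = ((-d2u - f(x)) ** 2).mean()                      # PDE residual
    bc_loss = (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()   # boundary conditions
    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```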
1 code implementation • NeurIPS 2021 • Jianbo Ouyang, Hui Wu, Min Wang, Wengang Zhou, Houqiang Li
Since our re-ranking model is not directly involved with the visual feature used in the initial retrieval, it is ready to be applied to retrieval result lists obtained from various retrieval algorithms.
no code implementations • 12 Sep 2021 • Libing Wu, Min Wang, Dan Wu, Jia Wu
Then, to efficiently utilize the historical state information of the intersection, we design a sequence model based on a temporal convolutional network (TCN) to capture this history and merge it with the spatial information to improve performance.
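A minimal sketch of encoding historical intersection states with dilated 1D convolutions (a TCN-style stack) and fusing the result with current spatial features; layer sizes and the fusion step are assumptions for illustration:

```python
import torch
import torch.nn as nn

class HistoryTCN(nn.Module):
    """Encode a history of intersection states with dilated 1D convolutions and fuse
    the summary with the current spatial features.  Hypothetical sketch only.
    """
    def __init__(self, state_dim=16, hidden=32, levels=3):
        super().__init__()
        layers, ch = [], state_dim
        for i in range(levels):
            d = 2 ** i   # exponentially growing dilation widens the temporal receptive field
            layers += [nn.Conv1d(ch, hidden, kernel_size=3, padding=d, dilation=d), nn.ReLU()]
            ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.fuse = nn.Linear(hidden + state_dim, hidden)

    def forward(self, history, current):           # history: (B, T, state_dim), current: (B, state_dim)
        h = self.tcn(history.transpose(1, 2))       # (B, hidden, T)
        h = h[..., -1]                              # summary at the most recent step
        return torch.relu(self.fuse(torch.cat([h, current], dim=-1)))

out = HistoryTCN()(torch.randn(4, 8, 16), torch.randn(4, 16))
```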
no code implementations • 23 Aug 2021 • Jian Zhao, Gang Wang, Jianan Li, Lei Jin, Nana Fan, Min Wang, Xiaojuan Wang, Ting Yong, Yafeng Deng, Yandong Guo, Shiming Ge, Guodong Guo
The 2nd Anti-UAV Workshop & Challenge aims to encourage research in developing novel and accurate methods for multi-scale object tracking.
1 code implementation • CVPR 2021 • Zedong Tang, Fenlong Jiang, Maoguo Gong, Hao Li, Yue Wu, Fan Yu, Zidong Wang, Min Wang
For the fully connected layers, by utilizing the low-rank property of the Kronecker factors of the Fisher information matrix, our method only requires inverting a small matrix to approximate the curvature with desirable accuracy.
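The computational benefit of a low-rank-plus-damping curvature factor can be illustrated with the Woodbury identity, which reduces an n x n inversion to an r x r one; this is a generic sketch of that idea, not the paper's exact algorithm:

```python
import numpy as np

def damped_inverse_times(U, damping, g):
    """Compute (U U^T + damping * I)^{-1} g via the Woodbury identity.

    U is n x r with r << n, so only an r x r matrix is inverted instead of n x n.
    Generic illustration of exploiting low-rank-plus-damping structure in a curvature block.
    """
    n, r = U.shape
    small = np.linalg.inv(damping * np.eye(r) + U.T @ U)   # only an r x r inverse
    return (g - U @ (small @ (U.T @ g))) / damping

U = np.random.randn(4096, 8)       # hypothetical low-rank factor of a curvature block
g = np.random.randn(4096)          # gradient for that block
step = damped_inverse_times(U, 1e-2, g)
```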
no code implementations • 3 Jun 2021 • Jie He, Min Wang
How does the control power of a corporation's largest (top-1) shareholder arise?
1 code implementation • AAAI Technical Track on Machine Learning 2021 • Mengyun Chen, Kaixin Gao, Xiaolei Liu, Zidong Wang, Ningxi Ni, Qian Zhang, Lei Chen, Chao Ding, ZhengHai Huang, Min Wang, Shuangling Wang, Fan Yu, Xinyuan Zhao, Dachuan Xu
It is well known that second-order optimizers can accelerate the training of deep neural networks; however, the huge computational cost of second-order optimization makes it impractical to apply in practice.
no code implementations • 28 Mar 2021 • Min Wang, Shanchen Pang, Tong Ding, Sibo Qiao, Xue Zhai, Shuo Wang, Neal N. Xiong, Zhengwen Huang
In addition, we design a utility prediction model for SSF based on a generative adversarial network (GAN) and a fully connected neural network (FCNN).
no code implementations • 5 Jan 2021 • Jianfeng Lu, Yulong Lu, Min Wang
This paper concerns the a priori generalization analysis of the Deep Ritz Method (DRM) [W. E and B. Yu, 2017], a popular neural-network-based method for solving high dimensional partial differential equations.
1 code implementation • ICCV 2021 • Hui Wu, Min Wang, Wengang Zhou, Houqiang Li
To this end, we propose a novel deep local feature learning architecture to simultaneously focus on multiple discriminative local patterns in an image.
no code implementations • 24 Dec 2020 • Zedong Tang, Fenlong Jiang, Junke Song, Maoguo Gong, Hao Li, Fan Yu, Zidong Wang, Min Wang
Optimizers that further adjust the scale of the gradient, such as Adam and Natural Gradient (NG), despite being widely studied and used by the community, are often found to have poorer generalization performance than Stochastic Gradient Descent (SGD).
no code implementations • 27 Nov 2020 • Kai-Xin Gao, Xiao-Lei Liu, Zheng-Hai Huang, Min Wang, Shuangling Wang, Zidong Wang, Dachuan Xu, Fan Yu
Using second-order optimization methods for training deep neural networks (DNNs) has attracted many researchers.
no code implementations • 26 Nov 2020 • Sibo Qiao, Shanchen Pang, Gang Luo, Silin Pan, Xun Wang, Min Wang, Xue Zhai, Taotao Chen
The first step in automatically analyzing fetal FC views is locating the four crucial chambers of the fetal heart in a US image.
no code implementations • 21 Nov 2020 • Kai-Xin Gao, Xiao-Lei Liu, Zheng-Hai Huang, Min Wang, Zidong Wang, Dachuan Xu, Fan Yu
There have been many attempts to use second-order optimization methods for training deep neural networks.
no code implementations • 24 Mar 2020 • Min Wang, Feng Qiu, Wentao Liu, Chen Qian, Xiaowei Zhou, Lizhuang Ma
In this paper, we introduce body part segmentation as critical supervision.
no code implementations • 6 Jul 2018 • Chanseok Park, Min Wang
Based on the median and the median absolute deviation estimators, and the Hodges-Lehmann and Shamos estimators, robustified analogues of the conventional $t$-test statistic are proposed.
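For illustration, the sketch below shows a median/MAD analogue of the one-sample t statistic together with the Hodges-Lehmann location estimator; the exact standardization constants and estimator variants used in the paper may differ:

```python
import numpy as np
from itertools import combinations

def robust_t_median_mad(x, mu0=0.0):
    """Median/MAD analogue of the one-sample t statistic: (median - mu0) / (scaled MAD / sqrt(n)).

    The 1.4826 factor makes the MAD consistent for the standard deviation under normality.
    Illustrative only.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return (med - mu0) / (mad / np.sqrt(len(x)))

def hodges_lehmann(x):
    """Hodges-Lehmann location estimator: median of the pairwise Walsh averages (i <= j variant)."""
    x = np.asarray(x, dtype=float)
    walsh = [(a + b) / 2 for a, b in combinations(x, 2)]
    return np.median(np.concatenate([x, np.asarray(walsh)]))

rng = np.random.default_rng(0)
sample = rng.normal(0.5, 1.0, size=30)
print(robust_t_median_mad(sample), hodges_lehmann(sample))
```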
no code implementations • 2 Jul 2018 • Sihao Xue, Zhenyi Ying, Fan Mo, Min Wang, Jue Sun
Besides, most of the time, ASR systems are used to deal with real-time problems such as keyword spotting (KWS).
no code implementations • 13 Jun 2018 • Yating Wang, Siu Wun Cheung, Eric T. Chung, Yalchin Efendiev, Min Wang
Numerical results show that using deep learning and multiscale models, we can improve the forward models, which are conditioned to the available data.
no code implementations • SEMEVAL 2018 • Min Wang, Xiaobing Zhou
We apply LSTM and BiLSTM models for emotion intensity prediction.
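A minimal BiLSTM regressor of the kind described, mapping a tokenized tweet to a scalar intensity score (vocabulary size, embedding dimension, and output squashing are assumptions):

```python
import torch
import torch.nn as nn

class BiLSTMIntensity(nn.Module):
    """BiLSTM over word embeddings followed by a linear layer predicting a scalar intensity."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))      # (batch, seq_len, 2*hidden)
        return torch.sigmoid(self.out(h[:, -1]))   # intensity score in [0, 1]

scores = BiLSTMIntensity()(torch.randint(0, 10000, (4, 25)))
```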
no code implementations • 23 May 2018 • Min Wang, Xipeng Chen, Wentao Liu, Chen Qian, Liang Lin, Lizhuang Ma
In this paper, we propose a two-stage depth ranking based method (DRPose3D) to tackle the problem of 3D human pose estimation.
no code implementations • IJCNLP 2017 • Min Wang, Qingxun Liu, Peng Ding, Yongbin Li, Xiaobing Zhou
In this paper, we first apply convolutional neural networks (CNNs) to learn joint representations of question-answer pairs, and then feed these joint representations into a long short-term memory (LSTM) network with attention to model the answer sequence of a question and label the matching quality of each answer.
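A compact sketch of that two-stage idea: a CNN encodes each question-answer pair into a joint vector, and an LSTM with a simple attention layer reads the sequence of candidate answers and labels each one. All sizes and the attention form are assumptions for illustration:

```python
import torch
import torch.nn as nn

class QAMatcher(nn.Module):
    """CNN joint encoding of QA pairs, then LSTM + attention over the answer sequence."""
    def __init__(self, emb=100, channels=64, hidden=128, num_labels=3):
        super().__init__()
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.cls = nn.Linear(2 * hidden, num_labels)

    def forward(self, qa_pairs):                                 # (num_answers, seq_len, emb)
        z = torch.relu(self.conv(qa_pairs.transpose(1, 2))).max(dim=2).values  # joint vectors
        h, _ = self.lstm(z.unsqueeze(0))                         # read the answer sequence
        w = torch.softmax(self.attn(h), dim=1)                   # attention over answers
        ctx = (w * h).sum(dim=1, keepdim=True).expand_as(h)
        return self.cls(torch.cat([h, ctx], dim=-1)).squeeze(0)  # label logits per answer

logits = QAMatcher()(torch.randn(7, 30, 100))                    # 7 candidate answers
```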
1 code implementation • 15 Aug 2016 • Min Wang, Baoyuan Liu, Hassan Foroosh
Topological subdivisioning is adopted to reduce the connections between the input channels and output channels.
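Grouped convolution is a familiar special case of restricting which input channels connect to which output channels; the sketch below uses it only as a stand-in to show the parameter savings, since the paper's topological subdivisioning is a more general scheme:

```python
import torch
import torch.nn as nn

# Dense 3x3 convolution: every output channel connects to every input channel.
dense = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Grouped convolution: channels are split into 8 groups, cutting the number of
# input-output channel connections (and weights) by 8x.  A stand-in illustration,
# not the paper's exact subdivisioning scheme.
grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)

x = torch.randn(1, 64, 32, 32)
print(sum(p.numel() for p in dense.parameters()),
      sum(p.numel() for p in grouped.parameters()))   # roughly 8x fewer weights
print(dense(x).shape, grouped(x).shape)               # same output shape
```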
no code implementations • CVPR 2015 • Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, Marianna Pensky
Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity.