no code implementations • Findings (EMNLP) 2021 • Kaiyu Huang, Hao Yu, Junpeng Liu, Wei Liu, Jingxiang Cao, Degen Huang
Experimental results on five benchmarks and four cross-domain datasets show that the lexicon-based graph convolutional network successfully captures the information of candidate words and helps to improve performance on the benchmarks (Bakeoff-2005 and CTB6) and the cross-domain datasets (SIGHAN-2010).
no code implementations • 30 Mar 2023 • Binbin Li, Xinyu Du, Yao Hu, Hao Yu, Wende Zhang
Online camera-to-ground calibration generates a non-rigid body transformation between the camera and the road surface in real time.
1 code implementation • CVPR 2023 • Zheng Qin, Hao Yu, Changjian Wang, Yuxing Peng, Kai Xu
We first design a local spatial consistency measure over the deformation graph of the point cloud, which evaluates the spatial compatibility only between the correspondences in the vicinity of a graph node.
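As a rough illustration of spatial compatibility between correspondences (not the paper's exact measure, which is evaluated only within the neighborhood of each deformation-graph node), the Python sketch below scores all correspondence pairs by how well they preserve pairwise distances; the Gaussian kernel and its bandwidth are illustrative assumptions.

```python
import numpy as np

def spatial_compatibility(src_pts, tgt_pts, sigma=0.1):
    """src_pts[i] <-> tgt_pts[i] is a candidate correspondence (both N x 3).
    Rigid-consistent correspondence pairs preserve pairwise distances, so each
    pair is scored by how well the source and target distances agree."""
    d_src = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    d_tgt = np.linalg.norm(tgt_pts[:, None] - tgt_pts[None, :], axis=-1)
    diff = np.abs(d_src - d_tgt)                    # N x N distance residuals
    return np.exp(-(diff ** 2) / (2 * sigma ** 2))  # close to 1 = compatible

src = np.random.rand(8, 3)
tgt = src + 0.01 * np.random.randn(8, 3)            # nearly rigid-consistent
print(spatial_compatibility(src, tgt).round(2))
```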
1 code implementation • CVPR 2023 • Hao Yu, Zheng Qin, Ji Hou, Mahdi Saleh, Dongsheng Li, Benjamin Busam, Slobodan Ilic
To this end, we introduce RoITr, a Rotation-Invariant Transformer to cope with the pose variations in the point cloud matching task.
no code implementations • 2 Mar 2023 • Jiayuan Zhuang, Zheng Qin, Hao Yu, Xucan Chen
Classification and localization are two main sub-tasks in object detection.
no code implementations • CVPR 2023 • Hao Yu, Xu Cheng, Wei Peng
Visible-infrared recognition (VI recognition) is a challenging task due to the enormous visual difference across heterogeneous images.
no code implementations • 27 Sep 2022 • Hao Yu, Ji Hou, Zheng Qin, Mahdi Saleh, Ivan Shugurov, Kai Wang, Benjamin Busam, Slobodan Ilic
More specifically, 3D structures of the whole frame are first represented by our global PPF signatures, from which structural descriptors are learned to help geometric descriptors sense the 3D world beyond local regions.
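For context, classical point pair features (PPF) encode a rigid-invariant 4-tuple for an oriented point pair; the sketch below shows only that primitive, not the global PPF signatures proposed in the paper.

```python
import numpy as np

def angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def ppf(p1, n1, p2, n2):
    """PPF of two oriented points: (||d||, ang(n1,d), ang(n2,d), ang(n1,n2)).
    The 4-tuple is invariant to rigid transformations of the pair."""
    d = p2 - p1
    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])

p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(ppf(p1, n1, p2, n2))
```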
no code implementations • 22 Jul 2022 • Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, Zhongqi Li, Qi Zhang, Meng Xiao, Bo Shen, Lin Li, Hao Yu, Li Yan, Pingyi Zhou, Xin Wang, Yuchi Ma, Ignacio Iacobacci, Yasheng Wang, Guangtai Liang, Jiansheng Wei, Xin Jiang, Qianxiang Wang, Qun Liu
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language solutions given a natural language problem description.
1 code implementation • 16 Mar 2022 • ZiFan Chen, Jie Zhao, Hao Yu, Yue Zhang, Li Zhang
Accurate and efficient lumbar spine disease identification is crucial for clinical diagnosis.
no code implementations • 9 Mar 2022 • Fu Li, Hao Yu, Ivan Shugurov, Benjamin Busam, Shaowu Yang, Slobodan Ilic
Pose estimation of 3D objects in monocular images is a fundamental and long-standing problem in computer vision.
1 code implementation • CVPR 2022 • Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, Kai Xu
Such sparse and loose matching requires contextual features capturing the geometric structure of the point clouds.
2 code implementations • 26 Jan 2022 • Yun-Hao Cao, Hao Yu, Jianxin Wu
Vision Transformers (ViTs) are emerging as an alternative to convolutional neural networks (CNNs) for visual recognition.
1 code implementation • 30 Nov 2021 • Hao Yu, Jianxin Wu
Recently, the vision transformer (ViT) and its variants have achieved promising performance on various computer vision tasks.
1 code implementation • NeurIPS 2021 • Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, Slobodan Ilic
We study the problem of extracting correspondences between a pair of point clouds for registration.
no code implementations • 20 Apr 2021 • Zhenning Li, Hao Yu, Guohui Zhang, Shangjia Dong, Cheng-Zhong Xu
Inefficient traffic control may cause numerous problems such as traffic congestion and energy waste.
1 code implementation • 12 Jan 2021 • Hao Yu, Huanyu Wang, Jianxin Wu
In this paper, we find that mixup constantly explores the representation space, and inspired by the exploration-exploitation dilemma in reinforcement learning, we propose mixup Without hesitation (mWh), a concise, effective, and easy-to-use training algorithm.
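A minimal PyTorch sketch of a mixup training step is shown below, with a flag standing in for the idea of switching mixup off during parts of training; the Beta parameter and the switching logic are illustrative assumptions, not the paper's exact schedule.

```python
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, alpha=1.0, use_mixup=True):
    """One training step; when use_mixup is False it falls back to plain ERM,
    mimicking the idea of dropping mixup without hesitation."""
    if not use_mixup:
        return F.cross_entropy(model(x), y)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]          # convex combination of inputs
    logits = model(x_mix)
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])

model = torch.nn.Linear(10, 3)
x, y = torch.randn(4, 10), torch.randint(0, 3, (4,))
mixup_step(model, x, y).backward()
```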
no code implementations • 27 Nov 2020 • Meng Shen, Hao Yu, Liehuang Zhu, Ke Xu, Qi Li, Xiaojiang Du
Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems.
no code implementations • 4 Nov 2020 • Yuan Cheng, Yuchao Yang, Hai-Bao Chen, Ngai Wong, Hao Yu
Real-time understanding in video is crucial in various AI applications such as autonomous driving.
no code implementations • 2 Oct 2020 • Yuan Hui, Zheng Yang, Hao Yu
The magnetization evolution of the free layer in an orthogonal spin-torque device is studied based on a macrospin model.
no code implementations • 28 Feb 2020 • Rui Lin, Ching-Yun Ko, Zhuolun He, Cong Chen, Yuan Cheng, Hao Yu, Graziano Chesi, Ngai Wong
The emerging edge computing has promoted immense interests in compacting a neural network without sacrificing much accuracy.
no code implementations • 12 Feb 2020 • Nataniel Ruiz, Hao Yu, Danielle A. Allessio, Mona Jalal, Ajjen Joshi, Thomas Murray, John J. Magee, Jacob R. Whitehill, Vitaly Ablavsky, Ivon Arroyo, Beverly P. Woolf, Stan Sclaroff, Margrit Betke
In this work, we propose a video-based transfer learning approach for predicting problem outcomes of students working with an intelligent tutoring system (ITS).
no code implementations • 19 Jan 2020 • Yun Bai, Xixi Li, Hao Yu, Suling Jia
Sparse and short news headlines can be arbitrary, noisy, and ambiguous, which makes it difficult for the classic topic model LDA (latent Dirichlet allocation), designed to accommodate long texts, to discover knowledge from them.
no code implementations • NeurIPS 2019 • Hao Yu
In this paper, we propose a new parallel multi-block stochastic ADMM for distributed stochastic optimization, where each node is only required to perform simple stochastic gradient descent updates.
no code implementations • 10 May 2019 • Hao Yu, Rong Jin
We show that for stochastic non-convex optimization under the P-L condition, the classical data-parallel SGD with exponentially increasing batch sizes can achieve the fastest known $O(1/(NT))$ convergence with linear speedup using only $\log(T)$ communication rounds.
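The sketch below simulates the flavor of this schedule on a toy quadratic objective: the per-round batch size doubles, so covering T samples takes only on the order of log T synchronized updates; the objective, learning rate, and constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w, lr = np.zeros(2), 0.5
batch_size, remaining, rounds = 1, 1024, 0
while remaining > 0:
    b = min(batch_size, remaining)
    batch = rng.normal(loc=[1.0, -2.0], scale=1.0, size=(b, 2))
    w -= lr * (w - batch.mean(axis=0))  # averaged gradient of 0.5 * ||w - x||^2
    remaining -= b
    batch_size *= 2                     # exponentially growing batch size
    rounds += 1
print(rounds, w)                        # ~log2(1024) rounds; w drifts toward (1, -2)
```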
no code implementations • 9 May 2019 • Hao Yu, Rong Jin, Sen Yang
Recent developments on large-scale distributed machine learning applications, e.g., deep neural networks, benefit enormously from the advances in distributed non-convex optimization techniques, e.g., distributed Stochastic Gradient Descent (SGD).
no code implementations • NeurIPS 2018 • Xiaohan Wei, Hao Yu, Qing Ling, Michael Neely
In this paper, we show that by leveraging a local error bound condition on the dual function, the proposed algorithm can achieve a better primal convergence time of $\mathcal{O}\left(\varepsilon^{-2/(2+\beta)}\log_2(\varepsilon^{-1})\right)$, where $\beta\in(0, 1]$ is a local error bound parameter.
2 code implementations • 6 Nov 2018 • Krishna Kumar Singh, Hao Yu, Aron Sarmasi, Gautam Pradeep, Yong Jae Lee
Our approach only needs to modify the input image and can work with any network to improve its performance.
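As one example of input-only augmentation of this kind, the sketch below hides random grid patches of an image before it is fed to any network; the grid size and hiding probability are illustrative choices, not the paper's settings.

```python
import numpy as np

def hide_patches(img, grid=4, p_hide=0.5, fill=0.0, rng=None):
    """img: H x W x C array; each cell of a grid x grid partition is hidden
    independently with probability p_hide."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    out = img.copy()
    ph, pw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            if rng.random() < p_hide:
                out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = fill
    return out

img = np.ones((32, 32, 3), dtype=np.float32)
print(hide_patches(img).mean())   # roughly (1 - p_hide) of the pixels survive
```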
no code implementations • 1 Nov 2018 • Hao Yu, Vivek Kulkarni, William Wang
First, we introduce methods that learn network representations of entities in the knowledge graph capturing these varied aspects of similarity.
no code implementations • 17 Jul 2018 • Hao Yu, Sen Yang, Shenghuo Zhu
Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker.
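One common way to realize this parallelism is local SGD with periodic model averaging, sketched below on a toy quadratic objective; the averaging period and all constants are assumptions, and the sketch is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_workers, lr, period, target = 4, 0.1, 5, np.array([2.0, 2.0])
w = [np.zeros(2) for _ in range(n_workers)]
for t in range(100):
    for k in range(n_workers):                    # independent local SGD steps
        batch = target + rng.normal(size=(8, 2))
        w[k] -= lr * (w[k] - batch.mean(axis=0))  # gradient of 0.5 * ||w - x||^2
    if (t + 1) % period == 0:                     # periodic model averaging
        avg = np.mean(w, axis=0)
        w = [avg.copy() for _ in range(n_workers)]
print(np.mean(w, axis=0))                         # close to target
```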
no code implementations • 27 May 2018 • Juyong Zhang, Yuxin Yao, Yue Peng, Hao Yu, Bailin Deng
We propose a novel method to accelerate Lloyd's algorithm for K-Means clustering.
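For reference, the unaccelerated Lloyd iteration that the paper speeds up alternates an assignment step and a centroid-update step, as in the minimal sketch below (this is the baseline, not the proposed acceleration).

```python
import numpy as np

def lloyd(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        dists = np.linalg.norm(X[:, None] - centers[None, :], axis=-1)
        labels = dists.argmin(axis=1)
        # update step: each center moves to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5.0])
centers, _ = lloyd(X, k=2)
print(centers)   # roughly (0, 0) and (5, 5)
```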
no code implementations • 21 May 2018 • Yuan Cheng, Guangya Li, Hai-Bao Chen, Sheldon X.-D. Tan, Hao Yu
Because video detection and classification require a huge number of parameters when exposed to high-dimensional inputs, developing a compact yet accurate video comprehension model for terminal devices remains a grand challenge.
no code implementations • 10 Apr 2018 • Hao Yu, Zhaoning Zhang, Zheng Qin, Hao Wu, Dongsheng Li, Jun Zhao, Xicheng Lu
LRM is a general method for real-time detectors, as it mines hard examples from the final feature map, which exists in all real-time detectors.
2 code implementations • 24 Mar 2018 • Zheng Qin, Zhaoning Zhang, Shiqing Zhang, Hao Yu, Yuxing Peng
Compact neural networks are inclined to exploit "sparsely-connected" convolutions such as depthwise convolution and group convolution for employment in mobile applications.
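The parameter savings of such sparsely-connected convolutions can be seen with standard PyTorch layers; the channel sizes below are arbitrary examples, not configurations from the paper.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 56, 56)                                      # N, C, H, W

dense     = nn.Conv2d(32, 64, kernel_size=3, padding=1)             # full connectivity
grouped   = nn.Conv2d(32, 64, kernel_size=3, padding=1, groups=4)   # 4 channel groups
depthwise = nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32)  # one filter per channel
pointwise = nn.Conv2d(32, 64, kernel_size=1)                        # 1x1 mixing after depthwise

print(sum(p.numel() for p in dense.parameters()))      # ~18.5k parameters
print(sum(p.numel() for p in grouped.parameters()))    # ~4.7k parameters
print(sum(p.numel() for p in depthwise.parameters()) +
      sum(p.numel() for p in pointwise.parameters()))  # ~2.4k parameters
print(pointwise(depthwise(x)).shape)                   # torch.Size([1, 64, 56, 56])
```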
no code implementations • NeurIPS 2017 • Hao Yu, Michael J. Neely, Xiaohan Wei
This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d.
no code implementations • 5 May 2017 • Minne Li, Zhaoning Zhang, Hao Yu, Xinyuan Chen, Dongsheng Li
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling technique, to choose the training examples according to this influence during hard example mining, and thus enhance the performance of object detectors.
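A rough sketch of hard example mining with a simple stratified split is given below; the strata, ratios, and selection rule are assumptions for illustration, not S-OHEM's exact sampling scheme.

```python
import torch

def mine_hard_examples(cls_loss, loc_loss, keep=64, cls_ratio=0.5):
    """cls_loss, loc_loss: per-candidate losses (1-D tensors of equal length).
    The hardest candidates are taken separately from the classification and
    localization strata, then merged."""
    k_cls = int(keep * cls_ratio)
    k_loc = keep - k_cls
    idx_cls = torch.topk(cls_loss, min(k_cls, cls_loss.numel())).indices
    idx_loc = torch.topk(loc_loss, min(k_loc, loc_loss.numel())).indices
    return torch.unique(torch.cat([idx_cls, idx_loc]))

cls_loss, loc_loss = torch.rand(512), torch.rand(512)
hard_idx = mine_hard_examples(cls_loss, loc_loss)
print(hard_idx.numel())   # at most 64 hardest candidates
```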
no code implementations • 20 Feb 2017 • Yixing Li, Zichuan Liu, Kai Xu, Hao Yu, Fengbo Ren
For processing static data in large batch sizes, the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5x higher energy efficiency.
no code implementations • 12 Dec 2016 • Zichuan Liu, Yixing Li, Fengbo Ren, Hao Yu
In this paper, we develop a binary convolutional encoder-decoder network (B-CEDNet) for natural scene text processing (NSTP).
no code implementations • 8 Apr 2016 • Hao Yu, Michael J. Neely
That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints.