no code implementations • 19 Aug 2024 • Keith Tyser, Ben Segev, Gaston Longhitano, Xin-Yu Zhang, Zachary Meeks, Jason Lee, Uday Garg, Nicholas Belsten, Avi Shporer, Madeleine Udell, Dov Te'eni, Iddo Drori
We evaluate the alignment of automatic paper reviews with human reviews using an arena of human preferences based on pairwise comparisons.
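The arena aggregates pairwise A/B preferences between reviews; a standard way to turn such comparisons into scores is an Elo-style update. A minimal sketch follows (the update rule, ratings, and match data are illustrative, not the paper's exact protocol):

```python
# Illustrative Elo-style aggregation of pairwise preferences.
# The participant names and comparison outcomes are made up for the example.

def elo_update(r_a, r_b, score_a, k=32):
    """Update two Elo ratings given one comparison (score_a: 1 win, 0.5 tie, 0 loss)."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

ratings = {"human": 1500.0, "automatic": 1500.0}
# Each tuple: (winner, loser); a tie would use score_a=0.5 instead.
comparisons = [("human", "automatic"), ("automatic", "human"), ("human", "automatic")]
for winner, loser in comparisons:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], 1.0)
print(ratings)
```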
1 code implementation • NeurIPS 2021 • Zhengyang Geng, Xin-Yu Zhang, Shaojie Bai, Yisen Wang, Zhouchen Lin
This paper focuses on training implicit models of infinite layers.
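An implicit model defines its output as a fixed point z* = f(z*, x) of a weight-tied layer rather than as a finite stack of layers. A minimal NumPy sketch of such a layer, solved by naive fixed-point iteration (the map f and its dimensions are assumptions for illustration, not the paper's training method):

```python
import numpy as np

# Toy implicit layer: find z* with z* = tanh(W @ z* + U @ x), i.e. an
# "infinite-depth" weight-tied network, solved by fixed-point iteration.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.1, size=(d, d))  # small scale keeps the map contractive
U = rng.normal(size=(d, d))
x = rng.normal(size=d)

z = np.zeros(d)
for _ in range(100):
    z_next = np.tanh(W @ z + U @ x)
    if np.linalg.norm(z_next - z) < 1e-8:
        break
    z = z_next
print("residual at fixed point:", np.linalg.norm(np.tanh(W @ z + U @ x) - z))
```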
no code implementations • 9 Sep 2020 • Runzhe Wan, Xin-Yu Zhang, Rui Song
Severe infectious diseases such as the novel coronavirus (COVID-19) pose a huge threat to public health.
no code implementations • 8 Aug 2020 • Xin-Yu Zhang
In high-dimensional statistical inference, sparsity regularizations have shown advantages in consistency and convergence rates for coefficient estimation.
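As a concrete instance, the Lasso's l1 penalty drives irrelevant coefficients exactly to zero in such settings. The sketch below uses synthetic data and is illustrative only, not tied to this paper's estimator:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic high-dimensional regression: 200 features, only 5 truly active.
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]
y = X @ beta + 0.1 * rng.normal(size=n)

# The l1 penalty drives most estimated coefficients exactly to zero.
model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.flatnonzero(model.coef_))
```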
1 code implementation • ECCV 2020 • Taihong Xiao, Jinwei Yuan, Deqing Sun, Qifei Wang, Xin-Yu Zhang, Kehan Xu, Ming-Hsuan Yang
Cost volume is an essential component of recent deep models for optical flow estimation and is usually constructed by calculating the inner product between two feature vectors.
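Concretely, a cost volume stores, for each position in one feature map, its inner-product similarity with positions in a search window of the other map. A minimal NumPy sketch (feature shapes and window size are assumptions):

```python
import numpy as np

def cost_volume(f1, f2, max_disp=3):
    """Inner-product cost volume between two feature maps of shape (C, H, W).

    Returns an array of shape ((2*max_disp+1)**2, H, W): one similarity map
    per candidate displacement (u, v) of the second image.
    """
    c, h, w = f1.shape
    f2_pad = np.pad(f2, ((0, 0), (max_disp, max_disp), (max_disp, max_disp)))
    vols = []
    for u in range(2 * max_disp + 1):
        for v in range(2 * max_disp + 1):
            shifted = f2_pad[:, u:u + h, v:v + w]
            vols.append((f1 * shifted).sum(axis=0))  # per-pixel inner product
    return np.stack(vols)

f1 = np.random.rand(16, 32, 32).astype(np.float32)
f2 = np.random.rand(16, 32, 32).astype(np.float32)
print(cost_volume(f1, f2).shape)  # (49, 32, 32)
```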
2 code implementations • 11 Jul 2020 • Junhao Cheng, Zhuojun Chen, Xin-Yu Zhang, Yizhou Li, Xiaoyuan Jing
To the best of our knowledge, PSNet is the first work to explicitly address scale limitation and feature similarity in multi-column design.
1 code implementation • 8 Jul 2020 • Xin-Yu Zhang, Taihong Xiao, HaoLin Jia, Ming-Ming Cheng, Ming-Hsuan Yang
In this work, we propose a simple yet effective meta-learning algorithm in semi-supervised learning.
no code implementations • 11 Jun 2020 • Xin-Yu Zhang
In a high-dimensional setting, sparse models have shown their power in computational and statistical efficiency.
Optimization and Control • Computation
no code implementations • 6 May 2020 • Kai Zhao, Xin-Yu Zhang, Qi Han, Ming-Ming Cheng
Convolutional neural networks (CNNs) are typically over-parameterized, bringing considerable computational overhead and memory footprint in inference.
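A common response to this over-parameterization is filter pruning. The sketch below uses a plain l1-magnitude criterion in PyTorch, a generic baseline rather than necessarily this paper's method:

```python
import torch
import torch.nn as nn

def prune_conv_filters(conv, keep_ratio=0.5):
    """Keep the filters of a Conv2d with the largest l1 norms (toy criterion).

    Returns a smaller Conv2d; a real pruner would also rewire the next layer.
    """
    w = conv.weight.data                      # (out_c, in_c, k, k)
    scores = w.abs().sum(dim=(1, 2, 3))       # l1 norm per output filter
    n_keep = max(1, int(keep_ratio * w.size(0)))
    keep = scores.topk(n_keep).indices
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = w[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(3, 64, 3, padding=1)
print(prune_conv_filters(conv).weight.shape)  # torch.Size([32, 3, 3, 3])
```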
no code implementations • 30 Apr 2020 • Sinan Tan, Huaping Liu, Di Guo, Xin-Yu Zhang, Fuchun Sun
Embodiment is an important characteristic of all intelligent agents (creatures and robots), yet existing scene description tasks mainly analyze images passively, so that semantic understanding of the scenario is separated from the interaction between the agent and the environment.
no code implementations • 20 Mar 2020 • Xin-Yu Zhang, Yang Zhao, Hao Zhang
A wealth of angle-related problems arises in facial recognition: at present, feature extraction networks often produce feature vectors that differ substantially between frontal and profile images of the same person.
1 code implementation • 19 Feb 2020 • Xin-Yu Zhang, Kai Zhao, Taihong Xiao, Ming-Ming Cheng, Ming-Hsuan Yang
Recent advances in convolutional neural networks (CNNs) usually come at the expense of excessive computational overhead and memory footprint.
no code implementations • 17 Feb 2020 • Hao Wu, Hanyuan Zhang, Xin-Yu Zhang, Weiwei Sun, Baihua Zheng, Yuning Jiang
We propose a deep convolutional neural network called DeepDualMapper which fuses the aerial image and trajectory data in a more seamless manner to extract the digital map.
Ranked #5 on Semantic Segmentation on TLCGIS
no code implementations • 13 Jan 2020 • Xin-Yu Zhang, Dong Gong, Jiewei Cao, Chunhua Shen
Due to the lack of supervision in the target domain, it is crucial to identify the underlying similarity-and-dissimilarity relationships among the unlabelled samples in the target domain.
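A common way to mine such relationships is to cluster the unlabelled target features and treat cluster memberships as pseudo identities. The sketch below uses DBSCAN on synthetic embeddings; the feature dimension and clustering parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Pretend these are embeddings of unlabelled target-domain images: three
# tight groups around random centres (an illustrative stand-in for a CNN).
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 128))
feats = np.vstack([c + 0.01 * rng.normal(size=(20, 128)) for c in centers])

# Samples falling in the same cluster are treated as the same (pseudo)
# identity; label -1 marks outliers, typically excluded from training.
labels = DBSCAN(eps=0.5, min_samples=4).fit_predict(feats)
print("pseudo identities found:", sorted(set(labels)))
```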
1 code implementation • 31 Dec 2019 • Mengting Chen, Yuxin Fang, Xinggang Wang, Heng Luo, Yifeng Geng, Xin-Yu Zhang, Chang Huang, Wenyu Liu, Bo Wang
The learning problem of sample generation (i.e., diversity transfer) is solved by minimizing an effective meta-classification loss in a single-stage network, instead of the generative loss used in previous works.
no code implementations • ICLR 2020 • Xin-Yu Zhang, Qiang Wang, Jian Zhang, Zhao Zhong
The augmentation policy network attempts to increase the training loss of a target network by generating adversarial augmentation policies, while the target network learns more robust features from the harder examples to improve generalization (a toy sketch of this min-max game follows below).
Ranked #606 on Image Classification on ImageNet
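A toy version of the min-max game described above, with a REINFORCE-style policy over noise magnitudes standing in for the paper's augmentation policy network; the sizes, operations, and the "augmentation" itself are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny "policy" picks one of several noise magnitudes; it is rewarded
# (REINFORCE) when the chosen magnitude increases the classifier's training
# loss, while the classifier is trained to cope with the harder inputs.
magnitudes = torch.tensor([0.0, 0.1, 0.3, 0.6])
policy_logits = torch.zeros(4, requires_grad=True)
net = nn.Linear(10, 2)
opt_net = torch.optim.SGD(net.parameters(), lr=0.1)
opt_pol = torch.optim.SGD([policy_logits], lr=0.1)

for step in range(100):
    x = torch.randn(32, 10)
    y = (x.sum(dim=1) > 0).long()
    dist = torch.distributions.Categorical(logits=policy_logits)
    a = dist.sample()
    x_aug = x + magnitudes[a] * torch.randn_like(x)  # toy "augmentation"

    loss = F.cross_entropy(net(x_aug), y)
    opt_net.zero_grad()
    loss.backward()
    opt_net.step()                                   # target minimizes loss

    pol_loss = -dist.log_prob(a) * loss.detach()     # policy maximizes loss
    opt_pol.zero_grad()
    pol_loss.backward()
    opt_pol.step()
```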
no code implementations • 27 Nov 2019 • Xin-Yu Zhang, Le Zhang, Zao-Yi Zheng, Yun Liu, Jia-Wang Bian, Ming-Ming Cheng
The effectiveness of the triplet loss heavily relies on the triplet selection, in which a common practice is to first sample intra-class patches (positives) from the dataset for batch construction and then mine in-batch negatives to form triplets.
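The in-batch ("batch-hard") variant of this mining strategy can be written compactly; the sketch below is a generic formulation, not necessarily the exact sampling scheme of the paper:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(emb, labels, margin=1.0):
    """Triplet loss with in-batch hardest-negative mining (illustrative).

    For each anchor: hardest positive = farthest same-label sample,
    hardest negative = closest different-label sample in the batch.
    """
    dist = torch.cdist(emb, emb)                       # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (B, B) label mask
    pos_dist = dist.masked_fill(~same, float("-inf")).max(dim=1).values
    neg_dist = dist.masked_fill(same, float("inf")).min(dim=1).values
    return F.relu(pos_dist - neg_dist + margin).mean()

emb = F.normalize(torch.randn(16, 128), dim=1)
labels = torch.randint(0, 4, (16,))
print(batch_hard_triplet_loss(emb, labels))
```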
1 code implementation • 13 Sep 2019 • Xin-Yu Zhang, Rufeng Zhang, Jiewei Cao, Dong Gong, Mingyu You, Chunhua Shen
Finally, we aggregate the global appearance and part features to further improve the discriminative power of the features.
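A minimal sketch of one generic aggregation scheme, pooling horizontal part stripes alongside a global descriptor and concatenating them; the part layout and dimensions are assumptions, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

def aggregate_global_and_parts(fmap, n_parts=3):
    """Concatenate a global descriptor with horizontal part descriptors.

    fmap: (B, C, H, W) backbone feature map; the horizontal-stripe split is
    a common part scheme in re-ID, used here purely as an illustration.
    """
    global_feat = F.adaptive_avg_pool2d(fmap, 1).flatten(1)        # (B, C)
    parts = F.adaptive_avg_pool2d(fmap, (n_parts, 1)).flatten(2)   # (B, C, n_parts)
    part_feats = parts.transpose(1, 2).flatten(1)                  # (B, n_parts*C)
    return torch.cat([global_feat, part_feats], dim=1)

fmap = torch.randn(2, 256, 24, 8)
print(aggregate_global_and_parts(fmap).shape)  # torch.Size([2, 1024])
```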
1 code implementation • ICCV 2019 • Xin-Yu Zhang, Jiewei Cao, Chunhua Shen, Mingyu You
In this work, we develop a self-training method with a progressive augmentation framework (PAST) to progressively improve model performance on the target dataset (a structural sketch of one self-training round follows below).
Ranked #12 on Unsupervised Domain Adaptation on Market to Duke
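A structural sketch of one self-training round in this spirit, with KMeans pseudo labels and a linear classifier standing in for the real clustering and CNN; all details are illustrative and the features are not re-extracted between rounds:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy self-training rounds: cluster current features into pseudo identities,
# train on them, and repeat.
rng = np.random.default_rng(0)
feats = np.vstack([c + 0.1 * rng.normal(size=(30, 32))
                   for c in rng.normal(size=(4, 32))])

clf = LogisticRegression(max_iter=1000)
for round_ in range(3):
    pseudo = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
    clf.fit(feats, pseudo)                      # train on pseudo labels
    # A real pipeline would re-extract features with the updated model here;
    # we reuse the same features, so this is only a structural sketch.
    acc = clf.score(feats, pseudo)
    print(f"round {round_}: fit accuracy on pseudo labels = {acc:.3f}")
```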
32 code implementations • 2 Apr 2019 • Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, Philip Torr
We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet (a simplified sketch of the block follows below).
Ranked #2 on Image Classification on GasHisSDB
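A simplified sketch of the Res2Net idea: channel splits processed by 3x3 convolutions in a hierarchical, residual-like chain, enlarging the range of receptive fields within a single block. The 1x1 convolutions, batch norm, and other details of the paper are omitted here:

```python
import torch
import torch.nn as nn

class Res2NetStyleBlock(nn.Module):
    """Simplified Res2Net-style block: split channels into `scales` groups
    and pass them through 3x3 convs in a hierarchical chain, so later splits
    see progressively larger receptive fields within one block.
    """
    def __init__(self, channels, scales=4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in range(scales - 1))

    def forward(self, x):
        xs = torch.chunk(x, self.scales, dim=1)
        out = [xs[0]]                              # first split: identity
        y = None
        for conv, xi in zip(self.convs, xs[1:]):
            y = conv(xi if y is None else xi + y)  # hierarchical chain
            out.append(y)
        return torch.cat(out, dim=1) + x           # block-level residual

x = torch.randn(1, 64, 16, 16)
print(Res2NetStyleBlock(64)(x).shape)              # torch.Size([1, 64, 16, 16])
```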
1 code implementation • 28 Mar 2019 • Yun Liu, Ming-Ming Cheng, Xin-Yu Zhang, Guang-Yu Nie, Meng Wang
Recent progress on salient object detection mainly focuses on how to effectively integrate multi-scale convolutional features in convolutional neural networks (CNNs).
no code implementations • 22 Jul 2018 • Hao Tian, Changbo Wang, Dinesh Manocha, Xin-Yu Zhang
We compute a grasp space for each part of the example object using active learning.
Robotics
1 code implementation • 7 May 2017 • Xin-Yu Zhang, Srinjoy Das, Ojash Neopane, Ken Kreutz-Delgado
In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors.