no code implementations • 31 Mar 2014 • Xiangyu Chang, Yu Wang, Rongjian Li, Zongben Xu
Nevertheless, this framework has two serious drawbacks: the solution unavoidably involves a considerable portion of redundant noise features in many situations, and the framework neither offers an intuitive explanation of why it can select relevant features nor provides any theoretical guarantee of feature selection consistency.
no code implementations • 16 Oct 2015 • Kumpati S. Narendra, Snehasis Mukhopadhyay, Yu Wang
During the past two decades, the first author has worked on numerous methods for improving the stability, robustness, and performance of adaptive systems using multiple models and the other authors have collaborated with him on some of them.
no code implementations • 6 Oct 2016 • Yu Wang, Haofu Liao, Yang Feng, Xiangyang Xu, Jiebo Luo
We find that Chinese, Japanese and Koreans do exhibit substantial differences in certain attributes, such as bangs, smiling, and bushy eyebrows.
no code implementations • 1 Dec 2016 • Song Han, Junlong Kang, Huizi Mao, Yiming Hu, Xin Li, Yubin Li, Dongliang Xie, Hong Luo, Song Yao, Yu Wang, Huazhong Yang, William J. Dally
Evaluated on the LSTM for speech recognition benchmark, ESE is 43x and 3x faster than Core i7 5930k CPU and Pascal Titan X GPU implementations.
no code implementations • 20 May 2017 • Yu Wang, Aniket Chakrabarti, David Sivakoff, Srinivasan Parthasarathy
In this work we devise an effective and efficient three-step-approach for detecting change points in dynamic networks under the snapshot model.
no code implementations • 24 May 2017 • Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, William J. Dally
Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.
1 code implementation • 16 Jun 2017 • Bin Dai, Yu Wang, John Aston, Gang Hua, David Wipf
Variational autoencoders (VAE) represent a popular, flexible form of deep generative model that can be stochastically fit to samples from a given random process using an information-theoretic variational bound on the true underlying distribution.
no code implementations • 16 Jun 2017 • Yuzhi Wang, Anqi Yang, Xiaoming Chen, Pengjun Wang, Yu Wang, Huazhong Yang
Temporal drift of sensory data is a severe problem impacting the data quality of wireless sensor networks (WSNs).
1 code implementation • 21 Jul 2017 • Yu Wang, Mirela Ben-Chen, Iosif Polterovich, Justin Solomon
We propose using the Dirichlet-to-Neumann operator as an extrinsic alternative to the Laplacian for spectral geometry processing and shape analysis.
Graphics
no code implementations • 18 Aug 2017 • Xie Chen, Xunying Liu, Anton Ragni, Yu Wang, Mark Gales
Instead of using a recurrent unit to capture the complete future word context, a feedforward unit is used to model a finite number of succeeding future words.
no code implementations • 18 Aug 2017 • Yu Wang, Jiayi Liu, Yuxiang Liu, Jun Hao, Yang He, Jinghe Hu, Weipeng P. Yan, Mantian Li
We present LADDER, the first deep reinforcement learning agent that can successfully learn control policies for large-scale real-world problems directly from raw inputs composed of high-level semantic information.
no code implementations • 1 Sep 2017 • Yu Wang, Jixing Xu, Aohan Wu, Mantian Li, Yang He, Jinghe Hu, Weipeng P. Yan
This paper proposes Telepath, a vision-based bionic recommender system model, which understands users from such a perspective.
3 code implementations • ICLR 2018 • Yujun Lin, Song Han, Huizi Mao, Yu Wang, William J. Dally
The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections.
1 code implementation • The International Conference on Learning Representations 2017 • Yujun Lin, Song Han, Huizi Mao, Yu Wang, W. Dally
Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure.
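The gradient-exchange bandwidth bottleneck described above is typically attacked by compressing gradients before communication. A minimal sketch of top-k gradient sparsification with local residual accumulation; the function name and the 1% ratio are illustrative, and this omits the momentum correction and other components of the full Deep Gradient Compression pipeline:

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient
    entries; the rest stay in a local residual for later rounds."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # indices of the k largest-magnitude entries
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    # residual: everything *not* sent, accumulated locally
    residual = flat.copy()
    residual[idx] = 0.0
    return idx, values, residual.reshape(grad.shape)

# toy usage: only ~1% of the entries are actually communicated
g = np.random.randn(100, 100)
idx, vals, res = topk_sparsify(g, ratio=0.01)
```

Each worker would send only `(idx, vals)` and add `res` to its next gradient, which is what keeps the compression from hurting convergence.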
no code implementations • 24 Dec 2017 • Kaiyuan Guo, Shulin Zeng, Jincheng Yu, Yu Wang, Huazhong Yang
Various FPGA based accelerator designs have been proposed with software and hardware optimization techniques to achieve high speed and energy efficiency.
Hardware Architecture
no code implementations • ICLR 2018 • Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
On the other hand, unlike other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which have rich representation capacity and can be computed in a simple feed-forward manner.
no code implementations • 1 Feb 2018 • Yu Wang, Xie Chen, Mark Gales, Anton Ragni, Jeremy Wong
As the combination approaches become more complicated, the difference between the phonetic and graphemic systems further decreases.
Automatic Speech Recognition (ASR) +1
no code implementations • 8 Feb 2018 • Minhui Zou, Yang Shi, Chengliang Wang, Fangyu Li, WenZhan Song, Yu Wang
With the popularity of deep learning (DL), artificial intelligence (AI) has been applied in many areas of human life.
no code implementations • 24 Apr 2018 • Yuantian Miao, Zichan Ruan, Lei Pan, Yu Wang, Jun Zhang, Yang Xiang
Network traffic analytics technology is a cornerstone for cyber security systems.
Cryptography and Security
no code implementations • 6 May 2018 • David Güera, Yu Wang, Luca Bondi, Paolo Bestagini, Stefano Tubaro, Edward J. Delp
In this paper, we examine the problem of identifying the camera model or type used to take an image, and how such identification can be spoofed.
no code implementations • 14 May 2018 • Wenshuo Li, Jincheng Yu, Xuefei Ning, Pengjun Wang, Qi Wei, Yu Wang, Huazhong Yang
In this paper, we propose a hardware-software collaborative attack framework that injects hidden neural network Trojans; it works as a back-door without requiring manipulated input images and is flexible for different scenarios.
no code implementations • 15 May 2018 • Xinghao Ding, Zhirui Lin, Fujin He, Yu Wang, Yue Huang
The estimation of crowd count in images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning.
no code implementations • 18 Jun 2018 • Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
Moreover, we also propose HiTM-VAE, where the document-specific topic distributions are generated in a hierarchical manner.
2 code implementations • 29 Jul 2018 • Xiujun Li, Yu Wang, Siqi Sun, Sarah Panda, Jingjing Liu, Jianfeng Gao
This proposal introduces a Dialogue Challenge for building end-to-end task-completion dialogue systems, with the goal of encouraging the dialogue research community to collaborate and benchmark on standard datasets in a unified experimental environment.
no code implementations • COLING 2018 • Yu Wang, Abhishek Patel, Hongxia Jin
In this paper, a new deep reinforcement learning based augmented general tagging system is proposed.
no code implementations • 18 Sep 2018 • Yilin Shen, Xiangyu Zeng, Yu Wang, Hongxia Jin
The results show that our approach leverages such simple user information to outperform state-of-the-art approaches by 0.25% for intent detection and 0.31% for slot filling using standard training data.
1 code implementation • EMNLP 2018 • Bailin Wang, Wei Lu, Yu Wang, Hongxia Jin
It is common that entity mentions can contain other mentions recursively.
Ranked #6 on Nested Named Entity Recognition on NNE
Nested Mention Recognition Nested Named Entity Recognition +1
no code implementations • 30 Oct 2018 • Anton Ragni, Qiujia Li, Mark Gales, Yu Wang
These errors are not accounted for by the standard confidence estimation schemes and are hard to rectify in the upstream and downstream processing.
1 code implementation • NAACL 2018 • Yu Wang, Yilin Shen, Hongxia Jin
The most effective algorithms are based on the structures of sequence to sequence models (or "encoder-decoder" models), and generate the intents and semantic tags either using separate models or a joint model.
Ranked #1 on Intent Detection on ATIS
no code implementations • 26 Dec 2018 • Yu Wang, Abhishek Patel, Hongxia Jin
In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed.
1 code implementation • 20 Feb 2019 • Yu Xing, Shuang Liang, Lingzhi Sui, Xijie Jia, Jiantao Qiu, Xin Liu, Yushun Wang, Yu Wang, Yi Shan
On the Xilinx ZU2 @330 MHz and ZU9 @330 MHz, we achieve equivalently state-of-the-art performance on our benchmarks with naïve implementations without optimizations, and the throughput is further improved by up to 1.26x by leveraging heterogeneous optimizations in DNNVM.
no code implementations • 22 Feb 2019 • Yu Wang, Siqi Wu, Bin Yu
First, we obtain a necessary and sufficient norm condition for the reference dictionary $D^*$ to be a sharp local minimum of the expected $\ell_1$ objective function.
1 code implementation • 25 Apr 2019 • Han Xu, Junning Li, Liqiang Liu, Yu Wang, Haidong Yuan, Xin Wang
Measurement and estimation of parameters are essential for science and engineering, where one of the main quests is to find systematic schemes that can achieve high precision.
Quantum Physics Mesoscale and Nanoscale Physics
no code implementations • CVPR 2019 • Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, Tao Mei
Specifically, we present Transferrable Prototypical Networks (TPN) for adaptation such that the prototypes for each class in source and target domains are close in the embedding space and the score distributions predicted by prototypes separately on source and target data are similar.
7 code implementations • 7 May 2019 • Yu Wang, Quan Zhou, Jia Liu, Jian Xiong, Guangwei Gao, Xiaofu Wu, Longin Jan Latecki
LEDNet: A Lightweight Encoder-Decoder Network for Real-time Semantic Segmentation
Ranked #29 on Real-Time Semantic Segmentation on Cityscapes test
9 code implementations • NeurIPS 2019 • Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
Ranked #2 on Generative Question Answering on CoQA (using extra training data)
4 code implementations • 14 May 2019 • Weitian Li, Haiguang Xu, Zhixian Ma, Dan Hu, Zhenghao Zhu, Chenxi Shan, Jingying Wang, Junhua Gu, Dongchao Zheng, Xiaoli Lian, Qian Zheng, Yu Wang, Jie Zhu, Xiang-Ping Wu
The overwhelming foreground contamination is one of the primary impediments to probing the EoR through measuring the redshifted 21 cm signal.
Cosmology and Nongalactic Astrophysics
1 code implementation • 7 Jun 2019 • Xudong Sun, Alexej Gossmann, Yu Wang, Bernd Bischl
A novel variational inference based resampling framework is proposed to evaluate the robustness and generalization capability of deep learning models with respect to distribution shift.
2 code implementations • 24 Jun 2019 • Yu Wang, Quan Zhou, Xiaofu Wu
The whole network has a nearly symmetric architecture, mainly composed of a series of factorized convolution units (FCUs) and their parallel counterparts (PFCUs).
Ranked #28 on Real-Time Semantic Segmentation on Cityscapes test
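The factorized convolution units above exploit the separability of 2D filters. A minimal numpy sketch of why a rank-1 3x3 kernel can be replaced by a 3x1 followed by a 1x3 convolution at lower cost; this only illustrates the separability idea, not the paper's exact FCU, which also places nonlinearities between the 1D convolutions:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid'-mode 2D cross-correlation, written out explicitly."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# a rank-1 (separable) 3x3 kernel: outer product of a column and a row
col = np.array([1.0, 2.0, 1.0])[:, None]    # 3x1 filter
row = np.array([-1.0, 0.0, 1.0])[None, :]   # 1x3 filter
k33 = col @ row                             # full 3x3 kernel, 9 weights

x = np.random.randn(8, 8)
full = conv2d_valid(x, k33)                           # 9 MACs per output
factored = conv2d_valid(conv2d_valid(x, col), row)    # 3 + 3 MACs per output
assert np.allclose(full, factored)
```

Real FCU kernels are learned 1D filters rather than a factorization of a fixed 2D kernel, but the parameter and compute savings come from the same structure.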
3 code implementations • NeurIPS 2019 • Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, Bin Yu
Based on the original definition of MDI by Breiman et al. for a single tree, we derive a tight non-asymptotic bound on the expected bias of MDI importance of noisy features, showing that deep trees have higher (expected) feature selection bias than shallow ones.
1 code implementation • 12 Jul 2019 • Yu Wang, Fengjuan Gao, Linzhang Wang, Ke Wang
In a cross-project prediction task, three neural bug detectors we instantiate from NeurSA are effective in catching null pointer dereference, array index out-of-bounds, and class cast bugs in unseen code.
no code implementations • 19 Jul 2019 • Shuqiang Lu, Lingyun Ying, Wenjie Lin, Yu Wang, Meining Nie, Kaiwen Shen, Lu Liu, Haixin Duan
With the development of artificial intelligence algorithms like deep learning models and the successful applications in many different fields, further similar trails of deep learning technology have been made in cyber security area.
2 code implementations • 6 Aug 2019 • Pengfei Zhu, Xinjie Yao, Yu Wang, Binyuan Hui, Dawei Du, Qinghua Hu
Dnet learns view-specific self-representation matrices, whereas Unet learns a common self-representation matrix for all views.
Ranked #1 on Multi-view Subspace Clustering on ORL
no code implementations • IJCNLP 2019 • Yu Wang
In this paper, we present a fast and reliable method based on PCA to select the number of dimensions for word embeddings.
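A PCA-based dimension choice can be sketched in a few lines. The explained-variance-ratio threshold used below is a common stand-in criterion, not necessarily the paper's exact rule, and all names are illustrative:

```python
import numpy as np

def pca_num_dims(X, var_threshold=0.99):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `var_threshold`."""
    Xc = X - X.mean(axis=0)
    # singular values of the centered data give per-component variance
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, var_threshold) + 1)

# toy "embeddings": 1000 vectors that really live in a 5-dim subspace
rng = np.random.default_rng(0)
basis = rng.normal(size=(5, 50))
X = rng.normal(size=(1000, 5)) @ basis + 0.01 * rng.normal(size=(1000, 50))
print(pca_num_dims(X, var_threshold=0.99))  # -> 5, the latent dimensionality
```

On real word embeddings the variance spectrum decays smoothly rather than dropping off a cliff, so the chosen threshold matters much more than in this toy example.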
2 code implementations • 16 Sep 2019 • Alper Kamil Bozkurt, Yu Wang, Michael M. Zavlanos, Miroslav Pajic
We present a reinforcement learning (RL) framework to synthesize a control policy from a given linear temporal logic (LTL) specification in an unknown stochastic environment that can be modeled as a Markov Decision Process (MDP).
no code implementations • 30 Sep 2019 • Lin-Lin Wang, Yu Wang, Mark J. F. Gales
These systems are explored for non-native spoken English data in this paper.
no code implementations • 2 Oct 2019 • Hao Zhou, Jorge Laval, Anye Zhou, Yu Wang, Wenchao Wu, Zhu Qing, Srinivas Peeta
Some suggestions towards congestion mitigation for future mMP studies are proposed: i) enrich data collection to facilitate the congestion learning, ii) incorporate non-imitation learning methods to combine traffic efficiency into a safety-oriented technical route, and iii) integrate domain knowledge from the traditional car following (CF) theory to improve the string stability of mMP.
no code implementations • 8 Nov 2019 • Xiaoming Chen, Yinhe Han, Yu Wang
Evaluations based on the 65nm technology demonstrate that the proposed architecture nearly reaches the theoretical minimum communication in a three-level memory hierarchy and is computation-dominant.
Distributed, Parallel, and Cluster Computing Hardware Architecture
1 code implementation • 19 Jan 2020 • Yi Wang, Yang Yang, Weiguo Zhu, Yi Wu, Xu Yan, Yongfeng Liu, Yu Wang, Liang Xie, Ziyao Gao, Wenjing Zhu, Xiang Chen, Wei Yan, Mingjie Tang, Yuan Tang
Previous database systems extended their SQL dialect to support ML.
1 code implementation • 1 Feb 2020 • Yu Wang, Byoungwook Jang, Alfred Hero
We apply the SyGlasso to an electroencephalography (EEG) study to compare the brain connectivity of alcoholic and nonalcoholic subjects.
no code implementations • 9 Feb 2020 • Yu Wang, Yining Sun, Zuchang Ma, Lisheng Gao, Yang Xu, Ting Sun
Then, we apply these pre-training models to a NER task by fine-tuning, and compare the effects of the different model architecture and pre-training tasks on the NER task.
3 code implementations • ACL 2020 • Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, Jianfeng Gao
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
3 code implementations • 28 Feb 2020 • Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM).
Ranked #4 on Question Generation on SQuAD1.1 (using extra training data)
no code implementations • 20 Mar 2020 • Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang
By inspecting the discovered architectures, we find that the operation primitives, the weight quantization range, the capacity of the model, and the connection pattern have influences on the fault resilience capability of NN models.
no code implementations • 21 Mar 2020 • Yang Feng, Yu Wang, Jiebo Luo
In this paper, we introduce a novel gating mechanism to deep neural networks.
Optical Flow Estimation Video-Based Person Re-Identification
no code implementations • 26 Mar 2020 • Shulin Zeng, Guohao Dai, Hanbo Sun, Kai Zhong, Guangjun Ge, Kaiyuan Guo, Yu Wang, Huazhong Yang
Currently, the majority of FPGA-based DNN accelerators in the cloud run in a time-division multiplexing way for multiple users sharing a single FPGA, and require re-compilation with ~100 s overhead.
no code implementations • 1 Apr 2020 • Yu Wang, Nima Roohi, Matthew West, Mahesh Viswanathan, Geir E. Dullerud
Probabilistic Computation Tree Logic (PCTL) is frequently used to formally specify control objectives such as probabilistic reachability and safety.
no code implementations • 1 Apr 2020 • Yu Wang, Hussein Sibai, Mark Yen, Sayan Mitra, Geir E. Dullerud
We also show that the standard exponential mechanism that randomizes the output of an algorithm to achieve differential privacy fails to do so in the context of sequential algorithms.
1 code implementation • ECCV 2020 • Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, Huazhong Yang
Experimental results on various search spaces confirm GATES's effectiveness in improving the performance predictor.
1 code implementation • ECCV 2020 • Xuefei Ning, Tianchen Zhao, Wenshuo Li, Peng Lei, Yu Wang, Huazhong Yang
In budgeted pruning, how to distribute the resources across layers (i.e., sparsity allocation) is the key problem.
no code implementations • 9 Apr 2020 • Lei Zhang, Cunhua Pan, Yu Wang, Hong Ren, Kezhi Wang
Simulation results verify the efficiency of the proposed algorithms and reveal the impacts of CSI uncertainties on ST's minimum transmit power and feasibility rate of the optimization problems.
3 code implementations • 20 Apr 2020 • Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao
In natural language processing (NLP), pre-training large neural language models such as BERT have demonstrated impressive gain in generalization for a variety of tasks, with further improvement from adversarial fine-tuning.
Ranked #6 on Natural Language Inference on ANLI test (using extra training data)
no code implementations • 21 Apr 2020 • Long Chen, Hanjia Lyu, Tongyu Yang, Yu Wang, Jiebo Luo
To model the substantive difference of tweets with controversial terms and those with non-controversial terms, we apply topic modeling and LIWC-based sentiment analysis.
no code implementations • 21 Apr 2020 • Viet Duong, Phu Pham, Tongyu Yang, Yu Wang, Jiebo Luo
Recently, the pandemic of the novel Coronavirus Disease-2019 (COVID-19) has presented governments with ultimate challenges.
no code implementations • 2 May 2020 • Yu Wang, Yuelin Wang, Jie Liu, Zhuo Liu
More importantly, we discuss four kinds of basic approaches (statistical machine translation based, neural machine translation based, classification based, and language model based), six commonly applied performance-boosting techniques for GEC systems, and two data augmentation methods.
1 code implementation • 12 May 2020 • Yu Wang, Rong Ge, Shuang Qiu
Unlike existing work in deep neural network (DNN) graphs optimization for inference performance, we explore DNN graph optimization for energy awareness and savings for power- and resource-constrained machine learning devices.
1 code implementation • 16 May 2020 • Nick Altieri, Rebecca L. Barter, James Duncan, Raaz Dwivedi, Karl Kumbier, Xiao Li, Robert Netzorg, Briton Park, Chandan Singh, Yan Shuo Tan, Tiffany Tang, Yu Wang, Chao Zhang, Bin Yu
We use this data to develop predictions and corresponding prediction intervals for the short-term trajectory of COVID-19 cumulative death counts at the county-level in the United States up to two weeks ahead.
no code implementations • 18 May 2020 • Yu Wang, Fengjuan Gao, Linzhang Wang, Ke Wang
We have also created a neural bug detector based on GINN to catch null pointer dereference bugs in Java code.
no code implementations • 22 May 2020 • Kechen Qin, Yu Wang, Cheng Li, Kalpa Gunaratna, Hongxia Jin, Virgil Pavlu, Javed A. Aslam
Multi-hop knowledge based question answering (KBQA) is a complex task for natural language understanding.
no code implementations • 27 May 2020 • Yu Wang, Junpeng Bao, JianQiang Du, Yongfeng Li
Compared with the existing AKI predictors, the predictor in this work greatly improves the precision of early prediction of AKI by using the Convolutional Neural Network architecture and a more concise input vector.
no code implementations • 4 Jun 2020 • Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang
For training a variety of models on CIFAR-10, using 1-bit mantissa and 2-bit exponent is adequate to keep the accuracy loss within $1\%$.
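A 1-bit-mantissa, 2-bit-exponent number format means rounding each value onto a very coarse floating-point grid. The sketch below enumerates that grid and quantizes by round-to-nearest; the exponent bias and the handling of zero are illustrative choices, and the paper's exact format details may differ:

```python
import numpy as np

def quantize_tiny_float(x, mant_bits=1, exp_bits=2, exp_bias=1):
    """Round each entry of x to the nearest value representable with a
    sign bit, `mant_bits` mantissa bits and `exp_bits` exponent bits.
    The bias and the absence of denormals are illustrative choices."""
    # representable magnitudes: (1 + m / 2^mant_bits) * 2^(e - bias)
    mants = 1.0 + np.arange(2 ** mant_bits) / (2 ** mant_bits)
    exps = 2.0 ** (np.arange(2 ** exp_bits) - exp_bias)
    grid = np.sort(np.concatenate([[0.0], np.outer(mants, exps).ravel()]))
    # round |x| to the nearest grid point, then restore the sign
    idx = np.abs(np.abs(x)[..., None] - grid).argmin(axis=-1)
    return np.sign(x) * grid[idx]

g = np.array([0.3, -1.2, 2.9, 0.04])
print(quantize_tiny_float(g))  # 0.3->0.5, -1.2->-1.0, 2.9->3.0, 0.04->0.0
```

With 1 mantissa and 2 exponent bits the grid has only eight nonzero magnitudes per sign, which is why such formats are paired with careful scaling during training.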
1 code implementation • CVPR 2020 • Qi Cai, Yingwei Pan, Yu Wang, Jingen Liu, Ting Yao, Tao Mei
To this end, we devise a general loss function to cover most region-based object detectors with various sampling strategies, and then based on it we propose a unified sample weighting network to predict a sample's task weights.
no code implementations • 11 Jun 2020 • Yu Wang, Qitong Gao, Miroslav Pajic
For monotonicity constraints, we propose to use nonnegative neural networks and batch normalization.
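The nonnegative-weight idea can be shown in a few lines: if every weight is forced through a nonnegative reparameterization and every activation is monotone, the whole network is monotone in its input. This sketch omits the batch-normalization component mentioned above, and all shapes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def softplus(z):
    """Numerically stable softplus; maps any raw weight to a positive one."""
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

# raw (unconstrained) parameters of a tiny 1-16-1 MLP
W1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)

def monotone_net(x):
    """Nonnegative weights (via softplus) + monotone activations
    guarantee the output is nondecreasing in the input."""
    h = np.tanh(x @ softplus(W1) + b1)
    return h @ softplus(W2) + b2

xs = np.linspace(-3, 3, 200)[:, None]
ys = monotone_net(xs).ravel()
assert np.all(np.diff(ys) >= 0)  # nondecreasing for any raw parameters
```

Because the constraint lives in the reparameterization rather than the loss, gradient descent can update `W1`/`W2` freely while monotonicity holds by construction.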
6 code implementations • 23 Jun 2020 • Runzhou Ge, Zhuangzhuang Ding, Yihan Hu, Yu Wang, Sijia Chen, Li Huang, Yuan Li
High-efficiency point cloud 3D object detection operated on embedded systems is important for many robotics applications including autonomous driving.
no code implementations • 28 Jun 2020 • Zhuangzhuang Ding, Yihan Hu, Runzhou Ge, Li Huang, Sijia Chen, Yu Wang, Jie Liao
We propose AFDet, a one-stage, anchor-free, and NMS-free 3D point cloud object detector that uses object keypoints to encode 3D attributes and learns point cloud object detection end-to-end, without hand-engineered or learned anchors.
no code implementations • 28 Jun 2020 • Yu Wang, Sijia Chen, Li Huang, Runzhou Ge, Yihan Hu, Zhuangzhuang Ding, Jie Liao
This technical report presents the online and real-time 2D and 3D multi-object tracking (MOT) algorithms that reached the 1st places on both Waymo Open Dataset 2D tracking and 3D tracking challenges.
no code implementations • 28 Jun 2020 • Sijia Chen, Yu Wang, Li Huang, Runzhou Ge, Yihan Hu, Zhuangzhuang Ding, Jie Liao
A practical autonomous driving system urges the need to reliably and accurately detect vehicles and persons.
no code implementations • 1 Jul 2020 • Yi-Peng Zhang, Hanjia Lyu, Yubao Liu, Xiyang Zhang, Yu Wang, Jiebo Luo
The COVID-19 pandemic has severely affected people's daily lives and caused tremendous economic loss worldwide.
1 code implementation • ACL 2020 • Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, Ziming Chi
The opinion entity extraction unit and the relation detection unit are developed as two channels to extract opinion entities and relations simultaneously.
2 code implementations • 7 Jul 2020 • Guyue Huang, Guohao Dai, Yu Wang, Huazhong Yang
GE-SpMM performs SpMM-like operation on sparse matrices represented in the most common Compressed Sparse Row (CSR) format, so it can be embedded in GNN frameworks with no preprocessing overheads and support general GNN algorithms.
Distributed, Parallel, and Cluster Computing
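GE-SpMM's contribution is a GPU kernel, but the operation it accelerates is easy to state: C = A·B with A sparse in CSR and B dense. As context, a minimal CPU reference of SpMM over CSR; the field names follow the standard CSR convention, not the paper's code:

```python
import numpy as np

def spmm_csr(indptr, indices, data, B):
    """Reference SpMM: multiply a CSR sparse matrix by a dense matrix."""
    n_rows = len(indptr) - 1
    C = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        # nonzeros of row i live in data[indptr[i]:indptr[i+1]]
        for p in range(indptr[i], indptr[i + 1]):
            C[i] += data[p] * B[indices[p]]  # accumulate a scaled row of B
    return C

# CSR encoding of A = [[1, 0, 2],
#                      [0, 3, 0]]
indptr, indices, data = [0, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
B = np.arange(6.0).reshape(3, 2)
print(spmm_csr(indptr, indices, data, B))  # rows [8, 11] and [6, 9]
```

The kernel design problem is exactly the memory behavior of the inner loop (reused rows of B, irregular `indices` accesses), which is what the paper's coalescing optimizations target.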
no code implementations • 13 Jul 2020 • Yucan Zhou, Yu Wang, Jianfei Cai, Yu Zhou, Qinghua Hu, Weiping Wang
Some works in the optimization of deep neural networks have shown that a better arrangement of training data can make the classifier converge faster and perform better.
1 code implementation • 13 Jul 2020 • Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, Burkhard Rost
Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids.
Ranked #1 on Protein Secondary Structure Prediction on CASP12
Dimensionality Reduction Protein Secondary Structure Prediction
1 code implementation • ECCV 2020 • Tong Wu, Qingqiu Huang, Ziwei Liu, Yu Wang, Dahua Lin
We present a new loss function called Distribution-Balanced Loss for the multi-label recognition problems that exhibit long-tailed class distributions.
Ranked #7 on Long-tail Learning on VOC-MLT
no code implementations • 31 Jul 2020 • Tong Wu, Xuefei Ning, Wenshuo Li, Ranran Huang, Huazhong Yang, Yu Wang
In this paper, we tackle the issue of physical adversarial examples for object detectors in the wild.
no code implementations • 3 Aug 2020 • Yu Wang, Mojtaba Zarei, Borzoo Bonakdarpour, Miroslav Pajic
In system analysis, conformance indicates that two systems simultaneously satisfy the same set of specifications of interest; thus, the results from analyzing one system automatically transfer to the other, or one system can safely replace the other in practice.
1 code implementation • NeurIPS 2021 • Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, Yu Wang
Conducting efficient performance estimations of neural architectures is a major challenge in neural architecture search (NAS).
no code implementations • 13 Aug 2020 • Marco Manfredi, Yu Wang
Robustness to small image translations is a highly desirable property for object detectors.
no code implementations • 20 Aug 2020 • Yu Wang, Guangbing Zhou, Chenlu Xiang, Shunqing Zhang, Shugong Xu
Existing localization systems for indoor applications rely primarily on wireless signals.
1 code implementation • BMVC 2020 • Dichao Liu, Yu Wang, Jien Kato, Kenji Mase
The evaluation information is backpropagated and forces the classification stream to improve its awareness of visual attention, which helps classification.
Ranked #25 on Fine-Grained Image Classification on Stanford Cars
no code implementations • 11 Sep 2020 • Mark Cartwright, Jason Cramer, Ana Elisa Mendez Mendez, Yu Wang, Ho-Hsiang Wu, Vincent Lostanlen, Magdalena Fuentes, Graham Dove, Charlie Mydlarz, Justin Salamon, Oded Nov, Juan Pablo Bello
In this article, we describe our data collection procedure and propose evaluation metrics for multilabel classification of urban sound tags.
no code implementations • 13 Sep 2020 • Zishen Wan, Bo Yu, Thomas Yuang Li, Jie Tang, Yuhao Zhu, Yu Wang, Arijit Raychowdhury, Shaoshan Liu
On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios.
no code implementations • 28 Sep 2020 • Xuefei Ning, Wenshuo Li, Zixuan Zhou, Tianchen Zhao, Shuang Liang, Yin Zheng, Huazhong Yang, Yu Wang
A major challenge in NAS is to conduct a fast and accurate evaluation of neural architectures.
1 code implementation • NeurIPS 2020 • Qi Cai, Yu Wang, Yingwei Pan, Ting Yao, Tao Mei
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Shayne Longpre, Yu Wang, Christopher DuBois
Task-agnostic forms of data augmentation have proven widely effective in computer vision, even on pretrained models.
1 code implementation • 22 Oct 2020 • Tao Lu, Yuanzhi Wang, Yanduo Zhang, Yu Wang, Wei Liu, Zhongyuan Wang, Junjun Jiang
However, most of them fail to take into account the overall facial profile and fine texture details simultaneously, resulting in reduced naturalness and fidelity of the reconstructed face, and further impairing the performance of downstream tasks (e.g., face detection, facial recognition).
no code implementations • 9 Nov 2020 • Yu Wang, Shu Jiang, Weiman Lin, Yu Cao, Longtao Lin, Jiangtao Hu, Jinghao Miao, Qi Luo
This paper presents the design of a tune-free (human-out-of-the-loop parameter tuning) control framework, aiming at accelerating large scale autonomous driving system deployed on various vehicles and driving environments.
no code implementations • 18 Nov 2020 • Feng Gao, Jincheng Yu, Hao Shen, Yu Wang, Huazhong Yang
Learning depth and ego-motion from unlabeled videos via self-supervision from epipolar projection can improve the robustness and accuracy of the 3D perception and localization of vision-based robots.
no code implementations • 21 Nov 2020 • Tianchen Zhao, Xuefei Ning, Xiangsheng Shi, Songyi Yang, Shuang Liang, Peng Lei, Jianfei Chen, Huazhong Yang, Yu Wang
We also design the micro-level search space to strengthen the information flow for BNN.
1 code implementation • 25 Nov 2020 • Xuefei Ning, Changcheng Tang, Wenshuo Li, Songyi Yang, Tianchen Zhao, Niansong Zhang, Tianyi Lu, Shuang Liang, Huazhong Yang, Yu Wang
Neural Architecture Search (NAS) has received extensive attention due to its capability to discover neural network architectures in an automated manner.
1 code implementation • NeurIPS 2020 • Hong Liu, Mingsheng Long, Jianmin Wang, Yu Wang
(2) Since the target data arrive online, the agent should also maintain competence on previous target domains, i.e., to adapt without forgetting.
no code implementations • 3 Dec 2020 • Yu Wang, Ziqiao Guan, Wei Hou, Fusheng Wang
The early detection of CKD faces challenges of insufficient medical histories of positive patients and complicated risk factors.
1 code implementation • 22 Dec 2020 • Xuefei Ning, Junbo Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
In this paper, considering scenarios with capacity budget, we aim to discover adversarially robust architecture at targeted capacities.
no code implementations • 1 Jan 2021 • Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, Yi Wu
We benchmark commonly used multi-agent deep reinforcement learning (MARL) algorithms on a variety of cooperative multi-agent games.
no code implementations • 1 Jan 2021 • Kai Zhong, Xuefei Ning, Tianchen Zhao, Zhenhua Zhu, Shulin Zeng, Guohao Dai, Yu Wang, Huazhong Yang
Through this dynamic precision framework, we can reduce the bit-width of convolution, which accounts for most of the computational cost, while keeping the training process close to full-precision floating-point training.
1 code implementation • 10 Jan 2021 • Guyue Huang, Jingbo Hu, Yifan He, Jialong Liu, Mingyuan Ma, Zhaoyang Shen, Juejian Wu, Yuanfan Xu, Hengrui Zhang, Kai Zhong, Xuefei Ning, Yuzhe Ma, Haoyu Yang, Bei Yu, Huazhong Yang, Yu Wang
With the down-scaling of CMOS technology, the design complexity of very large-scale integrated (VLSI) circuits is increasing.
no code implementations • 2 Feb 2021 • Guodong Yin, Yi Cai, Juejian Wu, Zhengyang Duan, Zhenhua Zhu, Yongpan Liu, Yu Wang, Huazhong Yang, Xueqing Li
Compute-in-memory (CiM) is a promising approach to alleviating the memory wall problem for domain-specific applications.
Emerging Technologies
no code implementations • 8 Feb 2021 • Alper Kamil Bozkurt, Yu Wang, Michael M. Zavlanos, Miroslav Pajic
By deriving distinct rewards and discount factors from the acceptance condition of the DPA, we reduce the maximization of the worst-case probability of satisfying the LTL specification into the maximization of a discounted reward objective in the product game; this enables the use of model-free RL algorithms to learn an optimal controller strategy.
no code implementations • 9 Feb 2021 • Bing Zhang, Yu Wang, Liang Li
The jet composition and radiative efficiency of GRBs are poorly constrained from the data.
High Energy Astrophysical Phenomena
no code implementations • 9 Feb 2021 • Yu Wang, Ke Wang, Linzhang Wang
Attribution methods have emerged as a popular approach to interpreting model predictions based on the relevance of input features.
no code implementations • 16 Feb 2021 • Ke Huang, Yu Wang, Xiao Li
Recently a class of quantum systems exhibiting weak ergodicity breaking has attracted much attention.
Disordered Systems and Neural Networks Statistical Mechanics
no code implementations • 23 Feb 2021 • Merle Behr, Yu Wang, Xiao Li, Bin Yu
Iterative Random Forests (iRF) use a tree ensemble from iteratively modified RF to obtain predictive and stable non-linear or Boolean interactions of features.
Statistics Theory
1 code implementation • 1 Mar 2021 • Yukuo Cen, Zhenyu Hou, Yan Wang, Qibin Chen, Yizhen Luo, Zhongming Yu, Hengrui Zhang, Xingcheng Yao, Aohan Zeng, Shiguang Guo, Yuxiao Dong, Yang Yang, Peng Zhang, Guohao Dai, Yu Wang, Chang Zhou, Hongxia Yang, Jie Tang
In CogDL, we propose a unified design for the training and evaluation of GNN models for various graph tasks, making it unique among existing graph learning libraries.
15 code implementations • 2 Mar 2021 • Chao Yu, Akash Velu, Eugene Vinitsky, Jiaxuan Gao, Yu Wang, Alexandre Bayen, Yi Wu
This is often due to the belief that PPO is significantly less sample efficient than off-policy methods in multi-agent systems.
Multi-agent Reinforcement Learning reinforcement-learning +3
2 code implementations • ICLR 2021 • Zhenggang Tang, Chao Yu, Boyuan Chen, Huazhe Xu, Xiaolong Wang, Fei Fang, Simon Du, Yu Wang, Yi Wu
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games.
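Reward Randomization can be sketched in miniature (an illustrative toy, not the paper's algorithm; all names here are assumptions): perturbing the reward entries of a small game and re-solving it repeatedly surfaces multiple near-optimal strategies that a single fixed reward would hide.

```python
import random

def best_response(payoff):
    """Greedy policy: index of the action with the highest payoff."""
    return max(range(len(payoff)), key=lambda a: payoff[a])

def randomize(payoff, scale, rng):
    """The randomization step: perturb each reward entry with uniform noise."""
    return [r + rng.uniform(-scale, scale) for r in payoff]

rng = random.Random(0)
payoff = [1.0, 1.05, 0.2]   # two near-optimal strategies and one poor one
# Solving many randomized copies of the game discovers both strong
# strategies, whereas the unperturbed game always yields only action 1.
found = {best_response(randomize(payoff, 0.3, rng)) for _ in range(100)}
```

In the actual method the "solve" step is full multi-agent RL training per randomized reward, not a greedy argmax.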
no code implementations • 10 Mar 2021 • Amir Khazraei, Spencer Hallyburton, Qitong Gao, Yu Wang, Miroslav Pajic
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems (CPS).
2 code implementations • 13 Mar 2021 • Shaowei Chen, Yu Wang, Jie Liu, Yuelin Wang
Aspect sentiment triplet extraction (ASTE), which aims to identify aspects from review sentences along with their corresponding opinion expressions and sentiments, is an emerging task in fine-grained opinion mining.
Aspect Sentiment Triplet Extraction Machine Reading Comprehension +2
1 code implementation • CVPR 2021 • Jialian Wu, Jiale Cao, Liangchen Song, Yu Wang, Ming Yang, Junsong Yuan
Most online multi-object trackers perform object detection stand-alone in a neural net without any input from tracking.
Ranked #1 on Instance Segmentation on nuScenes
no code implementations • CVPR 2022 • Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen
In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.
no code implementations • 26 Mar 2021 • Alper Kamil Bozkurt, Yu Wang, Miroslav Pajic
We study the problem of learning safe control policies that are also effective; i.e., maximizing the probability of satisfying a linear temporal logic (LTL) specification of a task, and the discounted reward capturing the (classic) control performance.
no code implementations • AAAI Workshop AdvML 2022 • Yi Cai, Xuefei Ning, Huazhong Yang, Yu Wang
It provides high scalability because the paths within an EIO network exponentially increase with the network depth.
no code implementations • 29 Mar 2021 • Kalpa Gunaratna, Yu Wang, Hongxia Jin
Then we learn entity embeddings through this new type of triples.
1 code implementation • 3 Apr 2021 • Yu Wang, Chee Siang Leow, Akio Kobayashi, Takehito Utsuro, Hiromitsu Nishizaki
This paper describes the ExKaldi-RT online automatic speech recognition (ASR) toolkit that is implemented based on the Kaldi ASR toolkit and Python language.
Automatic Speech Recognition Automatic Speech Recognition (ASR) +1
1 code implementation • CVPR 2021 • Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, Dahua Lin
We then perform a systematic study on existing long-tailed recognition methods in conjunction with the adversarial training framework.
no code implementations • 16 Apr 2021 • Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun
Membership inference attacks (MIAs) infer whether a specific data record is used for target model training.
1 code implementation • 2 May 2021 • Zhiwei Liu, Ziwei Fan, Yu Wang, Philip S. Yu
We firstly pre-train a transformer with sequences in a reverse direction to predict prior items.
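The reverse-direction pre-training step can be sketched as simple data preparation (a hypothetical helper, not the paper's code): each user sequence is reversed so the model learns to predict the item that came before the observed context.

```python
def reverse_pretraining_pairs(sequence):
    """Build (context, target) pairs in the reverse direction: given the
    most recent items, predict the item that came *before* them."""
    rev = list(reversed(sequence))
    return [(rev[:i], rev[i]) for i in range(1, len(rev))]

# A user who interacted with items 1 -> 2 -> 3 -> 4 in chronological order:
pairs = reverse_pretraining_pairs([1, 2, 3, 4])
# e.g. the first pair asks: given [4], which item preceded it? (answer: 3)
```

A model pre-trained on such pairs can then generate plausible "prior" items to lengthen short sequences before forward training.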
no code implementations • CVPR 2021 • Yu Wang, Rui Zhang, Shuo Zhang, Miao Li, Yangyang Xia, Xishan Zhang, Shaoli Liu
The directions of weights, and the gradients, can be divided into domain-specific and domain-invariant parts, and the goal of domain adaptation is to concentrate on the domain-invariant direction while eliminating the disturbance from domain-specific one.
1 code implementation • 8 May 2021 • Jiaming Wang, Zhenfeng Shao, Tao Lu, Xiao Huang, Ruiqian Zhang, Yu Wang
Despite their success, however, low/high spatial resolution pairs are usually difficult to obtain in satellites with a high temporal resolution, making such SR approaches impractical.
no code implementations • 19 May 2021 • Yu Wang, Hejia Luo, Ying Chen, Jun Wang, Rong Li, Bin Wang
Next-generation beyond-5G networks are expected to provide both terabit-per-second communication services and centimeter-level-accuracy localization services in an efficient, seamless, and cost-effective manner.
no code implementations • 20 May 2021 • Yu Wang, Xin Xin, Zaiqiao Meng, Xiangnan He, Joemon Jose, Fuli Feng
A noisy negative example that the user has not interacted with merely out of unawareness could in fact denote a potential positive preference.
1 code implementation • 26 May 2021 • Yu Wang, Alfred Hero
We propose a new graphical model inference procedure, called SG-PALM, for learning conditional dependency structure of high-dimensional tensor-variate data.
no code implementations • 1 Jun 2021 • Yu Wang, Hongxia Jin
In this paper, we present a coarse-to-fine question answering (CFQA) system based on reinforcement learning, which can efficiently process documents of different lengths by choosing appropriate actions.
no code implementations • 6 Jun 2021 • Yu Wang, Yilin Shen, Hongxia Jin
In this paper, we introduce a novel multi-step spoken language understanding system based on adversarial learning that can leverage multi-round user feedback to update slot values.
1 code implementation • 13 Jun 2021 • Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap of BNNs to their 32-bit counterparts.
no code implementations • 25 Jun 2021 • Yu Wang, Jinchao Li, Tristan Naumann, Chenyan Xiong, Hao Cheng, Robert Tinn, Cliff Wong, Naoto Usuyama, Richard Rogahn, Zhihong Shen, Yang Qin, Eric Horvitz, Paul N. Bennett, Jianfeng Gao, Hoifung Poon
A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months.
no code implementations • 27 Jul 2021 • Yu Wang, Yuesong Shen, Daniel Cremers
To learn the direct influence among output nodes in a graph, we propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
1 code implementation • 28 Jul 2021 • Sindi Shkodrani, Yu Wang, Marco Manfredi, Nóra Baka
Attempts at learning from hierarchical taxonomies in computer vision have mostly focused on image classification.
no code implementations • SEMEVAL 2021 • Jinquan Sun, Qi Zhang, Yu Wang, Lei Zhang
Due to the increasing concerns for data privacy, source-free unsupervised domain adaptation attracts more and more research attention, where only a trained source model is assumed to be available, while the labeled source data remain private.
no code implementations • ACL 2021 • Jiaqi Guo, Ziliang Si, Yu Wang, Qian Liu, Ming Fan, Jian-Guang Lou, Zijiang Yang, Ting Liu
However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.
no code implementations • 5 Aug 2021 • Yu Wang, Jingyang Lin, Qi Cai, Yingwei Pan, Ting Yao, Hongyang Chao, Tao Mei
In this paper, we construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning, referred to as LORAC.
1 code implementation • 25 Aug 2021 • Yu Wang, Tyler Derr
Nevertheless, iterative propagation forces the information of higher-layer neighborhoods to be transported through, and fused with, that of lower-layer neighborhoods, which unavoidably smooths features across neighborhoods in different layers and can thus compromise performance, especially on heterophily networks.
Ranked #25 on Node Classification on Cora
1 code implementation • 26 Aug 2021 • Yu Wang, Zhiwei Liu, Ziwei Fan, Lichao Sun, Philip S. Yu
In the information explosion era, recommender systems (RSs) are widely studied and applied to discover user-preferred information.
1 code implementation • 28 Aug 2021 • Yu Wang, Fang Liu, Daniele E. Schiavazzi
To reduce the computational cost without sacrificing inferential accuracy, we propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternatively updates the normalizing flow parameters and surrogate model parameters.
1 code implementation • 22 Sep 2021 • Fan Zhang, Yu Wang, Hua Yang
For the context block, we propose a strip pooling module to capture the anisotropic and long-range contextual information that exists in abdominal scenes.
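A strip pooling module, in its generic form (this sketch follows the commonly used formulation and is not necessarily this paper's exact design), averages over full-height and full-width strips and broadcasts the averages back, so each cell receives long-range context from its entire row and column:

```python
def strip_pool(feat):
    """Strip pooling on a 2D feature map (nested lists): average over each
    full row strip and each full column strip, broadcast the averages back,
    and fuse them by addition."""
    h, w = len(feat), len(feat[0])
    row_mean = [sum(row) / w for row in feat]                      # row strips
    col_mean = [sum(feat[i][j] for i in range(h)) / h for j in range(w)]
    # Every output cell now carries context from its whole row and whole
    # column, capturing anisotropic, elongated structures cheaply.
    return [[row_mean[i] + col_mean[j] for j in range(w)] for i in range(h)]

feat = [[1.0, 2.0],
        [3.0, 4.0]]
out = strip_pool(feat)
```

In a real network the two pooled strips would pass through 1D convolutions before fusion; the sketch keeps only the pooling geometry.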
no code implementations • 29 Sep 2021 • Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, Peng Li
Although various flow models based on different transformations have been proposed, a quantitative analysis of the performance-cost trade-offs between different flows, as well as a systematic way of constructing the best flow architecture, is still lacking.
no code implementations • 29 Sep 2021 • Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap of BNNs to their 32-bit counterparts, often based on specialized model designs using additional 32-bit components.
no code implementations • 12 Oct 2021 • Chao Yu, Xinyi Yang, Jiaxuan Gao, Huazhong Yang, Yu Wang, Yi Wu
In this paper, we extend the state-of-the-art single-agent visual navigation method, Active Neural SLAM (ANS), to the multi-agent setting by introducing a novel RL-based planning module, Multi-agent Spatial Planner (MSP). MSP leverages a transformer-based architecture, Spatial-TeamFormer, which effectively captures spatial relations and intra-agent interactions via hierarchical spatial self-attentions.
no code implementations • 18 Oct 2021 • Hengrui Zhang, Zhongming Yu, Guohao Dai, Guyue Huang, Yufei Ding, Yuan Xie, Yu Wang
The same data are propagated through the graph structure to perform the same neural operation multiple times in GNNs, leading to redundant computation which accounts for 92.4% of total operators.
1 code implementation • 22 Oct 2021 • Yu Wang, Charu Aggarwal, Tyler Derr
Recent years have witnessed the significant success of applying graph neural networks (GNNs) in learning effective node representations for classification.
2 code implementations • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability being sampled for each candidate task, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks.
no code implementations • 28 Oct 2021 • Jiabo He, Wei Liu, Yu Wang, Xingjun Ma, Xian-Sheng Hua
Spinal degeneration plagues many elders, office workers, and even the younger generations.
1 code implementation • NeurIPS 2021 • Jiayu Chen, Yuanxin Zhang, Yuanfan Xu, Huimin Ma, Huazhong Yang, Jiaming Song, Yu Wang, Yi Wu
We motivate our paradigm through a variational perspective, where the learning objective can be decomposed into two terms: task learning on the current task distribution, and curriculum update to a new task distribution.
no code implementations • 14 Nov 2021 • Yuzi Yan, Xiaoxiang Li, Xinyou Qiu, Jiantao Qiu, Jian Wang, Yu Wang, Yuan Shen
In this paper, we propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL).
Model Predictive Control Multi-agent Reinforcement Learning +2
no code implementations • 23 Nov 2021 • Pengfei Zhu, Hongtao Yu, Kaihua Zhang, Yu Wang, Shuai Zhao, Lei Wang, Tianzhu Zhang, Qinghua Hu
To address this issue, segmentation-based trackers have been proposed that employ per-pixel matching to improve the tracking performance of deformable objects effectively.
1 code implementation • 23 Nov 2021 • Tongya Zheng, Zunlei Feng, Yu Wang, Chengchao Shen, Mingli Song, Xingen Wang, Xinyu Wang, Chun Chen, Hao Xu
Our proposed Dynamic Preference Structure (DPS) framework consists of two stages: structure sampling and graph fusion.
2 code implementations • 1 Dec 2021 • Yu Wang, Yuying Zhao, Neil Shah, Tyler Derr
To this end, we introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from stochastic augmentations of graphs.
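The "stochastic augmentations of graphs" can be as simple as random edge dropping; a minimal sketch (hypothetical helper name, not the G$^2$GNN implementation):

```python
import random

def drop_edges(edges, p, rng):
    """Stochastic augmentation: independently drop each edge with
    probability p, producing a perturbed view of the same graph."""
    return [e for e in edges if rng.random() >= p]

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
views = [drop_edges(edges, 0.3, rng) for _ in range(3)]  # three random views
```

Training on several such views of each minority-class graph supplies the extra local supervision described above.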
1 code implementation • 8 Dec 2021 • Yu Wang, Alfred Hero
In this work, we study the emergence of sparsity and multiway structures in second-order statistical characterizations of dynamical processes governed by partial differential equations (PDEs).
no code implementations • 12 Dec 2021 • Weilin Liu, Ye Mu, Chao Yu, Xuefei Ning, Zhong Cao, Yi Wu, Shuang Liang, Huazhong Yang, Yu Wang
These scenarios indeed correspond to vulnerabilities of the driving policies under test, and are thus meaningful for their further improvement.
no code implementations • 14 Dec 2021 • Yang Chen, Yingwei Pan, Yu Wang, Ting Yao, Xinmei Tian, Tao Mei
From this point, we present a particular paradigm of self-supervised learning tailored for domain adaptation, i.e., Transferrable Contrastive Learning (TCL), which links the SSL and the desired cross-domain transferability congruently.
no code implementations • ICCV 2021 • Yang Chen, Yu Wang, Yingwei Pan, Ting Yao, Xinmei Tian, Tao Mei
Correspondingly, we also propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Ranked #37 on Domain Generalization on PACS
1 code implementation • NeurIPS 2021 • Yu Wang, Jingyang Lin, Jingjing Zou, Yingwei Pan, Ting Yao, Tao Mei
Our work reveals a structured shortcoming of the existing mainstream self-supervised learning methods.
1 code implementation • CVPR 2022 • Baisong Guo, Xiaoyun Zhang, Haoning Wu, Yu Wang, Ya Zhang, Yan-Feng Wang
Previous super-resolution (SR) approaches often formulate SR as a regression problem with pixel-wise restoration, which leads to blurry and unrealistic SR outputs.
2 code implementations • 2 Jan 2022 • Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn
Machine learning algorithms typically assume that training and test examples are drawn from the same distribution.
1 code implementation • 7 Feb 2022 • Liangwei Yang, Zhiwei Liu, Yu Wang, Chen Wang, Ziwei Fan, Philip S. Yu
We conduct a comprehensive analysis of users' online game behaviors, which motivates the necessity of handling those three characteristics in the online game recommendation.
no code implementations • 9 Feb 2022 • Renquan Zhang, Yu Wang, Zheng Lv, Sen Pei
We generate counterfactual simulations to estimate effectiveness of quarantine measures.
1 code implementation • 10 Feb 2022 • Benedek Rozemberczki, Charles Tapley Hoyt, Anna Gogleva, Piotr Grabowski, Klas Karis, Andrej Lamov, Andriy Nikolov, Sebastian Nilsson, Michael Ughetto, Yu Wang, Tyler Derr, Benjamin M Gyori
In this paper, we introduce ChemicalX, a PyTorch-based deep learning library designed to provide a range of state-of-the-art models for the drug pair scoring task.
no code implementations • 13 Feb 2022 • Yu Wang, Yarong Ji, Hongbing Xiao
Then the tensor was mapped to a matrix which was used to mix the one-hot encoded labels of the above image patches.
1 code implementation • 17 Mar 2022 • Kai Zhang, Yu Wang, Hongyi Wang, Lifu Huang, Carl Yang, Xun Chen, Lichao Sun
Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE.
no code implementations • CVPR 2022 • Tianchen Zhao, Niansong Zhang, Xuefei Ning, He Wang, Li Yi, Yu Wang
We propose CodedVTR (Codebook-based Voxel TRansformer), which improves data efficiency and generalization ability for 3D sparse voxel transformers.
1 code implementation • 19 Mar 2022 • Junwen Pan, Pengfei Zhu, Kaihua Zhang, Bing Cao, Yu Wang, Dingwen Zhang, Junwei Han, Qinghua Hu
Semantic segmentation with limited annotations, such as weakly supervised semantic segmentation (WSSS) and semi-supervised semantic segmentation (SSSS), is a challenging task that has attracted much attention recently.
Ranked #34 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
no code implementations • 6 Apr 2022 • Wenhan Cao, Jingliang Duan, Shengbo Eben Li, Chen Chen, Chang Liu, Yu Wang
Both the primal and dual estimators are learned from data using supervised learning techniques, and the explicit sample size is provided, which enables us to guarantee the quality of each learned estimator in terms of feasibility and optimality.
1 code implementation • 7 Apr 2022 • Zeyu Sun, Monica G. Bobra, Xiantong Wang, Yu Wang, Hu Sun, Tamas Gombosi, Yang Chen, Alfred Hero
We consider the flare prediction problem that distinguishes flare-imminent active regions that produce an M- or X-class flare in the next 24 hours, from quiet active regions that do not produce any flare within $\pm 24$ hours.
no code implementations • 21 Apr 2022 • Yu Wang, Shuo Ye, Shujian Yu, Xinge You
In this paper, we present a novel approach for FGVC, which can simultaneously make use of partial yet sufficient discriminative information in environmental cues and also compress the redundant information in the class token with respect to the target.
1 code implementation • 26 Apr 2022 • Yu Wang, Yu Dong, Xuesong Lu, Aoying Zhou
Current deep learning models for code summarization generally follow the principle in neural machine translation and adopt the encoder-decoder framework, where the encoder learns the semantic representations from source code and the decoder transforms the learnt representations into human-readable text that describes the functionality of code snippets.
no code implementations • 13 May 2022 • Chaoqin Huang, Qinwei Xu, Yanfeng Wang, Yu Wang, Ya Zhang
To extend the reconstruction-based anomaly detection architecture to the localized anomalies, we propose a self-supervised learning approach through random masking and then restoring, named Self-Supervised Masking (SSM) for unsupervised anomaly detection and localization.
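The mask-then-restore idea behind SSM can be sketched with a toy restoration model (everything below is illustrative; the paper uses a learned network, not an unmasked-mean filler): pixels that the model cannot restore well yield a high anomaly score.

```python
import random

def mean_restore(flat, idx):
    """Toy 'restoration model': fill masked pixels with the unmasked mean."""
    masked = set(idx)
    keep = [v for i, v in enumerate(flat) if i not in masked]
    mean = sum(keep) / len(keep)
    out = list(flat)
    for i in masked:
        out[i] = mean
    return out

def masked_restore_score(image, restore, mask_frac, rng):
    """Mask random pixels, restore them, and score the image by the mean
    restoration error on the masked positions."""
    flat = [p for row in image for p in row]
    idx = rng.sample(range(len(flat)), max(1, int(mask_frac * len(flat))))
    restored = restore(flat, idx)
    return sum(abs(restored[i] - flat[i]) for i in idx) / len(idx)

normal = [[0.5] * 4 for _ in range(4)]        # texture the model handles well
anomalous = [[float((i + j) % 2) for j in range(4)] for i in range(4)]
s_normal = masked_restore_score(normal, mean_restore, 0.25, random.Random(0))
s_anomalous = masked_restore_score(anomalous, mean_restore, 0.25, random.Random(0))
```

Repeating the scoring per pixel over many random masks yields a localization map rather than a single image-level score.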
no code implementations • 14 May 2022 • Wenhao Huang, Haifan Gong, Huan Zhang, Yu Wang, Haofeng Li, Guanbin Li, Hong Shen
CT-based bronchial tree analysis plays an important role in the computer-aided diagnosis for respiratory diseases, as it could provide structured information for clinicians.
no code implementations • 17 May 2022 • Yu Wang, Binbin Zhu, Lingsi Kong, Jianlin Wang, Bin Gao, Jianhua Wang, Dingcheng Tian, Yudong Yao
With the help of deep learning methods, the automatic identification or segmentation of nerves can be realized, assisting doctors in completing nerve block anesthesia accurately and efficiently.
no code implementations • 23 May 2022 • Yu Wang, Fang Liu
The current work on reinforcement learning (RL) from demonstrations often assumes the demonstrations are samples from an optimal policy, an unrealistic assumption in practice.
no code implementations • 31 May 2022 • Yu Wang, An Zhang, Xiang Wang, Yancheng Yuan, Xiangnan He, Tat-Seng Chua
This paper proposes Differentiable Invariant Causal Discovery (DICD), utilizing the multi-environment information based on a differentiable framework to avoid learning spurious edges and wrong causal directions.
1 code implementation • 7 Jun 2022 • Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
Motivated by our analysis, we propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features considering correlation variation after feature propagation.
1 code implementation • CVPR 2023 • Ying Ji, Yu Wang, Kensaku MORI, Jien Kato
Recent studies have achieved outstanding success in explaining 2D image recognition ConvNets.
no code implementations • 23 Jun 2022 • Dongqiangzi Ye, Weijia Chen, Zixiang Zhou, Yufei Xie, Yu Wang, Panqu Wang, Hassan Foroosh
This technical report presents the 1st place winning solution for the Waymo Open Dataset 3D semantic segmentation challenge 2022.
1 code implementation • 24 Jun 2022 • Yushun Dong, Song Wang, Yu Wang, Tyler Derr, Jundong Li
The low transparency on how the structure of the input network influences the bias in GNN outcome largely limits the safe adoption of GNNs in various decision-critical scenarios.
1 code implementation • 30 Jun 2022 • Huitong Chen, Yu Wang, Qinghua Hu
Re-balancing methods are used to alleviate the influence of data imbalance; however, we empirically discover that they would under-fit new classes.
1 code implementation • 3 Jul 2022 • Yu Wang, Yuying Zhao, Yi Zhang, Tyler Derr
Graph Neural Networks (GNNs) have been successfully adopted in recommender systems by virtue of the message-passing that implicitly captures collaborative effect.
no code implementations • 5 Jul 2022 • Hongzhi Huang, Yu Wang, Qinghua Hu, Ming-Ming Cheng
In this study, we propose a novel method, called Class-Specific Semantic Reconstruction (CSSR), that integrates the power of AE and prototype learning.
1 code implementation • 11 Jul 2022 • Ting Yao, Yehao Li, Yingwei Pan, Yu Wang, Xiao-Ping Zhang, Tao Mei
Dual-ViT is henceforth able to reduce the computational complexity without compromising much accuracy.
no code implementations • 11 Jul 2022 • Zihan Zhao, Yanfeng Wang, Yu Wang
The research and applications of multimodal emotion recognition have become increasingly popular recently.
2 code implementations • 14 Jul 2022 • Yu Wang, Guan Gui, Yun Lin, Hsiao-Chun Wu, Chau Yuen, Fumiyuki Adachi
Thus, we focus on few-shot SEI (FS-SEI) for aircraft identification via automatic dependent surveillance-broadcast (ADS-B) signals, and a novel FS-SEI method is proposed, based on deep metric ensemble learning (DMEL).
1 code implementation • 16 Jul 2022 • Zixuan Zhou, Xuefei Ning, Yi Cai, Jiashu Han, Yiping Deng, Yuhan Dong, Huazhong Yang, Yu Wang
Specifically, we train the supernet with a large sharing extent (an easier curriculum) at the beginning and gradually decrease the sharing extent of the supernet (a harder curriculum).
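The curriculum on sharing extent can be sketched as a simple annealing schedule (a hypothetical linear schedule for illustration; the paper's actual schedule may differ):

```python
def sharing_extent(step, total_steps, high, low):
    """Hypothetical linear curriculum: start the supernet with a large
    parameter-sharing extent (easy) and anneal toward a small one (hard)."""
    frac = min(1.0, step / max(1, total_steps - 1))
    return high - frac * (high - low)

extents = [sharing_extent(s, 5, 1.0, 0.25) for s in range(5)]
# The extent decreases monotonically from 1.0 down to 0.25.
```

At each training step the supernet would share parameters among sub-networks according to the current extent value.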
no code implementations • 7 Aug 2022 • Yifan Hu, Yu Wang
However, due to the inconsistent frequency of human activities, the amount of data for each activity in the human activity dataset is imbalanced.
1 code implementation • 9 Aug 2022 • Lantu Guo, Yu Wang, Yun Lin, Haitao Zhao, Guan Gui
Automatic modulation classification (AMC) is a key technique for designing non-cooperative communication systems, and deep learning (DL) is applied effectively to AMC for improving classification accuracy.
no code implementations • 15 Aug 2022 • Yu Wang, Wenbin Feng, Chongchong Yu, Xinyu Hu, Yuqiu Zhang
To address the low accuracy, limited computing power, poor parallelism, and excessive power consumption of RGBD-based 3D object detection models deployed on embedded devices, this paper first proposes an improved RGBD 3D object detection model built on the ENet semantic segmentation model, which takes ENet as the semantic segmentation network and fuses RGB images with depth information to realize 3D object detection. Secondly, to deploy the model at the edge, this paper constructs a lightweight network by pruning the ENet model in its down-sampling stage. Finally, this paper uses the Xilinx ZCU104 as the hardware development kit, with the FPGA as an auxiliary parallel computing unit and the ARM core as the main computing unit, forming a heterogeneous computing architecture capable of handling complex operations. The architecture accelerates the deep model in parallel on the FPGA, which improves inference speed and reduces power consumption. Test results of the model on the ZCU104 are compared with other hardware: while preserving accuracy, the heterogeneous computing architecture used in this paper consumes 93% less power than an Intel Xeon E5-2620 v4 CPU while running 12 times faster, and runs more than 180 times faster than the ARM Cortex-A53 commonly used at the edge.
1 code implementation • 27 Aug 2022 • Yu Wang, Hengrui Zhang, Zhiwei Liu, Liangwei Yang, Philip S. Yu
Then we propose Contrastive Variational AutoEncoder (ContrastVAE in short), a two-branched VAE model with contrastive regularization as an embodiment of ContrastELBO for sequential recommendation.
no code implementations • 29 Aug 2022 • Pengfei Zhu, Xinjie Yao, Yu Wang, Meng Cao, Binyuan Hui, Shuai Zhao, Qinghua Hu
Multi-view learning has progressed rapidly in recent years.
1 code implementation • 12 Sep 2022 • Zixiang Zhou, Xiangchen Zhao, Yu Wang, Panqu Wang, Hassan Foroosh
It then uses the feature of the center candidate as the query embedding in the transformer.
Ranked #2 on 3D Object Detection on waymo cyclist
no code implementations • 13 Sep 2022 • Yinan Yang, Yu Wang, Ying Ji, Heng Qi, Jien Kato
Recently, there is a growing belief that data is unnecessary in OPaI.
no code implementations • 18 Sep 2022 • Chi Zhang, Yu Wang, Linzhang Wang
The recent breakthroughs in deep learning methods have sparked a wave of interest in learning-based bug detectors.
no code implementations • 19 Sep 2022 • Dongqiangzi Ye, Zixiang Zhou, Weijia Chen, Yufei Xie, Yu Wang, Panqu Wang, Hassan Foroosh
LidarMultiNet is extensively tested on both Waymo Open Dataset and nuScenes dataset, demonstrating for the first time that major LiDAR perception tasks can be unified in a single strong network that is trained end-to-end and achieves state-of-the-art performance.