no code implementations • CCL 2020 • Yu Wang
"不v1不v2" is one of the typical double-negation structures in Chinese. It covers several patterns, including "不 + auxiliary verb + 不 + v2" (不得不去, "have no choice but to go"), "不 + 是 + 不v2" (不是不好, "not that it is not good"), and the predicate-object pattern "不v1 ... 不v2" (不认为他不去, "do not think he will not go"), making the structure complex. Taking "不v1不v2" as a case study, this paper draws on the notions of metalinguistic negation, verb factivity, and negation focus to conduct a comprehensive analysis of the structure and to formulate a strategy for identifying "不v1不v2" double negation. Based on this strategy, we designed an automatic double-negation recognizer, supplementing lexicons such as an auxiliary-verb list and a non-factive-verb list in the process. On a corpus of 28,033 sentences, the recognizer achieved a precision of 97.87% and a recall of about 93.10%.
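The identification strategy described above can be pictured with a toy pattern matcher. The auxiliary-verb list below is a small hypothetical stand-in for the paper's lexicons, not the actual resource:

```python
import re

# Toy recognizer for the "不 + auxiliary + 不 + V" double-negation pattern.
# AUXILIARIES is an illustrative stand-in for the paper's auxiliary-verb list.
AUXILIARIES = {"得", "能", "会", "可", "可以", "应该", "肯"}

def is_double_negation(phrase: str) -> bool:
    """Return True if the phrase matches a 不+aux+不+V pattern (e.g. 不得不去)."""
    m = re.match(r"不(.{1,2})不(.+)", phrase)
    if not m:
        return False
    # Only accept the pattern when the middle element is a known auxiliary.
    return m.group(1) in AUXILIARIES

print(is_double_negation("不得不去"))  # 不+得+不+去 -> True
print(is_double_negation("不高兴"))    # no second 不 -> False
print(is_double_negation("不是不好"))  # 是 is not an auxiliary; handled by a separate pattern -> False
```

A full recognizer would also need the "不+是+不v2" and predicate-object patterns, plus the non-factive-verb check from the paper.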
no code implementations • ICLR 2019 • Yu Wang, Fengjuan Gao, Amin Alipour, Linzhang Wang, Xuandong Li, Zhendong Su
Boolean satisfiability (SAT) is one of the most well-known NP-complete problems and has been extensively studied.
no code implementations • ICLR 2019 • Yu Wang, Jack W. Stokes, Mady Marinescu
Antimalware products are a key component in detecting malware attacks, and their engines typically execute unknown programs in a sandbox prior to running them on the native operating system.
no code implementations • EMNLP 2020 • Yu Wang, Yun Li, Hanghang Tong, Ziye Zhu
Specifically, we design (1) Head-Tail Detector based on the multi-head self-attention mechanism and bi-affine classifier to detect boundary tokens, and (2) Token Interaction Tagger based on traditional sequence labeling approaches to characterize the internal token connection within the boundary.
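The bi-affine boundary scoring mentioned above can be sketched as follows; the dimensions, random initialization, and exact scoring form are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

# Hedged sketch of a bi-affine span scorer in the spirit of the Head-Tail
# Detector: score[i, j] rates how likely tokens i..j bound an entity span.
rng = np.random.default_rng(0)
d = 8                          # token representation size (assumed)
n = 5                          # sequence length
H = rng.normal(size=(n, d))    # head (start) representations, one per token
T = rng.normal(size=(n, d))    # tail (end) representations, one per token
W = rng.normal(size=(d, d))    # bi-affine weight matrix
b = 0.1                        # scalar bias

# Bi-affine form: score[i, j] = H[i]^T W T[j] + b, computed for all pairs.
scores = H @ W @ T.T + b
print(scores.shape)  # (5, 5)
```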
1 code implementation • ICML 2020 • Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, Hsiao-Wuen Hon
We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM).
no code implementations • 17 May 2022 • Yu Wang, Binbin Zhu, Lingsi Kong, Jianlin Wang, Bin Gao, Jianhua Wang, Dingcheng Tian, Yu-Dong Yao
With the help of deep learning methods, the automatic identification or segmentation of nerves can be realized, assisting doctors in completing nerve block anesthesia accurately and efficiently.
no code implementations • 14 May 2022 • Wenhao Huang, Haifan Gong, Huan Zhang, Yu Wang, Haofeng Li, Guanbin Li, Hong Shen
CT-based bronchial tree analysis plays an important role in the computer-aided diagnosis for respiratory diseases, as it could provide structured information for clinicians.
no code implementations • 13 May 2022 • Chaoqin Huang, Qinwei Xu, Yanfeng Wang, Yu Wang, Ya Zhang
To extend the reconstruction-based anomaly detection architecture to the localized anomalies, we propose a self-supervised learning approach through random masking and then restoring, named Self-Supervised Masking (SSM) for unsupervised anomaly detection and localization.
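The mask-then-restore idea behind SSM can be illustrated with a toy sketch; the mean-fill "restorer" below is a deliberate stand-in for the learned restoration network:

```python
import numpy as np

# Toy illustration of mask-then-restore anomaly scoring: randomly zero out
# square patches, "restore" the image, and score anomalies by restoration
# error. A real model learns to restore; mean-fill is just a stand-in.
rng = np.random.default_rng(0)

def mask_patches(img, patch=4, n_patches=2):
    out = img.copy()
    h, w = img.shape
    for _ in range(n_patches):
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        out[y:y + patch, x:x + patch] = 0.0  # zero out a random patch
    return out

img = rng.random((16, 16))                     # stand-in "normal" image
masked = mask_patches(img)
restored = np.full_like(masked, img.mean())    # stand-in restoration model
anomaly_score = np.mean((restored - img) ** 2) # per-image reconstruction error
print(float(anomaly_score) >= 0.0)
```

In the actual method, a high restoration error on a region suggests a localized anomaly the model could not reconstruct.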
1 code implementation • 26 Apr 2022 • Yu Wang, Yu Dong, Xuesong Lu, Aoying Zhou
Current deep learning models for code summarization generally follow the principle in neural machine translation and adopt the encoder-decoder framework, where the encoder learns the semantic representations from source code and the decoder transforms the learnt representations into human-readable text that describes the functionality of code snippets.
no code implementations • 21 Apr 2022 • Yu Wang, Shuo Ye, Shujian Yu, Xinge You
In this paper, we present a novel approach for FGVC, which can simultaneously make use of partial yet sufficient discriminative information in environmental cues and also compress the redundant information in class-token with respect to the target.
1 code implementation • 7 Apr 2022 • Zeyu Sun, Monica G. Bobra, Xiantong Wang, Yu Wang, Hu Sun, Tamas Gombosi, Yang Chen, Alfred Hero
We consider the flare prediction problem that distinguishes flare-imminent active regions that produce an M- or X-class flare in the future 24 hours, from quiet active regions that do not produce any flare within $\pm 24$ hours.
no code implementations • 6 Apr 2022 • Wenhan Cao, Jingliang Duan, Shengbo Eben Li, Chen Chen, Chang Liu, Yu Wang
Both the primal and dual estimators are learned from data using supervised learning techniques, and the explicit sample size is provided, which enables us to guarantee the quality of each learned estimator in terms of feasibility and optimality.
1 code implementation • 19 Mar 2022 • Junwen Pan, Pengfei Zhu, Kaihua Zhang, Bing Cao, Yu Wang, Dingwen Zhang, Junwei Han, Qinghua Hu
Semantic segmentation with limited annotations, such as weakly supervised semantic segmentation (WSSS) and semi-supervised semantic segmentation (SSSS), is a challenging task that has attracted much attention recently.
Ranked #17 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
no code implementations • 18 Mar 2022 • Tianchen Zhao, Niansong Zhang, Xuefei Ning, He Wang, Li Yi, Yu Wang
We propose CodedVTR (Codebook-based Voxel TRansformer), which improves data efficiency and generalization ability for 3D sparse voxel transformers.
1 code implementation • 17 Mar 2022 • Kai Zhang, Yu Wang, Hongyi Wang, Lifu Huang, Carl Yang, Lichao Sun
Federated Learning (FL) on knowledge graphs (KGs) has yet to be as well studied as other domains, such as computer vision and natural language processing.
no code implementations • 13 Feb 2022 • Yu Wang, Yarong Ji, Hongbing Xiao
Then the tensor was mapped to a matrix which was used to mix the one-hot encoded labels of the above image patches.
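Mixing one-hot labels through a matrix, as described above, can be sketched with made-up mixing weights:

```python
import numpy as np

# Illustrative sketch of matrix-based label mixing: each row of the mixing
# matrix produces a soft label from the patches' one-hot labels. The weights
# here are invented for demonstration, not derived from the paper's tensor.
labels = np.eye(3)[[0, 2, 1]]           # one-hot labels for three patches
M = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])         # hypothetical patch-mixing matrix
mixed = M @ labels                       # each row is now a soft label
print(mixed.sum(axis=1))                 # rows still sum to 1
```

Because the mixing rows sum to one, the mixed labels remain valid probability distributions over classes.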
1 code implementation • 10 Feb 2022 • Benedek Rozemberczki, Charles Tapley Hoyt, Anna Gogleva, Piotr Grabowski, Klas Karis, Andrej Lamov, Andriy Nikolov, Sebastian Nilsson, Michael Ughetto, Yu Wang, Tyler Derr, Benjamin M Gyori
In this paper, we introduce ChemicalX, a PyTorch-based deep learning library designed to provide a range of state-of-the-art models for the drug pair scoring task.
no code implementations • 9 Feb 2022 • Renquan Zhang, Yu Wang, Zheng Lv, Sen Pei
We generate counterfactual simulations to estimate effectiveness of quarantine measures.
1 code implementation • 7 Feb 2022 • Liangwei Yang, Zhiwei Liu, Yu Wang, Chen Wang, Ziwei Fan, Philip S. Yu
We conduct a comprehensive analysis of users' online game behaviors, which motivates the necessity of handling these three characteristics in online game recommendation.
1 code implementation • 2 Jan 2022 • Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn
Machine learning algorithms typically assume that training and test examples are drawn from the same distribution.
1 code implementation • NeurIPS 2021 • Yu Wang, Jingyang Lin, Jingjing Zou, Yingwei Pan, Ting Yao, Tao Mei
Our work reveals a structured shortcoming of the existing mainstream self-supervised learning methods.
no code implementations • 14 Dec 2021 • Yang Chen, Yingwei Pan, Yu Wang, Ting Yao, Xinmei Tian, Tao Mei
From this point, we present a particular paradigm of self-supervised learning tailored for domain adaptation, i.e., Transferrable Contrastive Learning (TCL), which links the SSL and the desired cross-domain transferability congruently.
no code implementations • ICCV 2021 • Yang Chen, Yu Wang, Yingwei Pan, Ting Yao, Xinmei Tian, Tao Mei
Correspondingly, we also propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Ranked #15 on Domain Generalization on PACS
no code implementations • 12 Dec 2021 • Weilin Liu, Ye Mu, Chao Yu, Xuefei Ning, Zhong Cao, Yi Wu, Shuang Liang, Huazhong Yang, Yu Wang
These scenarios indeed correspond to the vulnerabilities of the under-test driving policies, thus are meaningful for their further improvements.
1 code implementation • 8 Dec 2021 • Yu Wang, Alfred Hero
In this work, we study the emergence of sparsity and multiway structures in second-order statistical characterizations of dynamical processes governed by partial differential equations (PDEs).
no code implementations • NeurIPS 2021 • Reid McIlroy-Young, Yu Wang, Siddhartha Sen, Jon Kleinberg, Ashton Anderson
The advent of machine learning models that surpass human decision-making ability in complex domains has initiated a movement towards building AI systems that interact with humans.
1 code implementation • 1 Dec 2021 • Yu Wang, Yuying Zhao, Neil Shah, Tyler Derr
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
no code implementations • 23 Nov 2021 • Pengfei Zhu, Hongtao Yu, Kaihua Zhang, Yu Wang, Shuai Zhao, Lei Wang, Tianzhu Zhang, Qinghua Hu
To address this issue, segmentation-based trackers have been proposed that employ per-pixel matching to improve the tracking performance of deformable objects effectively.
1 code implementation • 23 Nov 2021 • Tongya Zheng, Zunlei Feng, Yu Wang, Chengchao Shen, Mingli Song, Xingen Wang, Xinyu Wang, Chun Chen, Hao Xu
Our proposed Dynamic Preference Structure (DPS) framework consists of two stages: structure sampling and graph fusion.
no code implementations • 14 Nov 2021 • Yuzi Yan, Xiaoxiang Li, Xinyou Qiu, Jiantao Qiu, Jian Wang, Yu Wang, Yuan Shen
In this paper, we propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL).
1 code implementation • NeurIPS 2021 • Jiayu Chen, Yuanxin Zhang, Yuanfan Xu, Huimin Ma, Huazhong Yang, Jiaming Song, Yu Wang, Yi Wu
We motivate our paradigm through a variational perspective, where the learning objective can be decomposed into two terms: task learning on the current task distribution, and curriculum update to a new task distribution.
no code implementations • 28 Oct 2021 • Jiabo He, Wei Liu, Yu Wang, Xingjun Ma, Xian-Sheng Hua
Spinal degeneration plagues many elders, office workers, and even the younger generations.
1 code implementation • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability of each candidate task being sampled, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks.
1 code implementation • 22 Oct 2021 • Yu Wang, Charu Aggarwal, Tyler Derr
Recent years have witnessed the significant success of applying graph neural networks (GNNs) in learning effective node representations for classification.
no code implementations • 18 Oct 2021 • Hengrui Zhang, Zhongming Yu, Guohao Dai, Guyue Huang, Yufei Ding, Yuan Xie, Yu Wang
The same data are propagated through the graph structure to perform the same neural operation multiple times in GNNs, leading to redundant computation which accounts for 92.4% of total operators.
no code implementations • 12 Oct 2021 • Chao Yu, Xinyi Yang, Jiaxuan Gao, Huazhong Yang, Yu Wang, Yi Wu
In this paper, we extend the state-of-the-art single-agent visual navigation method, Active Neural SLAM (ANS), to the multi-agent setting by introducing a novel RL-based planning module, Multi-agent Spatial Planner (MSP). MSP leverages a transformer-based architecture, Spatial-TeamFormer, which effectively captures spatial relations and intra-agent interactions via hierarchical spatial self-attentions.
no code implementations • 29 Sep 2021 • Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, Peng Li
Although various flow models based on different transformations have been proposed, a quantitative analysis of the performance-cost trade-offs between different flows, as well as a systematic way of constructing the best flow architecture, is still lacking.
no code implementations • 29 Sep 2021 • Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap of BNNs to their 32-bit counterparts, often based on specialized model designs using additional 32-bit components.
1 code implementation • 22 Sep 2021 • Fan Zhang, Yu Wang, Hua Yang
For the context block, we propose a strip pooling module to capture the anisotropic and long-range contextual information that exists in abdominal scenes.
1 code implementation • 28 Aug 2021 • Yu Wang, Fang Liu, Daniele E. Schiavazzi
To reduce the computational cost without sacrificing inferential accuracy, we propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternatively updates the normalizing flow parameters and surrogate model parameters.
1 code implementation • 26 Aug 2021 • Yu Wang, Zhiwei Liu, Ziwei Fan, Lichao Sun, Philip S. Yu
In the information explosion era, recommender systems (RSs) are widely studied and applied to discover user-preferred information.
1 code implementation • 25 Aug 2021 • Yu Wang, Tyler Derr
Nevertheless, iterative propagation restricts the information of higher-layer neighborhoods to be transported through and fused with that of lower-layer neighborhoods, which unavoidably results in feature smoothing between neighborhoods in different layers and can thus compromise performance, especially on heterophily networks.
Ranked #10 on Node Classification on Cornell
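The feature-smoothing effect noted above has a simple numerical illustration: repeated neighborhood averaging drives all node features toward a common value. The 4-node graph and initial features below are arbitrary:

```python
import numpy as np

# Minimal demonstration of over-smoothing under iterative propagation:
# repeatedly averaging over neighborhoods collapses node features together.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy adjacency matrix
A_hat = A + np.eye(4)                        # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True) # row-normalized propagation

X = np.array([[1.0], [0.0], [0.0], [5.0]])   # initially distinct node features
for _ in range(50):
    X = P @ X                                # one propagation step per "layer"

spread = float(X.max() - X.min())
print(spread < 1e-6)  # features have smoothed to nearly the same value
```

This is exactly why deep stacks of plain propagation layers lose the distinctions between nodes that classification depends on.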
no code implementations • 5 Aug 2021 • Yu Wang, Jingyang Lin, Qi Cai, Yingwei Pan, Ting Yao, Hongyang Chao, Tao Mei
In this paper, we construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning, referred to as LORAC.
no code implementations • ACL 2021 • Jiaqi Guo, Ziliang Si, Yu Wang, Qian Liu, Ming Fan, Jian-Guang Lou, Zijiang Yang, Ting Liu
However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.
no code implementations • SEMEVAL 2021 • Jinquan Sun, Qi Zhang, Yu Wang, Lei Zhang
Due to the increasing concerns for data privacy, source-free unsupervised domain adaptation attracts more and more research attention, where only a trained source model is assumed to be available, while the labeled source data remain private.
1 code implementation • 28 Jul 2021 • Sindi Shkodrani, Yu Wang, Marco Manfredi, Nóra Baka
Attempts of learning from hierarchical taxonomies in computer vision have been mostly focusing on image classification.
no code implementations • 27 Jul 2021 • Yu Wang, Yuesong Shen, Daniel Cremers
To learn the direct influence among output nodes in a graph, we propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
no code implementations • 25 Jun 2021 • Yu Wang, Jinchao Li, Tristan Naumann, Chenyan Xiong, Hao Cheng, Robert Tinn, Cliff Wong, Naoto Usuyama, Richard Rogahn, Zhihong Shen, Yang Qin, Eric Horvitz, Paul N. Bennett, Jianfeng Gao, Hoifung Poon
A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months.
1 code implementation • 13 Jun 2021 • Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap of BNNs to their 32-bit counterparts.
no code implementations • 6 Jun 2021 • Yu Wang, Yilin Shen, Hongxia Jin
In this paper, we introduce a novel multi-step spoken language understanding system based on adversarial learning that can leverage multiple rounds of user feedback to update slot values.
no code implementations • 1 Jun 2021 • Yu Wang, Hongxia Jin
In this paper, we present a coarse-to-fine question answering (CFQA) system based on reinforcement learning which can efficiently process documents with different lengths by choosing appropriate actions.
1 code implementation • 26 May 2021 • Yu Wang, Alfred Hero
We propose a new graphical model inference procedure, called SG-PALM, for learning conditional dependency structure of high-dimensional tensor-variate data.
no code implementations • 20 May 2021 • Yu Wang, Xin Xin, Zaiqiao Meng, Xiangnan He, Joemon Jose, Fuli Feng
A noisy negative example that a user left uninteracted simply out of unawareness could also indicate potential positive preference.
no code implementations • 19 May 2021 • Yu Wang, Hejia Luo, Ying Chen, Jun Wang, Rong Li, Bin Wang
Next generation beyond 5G networks are expected to provide both Terabits per second data rate communication services and centimeter-level accuracy localization services in an efficient, seamless and cost-effective manner.
no code implementations • CVPR 2021 • Yu Wang, Rui Zhang, Shuo Zhang, Miao Li, Yangyang Xia, Xishan Zhang, Shaoli Liu
The directions of weights, and the gradients, can be divided into domain-specific and domain-invariant parts, and the goal of domain adaptation is to concentrate on the domain-invariant direction while eliminating the disturbance from domain-specific one.
1 code implementation • 8 May 2021 • Jiaming Wang, Zhenfeng Shao, Tao Lu, Xiao Huang, Ruiqian Zhang, Yu Wang
Despite their success, however, low/high spatial resolution pairs are usually difficult to obtain in satellites with a high temporal resolution, making such approaches in SR impractical to use.
1 code implementation • 2 May 2021 • Zhiwei Liu, Ziwei Fan, Yu Wang, Philip S. Yu
We firstly pre-train a transformer with sequences in a reverse direction to predict prior items.
no code implementations • 16 Apr 2021 • Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun
Membership inference attacks (MIAs) infer whether a specific data record is used for target model training.
1 code implementation • CVPR 2021 • Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, Dahua Lin
We then perform a systematic study on existing long-tailed recognition methods in conjunction with the adversarial training framework.
1 code implementation • 3 Apr 2021 • Yu Wang, Chee Siang Leow, Akio Kobayashi, Takehito Utsuro, Hiromitsu Nishizaki
This paper describes the ExKaldi-RT online automatic speech recognition (ASR) toolkit that is implemented based on the Kaldi ASR toolkit and Python language.
no code implementations • 29 Mar 2021 • Kalpa Gunaratna, Yu Wang, Hongxia Jin
Then we learn entity embeddings through this new type of triples.
no code implementations • AAAI Workshop AdvML 2022 • Yi Cai, Xuefei Ning, Huazhong Yang, Yu Wang
It provides high scalability because the paths within an EIO network exponentially increase with the network depth.
no code implementations • 26 Mar 2021 • Alper Kamil Bozkurt, Yu Wang, Miroslav Pajic
We study the problem of learning safe control policies that are also effective; i.e., maximizing the probability of satisfying a linear temporal logic (LTL) specification of a task, and the discounted reward capturing the (classic) control performance.
no code implementations • 24 Mar 2021 • Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Li, Yiran Chen
In this work, we propose FedCor -- an FL framework built on a correlation-based client selection strategy, to boost the convergence rate of FL.
1 code implementation • CVPR 2021 • Jialian Wu, Jiale Cao, Liangchen Song, Yu Wang, Ming Yang, Junsong Yuan
Most online multi-object trackers perform object detection stand-alone in a neural net without any input from tracking.
Ranked #1 on Online Multi-Object Tracking on MOT16
2 code implementations • 13 Mar 2021 • Shaowei Chen, Yu Wang, Jie Liu, Yuelin Wang
Aspect sentiment triplet extraction (ASTE), which aims to identify aspects from review sentences along with their corresponding opinion expressions and sentiments, is an emerging task in fine-grained opinion mining.
no code implementations • 10 Mar 2021 • Amir Khazraei, Spencer Hallyburton, Qitong Gao, Yu Wang, Miroslav Pajic
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems (CPS).
2 code implementations • ICLR 2021 • Zhenggang Tang, Chao Yu, Boyuan Chen, Huazhe Xu, Xiaolong Wang, Fei Fang, Simon Du, Yu Wang, Yi Wu
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games.
6 code implementations • 2 Mar 2021 • Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, Yi Wu
Proximal Policy Optimization (PPO) is a popular on-policy reinforcement learning algorithm but is significantly less utilized than off-policy learning algorithms in multi-agent settings.
1 code implementation • 1 Mar 2021 • Yukuo Cen, Zhenyu Hou, Yan Wang, Qibin Chen, Yizhen Luo, Zhongming Yu, Hengrui Zhang, Xingcheng Yao, Aohan Zeng, Shiguang Guo, Yuxiao Dong, Yang Yang, Peng Zhang, Guohao Dai, Yu Wang, Chang Zhou, Hongxia Yang, Jie Tang
Deep learning on graphs has attracted tremendous attention from the graph learning community in recent years.
no code implementations • 23 Feb 2021 • Merle Behr, Yu Wang, Xiao Li, Bin Yu
Iterative Random Forests (iRF) use a tree ensemble from iteratively modified RF to obtain predictive and stable non-linear or Boolean interactions of features.
Statistics Theory
no code implementations • 16 Feb 2021 • Ke Huang, Yu Wang, Xiao Li
Recently a class of quantum systems exhibiting weak ergodicity breaking has attracted much attention.
Disordered Systems and Neural Networks; Statistical Mechanics
no code implementations • 9 Feb 2021 • Bing Zhang, Yu Wang, Liang Li
The jet composition and radiative efficiency of GRBs are poorly constrained from the data.
High Energy Astrophysical Phenomena
no code implementations • 9 Feb 2021 • Yu Wang, Ke Wang, Linzhang Wang
The last decade has witnessed a rapid advance in machine learning models.
no code implementations • 8 Feb 2021 • Alper Kamil Bozkurt, Yu Wang, Michael M. Zavlanos, Miroslav Pajic
By deriving distinct rewards and discount factors from the acceptance condition of the DPA, we reduce the maximization of the worst-case probability of satisfying the LTL specification into the maximization of a discounted reward objective in the product game; this allows for the use of model-free RL algorithms to learn an optimal controller strategy.
no code implementations • 2 Feb 2021 • Guodong Yin, Yi Cai, Juejian Wu, Zhengyang Duan, Zhenhua Zhu, Yongpan Liu, Yu Wang, Huazhong Yang, Xueqing Li
Compute-in-memory (CiM) is a promising approach to alleviating the memory wall problem for domain-specific applications.
Emerging Technologies
1 code implementation • 10 Jan 2021 • Guyue Huang, Jingbo Hu, Yifan He, Jialong Liu, Mingyuan Ma, Zhaoyang Shen, Juejian Wu, Yuanfan Xu, Hengrui Zhang, Kai Zhong, Xuefei Ning, Yuzhe Ma, Haoyu Yang, Bei Yu, Huazhong Yang, Yu Wang
With the down-scaling of CMOS technology, the design complexity of very large-scale integrated (VLSI) is increasing.
no code implementations • 1 Jan 2021 • Kai Zhong, Xuefei Ning, Tianchen Zhao, Zhenhua Zhu, Shulin Zeng, Guohao Dai, Yu Wang, Huazhong Yang
Through this dynamic precision framework, we can reduce the bit-width of convolution, which accounts for most of the computational cost, while keeping the training process close to full-precision floating-point training.
no code implementations • 1 Jan 2021 • Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, Yi Wu
We benchmark commonly used multi-agent deep reinforcement learning (MARL) algorithms on a variety of cooperative multi-agent games.
1 code implementation • 22 Dec 2020 • Xuefei Ning, Junbo Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
In this paper, considering scenarios with capacity budget, we aim to discover adversarially robust architecture at targeted capacities.
no code implementations • 3 Dec 2020 • Yu Wang, Ziqiao Guan, Wei Hou, Fusheng Wang
The early detection of CKD faces challenges of insufficient medical histories of positive patients and complicated risk factors.
1 code implementation • NeurIPS 2020 • Hong Liu, Mingsheng Long, Jianmin Wang, Yu Wang
(2) Since the target data arrive online, the agent should also maintain competence on previous target domains, i.e., to adapt without forgetting.
1 code implementation • 25 Nov 2020 • Xuefei Ning, Changcheng Tang, Wenshuo Li, Songyi Yang, Tianchen Zhao, Niansong Zhang, Tianyi Lu, Shuang Liang, Huazhong Yang, Yu Wang
Neural Architecture Search (NAS) has received extensive attention due to its capability to discover neural network architectures in an automated manner.
no code implementations • 21 Nov 2020 • Tianchen Zhao, Xuefei Ning, Xiangsheng Shi, Songyi Yang, Shuang Liang, Peng Lei, Jianfei Chen, Huazhong Yang, Yu Wang
We also design the micro-level search space to strengthen the information flow for BNN.
no code implementations • 18 Nov 2020 • Feng Gao, Jincheng Yu, Hao Shen, Yu Wang, Huazhong Yang
Learning depth and ego-motion from unlabeled videos via self-supervision from epipolar projection can improve the robustness and accuracy of the 3D perception and localization of vision-based robots.
no code implementations • 9 Nov 2020 • Yu Wang, Shu Jiang, Weiman Lin, Yu Cao, Longtao Lin, Jiangtao Hu, Jinghao Miao, Qi Luo
This paper presents the design of a tune-free (human-out-of-the-loop parameter tuning) control framework, aiming at accelerating large scale autonomous driving system deployed on various vehicles and driving environments.
1 code implementation • 22 Oct 2020 • Tao Lu, Yuanzhi Wang, Yanduo Zhang, Yu Wang, Wei Liu, Zhongyuan Wang, Junjun Jiang
However, most of them fail to take into account the overall facial profile and fine texture details simultaneously, resulting in reduced naturalness and fidelity of the reconstructed face, and further impairing the performance of downstream tasks (e.g., face detection, facial recognition).
no code implementations • Findings of the Association for Computational Linguistics 2020 • Shayne Longpre, Yu Wang, Christopher DuBois
Task-agnostic forms of data augmentation have proven widely effective in computer vision, even on pretrained models.
1 code implementation • NeurIPS 2020 • Qi Cai, Yu Wang, Yingwei Pan, Ting Yao, Tao Mei
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
no code implementations • 28 Sep 2020 • Xuefei Ning, Wenshuo Li, Zixuan Zhou, Tianchen Zhao, Shuang Liang, Yin Zheng, Huazhong Yang, Yu Wang
A major challenge in NAS is to conduct a fast and accurate evaluation of neural architectures.
no code implementations • 13 Sep 2020 • Zishen Wan, Bo Yu, Thomas Yuang Li, Jie Tang, Yuhao Zhu, Yu Wang, Arijit Raychowdhury, Shaoshan Liu
On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios.
no code implementations • 11 Sep 2020 • Mark Cartwright, Jason Cramer, Ana Elisa Mendez Mendez, Yu Wang, Ho-Hsiang Wu, Vincent Lostanlen, Magdalena Fuentes, Graham Dove, Charlie Mydlarz, Justin Salamon, Oded Nov, Juan Pablo Bello
In this article, we describe our data collection procedure and propose evaluation metrics for multilabel classification of urban sound tags.
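One plausible family of evaluation metrics for multilabel sound tagging is micro-averaged precision/recall/F1; the sketch below is illustrative and not necessarily the exact metrics the article proposes:

```python
# Hedged sketch of micro-averaged F1 for multilabel tagging: pool true
# positives, false positives, and false negatives across all clips before
# computing precision and recall.
def micro_f1(y_true, y_pred):
    """y_true/y_pred: lists of tag sets, one set per audio clip."""
    tp = sum(len(t & p) for t, p in zip(y_true, y_pred))
    fp = sum(len(p - t) for t, p in zip(y_true, y_pred))
    fn = sum(len(t - p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical ground-truth and predicted urban sound tags for two clips.
truth = [{"siren", "dog"}, {"jackhammer"}]
preds = [{"siren"}, {"jackhammer", "music"}]
print(round(micro_f1(truth, preds), 3))  # 2/3 ≈ 0.667
```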
1 code implementation • BMVC 2020 • Dichao Liu, Yu Wang, Jien Kato, Kenji Mase
The evaluation information is backpropagated and forces the classification stream to improve its awareness of visual attention, which helps classification.
Ranked #17 on Fine-Grained Image Classification on Stanford Cars
no code implementations • 20 Aug 2020 • Yu Wang, Guangbing Zhou, Chenlu Xiang, Shunqing Zhang, Shugong Xu
The existing localization systems for indoor applications basically rely on wireless signals.
no code implementations • 13 Aug 2020 • Marco Manfredi, Yu Wang
Robustness to small image translations is a highly desirable property for object detectors.
1 code implementation • NeurIPS 2021 • Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, Yu Wang
Conducting efficient performance estimations of neural architectures is a major challenge in neural architecture search (NAS).
no code implementations • 3 Aug 2020 • Yu Wang, Mojtaba Zarei, Borzoo Bonakdarpoor, Miroslav Pajic
In system analysis, conformance indicates that two systems simultaneously satisfy the same set of specifications of interest; thus, the results from analyzing one system automatically transfer to the other, or one system can safely replace the other in practice.
no code implementations • 31 Jul 2020 • Tong Wu, Xuefei Ning, Wenshuo Li, Ranran Huang, Huazhong Yang, Yu Wang
In this paper, we tackle the issue of physical adversarial examples for object detectors in the wild.
1 code implementation • ECCV 2020 • Tong Wu, Qingqiu Huang, Ziwei Liu, Yu Wang, Dahua Lin
We present a new loss function called Distribution-Balanced Loss for the multi-label recognition problems that exhibit long-tailed class distributions.
1 code implementation • 13 Jul 2020 • Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, Debsindhu Bhowmik, Burkhard Rost
Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, Albert, Electra, T5) on data from UniRef and BFD containing up to 393 billion amino acids.
Ranked #1 on Protein Secondary Structure Prediction on CB513
no code implementations • 13 Jul 2020 • Yucan Zhou, Yu Wang, Jianfei Cai, Yu Zhou, Qinghua Hu, Weiping Wang
Some works in the optimization of deep neural networks have shown that a better arrangement of training data can make the classifier converge faster and perform better.
2 code implementations • 7 Jul 2020 • Guyue Huang, Guohao Dai, Yu Wang, Huazhong Yang
GE-SpMM performs an SpMM-like operation on sparse matrices represented in the most common Compressed Sparse Row (CSR) format, so it can be embedded in GNN frameworks with no preprocessing overheads and support general GNN algorithms.
Distributed, Parallel, and Cluster Computing
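A minimal sketch of an SpMM-like operation over the CSR layout that GE-SpMM consumes; this naive CPU loop only illustrates the data format, not the GPU kernel's optimizations:

```python
import numpy as np

# CSR representation of A = [[1, 0, 2],
#                            [0, 3, 0]]
data    = np.array([1.0, 2.0, 3.0])  # nonzero values
indices = np.array([0, 2, 1])        # column index of each nonzero
indptr  = np.array([0, 2, 3])        # row i's nonzeros: data[indptr[i]:indptr[i+1]]

B = np.arange(6, dtype=float).reshape(3, 2)  # dense feature matrix

# SpMM: C = A @ B, reading A directly from the CSR arrays.
C = np.zeros((2, 2))
for i in range(2):                            # for each row of A
    for k in range(indptr[i], indptr[i + 1]):
        C[i] += data[k] * B[indices[k]]       # accumulate A[i, j] * B[j, :]
print(C)  # [[ 8. 11.], [ 6.  9.]]
```

In a GNN, A would be the (normalized) adjacency matrix and B the node feature matrix, which is why an efficient CSR SpMM kernel maps directly onto the aggregation step.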
no code implementations • 1 Jul 2020 • Yi-Peng Zhang, Hanjia Lyu, Yubao Liu, Xiyang Zhang, Yu Wang, Jiebo Luo
The COVID-19 pandemic has severely affected people's daily lives and caused tremendous economic loss worldwide.
1 code implementation • ACL 2020 • Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, Ziming Chi
The opinion entity extraction unit and the relation detection unit are developed as two channels to extract opinion entities and relations simultaneously.
no code implementations • 28 Jun 2020 • Sijia Chen, Yu Wang, Li Huang, Runzhou Ge, Yihan Hu, Zhuangzhuang Ding, Jie Liao
A practical autonomous driving system urgently needs to reliably and accurately detect vehicles and persons.
no code implementations • 28 Jun 2020 • Zhuangzhuang Ding, Yihan Hu, Runzhou Ge, Li Huang, Sijia Chen, Yu Wang, Jie Liao
We propose AFDet, a one-stage, anchor-free and NMS-free 3D point cloud object detector that uses object keypoints to encode 3D attributes and learns end-to-end point cloud object detection without hand-engineered or learned anchors.
no code implementations • 28 Jun 2020 • Yu Wang, Sijia Chen, Li Huang, Runzhou Ge, Yihan Hu, Zhuangzhuang Ding, Jie Liao
This technical report presents the online and real-time 2D and 3D multi-object tracking (MOT) algorithms that achieved 1st place on both the Waymo Open Dataset 2D tracking and 3D tracking challenges.
4 code implementations • 23 Jun 2020 • Runzhou Ge, Zhuangzhuang Ding, Yihan Hu, Yu Wang, Sijia Chen, Li Huang, Yuan Li
High-efficiency point cloud 3D object detection operated on embedded systems is important for many robotics applications including autonomous driving.
no code implementations • 11 Jun 2020 • Yu Wang, Qitong Gao, Miroslav Pajic
Monotone systems, originating from real-world (e.g., biological or chemical) applications, are a class of dynamical systems that preserves a partial order of system states over time.
1 code implementation • CVPR 2020 • Qi Cai, Yingwei Pan, Yu Wang, Jingen Liu, Ting Yao, Tao Mei
To this end, we devise a general loss function to cover most region-based object detectors with various sampling strategies, and then based on it we propose a unified sample weighting network to predict a sample's task weights.
no code implementations • 4 Jun 2020 • Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang
For training a variety of models on CIFAR-10, using a 1-bit mantissa and a 2-bit exponent is adequate to keep the accuracy loss within $1\%$.
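To get a rough feel for what so few bits can represent, the sketch below rounds values to a toy floating-point format with a 1-bit mantissa and a 2-bit exponent. It is only illustrative; the paper's actual training-time number format, exponent bias, and rounding rules are not reproduced here.

```python
import math

def quantize_float(x, mantissa_bits=1, exponent_bits=2):
    """Round x to a toy float format with the given mantissa/exponent
    widths (sign kept separately). Illustrative only; not the paper's
    exact quantization scheme."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    # Representable exponent range for `exponent_bits` (centered on 0).
    e_max = 2 ** (exponent_bits - 1) - 1
    e_min = -(2 ** (exponent_bits - 1))
    e = max(e_min, min(e_max, math.floor(math.log2(x))))
    # Quantize the mantissa to `mantissa_bits` fractional bits.
    step = 2.0 ** (-mantissa_bits)
    m = round((x / 2.0 ** e) / step) * step
    return sign * m * 2.0 ** e

for v in (0.3, 0.75, 1.2):
    print(v, "->", quantize_float(v))   # 0.3 -> 0.25, 0.75 -> 0.75, 1.2 -> 1.0
```

The coarse grid makes the quantization error visible immediately, which is why keeping accuracy loss within 1% at these widths is a nontrivial result.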
no code implementations • 27 May 2020 • Yu Wang, Junpeng Bao, JianQiang Du, Yongfeng Li
Compared with the existing AKI predictors, the predictor in this work greatly improves the precision of early prediction of AKI by using the Convolutional Neural Network architecture and a more concise input vector.
no code implementations • 22 May 2020 • Kechen Qin, Yu Wang, Cheng Li, Kalpa Gunaratna, Hongxia Jin, Virgil Pavlu, Javed A. Aslam
Multi-hop knowledge based question answering (KBQA) is a complex task for natural language understanding.
no code implementations • 18 May 2020 • Yu Wang, Fengjuan Gao, Linzhang Wang, Ke Wang
We have also created a neural bug detector based on GINN to catch null pointer dereference bugs in Java code.
1 code implementation • 16 May 2020 • Nick Altieri, Rebecca L. Barter, James Duncan, Raaz Dwivedi, Karl Kumbier, Xiao Li, Robert Netzorg, Briton Park, Chandan Singh, Yan Shuo Tan, Tiffany Tang, Yu Wang, Chao Zhang, Bin Yu
We use this data to develop predictions and corresponding prediction intervals for the short-term trajectory of COVID-19 cumulative death counts at the county-level in the United States up to two weeks ahead.
1 code implementation • 12 May 2020 • Yu Wang, Rong Ge, Shuang Qiu
Unlike existing work in deep neural network (DNN) graphs optimization for inference performance, we explore DNN graph optimization for energy awareness and savings for power- and resource-constrained machine learning devices.
no code implementations • 2 May 2020 • Yu Wang, Yuelin Wang, Jie Liu, Zhuo Liu
More importantly, we discuss four kinds of basic approaches, including statistical machine translation based approach, neural machine translation based approach, classification based approach and language model based approach, six commonly applied performance boosting techniques for GEC systems and two data augmentation methods.
no code implementations • 21 Apr 2020 • Long Chen, Hanjia Lyu, Tongyu Yang, Yu Wang, Jiebo Luo
To model the substantive difference of tweets with controversial terms and those with non-controversial terms, we apply topic modeling and LIWC-based sentiment analysis.
no code implementations • 21 Apr 2020 • Viet Duong, Phu Pham, Tongyu Yang, Yu Wang, Jiebo Luo
Recently, the pandemic of the novel Coronavirus Disease-2019 (COVID-19) has presented governments with ultimate challenges.
2 code implementations • 20 Apr 2020 • Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao
In natural language processing (NLP), pre-training large neural language models such as BERT have demonstrated impressive gain in generalization for a variety of tasks, with further improvement from adversarial fine-tuning.
Ranked #3 on Natural Language Inference on ANLI test (using extra training data)
no code implementations • 9 Apr 2020 • Lei Zhang, Cunhua Pan, Yu Wang, Hong Ren, Kezhi Wang
Simulation results verify the efficiency of the proposed algorithms and reveal the impacts of CSI uncertainties on ST's minimum transmit power and feasibility rate of the optimization problems.
1 code implementation • ECCV 2020 • Xuefei Ning, Tianchen Zhao, Wenshuo Li, Peng Lei, Yu Wang, Huazhong Yang
In budgeted pruning, how to distribute the resources across layers (i.e., sparsity allocation) is the key problem.
1 code implementation • ECCV 2020 • Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, Huazhong Yang
Experimental results on various search spaces confirm GATES's effectiveness in improving the performance predictor.
no code implementations • 1 Apr 2020 • Yu Wang, Hussein Sibai, Sayan Mitra, Geir E. Dullerud
Using two examples, the sequential probability ratio test and sequential empirical risk minimization, we show that the number of steps such algorithms execute before termination can jeopardize the differential privacy of the input data in a similar fashion as their outputs, and that it is impossible to use the usual Laplace mechanism to achieve standard differential privacy in these examples.
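For context, the "usual Laplace mechanism" referred to above perturbs only an algorithm's output with noise scaled to its sensitivity. A minimal sketch (the sensitivity and epsilon values are illustrative) makes it easy to see that nothing in the mechanism hides how long the computation ran:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Laplace(sensitivity/epsilon) noise.
    Note this protects only the released number: the number of steps an
    algorithm ran before producing it is left entirely un-noised."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                      # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

rng = random.Random(0)
released = [laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
            for _ in range(2000)]
print(sum(released) / len(released))            # close to 42 on average
```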
no code implementations • 1 Apr 2020 • Yu Wang, Nima Roohi, Matthew West, Mahesh Viswanathan, Geir E. Dullerud
Probabilistic Computation Tree Logic (PCTL) is frequently used to formally specify control objectives such as probabilistic reachability and safety.
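As a concrete instance of the probabilistic reachability such PCTL formulas express, the probability of eventually reaching a target state in a finite Markov chain can be computed by simple fixed-point iteration. The 3-state chain below is a made-up example, not one from the paper:

```python
def reach_probability(P, goal, iters=200):
    """Probability of eventually reaching `goal` from each state of a
    finite Markov chain with row-stochastic transition matrix P,
    by iterating x <- P x with x[goal] pinned to 1."""
    n = len(P)
    x = [1.0 if s == goal else 0.0 for s in range(n)]
    for _ in range(iters):
        x = [1.0 if s == goal else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# State 0 branches to the goal (1) or a trap (2), both absorbing.
P = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(reach_probability(P, goal=1))   # [0.5, 1.0, 0.0]
```

A PCTL property like "reach the goal with probability at least 0.4" then reduces to comparing the computed vector against the threshold.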
no code implementations • 26 Mar 2020 • Shulin Zeng, Guohao Dai, Hanbo Sun, Kai Zhong, Guangjun Ge, Kaiyuan Guo, Yu Wang, Huazhong Yang
Currently, the majority of FPGA-based DNN accelerators in the cloud run in a time-division multiplexing way for multiple users sharing a single FPGA, and require re-compilation with $\sim$100 s overhead.
no code implementations • 21 Mar 2020 • Yang Feng, Yu Wang, Jiebo Luo
In this paper, we introduce a novel gating mechanism to deep neural networks.
no code implementations • 20 Mar 2020 • Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang
By inspecting the discovered architectures, we find that the operation primitives, the weight quantization range, the capacity of the model, and the connection pattern have influences on the fault resilience capability of NN models.
2 code implementations • 28 Feb 2020 • Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM).
Ranked #3 on Question Generation on SQuAD1.1 (using extra training data)
2 code implementations • ACL 2020 • Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, Jianfeng Gao
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
no code implementations • 9 Feb 2020 • Yu Wang, Yining Sun, Zuchang Ma, Lisheng Gao, Yang Xu, Ting Sun
Then, we apply these pre-trained models to an NER task by fine-tuning, and compare the effects of the different model architectures and pre-training tasks on the NER task.
1 code implementation • 1 Feb 2020 • Yu Wang, Byoungwook Jang, Alfred Hero
We apply the SyGlasso to an electroencephalography (EEG) study to compare the brain connectivity of alcoholic and nonalcoholic subjects.
1 code implementation • 19 Jan 2020 • Yi Wang, Yang Yang, Weiguo Zhu, Yi Wu, Xu Yan, Yongfeng Liu, Yu Wang, Liang Xie, Ziyao Gao, Wenjing Zhu, Xiang Chen, Wei Yan, Mingjie Tang, Yuan Tang
Previous database systems extended their SQL dialect to support ML.
no code implementations • 8 Nov 2019 • Xiaoming Chen, Yinhe Han, Yu Wang
Evaluations based on 65nm technology demonstrate that the proposed architecture nearly reaches the theoretical minimum communication in a three-level memory hierarchy and that it is computation dominant.
Distributed, Parallel, and Cluster Computing • Hardware Architecture
no code implementations • 2 Oct 2019 • Hao Zhou, Jorge Laval, Anye Zhou, Yu Wang, Wenchao Wu, Zhu Qing, Srinivas Peeta
Some suggestions towards congestion mitigation for future mMP studies are proposed: i) enrich data collection to facilitate the congestion learning, ii) incorporate non-imitation learning methods to combine traffic efficiency into a safety-oriented technical route, and iii) integrate domain knowledge from the traditional car following (CF) theory to improve the string stability of mMP.
no code implementations • 30 Sep 2019 • Lin-Lin Wang, Yu Wang, Mark J. F. Gales
These systems are explored for non-native spoken English data in this paper.
1 code implementation • 16 Sep 2019 • Alper Kamil Bozkurt, Yu Wang, Michael M. Zavlanos, Miroslav Pajic
We present a reinforcement learning (RL) framework to synthesize a control policy from a given linear temporal logic (LTL) specification in an unknown stochastic environment that can be modeled as a Markov Decision Process (MDP).
no code implementations • IJCNLP 2019 • Yu Wang
In this paper, we present a fast and reliable method based on PCA to select the number of dimensions for word embeddings.
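A common PCA-based heuristic for this kind of choice is to keep the smallest number of components that reach a target explained-variance ratio. The sketch below uses synthetic data and is a generic illustration, not necessarily the exact criterion the paper derives:

```python
import numpy as np

def select_dim(embeddings, variance_target=0.95):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `variance_target`."""
    X = embeddings - embeddings.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)     # singular values, descending
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(ratio, variance_target) + 1)

rng = np.random.default_rng(0)
# Hypothetical 300-d "embeddings" whose signal lives in a 10-d subspace.
signal = rng.normal(size=(1000, 10)) @ rng.normal(size=(10, 300))
noisy = signal + 0.01 * rng.normal(size=(1000, 300))
print(select_dim(noisy, 0.95))                 # at most 10: signal dominates
```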
no code implementations • 19 Jul 2019 • Shuqiang Lu, Lingyun Ying, Wenjie Lin, Yu Wang, Meining Nie, Kaiwen Shen, Lu Liu, Haixin Duan
With the development of artificial intelligence algorithms like deep learning models and their successful application in many different fields, similar trials of deep learning technology have been made in the cyber security area.
1 code implementation • 12 Jul 2019 • Yu Wang, Fengjuan Gao, Linzhang Wang, Ke Wang
In a cross-project prediction task, three neural bug detectors we instantiate from NeurSA are effective in catching null pointer dereference, array index out of bound and class cast bugs in unseen code.
3 code implementations • NeurIPS 2019 • Xiao Li, Yu Wang, Sumanta Basu, Karl Kumbier, Bin Yu
Based on the original definition of MDI by Breiman et al. for a single tree, we derive a tight non-asymptotic bound on the expected bias of MDI importance of noisy features, showing that deep trees have higher (expected) feature selection bias than shallow ones.
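The depth effect behind this bound is easy to reproduce with scikit-learn's impurity-based (MDI) importances: with noisy labels driven by a single real feature, fully grown trees hand noticeably more importance to pure-noise features than shallow ones do. A small sketch, assuming scikit-learn is installed (the data and hyperparameters are illustrative, not from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))
# Labels depend (noisily) on feature 0 only; features 1-10 are pure noise.
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

def noise_mdi(max_depth):
    """Total MDI importance assigned to the 10 noise features."""
    rf = RandomForestClassifier(n_estimators=100, max_depth=max_depth,
                                max_features=None, random_state=0).fit(X, y)
    return rf.feature_importances_[1:].sum()

shallow, deep = noise_mdi(2), noise_mdi(None)
print(shallow, deep)   # deep > shallow: deeper trees inflate noise MDI
```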
1 code implementation • 24 Jun 2019 • Yu Wang, Quan Zhou, Xiaofu Wu
The whole network has a nearly symmetric architecture, which is mainly composed of a series of factorized convolution units (FCUs) and their parallel counterparts (PFCUs).
Ranked #25 on Real-Time Semantic Segmentation on Cityscapes test
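The parameter savings behind factorized convolutions are straightforward to count: a k×k kernel becomes a k×1 followed by a 1×k. A quick check with hypothetical channel counts (the paper's exact layer shapes are not reproduced here):

```python
def conv_params(c_in, c_out, kh, kw):
    """Parameter count of a kh x kw convolution layer, bias ignored."""
    return c_in * c_out * kh * kw

standard = conv_params(128, 128, 3, 3)                        # one 3x3 conv
factorized = conv_params(128, 128, 3, 1) + conv_params(128, 128, 1, 3)
print(standard, factorized)   # 147456 98304: a third fewer parameters
```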
1 code implementation • 7 Jun 2019 • Xudong Sun, Alexej Gossmann, Yu Wang, Bernd Bischl
A novel variational inference based resampling framework is proposed to evaluate the robustness and generalization capability of deep learning models with respect to distribution shift.
3 code implementations • 14 May 2019 • Weitian Li, Haiguang Xu, Zhixian Ma, Dan Hu, Zhenghao Zhu, Chenxi Shan, Jingying Wang, Junhua Gu, Dongchao Zheng, Xiaoli Lian, Qian Zheng, Yu Wang, Jie Zhu, Xiang-Ping Wu
The overwhelming foreground contamination is one of the primary impediments to probing the EoR through measuring the redshifted 21 cm signal.
Cosmology and Nongalactic Astrophysics
8 code implementations • NeurIPS 2019 • Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
Ranked #2 on Generative Question Answering on CoQA (using extra training data)
6 code implementations • 7 May 2019 • Yu Wang, Quan Zhou, Jia Liu, Jian Xiong, Guangwei Gao, Xiaofu Wu, Longin Jan Latecki
LEDNet: A Lightweight Encoder-Decoder Network for Real-time Semantic Segmentation
Ranked #26 on Real-Time Semantic Segmentation on Cityscapes test
no code implementations • CVPR 2019 • Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, Chong-Wah Ngo, Tao Mei
Specifically, we present Transferrable Prototypical Networks (TPN) for adaptation such that the prototypes for each class in source and target domains are close in the embedding space and the score distributions predicted by prototypes separately on source and target data are similar.
1 code implementation • 25 Apr 2019 • Han Xu, Junning Li, Liqiang Liu, Yu Wang, Haidong Yuan, Xin Wang
Measurement and estimation of parameters are essential for science and engineering, where one of the main quests is to find systematic schemes that can achieve high precision.
Quantum Physics • Mesoscale and Nanoscale Physics
no code implementations • 22 Feb 2019 • Yu Wang, Siqi Wu, Bin Yu
First, we obtain a necessary and sufficient norm condition for the reference dictionary $D^*$ to be a sharp local minimum of the expected $\ell_1$ objective function.
no code implementations • 20 Feb 2019 • Yu Xing, Shuang Liang, Lingzhi Sui, Xijie Jia, Jiantao Qiu, Xin Liu, Yushun Wang, Yu Wang, Yi Shan
On the Xilinx ZU2 @330 MHz and ZU9 @330 MHz, we achieve equivalently state-of-the-art performance on our benchmarks by naïve implementations without optimizations, and the throughput is further improved up to 1.26x by leveraging heterogeneous optimizations in DNNVM.
no code implementations • 26 Dec 2018 • Yu Wang, Abhishek Patel, Hongxia Jin
In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed.
1 code implementation • NAACL 2018 • Yu Wang, Yilin Shen, Hongxia Jin
The most effective algorithms are based on the structures of sequence to sequence models (or "encoder-decoder" models), and generate the intents and semantic tags either using separate models or a joint model.
Ranked #1 on Slot Filling on ATIS
no code implementations • 30 Oct 2018 • Anton Ragni, Qiujia Li, Mark Gales, Yu Wang
These errors are not accounted for by the standard confidence estimation schemes and are hard to rectify in the upstream and downstream processing.
1 code implementation • EMNLP 2018 • Bailin Wang, Wei Lu, Yu Wang, Hongxia Jin
It is common that entity mentions can contain other mentions recursively.
Ranked #6 on Nested Named Entity Recognition on NNE
no code implementations • 18 Sep 2018 • Yilin Shen, Xiangyu Zeng, Yu Wang, Hongxia Jin
The results show that our approach leverages such simple user information to outperform state-of-the-art approaches by 0.25% for intent detection and 0.31% for slot filling using standard training data.
no code implementations • COLING 2018 • Yu Wang, Abhishek Patel, Hongxia Jin
In this paper, a new deep reinforcement learning based augmented general tagging system is proposed.
2 code implementations • 29 Jul 2018 • Xiujun Li, Yu Wang, Siqi Sun, Sarah Panda, Jingjing Liu, Jianfeng Gao
This proposal introduces a Dialogue Challenge for building end-to-end task-completion dialogue systems, with the goal of encouraging the dialogue research community to collaborate and benchmark on standard datasets and unified experimental environment.
no code implementations • 18 Jun 2018 • Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
Moreover, we also propose HiTM-VAE, where the document-specific topic distributions are generated in a hierarchical manner.
no code implementations • 15 May 2018 • Xinghao Ding, Zhirui Lin, Fujin He, Yu Wang, Yue Huang
The estimation of crowd count in images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning.
no code implementations • 14 May 2018 • Wenshuo Li, Jincheng Yu, Xuefei Ning, Pengjun Wang, Qi Wei, Yu Wang, Huazhong Yang
So, in this paper, we propose a hardware-software collaborative attack framework to inject hidden neural network Trojans, which works as a back-door without requiring manipulation of input images and is flexible for different scenarios.
no code implementations • 6 May 2018 • David Güera, Yu Wang, Luca Bondi, Paolo Bestagini, Stefano Tubaro, Edward J. Delp
We examine in this paper the problem of identifying the camera model or type that was used to take an image and that can be spoofed.
no code implementations • 24 Apr 2018 • Yuantian Miao, Zichan Ruan, Lei Pan, Yu Wang, Jun Zhang, Yang Xiang
Network traffic analytics technology is a cornerstone for cyber security systems.
Cryptography and Security
no code implementations • 8 Feb 2018 • Minhui Zou, Yang Shi, Chengliang Wang, Fangyu Li, WenZhan Song, Yu Wang
With the popularity of deep learning (DL), artificial intelligence (AI) has been applied in many areas of human life.
no code implementations • 1 Feb 2018 • Yu Wang, Xie Chen, Mark Gales, Anton Ragni, Jeremy Wong
As the combination approaches become more complicated the difference between the phonetic and graphemic systems further decreases.
no code implementations • ICLR 2018 • Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
On the other hand, different with the other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which has rich representation capacity and can be computed in a simple feed-forward manner.
no code implementations • 24 Dec 2017 • Kaiyuan Guo, Shulin Zeng, Jincheng Yu, Yu Wang, Huazhong Yang
Various FPGA based accelerator designs have been proposed with software and hardware optimization techniques to achieve high speed and energy efficiency.
Hardware Architecture
1 code implementation • The International Conference on Learning Representations 2017 • Yujun Lin, Song Han, Huizi Mao, Yu Wang, W. Dally
Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure.
2 code implementations • ICLR 2018 • Yujun Lin, Song Han, Huizi Mao, Yu Wang, William J. Dally
The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections.
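The core mechanism in this line of work is gradient sparsification: each worker transmits only its largest-magnitude gradient entries and accumulates the remainder locally for later rounds. The sketch below shows that top-k step alone; DGC's momentum correction, gradient clipping, and warm-up are omitted:

```python
import numpy as np

class TopKCompressor:
    """Keep only the k largest-magnitude entries of each gradient;
    the rest is stored as a local residual and added back next round."""
    def __init__(self, dim, k):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad):
        acc = self.residual + grad                 # fold in old residual
        idx = np.argsort(np.abs(acc))[-self.k:]    # top-k coordinates
        self.residual = acc.copy()
        self.residual[idx] = 0.0                   # transmitted part leaves
        return idx, acc[idx]                       # sparse message to send

comp = TopKCompressor(dim=8, k=2)
grad = np.array([0.1, -3.0, 0.2, 0.05, 2.0, 0.0, -0.1, 0.4])
idx, vals = comp.compress(grad)
print(sorted(idx.tolist()))   # [1, 4]: the two largest-magnitude entries
```

Because untransmitted coordinates accumulate in the residual, every gradient entry is eventually sent once it grows large enough, which is what keeps the sparsified training convergent.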