1 code implementation • ACL 2022 • Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, Chao Zhang
Weakly-supervised learning (WSL) has shown promising results in addressing label scarcity on many NLP tasks, but manually designing a comprehensive, high-quality labeling rule set is tedious and difficult.
1 code implementation • NAACL 2022 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
We develop AcTune, a new framework that improves the label efficiency of active PLM fine-tuning by unleashing the power of unlabeled data via self-training.
no code implementations • ICLR 2019 • Yaohua Tang, Kaixiang Mo, Qian Xu, Chao Zhang, Qiang Yang
When building models for novel natural language domains, a major challenge is the lack of data in the new domains, no matter whether the data is annotated or not.
1 code implementation • 10 Jan 2023 • Ran Xu, Yue Yu, Hejie Cui, Xuan Kan, Yanqiao Zhu, Joyce Ho, Chao Zhang, Carl Yang
Our further analysis demonstrates that our proposed data selection strategy reduces the noise of pseudo labels by 36.8% and saves 57.3% of the time compared with the best baseline.
no code implementations • 12 Dec 2022 • Junke Wang, Zhenxin Li, Chao Zhang, Jingjing Chen, Zuxuan Wu, Larry S. Davis, Yu-Gang Jiang
Online media data, in the forms of images and videos, are becoming mainstream communication channels.
no code implementations • 2 Dec 2022 • Hao Wang, Lixue Liu, Xueguan Song, Chao Zhang, DaCheng Tao
In tunnel boring machine (TBM) underground projects, an accurate description of the rock-soil types distributed along the tunnel can decrease construction risk (e.g., surface settlement and landslides) and improve construction efficiency.
no code implementations • 26 Nov 2022 • Yuhui Li, Zejia Wu, Chao Zhang, Hongyang Zhang
We study the problem of out-of-distribution (o.o.d.)
no code implementations • 25 Nov 2022 • Lingkai Kong, Jiaming Cui, Yuchen Zhuang, Rui Feng, B. Aditya Prakash, Chao Zhang
Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters.
no code implementations • 16 Nov 2022 • Chao Zhang, Siqi Han, Milin Zhang
The electroencephalogram (EEG) signal easily becomes incomplete due to packet loss, electrode detachment, etc.
no code implementations • 10 Nov 2022 • Chao Zhang, Hang Zou, Samson Lasaulce, Walid Saad, Marios Kountouris, Mehdi Bennis
Internet of Things (IoT) devices will play an important role in emerging applications, since their sensing, actuation, processing, and wireless communication capabilities stimulate data collection, transmission and decision processes of smart applications.
1 code implementation • 9 Nov 2022 • Wen Wu, Chao Zhang, Philip C. Woodland
Automatic emotion recognition in conversation (ERC) is crucial for emotion-aware conversational artificial intelligence.
no code implementations • 1 Nov 2022 • Shaan Bijwadia, Shuo-Yiin Chang, Bo Li, Tara Sainath, Chao Zhang, Yanzhang He
In this work, we propose a method to jointly train the ASR and EP tasks in a single end-to-end (E2E) multitask model, improving EP quality by optionally leveraging information from the ASR audio encoder.
1 code implementation • 1 Nov 2022 • Yue Yu, Xuan Kan, Hejie Cui, Ran Xu, Yujia Zheng, Xiangchen Song, Yanqiao Zhu, Kun Zhang, Razieh Nabi, Ying Guo, Chao Zhang, Carl Yang
To better adapt GNNs for fMRI analysis, we propose TBDS, an end-to-end framework based on Task-aware Brain connectivity DAG (short for Directed Acyclic Graph) Structure generation.
no code implementations • 31 Oct 2022 • Jianjian Qin, Chunzhi Gu, Jun Yu, Chao Zhang
Moreover, our method only requires very few normal samples to train the student network due to the teacher-student distillation mechanism.
no code implementations • 29 Oct 2022 • Guangzhi Sun, Chao Zhang, Philip C. Woodland
Specifically, a tree-constrained pointer generator (TCPGen), a powerful and efficient biasing model component, is studied, which leverages a slot shortlist with corresponding entities to extract biasing lists.
1 code implementation • 28 Oct 2022 • Zihan Zhang, Jinfeng Li, Ning Shi, Bo Yuan, Xiangyu Liu, Rong Zhang, Hui Xue, Donghong Sun, Chao Zhang
Despite their superb performance on a wide range of tasks, pre-trained language models (e.g., BERT) have been proven vulnerable to adversarial texts.
1 code implementation • 27 Oct 2022 • Yue Yu, Chenyan Xiong, Si Sun, Chao Zhang, Arnold Overwijk
We present a new zero-shot dense retrieval (ZeroDR) method, COCO-DR, to improve the generalization ability of dense retrieval by combating the distribution shifts between source training tasks and target scenarios.
Ranked #1 on Zero-shot Text Search on TREC-COVID
no code implementations • 26 Oct 2022 • Yanbo Xu, Alind Khare, Glenn Matlin, Monish Ramadoss, Rishikesan Kamaleswaran, Chao Zhang, Alexey Tumanov
It achieves accuracy within 0.1% of the highest-performing multi-class baseline, while saving close to 20x on the spatio-temporal cost of inference and predicting disease onset 3.5 hours earlier.
1 code implementation • 26 Oct 2022 • Yuchen Zhuang, Yinghao Li, Jerry Junyang Cheung, Yue Yu, Yingjun Mou, Xiang Chen, Le Song, Chao Zhang
We study the problem of extracting N-ary relation tuples from scientific articles.
no code implementations • 26 Oct 2022 • Wei Wang, Chao Zhang, Xiaopei Wu
In this paper, we use limited code-switching data as driving material and explore a shortcut to quickly add intra-sentential code-switching recognition capability to a commissioned native-language acoustic model. We propose a data-driven method to build the seed lexicon used to train a grapheme-to-phoneme model that predicts pronunciations for foreign-language words in code-switching sentences.
no code implementations • 9 Oct 2022 • Yukun Zheng, Jiang Bian, Guanghao Meng, Chao Zhang, Honggang Wang, Zhixuan Zhang, Sen Li, Tao Zhuang, Qingwen Liu, Xiaoyi Zeng
These problems prompt us to further strengthen the capabilities of our EBR model in both relevance estimation and personalized retrieval.
no code implementations • 3 Oct 2022 • Weicong Liang, Yuhui Yuan, Henghui Ding, Xiao Luo, WeiHong Lin, Ding Jia, Zheng Zhang, Chao Zhang, Han Hu
Vision transformers have recently achieved competitive results across various vision tasks but still suffer from heavy computation costs when processing a large number of tokens.
no code implementations • 30 Sep 2022 • Hang Zou, Chao Zhang, Samson Lasaulce, Lucas Saludjian, Vincent Poor
The task is modeled by the minimization problem of a general goal function $f(x;g)$ for which the decision $x$ has to be taken from a quantized version of the parameters $g$.
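In symbols (notation assumed from the abstract, with $Q(\cdot)$ denoting the quantizer and $\mathcal{X}$ the decision set), the goal-oriented decision problem described above can be written as:

```latex
x^{\star}(g) \in \operatorname*{arg\,min}_{x \in \mathcal{X}} \, f\big(x;\, Q(g)\big)
```

that is, the quantizer is evaluated through the goal function it induces rather than through a reconstruction criterion.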
no code implementations • 27 Sep 2022 • Pranav Shetty, Arunkumar Chitteth Rajan, Christopher Kuenneth, Sonkakshi Gupta, Lakshmi Prerana Panchumarti, Lauren Holm, Chao Zhang, Rampi Ramprasad
The ever-increasing number of materials science articles makes it hard to infer chemistry-structure-property relations from published literature.
1 code implementation • 15 Sep 2022 • Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang
We propose PATRON, a new method that uses prompt-based uncertainty estimation for data selection for pre-trained language model fine-tuning under cold-start scenarios, i.e., no initial labeled data are available.
no code implementations • 15 Sep 2022 • Simiao Zuo, Qingyu Yin, Haoming Jiang, Shaohui Xi, Bing Yin, Chao Zhang, Tuo Zhao
The model subsequently calculates session representations by combining the contextual information with the instant search query using an aggregation network.
no code implementations • 13 Sep 2022 • Chao Zhang, Bo Li, Tara Sainath, Trevor Strohman, Sepand Mavandadi, Shuo-Yiin Chang, Parisa Haghani
Language identification is critical for many downstream tasks in automatic speech recognition (ASR), and is beneficial to integrate into multilingual end-to-end ASR as an additional task.
no code implementations • 3 Sep 2022 • Chao Zhang, Zijian Tang, Taoming Guo, Jiaxin Lei, Jiaxin Xiao, Anhe Wang, Shuo Bai, Milin Zhang
This paper proposes SaleNet - an end-to-end convolutional neural network (CNN) for sustained attention level evaluation using prefrontal electroencephalogram (EEG).
no code implementations • 29 Aug 2022 • Shuo-Yiin Chang, Bo Li, Tara N. Sainath, Chao Zhang, Trevor Strohman, Qiao Liang, Yanzhang He
This makes doing speech recognition with conversational speech, including one with multiple queries, a challenging task.
1 code implementation • 7 Aug 2022 • Mengyang Liu, Haozheng Luo, Leonard Thong, Yinghao Li, Chao Zhang, Le Song
Compared to frequently used text annotation tools, our annotation tool allows for the development of weak labels in addition to providing a manual annotation experience.
1 code implementation • 26 Jul 2022 • Ding Jia, Yuhui Yuan, Haodi He, Xiaopei Wu, Haojun Yu, WeiHong Lin, Lei Sun, Chao Zhang, Han Hu
This end-to-end signature is important for the versatility of DETR, and it has been generalized to a wide range of visual problems, including instance/semantic segmentation, human pose estimation, and point cloud/multi-view-images based detection, etc.
1 code implementation • 16 Jul 2022 • Zizheng Huang, Chao Zhang, Huaxiong Li, Bo Wang, Chunlin Chen
It has been identified that the temperature $\tau$ of the CL loss plays an essential role in automatically concentrating on hard negative samples.
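To make the role of the temperature concrete, here is a minimal NumPy sketch (not the paper's code; function and variable names are illustrative) of an InfoNCE-style contrastive loss, in which a smaller $\tau$ sharpens the softmax so that the hardest negatives dominate the gradient:

```python
import numpy as np

def info_nce_loss(z_anchor, z_positive, z_negatives, tau=0.1):
    """InfoNCE-style contrastive loss for a single anchor.

    A lower tau sharpens the softmax over similarities, so the
    negatives most similar to the anchor (hard negatives) receive
    most of the gradient weight.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive similarity first, then all negative similarities.
    logits = np.array([cos(z_anchor, z_positive)] +
                      [cos(z_anchor, n) for n in z_negatives]) / tau
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    # Cross-entropy with the positive at index 0; always >= 0.
    return -np.log(probs[0])

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
pos = anchor + 0.1 * rng.normal(size=8)   # a perturbed "view" of the anchor
negs = [rng.normal(size=8) for _ in range(16)]
print(info_nce_loss(anchor, pos, negs, tau=0.1))
print(info_nce_loss(anchor, pos, negs, tau=1.0))
```

Sweeping `tau` in a sketch like this shows how the loss landscape changes concentration across negatives.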
no code implementations • 8 Jul 2022 • Xianrui Zheng, Chao Zhang, Philip C. Woodland
Self-supervised-learning-based pre-trained models for speech data, such as Wav2Vec 2.0 (W2V2), have become the backbone of many speech tasks.
no code implementations • 4 Jul 2022 • Chunzhi Gu, Jun Yu, Chao Zhang
Specifically, the inductive bias imposed by the extra CVAE path encourages two latent variables in two paths to respectively govern separate representations for each partial-body motion.
no code implementations • 2 Jul 2022 • Guangzhi Sun, Chao Zhang, Philip C. Woodland
Incorporating biasing words obtained as contextual knowledge is critical for many automatic speech recognition (ASR) applications.
no code implementations • 28 Jun 2022 • Rongzhi Zhang, Rebecca West, Xiquan Cui, Chao Zhang
We develop AMRule, a multi-view rule discovery framework that can (1) adaptively and iteratively discover novel rules that complement the current weakly-supervised model to improve compatibility prediction; and (2) discover interpretable rules from both structured attribute tables and unstructured product descriptions.
no code implementations • 20 Jun 2022 • Guile Wu, Chao Zhang, Stephan Liwicki
In global consistent quantization, we employ contrastive learning for both embedding and quantized representations and fuse these representations for consistent contrastive regularization between instances.
1 code implementation • 16 Jun 2022 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We close both these gaps and propose PROFHIT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
1 code implementation • 27 May 2022 • Yinghao Li, Le Song, Chao Zhang
Weakly supervised named entity recognition methods train label models to aggregate the token annotations of multiple noisy labeling functions (LFs) without seeing any manually annotated labels.
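As a point of reference for what learned label models improve upon, the simplest way to aggregate labeling-function outputs is per-token majority voting. The sketch below is a hypothetical baseline for illustration, not the paper's method:

```python
from collections import Counter

ABSTAIN = None  # labeling functions may abstain on a token

def majority_vote(lf_votes):
    """Aggregate per-token votes from multiple labeling functions (LFs).

    lf_votes: list of per-LF label sequences of equal length,
    with None marking an abstention. Returns one label per token,
    or ABSTAIN when no LF fired on that token.
    """
    aggregated = []
    for token_votes in zip(*lf_votes):
        votes = [v for v in token_votes if v is not None]
        aggregated.append(Counter(votes).most_common(1)[0][0] if votes else ABSTAIN)
    return aggregated

# Three noisy LFs labeling a 4-token sentence (BIO tags are illustrative)
lf1 = ["O", "B-PER", "I-PER", "O"]
lf2 = ["O", "B-PER", None,    "O"]
lf3 = ["B-ORG", None, "I-PER", "O"]
print(majority_vote([lf1, lf2, lf3]))
# ['O', 'B-PER', 'I-PER', 'O']
```

Unlike this baseline, the label models described in the abstract learn LF reliabilities and token dependencies instead of weighting every vote equally.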
no code implementations • 18 May 2022 • Guangzhi Sun, Chao Zhang, Philip C Woodland
MBWE and BLMD further improved the effectiveness of TCPGen and achieved more significant WER reductions on the biasing words.
no code implementations • 18 May 2022 • Wensheng Li, Chao Zhang, Chuncheng Wang, Hanting Guan, DaCheng Tao
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs), and have been widely used in a variety of PDE problems.
no code implementations • NAACL 2022 • Rui Feng, Chen Luo, Qingyu Yin, Bing Yin, Tuo Zhao, Chao Zhang
User sessions empower many search and recommendation tasks on a daily basis.
1 code implementation • 30 Mar 2022 • Zhaoyang Huang, Xiaoyu Shi, Chao Zhang, Qiang Wang, Ka Chun Cheung, Hongwei Qin, Jifeng Dai, Hongsheng Li
We introduce optical Flow transFormer, dubbed as FlowFormer, a transformer-based neural network architecture for learning optical flow.
Ranked #1 on Optical Flow Estimation on Sintel-final
no code implementations • CVPR 2022 • Yingjie Cai, Kwan-Yee Lin, Chao Zhang, Qiang Wang, Xiaogang Wang, Hongsheng Li
Specifically, we map a series of related partial point clouds into multiple complete shape and occlusion code pairs and fuse the codes to obtain their representations in the unified latent space.
1 code implementation • 18 Mar 2022 • Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, Chao Zhang
Weakly-supervised learning (WSL) has shown promising results in addressing label scarcity on many NLP tasks, but manually designing a comprehensive, high-quality labeling rule set is tedious and difficult.
no code implementations • 18 Mar 2022 • Jun Quan, Ze Wei, Qiang Gan, Jingqi Yao, Jingyi Lu, Yuchen Dong, Yiming Liu, Yi Zeng, Chao Zhang, Yongzhi Li, Huang Hu, Yingying He, Yang Yang, Daxin Jiang
The conversational recommender systems (CRSs) have received extensive attention in recent years.
1 code implementation • CVPR 2022 • Xingbo Dong, Wanyan Xu, Zhihui Miao, Lan Ma, Chao Zhang, Jiewen Yang, Zhe Jin, Andrew Beng Jin Teoh, Jiajun Shen
Next, a fully convolutional network is proposed to achieve the low-light image enhancement by fusing colored raw data with synthesized monochrome raw data.
no code implementations • 8 Mar 2022 • Wen Wu, Chao Zhang, Xixin Wu, Philip C. Woodland
In this paper, a novel Bayesian training loss based on per-utterance Dirichlet prior distributions is proposed for verbal emotion recognition. It models the uncertainty in one-hot labels that arises when human annotators assign the same utterance to different emotion classes.
no code implementations • 7 Mar 2022 • Qi Zhu, Chao Zhang, Chanyoung Park, Carl Yang, Jiawei Han
Then a shift-robust classifier is optimized on training graph and adversarial samples on target graph, which are generated by cluster GNN.
no code implementations • 3 Mar 2022 • Rama Cont, Mihai Cucuringu, Renyuan Xu, Chao Zhang
The estimation of loss distributions for dynamic portfolios requires the simulation of scenarios representing realistic joint dynamics of their components, with particular importance devoted to the simulation of tail risk scenarios.
1 code implementation • 14 Feb 2022 • Jingwei Yi, Fangzhao Wu, Bin Zhu, Yang Yu, Chao Zhang, Guangzhong Sun, Xing Xie
Our study reveals a critical security issue in existing federated news recommendation systems and calls for research efforts to address the issue.
1 code implementation • 11 Feb 2022 • Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, Alexander Ratner
Labeling training data has become one of the major roadblocks to using machine learning.
no code implementations • 8 Feb 2022 • Chao Zhang, Yihuang Zhang, Mihai Cucuringu, Zhongmin Qian
We apply machine learning models to forecast intraday realized volatility (RV), by exploiting commonality in intraday volatility via pooling stock data together, and by incorporating a proxy for the market volatility.
no code implementations • 6 Feb 2022 • Weijie Liu, Chao Zhang, Nenggan Zheng, Hui Qian
In this paper, we propose a novel criterion to measure the graph matching accuracy, structural inconsistency (SI), which is defined based on the network topological structure.
no code implementations • 25 Jan 2022 • Chao Zhang, Bo Li, Zhiyun Lu, Tara N. Sainath, Shuo-Yiin Chang
The recurrent neural network transducer (RNN-T) has recently become the mainstream end-to-end approach for streaming automatic speech recognition (ASR).
1 code implementation • 4 Jan 2022 • Fangcheng Liu, Chao Zhang, Hongyang Zhang
Extensive experiments verify the effectiveness of our framework on balancing imperceptibility and transferability of the crafted adversarial examples.
no code implementations • CVPR 2022 • Jiewen Yang, Xingbo Dong, Liujun Liu, Chao Zhang, Jiajun Shen, Dahai Yu
Besides, the proposed RViT can properly handle both fixed-length and variable-length video clips without requiring large GPU memory, thanks to its frame-by-frame processing flow.
no code implementations • 25 Dec 2021 • Rama Cont, Mihai Cucuringu, Chao Zhang
Second, we examine the notion of cross-impact and show that, once the information from multiple levels is included in OFI, multi-asset models with cross-impact do not provide additional explanatory power for contemporaneous impact compared to a sparse model without cross-impact terms.
no code implementations • 17 Dec 2021 • Yiyuan She, Jiahui Shen, Chao Zhang
In this paper, new information-theoretical limits are presented to reveal the intrinsic cost of seeking for clusters, as well as the blessing from dimensionality in multivariate learning.
1 code implementation • 16 Dec 2021 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
We propose AcTune, a new framework that leverages unlabeled data to improve the label efficiency of active PLM fine-tuning.
no code implementations • 2 Dec 2021 • Chao Zhang, Zhijian Li, Hui Qian, Xin Du
We develop a general Dynamic-weight Particle-based Variational Inference (DPVI) framework according to a novel continuous composite flow, which evolves the positions and weights of particles simultaneously.
2 code implementations • NeurIPS 2021 • Yuhui Yuan, Rao Fu, Lang Huang, WeiHong Lin, Chao Zhang, Xilin Chen, Jingdong Wang
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations and has high memory and computational cost.
no code implementations • 24 Nov 2021 • Katsuya Hotta, Takuya Akashi, Shogo Tokai, Chao Zhang
Subspace clustering methods, which embrace a self-expressive model representing each data point as a linear combination of the other data points in the dataset, are powerful unsupervised learning techniques.
no code implementations • 17 Nov 2021 • Chao Zhang, Zihao Zhang, Mihai Cucuringu, Stefan Zohren
The designed framework circumvents the traditional forecasting step and avoids estimating the covariance matrix, removing the bottleneck to generalizing to a large number of instruments.
no code implementations • 12 Nov 2021 • Xiaoye Qian, Chao Zhang, Jaswanth Yella, Yu Huang, Ming-Chun Huang, Sthitie Bom
To understand how the proposed model works, the deep visualization approach is applied.
no code implementations • 12 Nov 2021 • Chunzhi Gu, Shuofeng Zhao, Chao Zhang
In this paper, we present a deep generative model based method to generate diverse human motion interpolation results.
no code implementations • 12 Nov 2021 • Jaswanth Yella, Chao Zhang, Sergei Petrov, Yu Huang, Xiaoye Qian, Ali A. Minai, Sthitie Bom
Over the last few decades, modern industrial processes have investigated several cost-effective methodologies to improve the productivity and yield of semiconductor manufacturing.
no code implementations • 12 Nov 2021 • Yu Huang, Chao Zhang, Jaswanth Yella, Sergei Petrov, Xiaoye Qian, Yufei Tang, Xingquan Zhu, Sthitie Bom
In the era of big data, data-driven based classification has become an essential method in smart manufacturing to guide production and optimize inspection.
no code implementations • 12 Nov 2021 • Weijie Liu, Chao Zhang, Nenggan Zheng, Hui Qian
Optimal transport (OT) naturally arises in a wide range of machine learning applications but may often become the computational bottleneck.
1 code implementation • 10 Nov 2021 • Chao Zhang, Jaswanth Yella, Yu Huang, Xiaoye Qian, Sergei Petrov, Andrey Rzhetsky, Sthitie Bom
We demonstrate the challenges and effectiveness of modeling industrial big data by a Soft Sensing Transformer model on these data sets.
1 code implementation • 2 Nov 2021 • Wenyu Zhu, Zhiyao Feng, Zihan Zhang, Jianjun Chen, Zhijian Ou, Min Yang, Chao Zhang
Recovering binary programs' call graphs is crucial for inter-procedural analysis tasks and applications based on them. One of the core challenges is recognizing the targets of indirect calls (i.e., indirect callees).
no code implementations • 28 Oct 2021 • Chao Zhang, Hanxin Zhang, Atif Khan, Ted Kim, Olasubomi Omoleye, Oluwamayomikun Abiona, Amy Lehman, Christopher O. Olopade, Olufunmilayo I. Olopade, Pedro Lopes, Andrey Rzhetsky
Importance: Lower-resource areas in Africa and Asia face a unique set of healthcare challenges: the dual high burden of communicable and non-communicable diseases; a paucity of highly trained primary healthcare providers in both rural and densely populated urban areas; and a lack of reliable, inexpensive internet connections.
1 code implementation • 18 Oct 2021 • Yuhui Yuan, Rao Fu, Lang Huang, WeiHong Lin, Chao Zhang, Xilin Chen, Jingdong Wang
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations and has high memory and computational cost.
Ranked #1 on Pose Estimation on AIC
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
no code implementations • 8 Oct 2021 • Zhiyun Lu, Yanwei Pan, Thibault Doutre, Parisa Haghani, Liangliang Cao, Rohit Prabhavalkar, Chao Zhang, Trevor Strohman
Our experiments show that for both losses, the WER on long-form speech reduces substantially as the training utterance length increases.
1 code implementation • 15 Sep 2021 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We use CAMul for multiple domains with varied sources and modalities and show that CAMul outperforms other state-of-the-art probabilistic forecasting models by over 25% in accuracy and calibration.
no code implementations • Findings (NAACL) 2022 • Simiao Zuo, Yue Yu, Chen Liang, Haoming Jiang, Siawpeng Er, Chao Zhang, Tuo Zhao, Hongyuan Zha
In self-training, the student contributes to the prediction performance, and the teacher controls the training process by generating pseudo-labels.
no code implementations • 13 Sep 2021 • Chunzhi Gu, Yan Zhao, Chao Zhang
Human motion prediction, which plays a key role in computer vision, generally requires a past motion sequence as input.
no code implementations • Findings (EMNLP) 2021 • Yaqing Wang, Haoda Chu, Chao Zhang, Jing Gao
In this work, we study the problem of named entity recognition (NER) in a low resource scenario, focusing on few-shot and zero-shot settings.
no code implementations • 8 Sep 2021 • Yunqi Shao, Florian M. Dietrich, Carl Nettelblad, Chao Zhang
Here we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the Extended Kalman Filter algorithm (EKF), using the Behler-Parrinello neural network (BPNN) and two publicly accessible datasets of liquid water [Proc.
no code implementations • 7 Sep 2021 • Sergei Petrov, Chao Zhang, Jaswanth Yella, Yu Huang, Xiaoye Qian, Sthitie Bom
The scope of this challenge is to tackle the task of classifying soft sensing data with machine learning techniques.
no code implementations • 1 Sep 2021 • Guangzhi Sun, Chao Zhang, Philip C. Woodland
Contextual knowledge is important for real-world automatic speech recognition (ASR) applications.
no code implementations • 4 Aug 2021 • Chao Zhang, Sthitie Bom
However, the successful applications of deep learning in soft sensing are still not widely integrated in factory control systems, because most soft-sensing research does not have access to large-scale industrial data, which are varied, noisy, and incomplete.
1 code implementation • 2 Aug 2021 • Zhen Li, Jing Tang, Deqing Zou, Qian Chen, Shouhuai Xu, Chao Zhang, Yichen Li, Hai Jin
Automatically detecting software vulnerabilities in source code is an important problem that has attracted much attention.
no code implementations • 29 Jul 2021 • Xianrui Zheng, Chao Zhang, Philip C. Woodland
Furthermore, on the AMI corpus, the proposed conversion for language prior probabilities enables BERT to obtain an extra 3% relative WERR, and the combination of BERT, GPT and GPT-2 results in further improvements.
1 code implementation • 3 Jul 2021 • Zhiwei Hao, Jianyuan Guo, Ding Jia, Kai Han, Yehui Tang, Chao Zhang, Han Hu, Yunhe Wang
Specifically, we train a tiny student model to match a pre-trained teacher model in the patch-level manifold space.
1 code implementation • 1 Jul 2021 • Qiujia Li, Chao Zhang, Philip C. Woodland
Commonly used automatic speech recognition (ASR) systems can be classified into frame-synchronous and label-synchronous categories, based on whether the speech is decoded on a per-frame or per-label basis.
no code implementations • 29 Jun 2021 • Fadi Boutros, Naser Damer, Jan Niklas Kolf, Kiran Raja, Florian Kirchbuchner, Raghavendra Ramachandra, Arjan Kuijper, Pengcheng Fang, Chao Zhang, Fei Wang, David Montero, Naiara Aginako, Basilio Sierra, Marcos Nieto, Mustafa Ekrem Erakin, Ugur Demir, Hazim Kemal Ekenel, Asaki Kataoka, Kohei Ichikawa, Shizuma Kubo, Jie Zhang, Mingjie He, Dan Han, Shiguang Shan, Klemen Grm, Vitomir Štruc, Sachith Seneviratne, Nuran Kasthuriarachchi, Sanka Rasnayaka, Pedro C. Neto, Ana F. Sequeira, Joao Ribeiro Pinto, Mohsen Saffari, Jaime S. Cardoso
These teams successfully submitted 18 valid solutions.
no code implementations • 24 Jun 2021 • Lixue Liu, Chao Zhang, DaCheng Tao
Multi-fidelity data fusion (MDF) methods aim to use massive low-fidelity (LF) samples and small amounts of high-fidelity (HF) samples to develop an accurate and efficient model that describes the system with a reasonable computational burden.
no code implementations • CVPR 2021 • Jianyuan Guo, Kai Han, Han Wu, Chao Zhang, Xinghao Chen, Chunjing Xu, Chang Xu, Yunhe Wang
In this paper, we present a positive-unlabeled learning based scheme to expand training data by purifying valuable images from massive unlabeled ones, where the original training data are viewed as positive data and the unlabeled images in the wild are unlabeled data.
no code implementations • ICML Workshop AML 2021 • Fangcheng Liu, Chao Zhang, Hongyang Zhang
In this work, we propose a \emph{geometry-aware framework} to generate transferable adversarial perturbation with minimum norm for each input.
no code implementations • 8 Jun 2021 • Takumi Nakane, Haoran Xie, Chao Zhang
Specifically, by partitioning the template image into several regions and measuring the similarity of each region independently, multiple objectives are built and deformation estimation can thus be realized by solving the MOP with off-the-shelf multi-objective evolutionary algorithms (MOEAs).
1 code implementation • NeurIPS 2021 • Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang, B. Aditya Prakash
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP, which directly models the probability density of the forecast value.
no code implementations • 2 Jun 2021 • Chao Zhang, Samson Lasaulce, Martin Hennebel, Lucas Saludjian, Patrick Panciatici, H. Vincent Poor
For this purpose, we formulate the framework of decision-making oriented clustering and propose an algorithm providing a decision-based partition of the data space and good representative decisions.
no code implementations • 29 May 2021 • Jiahao Xie, Chao Zhang, Zebang Shen, Weijie Liu, Hui Qian
In many machine learning applications where massive and privacy-sensitive data are generated on numerous mobile or IoT devices, collecting data in a centralized location may be prohibitive.
2 code implementations • ACL 2021 • Yinghao Li, Pranav Shetty, Lucas Liu, Chao Zhang, Le Song
To address this challenge, we propose a conditional hidden Markov model (CHMM), which can effectively infer true labels from multi-source noisy labels in an unsupervised way.
1 code implementation • 8 May 2021 • Lingwei Peng, Hui Qian, Zebang Shen, Chao Zhang, Fei Li
Model-free deep reinforcement learning has achieved great success in many domains, such as video games, recommendation systems and robotic control tasks.
2 code implementations • 26 Apr 2021 • Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, Chen Li, Ziyan Gong, Yifan Yao, Xinjing Huang, Jun Wang, Jianfeng Yu, Qi Guo, Yue Yu, Yan Zhang, Jin Wang, Hengtao Tao, Dasen Yan, Zexuan Yi, Fang Peng, Fangqing Jiang, Han Zhang, Lingfeng Deng, Yehong Zhang, Zhe Lin, Chao Zhang, Shaojie Zhang, Mingyue Guo, Shanzhi Gu, Gaojun Fan, YaoWei Wang, Xuefeng Jin, Qun Liu, Yonghong Tian
To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1 TB of high-quality Chinese data from a wide range of domains to pretrain the model.
Ranked #1 on Reading Comprehension (Zero-Shot) on CMRC 2018, Cloze (multi-choices) (Few-Shot), Cloze (multi-choices) (One-Shot), and 19 more
no code implementations • 23 Apr 2021 • Sheldon Fung, Xuequan Lu, Chao Zhang, Chang-Tsun Li
Extensive experiments show that our unsupervised learning method enables comparable detection performance to state-of-the-art supervised techniques, in both the intra- and inter-dataset settings.
no code implementations • 23 Apr 2021 • Yi He, Haoran Xie, Chao Zhang, Xi Yang, Kazunori Miyata
This paper proposes a deep generative model for generating normal maps from users sketch with geometric sampling.
no code implementations • 12 Apr 2021 • Samson Lasaulce, Chao Zhang, Vineeth Varma, Irinel Constantin Morarescu
Should the measures be more (or less) restrictive?
no code implementations • 12 Apr 2021 • Xianjie Gao, Xueguan Song, Maolin Shi, Chao Zhang, Hongwei Zhang
In this paper, based on in-situ TBM operational data, we use the machine-learning (ML) methods to build the real-time forecast models for TBM load parameters, which can instantaneously provide the future values of the TBM load parameters as long as the current data are collected.
no code implementations • 9 Apr 2021 • Chao Zhang, Xiaojun Chen, Shiqian Ma
In this paper, we propose a Riemannian smoothing steepest descent method to minimize a nonconvex and non-Lipschitz function on submanifolds.
1 code implementation • CVPR 2021 • Yingjie Cai, Xuesong Chen, Chao Zhang, Kwan-Yee Lin, Xiaogang Wang, Hongsheng Li
The key insight is that we decouple the instances from a coarsely completed semantic scene instead of a raw input image to guide the reconstruction of instances and the overall scene.
Ranked #1 on 3D Semantic Scene Completion on NYUv2
no code implementations • 12 Mar 2021 • Adnan Haider, Chao Zhang, Florian L. Kreyssig, Philip C. Woodland
This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training that can operate efficiently in a distributed manner.
no code implementations • 10 Mar 2021 • Chao Zhang, Shihan Wang, Henk Aarts, Mehdi Dastani
Reinforcement learning (RL) agents in human-computer interactions applications require repeated user interactions before they can perform well.
no code implementations • 3 Mar 2021 • Chao Zhang, Wenqiang Yi, Yuanwei Liu, Qiang Wang
Numerical results indicate that (1) although the interference from other cells is increased by the RISs, the performance of the RIS-aided user still improves, since its channel quality is strengthened more substantially; and (2) the SIC order can be altered by employing the RISs, since they improve the channel quality of the aided user.
Information Theory
2 code implementations • 21 Jan 2021 • Lang Huang, Chao Zhang, Hongyang Zhang
We propose self-adaptive training -- a unified training algorithm that dynamically calibrates and enhances training processes by model predictions without incurring an extra computational cost -- to advance both supervised and self-supervised learning of deep neural networks.
no code implementations • 15 Jan 2021 • Samuel Yen-Chi Chen, Tzu-Chieh Wei, Chao Zhang, Haiwang Yu, Shinjae Yoo
This research provides a hybrid quantum-classical graph convolutional network (QGCNN) for learning HEP data.
no code implementations • 5 Jan 2021 • Chao Zhang, Joaquin Vanschoren, Arlette van Wissen, Daniel Lakens, Boris de Ruyter, Wijnand A. IJsselsteijn
Psychological theories of habit posit that when a strong habit is formed through behavioral repetition, it can trigger behavior automatically in the same environment.
no code implementations • 22 Dec 2020 • Samuel Yen-Chi Chen, Tzu-Chieh Wei, Chao Zhang, Haiwang Yu, Shinjae Yoo
This work presents a quantum convolutional neural network (QCNN) for the classification of high energy physics events.
no code implementations • 2 Dec 2020 • Weijie Liu, Chao Zhang, Jiahao Xie, Zebang Shen, Hui Qian, Nenggan Zheng
Graph matching finds the correspondence of nodes across two graphs and is a basic task in graph-based machine learning.
1 code implementation • 25 Nov 2020 • Wei Wang, Chao Zhang, Xiaopei Wu
Accent recognition with a deep learning framework is closely related to deep speaker identification; both are expected to give the input speech an identifiable representation.
no code implementations • 6 Nov 2020 • Guanghui Xu, Wei Song, Zhengchen Zhang, Chao Zhang, Xiaodong He, BoWen Zhou
Although prosody is related to linguistic information up to the discourse level, most text-to-speech (TTS) systems only model prosody within each sentence, which makes converting a paragraph of text into natural and expressive speech challenging.
no code implementations • 27 Oct 2020 • Wen Wu, Chao Zhang, Philip C. Woodland
In this paper, a novel two-branch neural network model structure is proposed for multimodal emotion recognition, which consists of a time synchronous branch (TSB) and a time asynchronous branch (TAB).
1 code implementation • EMNLP 2020 • Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang
Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization.
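The miscalibration this entry refers to is commonly quantified with the expected calibration error (ECE). As an illustrative sketch only (plain Python with a standard equal-width binning scheme, not the paper's code):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between mean confidence and accuracy per bin."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # assign each prediction to the bin its confidence falls into
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / total) * abs(avg_conf - accuracy)
    return ece
```

A model that is 75% confident but always right on those examples incurs an ECE of 0.25; one that is right exactly 75% of the time at 75% confidence incurs zero.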
no code implementations • 22 Oct 2020 • Guangzhi Sun, Chao Zhang, Phil Woodland
Significant progress has recently been made in speaker diarisation after the introduction of d-vectors as speaker embeddings extracted from neural network (NN) speaker classifiers for clustering speech segments.
no code implementations • 21 Oct 2020 • Rui Feng, Jie Yuan, Chao Zhang
We argue that event extraction models so trained are inherently label-hungry and generalize poorly across domains and text genres. We propose a reading comprehension framework for event extraction: specifically, we formulate event detection as a textual entailment prediction problem and argument detection as a question answering problem.
no code implementations • 19 Oct 2020 • Hao Wang, Jia Zhang, Yingce Xia, Jiang Bian, Chao Zhang, Tie-Yan Liu
However, most existing studies overlook the code's intrinsic structural logic, which indeed contains a wealth of semantic information, and fail to capture the intrinsic features of code.
1 code implementation • NAACL 2021 • Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang
To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision.
Ranked #1 on Word Sense Disambiguation on Words in Context
2 code implementations • EMNLP 2020 • Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, Jiawei Han
In this paper, we explore the potential of only using the label name of each class to train classification models on unlabeled data, without using any labeled documents.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wendi Ren, Yinghao Li, Hanting Su, David Kartchner, Cassie Mitchell, Chao Zhang
We study the problem of learning neural text classifiers without using any labeled data, but only easy-to-provide rules as multiple weak supervision sources.
1 code implementation • EMNLP 2020 • Rongzhi Zhang, Yue Yu, Chao Zhang
Our method, SeqMix, simply augments the queried samples by generating extra labeled sequences in each iteration.
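The mixup-style augmentation described here can be sketched as interpolating two labeled sequences token by token with a Beta-sampled coefficient. A minimal illustration (the function name, list-based embeddings, and label dictionaries are hypothetical, not the paper's implementation):

```python
import random

def seqmix_pair(emb_a, labels_a, emb_b, labels_b, alpha=8.0):
    """Mix two equal-length labeled sequences token by token (illustrative).

    Token embeddings are linearly interpolated, and the per-token label
    distributions are mixed with the same coefficient, as in mixup.
    """
    lam = random.betavariate(alpha, alpha)  # mixing coefficient ~ Beta(alpha, alpha)
    mixed_emb = [
        [lam * xa + (1 - lam) * xb for xa, xb in zip(ta, tb)]
        for ta, tb in zip(emb_a, emb_b)
    ]
    mixed_labels = [
        {k: lam * la.get(k, 0.0) + (1 - lam) * lb.get(k, 0.0)
         for k in set(la) | set(lb)}
        for la, lb in zip(labels_a, labels_b)
    ]
    return mixed_emb, mixed_labels, lam
```

Because both embeddings and label distributions use the same coefficient, the mixed labels remain valid probability distributions whenever the inputs are.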
1 code implementation • 5 Oct 2020 • Yinghao Li, Rui Feng, Isaac Rehg, Chao Zhang
We study the problem of using (partial) constituency parse trees as syntactic guidance for controlled text generation.
1 code implementation • 5 Oct 2020 • Wanzheng Zhu, Chao Zhang, Shuochao Yao, Xiaobin Gao, Jiawei Han
We propose SHMM, a multi-modal spherical hidden Markov model for semantics-rich human mobility modeling.
1 code implementation • 4 Oct 2020 • Yue Yu, Kexin Huang, Chao Zhang, Lucas M. Glass, Jimeng Sun, Cao Xiao
Furthermore, most previous works focus on binary DDI prediction, whereas multi-typed DDI pharmacological effect prediction is a more meaningful but harder task.
no code implementations • 12 Sep 2020 • Yi Zhou, Shuyang Sun, Chao Zhang, Yikang Li, Wanli Ouyang
By assigning each relationship a single label, current approaches formulate the relationship detection as a classification problem.
1 code implementation • NeurIPS 2021 • Qi Zhu, Carl Yang, Yidan Xu, Haonan Wang, Chao Zhang, Jiawei Han
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs.
no code implementations • 31 Aug 2020 • Chunzhi Gu, Xuequan Lu, Chao Zhang
In particular, we relate the transferred image with the example image under the Gaussian Mixture Model (GMM) and regard the transferred image color as the GMM centroids.
2 code implementations • ICML 2020 • Lingkai Kong, Jimeng Sun, Chao Zhang
We propose a new method for quantifying uncertainties of DNNs from a dynamical system perspective.
no code implementations • 22 Aug 2020 • Jinfeng Zeng, Chenfeng Cao, Chao Zhang, Pengxiang Xu, Bei Zeng
To obtain the full spectrum of the Hamiltonian, we use a quantum imaginary time evolution algorithm with high temperature, which prepares a thermal state with a small correlation length.
Quantum Physics
no code implementations • 28 Jul 2020 • Wei Xue, Gang Quan, Chao Zhang, Guohong Ding, Xiaodong He, BoWen Zhou
Statistical signal processing based speech enhancement methods adopt expert knowledge to design statistical models and linear filters, which is complementary to data-driven deep neural network (DNN) based methods.
1 code implementation • 18 Jul 2020 • Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Chao Zhang, Jiawei Han
Mining a set of meaningful topics organized into a hierarchy is intuitively appealing since topic correlations are ubiquitous in massive text corpora.
Ranked #1 on Topic Models on NYT
no code implementations • 11 Jul 2020 • Dongbo Zhang, Zheng Fang, Xuequan Lu, Hong Qin, Antonio Robles-Kelly, Chao Zhang, Ying He
3D human segmentation has seen noticeable progress in recent years.
1 code implementation • 28 Jun 2020 • Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, Chao Zhang
We study the open-domain named entity recognition (NER) problem under distant supervision.
1 code implementation • 18 Jun 2020 • Yue Yu, Yinghao Li, Jiaming Shen, Hao Feng, Jimeng Sun, Chao Zhang
We propose a self-supervised taxonomy expansion model named STEAM, which leverages natural supervision in the existing taxonomy for expansion.
1 code implementation • 16 May 2020 • Nick Altieri, Rebecca L. Barter, James Duncan, Raaz Dwivedi, Karl Kumbier, Xiao Li, Robert Netzorg, Briton Park, Chandan Singh, Yan Shuo Tan, Tiffany Tang, Yu Wang, Chao Zhang, Bin Yu
We use this data to develop predictions and corresponding prediction intervals for the short-term trajectory of COVID-19 cumulative death counts at the county-level in the United States up to two weeks ahead.
no code implementations • 1 May 2020 • Shi Zhi, Liyuan Liu, Yu Zhang, Shiyin Wang, Qi Li, Chao Zhang, Jiawei Han
While typical named entity recognition (NER) models require the training set to be annotated with all target types, each available dataset may only cover a subset of them.
no code implementations • 13 Apr 2020 • Huajie Shao, Dachun Sun, Jiahao Wu, Zecheng Zhang, Aston Zhang, Shuochao Yao, Shengzhong Liu, Tianshi Wang, Chao Zhang, Tarek Abdelzaher
Motivated by this trend, we describe a novel item-item cross-platform recommender system, paper2repo, that recommends relevant repositories on GitHub that match a given paper in an academic search system such as Microsoft Academic.
no code implementations • 13 Apr 2020 • Chao Zhang, Xiang Zhao, Kai Lin, Shaojun Zhang, Wen Zhao, Anzhong Wang
In particular, we find that, out of the five non-trivial field equations, only three are independent, so the problem is well-posed, as now generically there are only three unknown functions, $F(r)$, $B(r)$, and $A(r)$, where $F$ and $B$ are metric coefficients and $A$ describes the aether field.
General Relativity and Quantum Cosmology Astrophysics of Galaxies High Energy Physics - Phenomenology High Energy Physics - Theory
no code implementations • 30 Mar 2020 • Takumi Nakane, Xuequan Lu, Chao Zhang
In evolutionary algorithms, genetic operators iteratively generate new offspring which constitute a potentially valuable set of search history.
1 code implementation • CVPR 2020 • Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, Chang Xu
To this end, we propose a hierarchical trinity search framework to simultaneously discover efficient architectures for all components (i.e., backbone, neck, and head) of an object detector in an end-to-end manner.
3 code implementations • NeurIPS 2020 • Lang Huang, Chao Zhang, Hongyang Zhang
We propose self-adaptive training---a new training algorithm that dynamically corrects problematic training labels by model predictions without incurring extra computational cost---to improve generalization of deep learning for potentially corrupted training data.
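The label-correction step described above amounts to keeping a running soft target per example and exponentially averaging it toward the model's current prediction. A minimal sketch under that reading (function name and the momentum value of 0.9 are illustrative assumptions, not the authors' code):

```python
def update_targets(targets, predictions, momentum=0.9):
    """Self-adaptive target update (sketch): move each example's running
    soft target toward the model's current prediction, so labels that the
    model consistently disagrees with are gradually corrected."""
    return [
        [momentum * t + (1 - momentum) * p for t, p in zip(tv, pv)]
        for tv, pv in zip(targets, predictions)
    ]
```

Training then minimizes the loss against these running targets instead of the raw (potentially corrupted) labels; since the update reuses predictions already computed in the forward pass, it adds essentially no cost.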
no code implementations • 20 Jan 2020 • Chao Zhang, Xuequan Lu, Katsuya Hotta, Xi Yang
The WA data can be naturally obtained in an interactive way for specific tasks, for example, in the case of homography estimation, one can easily annotate points on the same plane/object with a single label by observing the image.
no code implementations • 10 Nov 2019 • Yassir Fathullah, Chao Zhang, Philip C. Woodland
Speaker diarisation systems nowadays use embeddings generated from speech segments in a bottleneck layer, which need to be discriminative for unseen speakers.
no code implementations • 10 Nov 2019 • Chao Zhang, Zichao Yang, Xiaodong He, Li Deng
This review provides a comprehensive analysis of recent works on multimodal deep learning from three perspectives: learning multimodal representations, fusing multimodal signals at various levels, and multimodal applications.
1 code implementation • NeurIPS 2019 • Yu Meng, Jiaxin Huang, Guangyuan Wang, Chao Zhang, Honglei Zhuang, Lance Kaplan, Jiawei Han
While text embeddings are typically learned in the Euclidean space, directional similarity is often more effective in tasks such as word similarity and document clustering, which creates a gap between the training stage and usage stage of text embedding.
1 code implementation • 22 Oct 2019 • Qiujia Li, Florian L. Kreyssig, Chao Zhang, Philip C. Woodland
In this paper, we propose Discriminative Neural Clustering (DNC) that formulates data clustering with a maximum number of clusters as a supervised sequence-to-sequence learning problem.
1 code implementation • ICCV 2019 • Jianyuan Guo, Yuhui Yuan, Lang Huang, Chao Zhang, Jinge Yao, Kai Han
On the other hand, there still exist many useful contextual cues that do not fall into the scope of predefined human parts or attributes.
Ranked #57 on Person Re-Identification on DukeMTMC-reID
no code implementations • 21 Oct 2019 • Jiahao Xie, Zebang Shen, Chao Zhang, Boyu Wang, Hui Qian
This paper focuses on projection-free methods for solving smooth Online Convex Optimization (OCO) problems.
no code implementations • 21 Oct 2019 • Chao Zhang, Jiahao Xie, Zebang Shen, Peilin Zhao, Tengfei Zhou, Hui Qian
In this paper, we explore a general Aggregated Gradient Langevin Dynamics framework (AGLD) for the Markov Chain Monte Carlo (MCMC) sampling.
no code implementations • 17 Oct 2019 • Jiaming Shen, Zeqiu Wu, Dongming Lei, Chao Zhang, Xiang Ren, Michelle T. Vanni, Brian M. Sadler, Jiawei Han
Taxonomies are of great value to many knowledge-rich applications.
1 code implementation • 10 Oct 2019 • Wanzheng Zhu, Hongyu Gong, Jiaming Shen, Chao Zhang, Jingbo Shang, Suma Bhat, Jiawei Han
In this paper, we study the task of multi-faceted set expansion, which aims to capture all semantic facets in the seed set and return multiple sets of entities, one for each semantic facet.
no code implementations • 8 Oct 2019 • Chao Zhang, Min-Hsiu Hsieh, DaCheng Tao
We also develop the tail inequalities for matrix random series and matrix martingale difference sequence.
2 code implementations • 8 Oct 2019 • Yunqi Shao, Matti Hellström, Pavlin D. Mitev, Lisanne Knijff, Chao Zhang
Atomic neural networks (ANNs) constitute a class of machine learning methods for predicting potential energy surfaces and physico-chemical properties of molecules and materials.
Computational Physics Disordered Systems and Neural Networks Chemical Physics
no code implementations • 30 Sep 2019 • Yuan-Yuan Zhao, Chao Zhang, Shuming Cheng, Xinhui Li, Yu Guo, Bi-Heng Liu, Huan-Yu Ku, Shin-Liang Chen, Qiaoyan Wen, Yun-Feng Huang, Guo-Yong Xiang, Chuan-Feng Li, Guang-Can Guo
If entanglement could be verified without any trust in the devices of observers, i.e., in a device-independent (DI) way, then unconditional security could be guaranteed for various quantum information tasks.
Quantum Physics
no code implementations • 16 Sep 2019 • Hang Zou, Chao Zhang, Samson Lasaulce, Lucas Saludjian, Patrick Panciatici
We propose a framework to find a good (finite) decision set which induces a minimal performance loss w.r.t.
no code implementations • 14 Sep 2019 • Qiujia Li, Chao Zhang, Philip C. Woodland
This paper proposes a novel automatic speech recognition (ASR) framework called Integrated Source-Channel and Attention (ISCA) that combines the advantages of traditional systems based on the noisy source-channel model (SC) and end-to-end style systems using attention-based sequence-to-sequence models.
1 code implementation • 20 Aug 2019 • Yu Meng, Jiaxin Huang, Guangyuan Wang, Zihan Wang, Chao Zhang, Yu Zhang, Jiawei Han
We propose a new task, discriminative topic mining, which leverages a set of user-provided category names to mine discriminative topics from text corpora.
1 code implementation • ICCV 2019 • Chao Zhang, Stephan Liwicki, William Smith, Roberto Cipolla
For the spherical domain, several methods recently adopt an icosahedron mesh, but systems are typically rotation invariant or require significant memory and parameters, thus enabling execution only at very low resolutions.
Ranked #14 on Semantic Segmentation on Stanford2D3D Panoramic
6 code implementations • 29 Jul 2019 • Lang Huang, Yuhui Yuan, Jianyuan Guo, Chao Zhang, Xilin Chen, Jingdong Wang
There are two successive attention modules each estimating a sparse affinity matrix.
no code implementations • 2 Jul 2019 • Yi Zhang, Chao Zhang, Takuya Akashi
We propose a novel multi-scale template matching method which is robust against both scaling and rotation in unconstrained environments.
no code implementations • 23 Jun 2019 • Jian-Ya Ding, Chao Zhang, Lei Shen, Shengyin Li, Bing Wang, Yinghui Xu, Le Song
In many applications, a similar MIP model is solved on a regular basis, maintaining remarkable similarities in model structures and solution appearances but differing in formulation coefficients.
no code implementations • 21 Jun 2019 • Patrick von Platen, Chao Zhang, Philip Woodland
This paper proposes a novel multi-span structure for acoustic modelling based on the raw waveform with multiple streams of CNN input layers, each processing a different span of the raw waveform signal.
no code implementations • 18 Jun 2019 • Zhisheng Zhong, Fangyin Wei, Zhouchen Lin, Chao Zhang
Furthermore, we propose that weight tensors in networks with proper order and balanced dimensions are easier to compress.
no code implementations • 8 Jun 2019 • Yu-cheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng
This paper provides a simple procedure to fit generative networks to target distributions, with the goal of a small Wasserstein distance (or other optimal transport costs).
no code implementations • 6 Jun 2019 • Chuang-Shi Shen, Chao Zhang, Xiaosheng Gao, Yulong Li
We recognize that the fragmentation problem in shells is analogous to the cracking behavior of tree bark, and closed-form solutions are obtained to describe the relationship between the expansion velocity and the number of necks, taking into account the strain-rate-dependent strength of the shell material.
Soft Condensed Matter Applied Physics
no code implementations • 17 May 2019 • Hang Zou, Chao Zhang, Samson Lasaulce, Lucas Saludjian, Patrick Panciatici
In this paper, we introduce the problem of decision-oriented communications, that is, the goal of the source is to send the right amount of information in order for the intended destination to execute a task.
no code implementations • 11 Apr 2019 • Zhuo Lei, Chao Zhang, Qian Zhang, Guoping Qiu
In constructing the dataset, because of the subjectivity of user-generated video summarization, we manually annotate 25 summaries for each video, 1,300 summaries in total.