no code implementations • Findings (NAACL) 2022 • Le Qi, Yu Zhang, Qingyu Yin, Guidong Zheng, Wen Junjie, Jinlong Li, Ting Liu
In this process, there are two kinds of critical information that are commonly employed: the representation information of original questions and the interactive information between pairs of questions.
no code implementations • EMNLP 2020 • Zheng Li, Mukul Kumar, William Headden, Bing Yin, Ying Wei, Yu Zhang, Qiang Yang
The recent emergence of multilingual pre-trained language models (mPLMs) has enabled breakthroughs on various downstream cross-lingual transfer (CLT) tasks.
no code implementations • COLING 2022 • Meiguo Wang, Benjamin Yao, Bin Guo, Xiaohu Liu, Yu Zhang, Tuan-Hung Pham, Chenlei Guo
To evaluate the performance of a multi-domain goal-oriented Dialogue System (DS), it is important to understand what the users’ goals are for the conversations and whether those goals are successfully achieved.
1 code implementation • Findings (ACL) 2022 • Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu
Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search, which usually takes clean texts extracted from various formats of documents (e.g., web pages, PDFs, or Word documents) as the information source.
no code implementations • ECCV 2020 • Song Zhang, Yu Zhang, Zhe Jiang, Dongqing Zou, Jimmy Ren, Bin Zhou
A detail enhancing branch is proposed to reconstruct day light-specific features from the domain-invariant representations in a residual manner, regularized by a ranking loss.
1 code implementation • CoNLL (EMNLP) 2021 • Yang Hou, Houquan Zhou, Zhenghua Li, Yu Zhang, Min Zhang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan
In the coarse labeling stage, the joint model outputs a bracketed tree, in which each node corresponds to one of four labels (i.e., phrase, subphrase, word, subword).
no code implementations • 28 Jan 2023 • Kejun Chen, Yu Zhang
In addition, based on our proposed framework, we design three methods to initialize the weights of the shortcut connection layer according to the physical characteristics of AC-PF equations.
1 code implementation • 19 Jan 2023 • Junde Wu, Rao Fu, Huihui Fang, Yu Zhang, Yanwu Xu
This architectural improvement leads to a new diffusion-based medical image segmentation method called MedSegDiff-V2, which significantly improves the performance of MedSegDiff.
no code implementations • 19 Jan 2023 • Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Rohit Prabhavalkar, Tara N. Sainath, Trevor Strohman
In this work, we propose a new parameter-efficient learning framework based on neural model reprogramming for cross-lingual speech recognition, which can re-purpose well-trained English automatic speech recognition (ASR) models to recognize the other languages.
no code implementations • 17 Jan 2023 • Yu Zhang, Yue Wang, Zhi Tian, Geert Leus, Gong Zhang
This paper proposes a super-resolution harmonic retrieval method for uncorrelated strictly non-circular signals, whose covariance and pseudo-covariance present Toeplitz and Hankel structures, respectively.
no code implementations • 13 Jan 2023 • Xiaomeng Chu, Jiajun Deng, Yuan Zhao, Jianmin Ji, Yu Zhang, Houqiang Li, Yanyong Zhang
To this end, we propose OA-BEV, a network that can be plugged into the BEV-based 3D object detection framework to bring out the objects by incorporating object-aware pseudo-3D features and depth features.
no code implementations • 5 Jan 2023 • Xu Yang, Zhangzikang Li, Haiyang Xu, Hanwang Zhang, Qinghao Ye, Chenliang Li, Ming Yan, Yu Zhang, Fei Huang, Songfang Huang
Besides T2W attention, we also follow previous VDL-BERTs to set a word-to-patch (W2P) attention in the cross-modal encoder.
no code implementations • 5 Jan 2023 • Zihua Wang, Xu Yang, Haiyang Xu, Hanwang Zhang, Chenliang Li, Songfang Huang, Fei Huang, Yu Zhang
We design a novel global-local Transformer named Ada-ClustFormer (ACF) to generate captions.
1 code implementation • 19 Dec 2022 • Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu
The receptive field (RF), which determines the region of time series to be ``seen'' and used, is critical to improve the performance for time series classification (TSC).
no code implementations • 19 Dec 2022 • Yong Cheng, Yu Zhang, Melvin Johnson, Wolfgang Macherey, Ankur Bapna
We present Mu²SLAM, a multilingual sequence-to-sequence model pre-trained jointly on unlabeled speech, unlabeled text and supervised data spanning Automatic Speech Recognition (ASR), Automatic Speech Translation (AST) and Machine Translation (MT), in over 100 languages.
1 code implementation • 12 Dec 2022 • Yu Zhang, Yunyi Zhang, Martin Michalski, Yucheng Jiang, Yu Meng, Jiawei Han
Instead of mining coherent topics from a given text corpus in a completely unsupervised manner, seed-guided topic discovery methods leverage user-provided seed words to extract distinctive and coherent topics so that the mined topics can better cater to the user's interest.
no code implementations • 7 Dec 2022 • Kejun Chen, Shourya Bose, Yu Zhang
Non-convex AC optimal power flow (AC-OPF) is a fundamental optimization problem in power system analysis.
no code implementations • 5 Dec 2022 • Yu Zhang, Yunyi Zhang, Yucheng Jiang, Martin Michalski, Yu Deng, Lucian Popa, ChengXiang Zhai, Jiawei Han
Given a few seed entities of a certain type (e.g., Software or Programming Language), entity set expansion aims to discover an extensive set of entities that share the same type as the seeds.
1 code implementation • 2 Dec 2022 • Tao Zhou, Yi Zhou, Chen Gong, Jian Yang, Yu Zhang
In this paper, we propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection.
no code implementations • 29 Nov 2022 • Junde Wu, Huihui Fang, Yehui Yang, Yu Zhang, Haoyi Xiong, Huazhu Fu, Yanwu Xu
In the paper, we call them expert-level classification.
no code implementations • 24 Nov 2022 • Yueqing Sun, Yu Zhang, Le Qi, Qi Shi
In this paper, we aim to address the above limitation by leveraging the implicit knowledge stored in PrLMs and propose a two-stage prompt-based unsupervised commonsense question answering framework (TSGP).
no code implementations • 20 Nov 2022 • Yunhao Gou, Tom Ko, Hansi Yang, James Kwok, Yu Zhang, Mingxuan Wang
(2) Under-utilization of the unmasked tokens: CMLM primarily focuses on the masked token but it cannot simultaneously leverage other tokens to learn vision-language associations.
no code implementations • 16 Nov 2022 • Juan Zha, Zheng Li, Ying Wei, Yu Zhang
However, most prior works assume that all the tasks are sampled from a single data source, which cannot adapt to real-world scenarios where tasks are heterogeneous and lie in different distributions.
no code implementations • 13 Nov 2022 • Xuetong Wang, Kanhao Zhao, Rong Zhou, Alex Leow, Ricardo Osorio, Yu Zhang, Lifang He
Normative modeling is an emerging and promising approach to effectively study disorder heterogeneity in individual participants.
1 code implementation • 7 Nov 2022 • Yi Zhai, Yu Zhang, Shuo Liu, Xiaomeng Chu, Jie Peng, Jianmin Ji, Yanyong Zhang
Instead of extracting features from the tensor program itself, TLP extracts features from the schedule primitives.
1 code implementation • 6 Nov 2022 • Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, Jiawei Han
In this work, we study few-shot learning with PLMs from a different perspective: We first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large amount of novel training samples which augment the original training set.
no code implementations • 2 Nov 2022 • Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Tara N. Sainath, Sabato Marco Siniscalchi, Chin-Hui Lee
We propose a quantum kernel learning (QKL) framework to address the inherent data sparsity issues often encountered in training large-scale acoustic models in low-resource scenarios.
no code implementations • 2 Nov 2022 • Yu Zhang, Mitchell Bucklew
In this paper, we introduce Max Markov Chain (MMC), a novel representation for a useful subset of High-order Markov Chains (HMCs) with sparse correlations among the states.
1 code implementation • 1 Nov 2022 • Junde Wu, Rao Fu, Huihui Fang, Yu Zhang, Yehui Yang, Haoyi Xiong, Huiying Liu, Yanwu Xu
Inspired by the success of DPM, we propose the first DPM based model toward general medical image segmentation tasks, which we named MedSegDiff.
no code implementations • 31 Oct 2022 • Zhong Meng, Tongzhou Chen, Rohit Prabhavalkar, Yu Zhang, Gary Wang, Kartik Audhkhasi, Jesse Emond, Trevor Strohman, Bhuvana Ramabhadran, W. Ronny Huang, Ehsan Variani, Yinghui Huang, Pedro J. Moreno
In this work, we propose a modular hybrid autoregressive transducer (MHAT) that has structurally separated label and blank decoders to predict label and blank distributions, respectively, along with a shared acoustic encoder.
no code implementations • 29 Oct 2022 • Yongqiang Wang, Zhehuai Chen, Chengjian Zheng, Yu Zhang, Wei Han, Parisa Haghani
We propose a novel method to accelerate training and inference process of recurrent neural network transducer (RNN-T) based on the guidance from a co-trained connectionist temporal classification (CTC) model.
no code implementations • 28 Oct 2022 • Nobuyuki Morioka, Heiga Zen, Nanxin Chen, Yu Zhang, Yifan Ding
Adapting a neural text-to-speech (TTS) model to a target speaker typically involves fine-tuning most if not all of the parameters of a pretrained multi-speaker backbone model.
1 code implementation • 28 Oct 2022 • Xubo Liu, Qiushi Huang, Xinhao Mei, Haohe Liu, Qiuqiang Kong, Jianyuan Sun, Shengchen Li, Tom Ko, Yu Zhang, Lilian H. Tang, Mark D. Plumbley, Volkan Kılıç, Wenwu Wang
Audio captioning is the task of generating captions that describe the content of audio clips.
no code implementations • 27 Oct 2022 • Takaaki Saeki, Heiga Zen, Zhehuai Chen, Nobuyuki Morioka, Gary Wang, Yu Zhang, Ankur Bapna, Andrew Rosenberg, Bhuvana Ramabhadran
This paper proposes Virtuoso, a massively multilingual speech-text joint semi-supervised learning framework for text-to-speech synthesis (TTS) models.
1 code implementation • 27 Oct 2022 • Qiushi Huang, Yu Zhang, Tom Ko, Xubo Liu, Bo Wu, Wenwu Wang, Lilian Tang
Persona-based dialogue systems aim to generate consistent responses based on historical context and predefined persona.
no code implementations • 18 Oct 2022 • Zhehuai Chen, Ankur Bapna, Andrew Rosenberg, Yu Zhang, Bhuvana Ramabhadran, Pedro Moreno, Nanxin Chen
First, we show that by combining speech representations with byte-level text representations and use of language embeddings, we can dramatically reduce the Character Error Rate (CER) on languages with no supervised speech from 64.8% to 30.8%, a relative reduction of 53%.
1 code implementation • 14 Oct 2022 • Kuan-Po Huang, Yu-Kuan Fu, Tsu-Yuan Hsu, Fabian Ritter Gutierrez, Fan-Lin Wang, Liang-Hsuan Tseng, Yu Zhang, Hung-Yi Lee
Self-supervised learned (SSL) speech pre-trained models perform well across various speech processing tasks.
no code implementations • 13 Oct 2022 • Tara N. Sainath, Rohit Prabhavalkar, Ankur Bapna, Yu Zhang, Zhouyuan Huo, Zhehuai Chen, Bo Li, Weiran Wang, Trevor Strohman
In addition, we explore JOIST using a streaming E2E model with an order of magnitude more data, which are also novelties compared to previous works.
no code implementations • 11 Oct 2022 • Dongseong Hwang, Khe Chai Sim, Yu Zhang, Trevor Strohman
Knowledge distillation is an effective machine learning technique to transfer knowledge from a teacher model to a smaller student model, especially with unlabeled data.
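As a hedged illustration (not necessarily the paper's exact recipe), the standard soft-label distillation objective used in such teacher-student setups can be sketched as follows; the temperature value and logit shapes here are illustrative:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions: the classic soft-label distillation objective.
    No ground-truth labels are needed, which is why it applies to
    unlabeled data as the abstract notes."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = (p_teacher * (np.log(p_teacher) - np.log(p_student))).sum(axis=-1)
    return kl.mean() * temperature ** 2     # conventional T^2 scaling
```

The loss is zero when the student matches the teacher exactly and positive otherwise.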
no code implementations • 4 Oct 2022 • Zixiao Wang, Yuluo Guo, Jin Zhao, Yu Zhang, Hui Yu, Xiaofei Liao, Hai Jin, Biao Wang, Ting Yu
In this paper, we propose a Graph Inception Diffusion Networks (GIDN) model.
Ranked #1 on Link Property Prediction on ogbl-ddi
no code implementations • 3 Oct 2022 • Yu Zhang, Li Liu, Chen Diao, Ning Cai
Computer models have been extensively adopted to overcome the time limitation of studying language evolution, transforming language theory into physical modeling mechanisms that help to explore the general laws of the evolution.
no code implementations • 26 Sep 2022 • Gabriel Intriago, Andres Intriago, Raul Intriago, Yu Zhang
An observer-based fault detection scheme for grid-forming inverters operating in islanded droop-controlled AC microgrids is proposed.
no code implementations • 26 Sep 2022 • Xinnan Ding, Shan Du, Yu Zhang, Kejun Wang
The critical goal of gait recognition is to acquire the inter-frame walking habit representation from the gait sequences.
no code implementations • 25 Sep 2022 • Gabriel Intriago, Yu Zhang
Instance selection is a vital technique for energy big data analytics.
1 code implementation • 22 Sep 2022 • Shengcai Liu, Yu Zhang, Ke Tang, Xin Yao
Traditional solvers for tackling combinatorial optimization (CO) problems are usually designed by human experts.
no code implementations • 21 Sep 2022 • Yu Zhang, Bing-Zhao Li
In this paper, we propose and design the definition of the discrete linear canonical transform on graphs (GLCT), which is an extension of the discrete linear canonical transform (DLCT), just as the graph Fourier transform (GFT) is an extension of the discrete Fourier transform (DFT).
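As background for the extension chain the abstract describes (DFT → GFT, and analogously DLCT → GLCT), a minimal sketch of the graph Fourier transform, computed from the eigenbasis of the combinatorial Laplacian (the graph and signal below are illustrative):

```python
import numpy as np

def graph_fourier_transform(signal, adjacency):
    """Project a graph signal onto the eigenbasis of the combinatorial
    Laplacian L = D - A; these eigenvectors play the role that complex
    exponentials play in the classical DFT."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    return eigvecs.T @ signal

# On a 4-node ring graph, a constant signal has all of its energy in the
# zero-frequency (constant) eigenvector, mirroring the DFT of a DC signal.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
spectrum = graph_fourier_transform(np.ones(4), ring)
```

The GLCT of the paper generalizes this projection with linear-canonical-transform parameters, just as the DLCT generalizes the DFT.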
no code implementations • 9 Sep 2022 • Yu Zhang, Tawfik Osman, Ahmed Alkhateeb
Furthermore, a hardware proof-of-concept prototype based on mmWave phased arrays is built and used to implement and evaluate the developed online beam learning solutions in realistic scenarios.
no code implementations • 26 Aug 2022 • Yu Zhang, Shuaifei Chen, Jiayi Zhang
Cell-free massive multiple-input-multiple-output is promising to meet the stringent quality-of-experience (QoE) requirements of railway wireless communications by coordinating many successive access points (APs) to serve the onboard users coherently.
1 code implementation • journal 2022 • Shujun Yang, Yu Zhang, Yuheng Jia, Weijia Zhang
By taking advantage of the local manifold structure, a Laplacian graph is constructed from the superpixels to ensure that a typical pixel should be similar to its neighbors within the same superpixel.
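A generic construction of such a Laplacian graph is sketched below, here over hypothetical superpixel descriptor vectors with a Gaussian kernel; the paper's exact graph (connecting pixels within each superpixel) and its weighting may differ:

```python
import numpy as np

def graph_laplacian(features, sigma=1.0):
    """Weighted graph Laplacian L = D - W over feature vectors, with
    Gaussian-kernel edge weights. `features` stands in for hypothetical
    superpixel descriptors; any choice of kernel width `sigma` is an
    assumption, not taken from the paper."""
    sq_dists = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    weights = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(weights, 0.0)           # no self-loops
    return np.diag(weights.sum(axis=1)) - weights
```

By construction the Laplacian is symmetric with zero row sums, which is what makes it usable as a smoothness regularizer over the manifold.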
no code implementations • 16 Aug 2022 • Enqiang Zhu, Yu Zhang, Chanjuan Liu
The maximum independent set (MIS) problem, a classical NP-hard problem with extensive applications in various areas, aims to find the largest set of vertices with no edge among them.
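For illustration only, a simple greedy heuristic for independent sets; since exact MIS is NP-hard, as the abstract notes, this is a baseline sketch and not the paper's algorithm:

```python
def greedy_independent_set(n, edges):
    """Greedy minimum-degree heuristic for the independent set problem:
    repeatedly pick the lowest-degree remaining vertex and remove its
    neighbors. Returns a maximal (not necessarily maximum) set."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining, chosen = set(range(n)), []
    while remaining:
        v = min(remaining, key=lambda x: len(adj[x] & remaining))
        chosen.append(v)
        remaining -= adj[v] | {v}           # drop v and its neighbors
    return chosen

# On the path 0-1-2-3-4, the heuristic recovers {0, 2, 4}.
print(sorted(greedy_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4)])))
```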
no code implementations • 7 Aug 2022 • Zesheng Ye, Lina Yao, Yu Zhang, Sylvia Gustin
Recent studies demonstrate the use of a two-stage supervised framework to generate images that depict human perception to visual stimuli from EEG, referring to EEG-visual reconstruction.
1 code implementation • 5 Aug 2022 • Yongxiang Tang, Wentao Bai, Guilin Li, Xialong Liu, Yu Zhang
In this paper, we propose the Customizable Recall@N Optimization Loss (CROLoss), a loss function that can directly optimize the Recall@N metrics and is customizable for different choices of N. The proposed CROLoss formulation defines a more generalized loss function space, covering most of the conventional loss functions as special cases.
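The Recall@N metric that CROLoss targets can be computed directly; the sketch below is the plain (non-differentiable) metric, not the paper's surrogate loss, and the scores are made up for illustration:

```python
import numpy as np

def recall_at_n(scores, positives, n):
    """Recall@N for one query: the fraction of relevant items that
    appear among the top-N scored candidates."""
    top_n = np.argsort(-scores)[:n]
    hits = len(set(top_n.tolist()) & set(positives))
    return hits / len(positives)

scores = np.array([0.9, 0.1, 0.8, 0.3, 0.7])
# Top-3 items are 0, 2, 4; of the relevant items {0, 3}, only 0 is retrieved.
print(recall_at_n(scores, positives=[0, 3], n=3))  # → 0.5
```

The argsort/set-intersection step is what a differentiable surrogate such as CROLoss must smooth over.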
1 code implementation • 5 Aug 2022 • Junde Wu, Yu Zhang, Rao Fu, Yuanpei Liu, Jing Gao
Then, to ensure that the method adapts to the dynamic and unseen person flow, we propose Graph Convolutional Network (GCN) with a simple Nearest Neighbor (NN) strategy to accurately cluster the instances of CSG.
no code implementations • 3 Aug 2022 • Qibing Bai, Tom Ko, Yu Zhang
In human speech, the attitude of a speaker cannot be fully expressed only by the textual content.
1 code implementation • 18 Jul 2022 • Xinyu Shi, Dong Wei, Yu Zhang, Donghuan Lu, Munan Ning, Jiashun Chen, Kai Ma, Yefeng Zheng
A key to this challenging task is to fully utilize the information in the support images by exploiting fine-grained correlations between the query and support images.
Ranked #1 on Few-Shot Semantic Segmentation on PASCAL-5i (1-Shot)
no code implementations • 16 Jul 2022 • Jiahao Qi, Zhiqiang Gong, Xingyue Liu, Kangcheng Bin, Chen Chen, YongQian Li, Wei Xue, Yu Zhang, Ping Zhong
Deep learning methodology has contributed greatly to the development of the hyperspectral image (HSI) analysis community.
1 code implementation • 7 Jul 2022 • Jiashun Chen, Donghuan Lu, Yu Zhang, Dong Wei, Munan Ning, Xinyu Shi, Zhe Xu, Yefeng Zheng
In this study, we propose a novel Deformer module along with a multi-scale framework for the deformable image registration task.
no code implementations • 18 Jun 2022 • Zhanghao Sun, Yu Zhang, Yicheng Wu, Dong Huo, Yiming Qian, Jian Wang
We propose three applications using our redundancy codes: (1) Self error-correction for SL imaging under strong ambient light, (2) Error detection for adaptive reconstruction under global illumination, and (3) Interference filtering with device-specific projection sequence encoding, especially for event camera-based SL and light curtain devices.
no code implementations • 4 Jun 2022 • Xiaochen Li, Xin Song, Pengjia Yuan, Xialong Liu, Yu Zhang
In this paper, we focus on a new type of user interest, i.e., user retargeting interest.
no code implementations • 25 May 2022 • Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, Ankur Bapna
We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark.
no code implementations • 24 May 2022 • Shourya Bose, Sifat Chowdhury, Yu Zhang
Mobile energy storage systems (MESS) offer great operational flexibility to enhance the resiliency of distribution systems in an emergency condition.
no code implementations • 20 May 2022 • Bowen Jin, Yu Zhang, Qi Zhu, Jiawei Han
We study node representation learning on heterogeneous text-rich networks, where nodes and edges are multi-typed and some types of nodes are associated with text information.
no code implementations • 19 May 2022 • Yu Zhang, Zhiqiang Gong, Yichuang Zhang, YongQian Li, Kangcheng Bin, Jiahao Qi, Wei Xue, Ping Zhong
Transferable adversarial attack is always in the spotlight since deep learning models have been demonstrated to be vulnerable to adversarial samples.
1 code implementation • 18 May 2022 • Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, Qibing Bai, Yu Zhang
Direct Speech-to-speech translation (S2ST) has drawn more and more attention recently.
1 code implementation • NAACL 2022 • Yu Zhang, Yu Meng, Xuan Wang, Sheng Wang, Jiawei Han
Discovering latent topics from text corpora has been studied for decades.
no code implementations • 3 May 2022 • Yun Li, Zhe Liu, Lina Yao, Molly Lucas, Jessica J. M. Monaghan, Yu Zhang
With the development of digital technology, machine learning has paved the way for the next generation of tinnitus diagnoses.
no code implementations • 2 May 2022 • Kejun Chen, Yu Zhang
With an increasing high penetration of solar photovoltaic generation in electric power grids, voltage phasors and branch power flows experience more severe fluctuations.
no code implementations • 29 Apr 2022 • Shourya Bose, Yu Zhang
Distributed energy storage systems (ESSs) can be efficiently leveraged for load restoration (LR) for a microgrid (MG) in island mode.
no code implementations • 27 Apr 2022 • Houliang Zhou, Lifang He, Yu Zhang, Li Shen, Brian Chen
Identification of brain regions related to specific neurological disorders is of great importance for biomarker and diagnostic studies.
no code implementations • 25 Apr 2022 • Xiaochen Li, Rui Zhong, Jian Liang, Xialong Liu, Yu Zhang
Rich user behavior information is of great importance for capturing and understanding user interest in click-through rate (CTR) prediction.
no code implementations • 7 Apr 2022 • Zhehuai Chen, Yu Zhang, Andrew Rosenberg, Bhuvana Ramabhadran, Pedro Moreno, Ankur Bapna, Heiga Zen
Self-supervised learning from speech signals aims to learn the latent structure inherent in the signal, while self-supervised learning from text attempts to capture lexical information.
no code implementations • 5 Apr 2022 • Zhiyun Lu, Yongqiang Wang, Yu Zhang, Wei Han, Zhehuai Chen, Parisa Haghani
Self-supervised learning of speech representations has achieved impressive results in improving automatic speech recognition (ASR).
no code implementations • 4 Apr 2022 • Zhe Sage Chen, Prathamesh Kulkarni, Isaac R. Galatzer-Levy, Benedetta Bigio, Carla Nasca, Yu Zhang
In this review, we provide a comprehensive overview of ML methodologies and applications that combine neuroimaging, neuromodulation, and advanced mobile technologies in psychiatry practice.
no code implementations • 30 Mar 2022 • Kuan Po Huang, Yu-Kuan Fu, Yu Zhang, Hung-Yi Lee
Speech distortions are a long-standing problem that degrades the performance of speech processing models trained with supervision.
1 code implementation • 29 Mar 2022 • Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko, Haizhou Li
LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks with the HuBERT size, achieves comparable performance to the teacher model in most tasks with a reduction of 29% parameters, and obtains a $3.5\times$ compression ratio in three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with a slight accuracy loss.
1 code implementation • 29 Mar 2022 • Zhixue Wang, Yu Zhang, Lin Luo, Nan Wang
This paper proposed a novel anomaly detection (AD) approach of High-speed Train images based on convolutional neural networks and the Vision Transformer.
1 code implementation • 27 Mar 2022 • Baijiong Lin, Yu Zhang
This paper presents LibMTL, an open-source Python library built on PyTorch, which provides a unified, comprehensive, reproducible, and extensible implementation framework for Multi-Task Learning (MTL).
1 code implementation • 27 Mar 2022 • Yu Zhang, Yun Wang, Haidong Zhang, Bin Zhu, Siming Chen, Dongmei Zhang
In this paper, we propose a conceptual framework for data labeling and OneLabeler based on the conceptual framework to support easy building of labeling tools for diverse usage scenarios.
1 code implementation • 26 Mar 2022 • Xuesong Wang, Lina Yao, Islem Rekik, Yu Zhang
Nonetheless, existing contrastive methods generate resemblant pairs only on pixel-level features of 3D medical images, while the functional connectivity that reveals critical cognitive information is under-explored.
no code implementations • 24 Mar 2022 • Ye Jia, Yifan Ding, Ankur Bapna, Colin Cherry, Yu Zhang, Alexis Conneau, Nobuyuki Morioka
End-to-end speech-to-speech translation (S2ST) without relying on intermediate text representations is a rapidly emerging frontier of research.
no code implementations • 21 Mar 2022 • Alexis Conneau, Ankur Bapna, Yu Zhang, Min Ma, Patrick von Platen, Anton Lozhkov, Colin Cherry, Ye Jia, Clara Rivera, Mihir Kale, Daan van Esch, Vera Axelrod, Simran Khanuja, Jonathan H. Clark, Orhan Firat, Michael Auli, Sebastian Ruder, Jason Riesa, Melvin Johnson
Covering 102 languages from 10+ language families, 3 different domains and 4 task families, XTREME-S aims to simplify multilingual speech representation evaluation, as well as catalyze research in "universal" speech representation learning.
no code implementations • 9 Mar 2022 • Donghui Hu, Yu Zhang, Cong Yu, Jian Wang, Yaofei Wang
Image steganography is the art and science of using images as cover for covert communications.
no code implementations • 24 Feb 2022 • Murali Karthick Baskar, Andrew Rosenberg, Bhuvana Ramabhadran, Yu Zhang, Pedro Moreno
They treat all unsupervised speech samples with equal weight, which hinders learning as not all samples have relevant information to learn meaningful representations.
no code implementations • 23 Feb 2022 • Fang Da, Yu Zhang
The success of motion prediction for autonomous driving relies on integration of information from the HD maps.
1 code implementation • 15 Feb 2022 • Long Yang, Jiaming Ji, Juntao Dai, Yu Zhang, Pengfei Li, Gang Pan
Although using bounds as surrogate functions to design safe RL algorithms has appeared in some existing works, we develop them in at least three aspects: (i) We provide a rigorous theoretical analysis to extend the surrogate functions to generalized advantage estimator (GAE).
1 code implementation • 11 Feb 2022 • Yu Zhang, Zhihong Shen, Chieh-Han Wu, Boya Xie, Junheng Hao, Ye-Yi Wang, Kuansan Wang, Jiawei Han
Large-scale multi-label text classification (LMTC) aims to associate a document with its relevant labels from a large candidate set.
1 code implementation • 9 Feb 2022 • Yu Meng, Jiaxin Huang, Yu Zhang, Jiawei Han
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks.
Ranked #4 on Zero-Shot Text Classification on AG News
1 code implementation • 9 Feb 2022 • Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Jiawei Han
Interestingly, there have not been standard approaches to deploy PLMs for topic discovery as better alternatives to topic models.
no code implementations • 3 Feb 2022 • Chung-Cheng Chiu, James Qin, Yu Zhang, Jiahui Yu, Yonghui Wu
In particular the quantizer projects speech inputs with a randomly initialized matrix, and does a nearest-neighbor lookup in a randomly-initialized codebook.
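A hedged sketch of that quantizer: a frozen random projection followed by a Euclidean nearest-neighbor lookup in a frozen random codebook. The dimensions and the RNG seed below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized and frozen, per the abstract: neither the
# projection matrix nor the codebook is ever trained.
DIM_IN, DIM_CODE, VOCAB = 80, 16, 512        # illustrative sizes
projection = rng.normal(size=(DIM_IN, DIM_CODE))
codebook = rng.normal(size=(VOCAB, DIM_CODE))

def quantize(frames):
    """Project each input frame and return the index of its nearest
    codebook entry, yielding discrete targets for self-supervision."""
    projected = frames @ projection                        # (T, DIM_CODE)
    dists = ((projected[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)                            # (T,) indices
```

Each frame of a (T, DIM_IN) utterance is mapped to one of VOCAB discrete labels that the model is then trained to predict.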
no code implementations • 3 Feb 2022 • Ankur Bapna, Colin Cherry, Yu Zhang, Ye Jia, Melvin Johnson, Yong Cheng, Simran Khanuja, Jason Riesa, Alexis Conneau
We present mSLAM, a multilingual Speech and LAnguage Model that learns cross-lingual cross-modal representations of speech and text by pre-training jointly on large amounts of unlabeled speech and text in multiple languages.
Ranked #1 on Spoken language identification on Fleurs (using extra training data)
no code implementations • 20 Jan 2022 • Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou
In this work, we propose LEMON, a general framework for language-based environment manipulation tasks.
no code implementations • 15 Jan 2022 • Xiyu Wang, Pengxin Guo, Yu Zhang
Specifically, in BCAT, we design a weight-sharing quadruple-branch transformer with a bidirectional cross-attention mechanism to learn domain-invariant feature representations.
no code implementations • CVPR 2022 • Yuchen Li, Zixuan Li, Siyu Teng, Yu Zhang, YuHang Zhou, Yuchang Zhu, Dongpu Cao, Bin Tian, Yunfeng Ai, Zhe XuanYuan, Long Chen
The main contributions of the AutoMine dataset are as follows: 1. The first autonomous driving dataset for perception and localization in mine scenarios.
1 code implementation • CVPR 2022 • Hanqing Yang, Sijia Cai, Hualian Sheng, Bing Deng, Jianqiang Huang, Xian-Sheng Hua, Yong Tang, Yu Zhang
In this paper, we introduce the balanced and hierarchical learning for our detector.
1 code implementation • NAACL 2022 • Yueqing Sun, Qi Shi, Le Qi, Yu Zhang
Specifically, JointLK performs joint reasoning between LM and GNN through a novel dense bidirectional attention module, in which each question token attends on KG nodes and each KG node attends on question tokens, and the two modal representations fuse and update mutually by multi-step interactions.
1 code implementation • COLING 2022 • Shilin Zhou, Qingrong Xia, Zhenghua Li, Yu Zhang, Yu Hong, Min Zhang
Moreover, we propose a simple constrained Viterbi procedure to ensure the legality of the output graph according to the constraints of the SRL structure.
no code implementations • 5 Dec 2021 • Jiaxuan Xie, Jianxiong Wei, Qingsong Hua, Yu Zhang
User modeling plays a fundamental role in industrial recommender systems, in both the matching stage and the ranking stage, in terms of both the customer experience and business revenue.
1 code implementation • 4 Dec 2021 • Feng Xu, Chuang Zhu, Wenqi Tang, Ying Wang, Yu Zhang, Jie Li, Hongchuan Jiang, Zhongyue Shi, Jun Liu, Mulan Jin
Conclusion: Our study provides a novel DL-based biomarker on primary tumor CNB slides to predict the metastatic status of ALN preoperatively for patients with EBC.
no code implementations • NeurIPS 2021 • Weisen Jiang, James Kwok, Yu Zhang
We study the problem of meta-learning, which has proved to be advantageous to accelerate learning new tasks with a few samples.
no code implementations • 29 Nov 2021 • Hanqi Zhu, Jiajun Deng, Yu Zhang, Jianmin Ji, Qiuyu Mao, Houqiang Li, Yanyong Zhang
However, this approach often suffers from the mismatch between the resolution of point clouds and RGB images, leading to sub-optimal performance.
1 code implementation • 20 Nov 2021 • Baijiong Lin, Feiyang Ye, Yu Zhang, Ivor W. Tsang
Multi-Task Learning (MTL) has achieved success in various fields.
no code implementations • 20 Nov 2021 • Zhixiong Yue, Feiyang Ye, Yu Zhang, Christy Liang, Ivor W. Tsang
We theoretically study the safeness of both learning strategies in the DSMTL model to show that the proposed methods can achieve some versions of safe multi-task learning.
no code implementations • 15 Nov 2021 • Junwen Bai, Bo Li, Yu Zhang, Ankur Bapna, Nikhil Siddhartha, Khe Chai Sim, Tara N. Sainath
Our average WER of all languages outperforms average monolingual baseline by 33.3%, and the state-of-the-art 2-stage XLSR by 32%.
1 code implementation • 12 Nov 2021 • Yu Zhang, Wei Wei, Binxuan Huang, Kathleen M. Carley, Yan Zhang
Real-time location inference of social media users is fundamental to spatial applications such as localized search and event detection.
1 code implementation • 7 Nov 2021 • Yu Zhang, Shweta Garg, Yu Meng, Xiusi Chen, Jiawei Han
We study the problem of weakly supervised text classification, which aims to classify text documents into a set of pre-defined categories with category surface names only and without any annotated training document provided.
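As a naive point of contrast (a hypothetical baseline, not the paper's method), classification from category surface names alone could look like the following; the example document and category names are made up:

```python
def surface_name_classify(document, categories):
    """Assign the category whose surface name occurs most often in the
    document. A hypothetical zero-supervision baseline; real
    weakly supervised methods go well beyond literal name matching."""
    tokens = document.lower().split()
    counts = {c: tokens.count(c.lower()) for c in categories}
    return max(counts, key=counts.get)

doc = "markets rallied as business news spread; more business updates"
print(surface_name_classify(doc, ["business", "sports"]))  # → business
```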
no code implementations • 3 Nov 2021 • Shourya Bose, Yu Zhang
In this paper, we consider the problem of load restoration in a microgrid (MG) that is islanded from the upstream DS because of an extreme weather event.
no code implementations • 25 Oct 2021 • Jing Lin, Yu Zhang, Edwin Khoo
Advancing lithium-ion batteries (LIBs) in both design and usage is key to promoting electrification in the coming decades to mitigate human-caused climate change.
no code implementations • 25 Oct 2021 • Yu Zhang, Chen Zhang, Renxin Yang, Jing Lyu, Li Liu, Xu Cai
MMC-HVDC-connected offshore wind farms (OWFs) can suffer short-circuit faults (SCFs), yet their transient stability is not well analysed.
no code implementations • 20 Oct 2021 • Ankur Bapna, Yu-An Chung, Nan Wu, Anmol Gulati, Ye Jia, Jonathan H. Clark, Melvin Johnson, Jason Riesa, Alexis Conneau, Yu Zhang
We build a single encoder with the BERT objective on unlabeled text together with the w2v-BERT objective on unlabeled speech.
no code implementations • 18 Oct 2021 • Yu Zhang, Gongbo Liang, Nathan Jacobs
Most research on domain adaptation has focused on the purely unsupervised setting, where no labeled examples in the target domain are available.
no code implementations • 14 Oct 2021 • Ziyang Wang, Yunhao Gou, Jingjing Li, Yu Zhang, Yang Yang
Zero-shot learning (ZSL) aims to recognize unseen classes based on the knowledge of seen classes.
1 code implementation • ACL 2022 • Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning.
1 code implementation • COLING 2022 • Yu Zhang, Qingrong Xia, Shilin Zhou, Yong Jiang, Guohong Fu, Min Zhang
Semantic role labeling (SRL) is a fundamental yet challenging task in the NLP community.
Ranked #1 on Semantic Role Labeling on OntoNotes
no code implementations • 11 Oct 2021 • Rui Wang, Junyi Ao, Long Zhou, Shujie Liu, Zhihua Wei, Tom Ko, Qing Li, Yu Zhang
In this work, we propose a novel multi-view self-attention mechanism and present an empirical study of different Transformer variants with or without the proposed attention mechanism for speaker recognition.
no code implementations • 9 Oct 2021 • Joel Shor, Aren Jansen, Wei Han, Daniel Park, Yu Zhang
Many speech applications require understanding aspects beyond the words being spoken, such as recognizing emotion, detecting whether the speaker is wearing a mask, or distinguishing real from synthetic speech.
no code implementations • 7 Oct 2021 • Qiujia Li, Yu Zhang, David Qiu, Yanzhang He, Liangliang Cao, Philip C. Woodland
As end-to-end automatic speech recognition (ASR) models reach promising performance, various downstream tasks rely on good confidence estimators for these systems.
no code implementations • 30 Sep 2021 • John Z. Zhang, Yu Zhang, Pingchuan Ma, Elvis Nava, Tao Du, Philip Arm, Wojciech Matusik, Robert K. Katzschmann
Accurate simulation of soft mechanisms under dynamic actuation is critical for the design of soft robots.
no code implementations • 29 Sep 2021 • Weisen Jiang, James Kwok, Yu Zhang
We propose a MUlti-Subspace structured Meta-Learning (MUSML) algorithm to learn the subspace bases.
no code implementations • 27 Sep 2021 • Hualong Tang, Joseph Post, Achilleas Kourtellis, Brian Porter, Yu Zhang
The results show that a background subtraction-based method can achieve good detection performance on RGB images (F1 scores around 0.9 for most cases), and a more varied performance is seen on thermal images with different azimuth angles.
no code implementations • 27 Sep 2021 • Yu Zhang, Daniel S. Park, Wei Han, James Qin, Anmol Gulati, Joel Shor, Aren Jansen, Yuanzhong Xu, Yanping Huang, Shibo Wang, Zongwei Zhou, Bo Li, Min Ma, William Chan, Jiahui Yu, Yongqiang Wang, Liangliang Cao, Khe Chai Sim, Bhuvana Ramabhadran, Tara N. Sainath, Françoise Beaufays, Zhifeng Chen, Quoc V. Le, Chung-Cheng Chiu, Ruoming Pang, Yonghui Wu
We summarize the results of a host of efforts using giant automatic speech recognition (ASR) models pre-trained using large, diverse unlabeled datasets containing approximately a million hours of audio.
no code implementations • 26 Sep 2021 • Yu Zhang, Xiaoguang Di, Shiyu Yan, Bin Zhang, Baoling Qi, Chunhui Wang
This paper proposes a simple self-calibration method for the internal time synchronization of MEMS (micro-electromechanical systems) LiDAR during research and development.
no code implementations • 19 Sep 2021 • Shijie Chen, Yu Zhang, Qiang Yang
Deep learning approaches have achieved great success in the field of Natural Language Processing (NLP).
no code implementations • 18 Sep 2021 • Akkamahadevi Hanni, Yu Zhang
In our experimental evaluation, we verify that our approach generates more efficient explicable plans while successfully capturing the dynamic belief change of the human teammate.
1 code implementation • EMNLP 2021 • Qi Shi, Yu Zhang, Qingyu Yin, Ting Liu
Specifically, we first retrieve logic-level program-like evidence from the given table and statement as supplementary evidence for the table.
no code implementations • 12 Sep 2021 • Zhixiong Yue, Pengxin Guo, Yu Zhang
Based on the PC function, we propose a new method called Domain Adaptation by Maximizing Population Correlation (DAMPC) to learn a domain-invariant feature representation for DA.
1 code implementation • EMNLP 2021 • Yu Meng, Yunyi Zhang, Jiaxin Huang, Xuan Wang, Yu Zhang, Heng Ji, Jiawei Han
We study the problem of training named entity recognition (NER) models using only distantly-labeled data, which can be automatically obtained by matching entity mentions in the raw text with entity types in a knowledge base.
no code implementations • 27 Aug 2021 • Zhehuai Chen, Yu Zhang, Andrew Rosenberg, Bhuvana Ramabhadran, Gary Wang, Pedro Moreno
The proposed method, tts4pretrain complements the power of contrastive learning in self-supervision with linguistic/lexical representations derived from synthesized speech, effectively learning from untranscribed speech and unspoken text.
no code implementations • 25 Aug 2021 • Jing Xiong, Pengyang Zhou, Alan Chen, Yu Zhang
Then, a decoder with hierarchical temporal attention enables a similar day selection, which re-evaluates the importance of historical information at each time step.
no code implementations • 24 Aug 2021 • Gabriel Intriago, Yu Zhang
This paper deals with the event and intrusion detection problem by leveraging a stream data mining classifier (Hoeffding adaptive tree) with semi-supervised learning techniques to distinguish cyber-attacks from regular system perturbations accurately.
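As a rough, non-streaming stand-in for the semi-supervised setup described above (the paper uses a Hoeffding adaptive tree over streaming data, which this sketch does not replicate), scikit-learn's self-training wrapper shows how a tree classifier can be trained when most samples are unlabeled. The data is synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Synthetic measurement features: class 0 = regular perturbation, class 1 = attack.
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(3.0, 1.0, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

# Hide most labels; scikit-learn marks unlabeled samples with -1.
y_semi = y.copy()
y_semi[rng.random(200) < 0.8] = -1

# Self-training wraps a base classifier and pseudo-labels its confident predictions.
clf = SelfTrainingClassifier(DecisionTreeClassifier(max_depth=3, random_state=0))
clf.fit(X, y_semi)
print(clf.score(X, y))
```

A streaming deployment would instead update the tree one sample at a time (e.g., with a Hoeffding-tree implementation), which is what makes the approach suitable for live grid measurements.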
no code implementations • 20 Aug 2021 • Sifat Chowdhury, Kai Zhu, Yu Zhang
Over the past decade, the number of wildfires has increased significantly around the world, especially in the State of California.
no code implementations • 13 Aug 2021 • Yu'an Chen, Ruosong Ye, Ziyang Tao, Hongjian Liu, Guangda Chen, Jie Peng, Jun Ma, Yu Zhang, Yanyong Zhang, Jianmin Ji
Deep reinforcement learning (DRL) algorithms have proven effective in robot navigation, especially in unknown environments, through directly mapping perception inputs into robot control commands.
no code implementations • 7 Aug 2021 • Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, Yonghui Wu
In particular, when compared to published models such as conformer-based wav2vec 2.0 and HuBERT, our model shows 5% to 10% relative WER reduction on the test-clean and test-other subsets.
Ranked #1 on Speech Recognition on LibriSpeech test-other (using extra training data)
1 code implementation • ICCV 2021 • Yu Zhang, Chang-Bin Zhang, Peng-Tao Jiang, Ming-Ming Cheng, Feng Mao
In this paper, we address the problem of personalized image segmentation.
1 code implementation • 6 Jul 2021 • Xiaomeng Chu, Jiajun Deng, Yao Li, Zhenxun Yuan, Yanyong Zhang, Jianmin Ji, Yu Zhang
As cameras are increasingly deployed in new application domains such as autonomous driving, performing 3D object detection on monocular images becomes an important task for visual scene understanding.
no code implementations • 24 Jun 2021 • Yingjie Wang, Qiuyu Mao, Hanqi Zhu, Yu Zhang, Jianmin Ji, Yanyong Zhang
In this survey, we first introduce the background of popular sensors for autonomous cars, including their common data representations as well as object detection networks developed for each type of sensor data.
no code implementations • CVPR 2021 • Luwei Hou, Yu Zhang, Kui Fu, Jia Li
Cross-domain weakly supervised object detection aims to adapt object-level knowledge from a fully labeled source domain dataset (i.e., with object bounding boxes) to train object detectors for target domains that are weakly labeled (i.e., with image-level tags).
Ranked #4 on Weakly Supervised Object Detection on Clipart1k
no code implementations • CVPR 2021 • Yu Zhang, Daniel Lau, David Wipf
Three-dimensional scanning by means of structured light illumination is an active imaging technique: a series of striped patterns is projected and captured, and the observed warping of the stripes is used to reconstruct the target object's surface by triangulating each camera pixel to a unique projector coordinate corresponding to a particular feature in the projected patterns.
2 code implementations • 17 Jun 2021 • Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, Najim Dehak, William Chan
The model takes an input phoneme sequence, and through an iterative refinement process, generates an audio waveform.
no code implementations • 7 Jun 2021 • Yu Zhang, Guoming Tang, Qianyi Huang, Yi Wang, Xudong Wang, Jiadong Lou
Non-intrusive load monitoring (NILM) helps disaggregate the household's main electricity consumption to energy usages of individual appliances, thus greatly cutting down the cost in fine-grained household load monitoring.
1 code implementation • 6 Jun 2021 • Yang Li, Hong Zhang, Yu Zhang
The ImageNet pre-training initialization is the de-facto standard for object detection.
no code implementations • 1 Jun 2021 • Yu Zhang, Guoming Tang, Qianyi Huang, Yi Wang, Hong Xu
Non-intrusive load monitoring (NILM) is a well-known single-channel blind source separation problem that aims to decompose the household energy consumption into itemised energy usage of individual appliances.
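A minimal illustration of the source-separation framing above, assuming hypothetical known appliance signatures (the genuinely "blind" setting must also learn these, and the paper's model is far richer): decompose an aggregate mains reading into per-appliance activations with non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

# Toy appliance power signatures (watts) over a 6-sample window (hypothetical values).
fridge = np.array([100.0, 100.0,    0.0,   0.0, 100.0, 100.0])
kettle = np.array([  0.0, 2000.0, 2000.0,  0.0,   0.0,   0.0])
tv     = np.array([150.0, 150.0,  150.0, 150.0,  0.0,   0.0])
S = np.column_stack([fridge, kettle, tv])  # signature matrix, one column per appliance

# Aggregate mains reading: fridge and kettle on, TV off.
aggregate = fridge + kettle

# Non-negative least squares recovers per-appliance activation weights.
weights, _ = nnls(S, aggregate)
print(np.round(weights, 2))  # ≈ [1, 1, 0]
```

Non-negativity is the key physical constraint here: an appliance cannot contribute negative power, which is why plain least squares is not the right tool.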
no code implementations • 23 May 2021 • Yu Zhang, Chen Zhang, Xu Cai
Grid-synchronization stability (GSS) is an emerging stability issue of grid-tied voltage source converters (VSCs), which can be provoked by severe grid voltage sags.
no code implementations • 30 Apr 2021 • Bo Li, Ruoming Pang, Tara N. Sainath, Anmol Gulati, Yu Zhang, James Qin, Parisa Haghani, W. Ronny Huang, Min Ma, Junwen Bai
Building ASR models across many languages is a challenging multi-task learning problem due to large variations and heavily unbalanced data.
no code implementations • 26 Apr 2021 • David Qiu, Yanzhang He, Qiujia Li, Yu Zhang, Liangliang Cao, Ian McGraw
Confidence scores are very useful for downstream applications of automatic speech recognition (ASR) systems.
no code implementations • 18 Apr 2021 • Huangbin Zhang, Chong Zhao, Yu Zhang, Danlei Wang, Haichao Yang
DLEN is deployed on a real-world multi-task feed recommendation scenario of Tencent QQ-Small-World with a dataset containing over a billion samples. It exhibits a significant performance advantage over the SOTA MTL model in offline evaluation, together with considerable increases of 3.02% in view-count and 2.63% in user stay-time in production.
no code implementations • 16 Apr 2021 • Yu Zhang, Moming Duan, Duo Liu, Li Li, Ao Ren, Xianzhang Chen, Yujuan Tan, Chengliang Wang
Asynchronous FL has a natural advantage in mitigating the straggler effect, but there are threats of model quality degradation and server crash.
no code implementations • 15 Apr 2021 • Li Li, Moming Duan, Duo Liu, Yu Zhang, Ao Ren, Xianzhang Chen, Yujuan Tan, Chengliang Wang
In our framework, the server evaluates devices' value of training based on their training loss.
no code implementations • 7 Apr 2021 • Edwin G. Ng, Chung-Cheng Chiu, Yu Zhang, William Chan
We combine recent advancements in end-to-end speech recognition to non-autoregressive automatic speech recognition.
no code implementations • 6 Apr 2021 • Zhiyun Lu, Wei Han, Yu Zhang, Liangliang Cao
To attack RNN-T, we find prepending perturbation is more effective than the additive perturbation, and can mislead the models to predict the same short target on utterances of arbitrary length.
no code implementations • 5 Apr 2021 • William Chan, Daniel Park, Chris Lee, Yu Zhang, Quoc Le, Mohammad Norouzi
We present SpeechStew, a speech recognition model that is trained on a combination of various publicly available speech recognition datasets: AMI, Broadcast News, Common Voice, LibriSpeech, Switchboard/Fisher, Tedlium, and Wall Street Journal.
Ranked #1 on Speech Recognition on CHiME-6 eval
no code implementations • 2 Apr 2021 • Yu Zhang, Martijn Tennekes, Tim De Jong, Lyana Curier, Bob Coecke, Min Chen
Because QA4ML users have to view a non-trivial amount of data and perform many input actions to correct errors made by the ML model, an optimally-designed user interface (UI) can reduce the cost of interactions significantly.
no code implementations • 28 Mar 2021 • Xingyu Li, Difan Song, Miaozhe Han, Yu Zhang, Rene F. Kizilcec
We tested how well predictive models of human behavior trained in a developed country generalize to people in less developed countries by modeling global variation in 200 predictors of academic achievement on nationally representative student data for 65 countries.
1 code implementation • 28 Mar 2021 • Ye Jia, Heiga Zen, Jonathan Shen, Yu Zhang, Yonghui Wu
This paper introduces PnG BERT, a new encoder model for neural TTS.
no code implementations • 25 Mar 2021 • Qiujia Li, Yu Zhang, Bo Li, Liangliang Cao, Philip C. Woodland
End-to-end models with auto-regressive decoders have shown impressive results for automatic speech recognition (ASR).
no code implementations • IEEE Transactions on Intelligent Transportation Systems 2021 • Quan Tang, Fagui Liu, Jun Jiang, Yu Zhang
Current scene segmentation methods suffer from cumbersome model structures and high computational complexity, impeding their applications to real-world scenarios that require real-time processing.
no code implementations • 11 Mar 2021 • David Qiu, Qiujia Li, Yanzhang He, Yu Zhang, Bo Li, Liangliang Cao, Rohit Prabhavalkar, Deepti Bhatia, Wei Li, Ke Hu, Tara N. Sainath, Ian McGraw
We study the problem of word-level confidence estimation in subword-based end-to-end (E2E) models for automatic speech recognition (ASR).
no code implementations • 4 Mar 2021 • Qingshun Hu, Yu Zhang, Ali Esamdin, Jinzhong Liu, Xiangyun Zeng
A significant negative correlation between the overall ellipticities and masses is also detected for the sample clusters with log(age/year) $\geq$ 8, suggesting that the overall shapes of the clusters are possibly influenced by the number of members and masses, in addition to the external forces and the surrounding environment.
Astrophysics of Galaxies • Solar and Stellar Astrophysics
1 code implementation • 1 Mar 2021 • Yu Zhang, Xiaoguang Di, Bin Zhang, Qingyan Li, Shiyu Yan, Chunhui Wang
Both of the networks can be trained with low light images only, which is achieved by a Maximum Entropy based Retinex (ME-Retinex) model and an assumption that noises are independently distributed.
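For context on the Retinex framing, here is the classic single-scale Retinex decomposition, which estimates illumination by Gaussian smoothing and takes reflectance as the log-domain residual. This is a generic baseline, not the paper's Maximum Entropy based Retinex (ME-Retinex) model; the toy image is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=15.0, eps=1e-6):
    """reflectance = log(image) - log(illumination), illumination ~ Gaussian blur."""
    img = img.astype(np.float64) + eps
    illumination = gaussian_filter(img, sigma) + eps
    reflectance = np.log(img) - np.log(illumination)
    # Rescale reflectance to [0, 1] for display.
    reflectance -= reflectance.min()
    return reflectance / (reflectance.max() + eps)

# Toy low-light image: a dim gradient with a brighter patch.
img = np.tile(np.linspace(0.05, 0.2, 64), (64, 1))
img[24:40, 24:40] += 0.3
out = single_scale_retinex(img)
print(out.shape, float(out.min()), float(out.max()))
```

The paper's contribution is to replace this fixed smoothing prior with a learned, maximum-entropy formulation so that the networks can be trained on low-light images alone.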
no code implementations • 25 Feb 2021 • Yu Zhang, Xuelu Wu, Hong Peng, Caijun Zhong, Xiaoming Chen
This letter studies a cloud radio access network (C-RAN) with multiple intelligent reflecting surfaces (IRS) deployed between users and remote radio heads (RRH).
no code implementations • 18 Feb 2021 • Harsh Shrivastava, Ankush Garg, Yuan Cao, Yu Zhang, Tara Sainath
We propose automatic speech recognition (ASR) models inspired by echo state network (ESN), in which a subset of recurrent neural networks (RNN) layers in the models are randomly initialized and untrained.
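The echo-state idea above — keep the recurrent weights random and frozen, train only a linear readout — can be sketched in a few lines of NumPy. This is a generic ESN on random toy data, not the paper's ASR architecture, which applies the idea to a subset of RNN layers inside a speech model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res, n_out = 3, 100, 2
W_in = rng.normal(0, 0.5, (n_res, n_in))         # untrained input weights
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def reservoir_states(inputs):
    """Run the frozen reservoir over an input sequence and collect its states."""
    h = np.zeros(n_res)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W @ h)  # recurrent update; weights stay frozen
        states.append(h)
    return np.array(states)

# Train only the linear readout, via ridge regression on toy targets.
T = 200
inputs = rng.normal(size=(T, n_in))
targets = rng.normal(size=(T, n_out))
H = reservoir_states(inputs)
W_out = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_res), H.T @ targets)
pred = H @ W_out
print(pred.shape)
```

Keeping the spectral radius below 1 is the standard way to encourage the echo-state property (fading memory of past inputs), and training only the readout is what makes the approach cheap.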
no code implementations • 18 Feb 2021 • Yu Zhang, Muhammad Alrabeiah, Ahmed Alkhateeb
Employing large antenna arrays is a key characteristic of millimeter wave (mmWave) and terahertz communication systems.
1 code implementation • 15 Feb 2021 • Yu Zhang, Zhihong Shen, Yuxiao Dong, Kuansan Wang, Jiawei Han
Multi-label text classification refers to the problem of assigning each given document its most relevant labels from the label set.
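The multi-label setting above differs from multi-class in that each document may receive several labels at once. A minimal one-vs-rest baseline (not the paper's method, which targets very large label sets) on made-up data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Tiny corpus where each document carries one or more labels (illustrative data).
docs = [
    "neural networks for image recognition",
    "stochastic gradient descent convergence proof",
    "convolutional neural networks trained with gradient descent",
]
label_sets = [["deep-learning"], ["optimization"], ["deep-learning", "optimization"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(label_sets)  # one binary indicator column per label
X = TfidfVectorizer().fit_transform(docs)

# One-vs-rest reduces multi-label prediction to independent binary classifiers.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))
```

With hundreds of thousands of labels (the regime the paper addresses), one independent classifier per label stops scaling, which is what motivates structured or metadata-aware approaches.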
no code implementations • NeurIPS 2021 • Feiyang Ye, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, Yu Zhang
Empirically, we show the effectiveness of the proposed MOML framework in several meta learning problems, including few-shot learning, neural architecture search, domain adaptation, and multi-task learning.
no code implementations • 5 Feb 2021 • Hantao Wang, Huajun Zhang, Mingyuan Ren, Jinren Yao, Yu Zhang
Under the impact of an infinitely extended edge phase dislocation, optical vortices (screw phase dislocations) induce scintillation enhancement.
Optics
no code implementations • 2 Feb 2021 • Weiheng Jiang, Yu Zhang, Jun Zhao, Zehui Xiong, Zhiguo Ding
Cognitive radio (CR) is an effective solution to improve the spectral efficiency (SE) of wireless communications by allowing the secondary users (SUs) to share spectrum with primary users (PUs).
Information Theory • Signal Processing
no code implementations • 22 Jan 2021 • Neng-Chang Wei, Yu Zhang, Fei Huang, De-Min Li
In addition to the $t$-channel $K$ and $K^\ast$ exchanges, the $u$-channel $\Lambda$ exchange, the $s$-channel nucleon exchange, and the interaction current, a minimal number of nucleon resonances in the $s$ channel are introduced in constructing the reaction amplitudes to describe the data.
High Energy Physics - Phenomenology • Nuclear Theory
no code implementations • 12 Jan 2021 • Zhengqing Zhou, Zhiheng Zhao, Shuyu Shi, Jianghua Wu, Dianjie Li, Jianwei Li, Jingpeng Zhang, Ke Gui, Yu Zhang, Heng Mei, Yu Hu, Qi Ouyang, Fangting Li
Integrating theoretical results with clinical COVID-19 patients' data, we classified the COVID-19 development processes into three typical modes of immune responses, correlated with the clinical classification of mild & moderate, severe and critical patients.
no code implementations • 12 Jan 2021 • Xiaocong Chen, Yun Li, Lina Yao, Ehsan Adeli, Yu Zhang
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.