no code implementations • EMNLP 2021 • Sheng Zhang, Xin Zhang, Weiming Zhang, Anders Søgaard
Using data from English cloze tests, in which subjects also self-reported their gender, age, education, and race, we examine performance differences of pretrained language models across demographic groups, defined by these (protected) attributes.
no code implementations • 28 Nov 2023 • Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, Renqian Luo, Scott Mayer McKinney, Robert Osazuwa Ness, Hoifung Poon, Tao Qin, Naoto Usuyama, Chris White, Eric Horvitz
We find that prompting innovation can unlock deeper specialist capabilities and show that GPT-4 easily tops prior leading results for medical benchmarks.
no code implementations • 28 Nov 2023 • Menglin Shi, Sheng Zhang, Gia-Wei Chern
Metallic spin glass systems, such as dilute magnetic alloys, are characterized by randomly distributed local moments coupled to each other through a long-range electron-mediated effective interaction.
no code implementations • 25 Nov 2023 • Sheng Zhang, Hui Li, Yanlin Wang, Zhao Wei, Yong Xiu, Juhong Wang, Rongong Ji
To mitigate biases, we develop a general debiasing framework that employs reranking to calibrate search results.
no code implementations • 16 Nov 2023 • Yiqing Xie, Sheng Zhang, Hao Cheng, Zelalem Gero, Cliff Wong, Tristan Naumann, Hoifung Poon
In the evaluation of medical text generation, it is essential to scrutinize each piece of information and ensure the utmost accuracy of the evaluation.
1 code implementation • 2 Nov 2023 • Yingying Fang, Shuang Wu, Sheng Zhang, Chaoyan Huang, Tieyong Zeng, Xiaodan Xing, Simon Walsh, Guang Yang
Specifically, our information bottleneck module serves to filter out the task-irrelevant information and noises in the fused feature, and we further introduce a sufficiency loss to prevent dropping of task-relevant information, thus explicitly preserving the sufficiency of prediction information in the distilled feature.
no code implementations • 16 Oct 2023 • Yu Gu, Jianwei Yang, Naoto Usuyama, Chunyuan Li, Sheng Zhang, Matthew P. Lungren, Jianfeng Gao, Hoifung Poon
In a comprehensive battery of tests on counterfactual medical image generation, BiomedJourney substantially outperforms prior state-of-the-art methods in instruction-guided image editing and medical image generation, such as InstructPix2Pix and RoentGen.
no code implementations • 10 Oct 2023 • Danni Yang, Yun Ji, Zhoubin Kou, Xiaoxiong Zhong, Sheng Zhang
To address the challenges posed by the heterogeneity inherent in federated learning (FL) and to attract high-quality clients, various incentive mechanisms have been employed.
1 code implementation • 24 Aug 2023 • Sheng Zhang, Muzammal Naseer, Guangyi Chen, Zhiqiang Shen, Salman Khan, Kun Zhang, Fahad Khan
To address this challenge, we propose the Self Structural Semantic Alignment (S^3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning.
no code implementations • 7 Aug 2023 • Wenxuan Zhou, Sheng Zhang, Yu Gu, Muhao Chen, Hoifung Poon
Instruction tuning has proven effective for distilling LLMs into more cost-efficient models such as Alpaca and Vicuna.
Ranked #1 on Named Entity Recognition on ACE 2005
no code implementations • 4 Aug 2023 • Cliff Wong, Sheng Zhang, Yu Gu, Christine Moung, Jacob Abel, Naoto Usuyama, Roshanthi Weerasinghe, Brian Piening, Tristan Naumann, Carlo Bifulco, Hoifung Poon
Clinical trial matching is a key process in health delivery and discovery.
no code implementations • 12 Jul 2023 • Yu Gu, Sheng Zhang, Naoto Usuyama, Yonas Woldesenbet, Cliff Wong, Praneeth Sanapathi, Mu Wei, Naveen Valluri, Erika Strandberg, Tristan Naumann, Hoifung Poon
We find that while LLMs already possess decent competency in structuring biomedical text, substantial gains can be attained over out-of-the-box LLMs by distilling them into a task-specific student model through self-supervised learning, with additional advantages such as cost, efficiency, and white-box model access.
no code implementations • 30 May 2023 • Xingyu Fu, Sheng Zhang, Gukyeong Kwon, Pramuditha Perera, Henghui Zhu, Yuhao Zhang, Alexander Hanbo Li, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Dan Roth, Bing Xiang
The open-ended Visual Question Answering (VQA) task requires AI models to jointly reason over visual and natural language inputs using world knowledge.
no code implementations • 27 May 2023 • Sijia Wang, Alexander Hanbo Li, Henry Zhu, Sheng Zhang, Chung-Wei Hang, Pramuditha Perera, Jie Ma, William Wang, Zhiguo Wang, Vittorio Castelli, Bing Xiang, Patrick Ng
Entities can be expressed in diverse formats, such as texts, images, or column names and cell values in tables.
1 code implementation • 25 May 2023 • Wuwei Lan, Zhiguo Wang, Anuj Chauhan, Henghui Zhu, Alexander Li, Jiang Guo, Sheng Zhang, Chung-Wei Hang, Joseph Lilien, Yiqun Hu, Lin Pan, Mingwen Dong, Jun Wang, Jiarong Jiang, Stephen Ash, Vittorio Castelli, Patrick Ng, Bing Xiang
A practical text-to-SQL system should generalize well on a wide variety of natural language questions, unseen database schemas, and novel SQL query structures.
no code implementations • 6 May 2023 • Zhoubin Kou, Yun Ji, Xiaoxiong Zhong, Sheng Zhang
However, existing FEEL systems with the AirComp scheme often employ traditional synchronous aggregation mechanisms for local model aggregation in each global round, which suffer from straggler issues.
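One common asynchronous alternative to synchronous aggregation down-weights stale client updates so stragglers cannot hold up or distort the global model. The staleness-decay rule and function name below are illustrative assumptions, not the paper's specific scheme:

```python
import numpy as np

def staleness_weighted_update(global_w, client_w, staleness, base_lr=0.5):
    """Mix a (possibly stale) client model into the global model.

    The mixing coefficient decays with staleness, so updates arriving
    from stragglers move the global model less than fresh updates.
    """
    alpha = base_lr / (1.0 + staleness)
    return (1.0 - alpha) * np.asarray(global_w) + alpha * np.asarray(client_w)
```

A fresh update (staleness 0) mixes with coefficient `base_lr`; an update that is four rounds late mixes with only a fifth of that weight.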
no code implementations • 11 Apr 2023 • Zhongzheng Tian, Sheng Zhang, Gia-Wei Chern
Based on the locality assumption, an ML model is developed to predict intensive properties of a finite-size block.
no code implementations • 30 Mar 2023 • Zhe Liu, Zhou Chen, Qi Wang, Sheng Zhang, Yunjie Yang
The results suggest that combining the shallow image prior and the hand-crafted regularization can achieve similar performance to the Deep Image Prior (DIP) but with less architectural dependency and complexity of the neural network.
no code implementations • 23 Mar 2023 • Fangyu Liu, Qianchu Liu, Shruthi Bannur, Fernando Pérez-García, Naoto Usuyama, Sheng Zhang, Tristan Naumann, Aditya Nori, Hoifung Poon, Javier Alvarez-Valle, Ozan Oktay, Stephanie L. Hyland
We evaluate DoT5 on the biomedical domain and the resource-lean subdomain of radiology, focusing on NLI, text summarisation and embedding learning.
1 code implementation • 20 Mar 2023 • Wenxuan Zhou, Sheng Zhang, Hoifung Poon, Muhao Chen
However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks).
no code implementations • 6 Mar 2023 • Chen Cheng, Sheng Zhang, Gia-Wei Chern
We present a machine learning (ML) framework for large-scale dynamical simulations of charge density wave (CDW) states.
2 code implementations • 2 Mar 2023 • Sheng Zhang, Yanbo Xu, Naoto Usuyama, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Matthew P. Lungren, Tristan Naumann, Hoifung Poon
Our dataset (PMC-15M) is two orders of magnitude larger than existing biomedical image-text datasets such as MIMIC-CXR, and spans a diverse range of biomedical images.
Ranked #2 on Medical Visual Question Answering on SLAKE-English
1 code implementation • 21 Jan 2023 • Shuaichen Chang, Jun Wang, Mingwen Dong, Lin Pan, Henghui Zhu, Alexander Hanbo Li, Wuwei Lan, Sheng Zhang, Jiarong Jiang, Joseph Lilien, Steve Ash, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Bing Xiang
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries.
no code implementations • 21 Dec 2022 • Wenxuan Zhou, Sheng Zhang, Tristan Naumann, Muhao Chen, Hoifung Poon
In this paper, we aim at bridging the gap and propose to pretrain and finetune the RE model using consistent objectives of contrastive learning.
no code implementations • 17 Dec 2022 • Yiyun Zhao, Jiarong Jiang, Yiqun Hu, Wuwei Lan, Henry Zhu, Anuj Chauhan, Alexander Li, Lin Pan, Jun Wang, Chung-Wei Hang, Sheng Zhang, Marvin Dong, Joe Lilien, Patrick Ng, Zhiguo Wang, Vittorio Castelli, Bing Xiang
In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data.
1 code implementation • CVPR 2023 • Sheng Zhang, Salman Khan, Zhiqiang Shen, Muzammal Naseer, Guangyi Chen, Fahad Khan
The GNCD setting aims to categorize unlabeled training data coming from known and novel classes by leveraging the information of partially labeled known classes.
2 code implementations • 19 Oct 2022 • Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, Tie-Yan Liu
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain.
Ranked #1 on Question Answering on PubMedQA
1 code implementation • 30 Sep 2022 • Donghan Yu, Sheng Zhang, Patrick Ng, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Yiqun Hu, William Wang, Zhiguo Wang, Bing Xiang
Question answering over knowledge bases (KBs) aims to answer natural language questions with factual information such as entities and relations in KBs.
1 code implementation • 30 Aug 2022 • Sheng Zhang, Hao Cheng, Jianfeng Gao, Hoifung Poon
We present a bi-encoder framework for named entity recognition (NER), which applies contrastive learning to map candidate text spans and entity types into the same vector representation space.
Ranked #1 on Named Entity Recognition (NER) on BC5CDR
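The bi-encoder idea of mapping spans and entity types into a shared vector space can be sketched minimally. Here a toy deterministic character-trigram encoder stands in for the learned span and type encoders; the `embed`/`link_span` names and the hashing trick are illustrative assumptions, not the paper's model:

```python
import zlib
import numpy as np

def embed(text, dim=64):
    """Toy deterministic encoder: mean of hashed character-trigram
    vectors, L2-normalized. A stand-in for a learned encoder."""
    vecs = []
    for i in range(max(len(text) - 2, 1)):
        tri = text[i:i + 3]
        # crc32 gives a stable seed per trigram, so embeddings are reproducible.
        rng = np.random.default_rng(zlib.crc32(tri.encode()))
        vecs.append(rng.standard_normal(dim))
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def link_span(span, type_names):
    """Bi-encoder scoring: cosine similarity between the span embedding
    and each entity-type embedding; return the best-scoring type."""
    s = embed(span)
    return max(type_names, key=lambda t: float(s @ embed(t)))
```

In the real framework both encoders are trained with a contrastive objective so that gold (span, type) pairs score higher than negatives; the retrieval step at inference is the same nearest-type lookup shown here.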
2 code implementations • 23 Jun 2022 • Chen Lin, Si Chen, Meifang Zeng, Sheng Zhang, Min Gao, Hui Li
Leg-UP learns user behavior patterns from real users in the sampled "templates" and constructs fake user profiles.
no code implementations • 10 Jun 2022 • Sheng Zhang, Patrick Ng, Zhiguo Wang, Bing Xiang
Our generative model is a unified framework to sequentially generate relational triplets under various relation extraction settings and explicitly utilizes relevant knowledge from Knowledge Graph (KG) to resolve ambiguities.
no code implementations • 10 May 2022 • Sheng Zhang, Guang Lin, Samy Tindel
We introduce a proper notion of 2-dimensional signature for images.
no code implementations • NAACL 2022 • Sheng Zhang, Jin Wang, Haitao Jiang, Rui Song
Some feature attribution methods have shown promising results in computer vision, especially the gradient-based methods where effectively smoothing the gradients with reference data is key to a robust and faithful result.
no code implementations • 12 Apr 2022 • Yi-Hsuan Liu, Sheng Zhang, Puhan Zhang, Ting-Kuo Lee, Gia-Wei Chern
We present a scalable machine learning (ML) model to predict local electronic properties such as on-site electron number and double occupation for disordered correlated electron systems.
1 code implementation • ACL 2022 • Miryam de Lhoneux, Sheng Zhang, Anders Søgaard
Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze 2019), but only between related languages.
1 code implementation • ACL 2022 • Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained language models and the techniques used to fine-tune them for downstream tasks.
no code implementations • 21 Feb 2022 • Lihan Chen, Sihang Jiang, Jingping Liu, Chao Wang, Sheng Zhang, Chenhao Xie, Jiaqing Liang, Yanghua Xiao, Rui Song
Knowledge graphs (KGs) are an important resource for a wide range of applications, and rule mining from KGs has recently attracted wide research interest in the KG research community.
no code implementations • 3 Jan 2022 • Puhan Zhang, Sheng Zhang, Gia-Wei Chern
A general theory of the descriptor for the classical fields is formulated, and two types of models are distinguished depending on the presence or absence of an internal symmetry for the classical field.
no code implementations • 15 Dec 2021 • Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon
Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example gold entity mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia.
no code implementations • 9 Dec 2021 • Yongbiao Chen, Sheng Zhang, Fangxin Liu, Chenggang Wu, Kaicheng Guo, Zhengwei Qi
Specifically, we directly constrain the output from the convolutional neural network to be discrete binary codes and ensure the learned binary codes are optimal for classification.
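The two ingredients this entry describes, thresholding network outputs into discrete binary codes and retrieving by Hamming distance, can be sketched generically as follows (an illustration of the standard deep-hashing pipeline, not the paper's exact training objective):

```python
import numpy as np

def binarize(features):
    """Threshold continuous network outputs into discrete binary codes."""
    return (np.asarray(features) >= 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

def retrieve(query_features, db_codes):
    """Rank database items by Hamming distance to the query's binary code."""
    q = binarize(query_features)
    return sorted(range(len(db_codes)), key=lambda i: hamming(q, db_codes[i]))
```

Because comparison reduces to bit operations, retrieval over millions of codes is far cheaper than dense nearest-neighbor search, which is the motivation for forcing the network output toward binary values during training.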
no code implementations • NeurIPS 2021 • Sheng Zhang, Zhe Zhang, Siva Theja Maguluri
The focus of this paper is on sample complexity guarantees of average-reward reinforcement learning algorithms, which are known to be more challenging to study than their discounted-reward counterparts.
no code implementations • 27 Nov 2021 • Jianian Wang, Sheng Zhang, Yanghua Xiao, Rui Song
With multiple components and relations, financial data are often presented as graph data, since graphs can represent both individual features and complicated relations.
no code implementations • 22 Nov 2021 • Jing Fan, Xin Zhang, Sheng Zhang, Yan Pan, Lixiang Guo
In light of the success of transferring language models to NLP tasks, we ask whether the full BERT model is always the best, and whether there exists a simple but effective method to find the winning ticket in state-of-the-art deep neural networks without complex calculations.
no code implementations • EMNLP 2021 • Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann, Hoifung Poon
Extracting relations across large text spans has been relatively underexplored in NLP, but it is particularly important for high-value domains such as biomedicine, where obtaining high recall of the latest findings is crucial for practical applications.
no code implementations • 27 Jul 2021 • Jie Li, Sheng Zhang, Kai Han, Xia Yuan, Chunxia Zhao, Yu Liu
UGV-KPNet is computationally efficient with a small number of parameters and provides pixel-level accurate keypoints detection results in real-time.
1 code implementation • 11 Jun 2021 • Daochen Zha, Jingru Xie, Wenye Ma, Sheng Zhang, Xiangru Lian, Xia Hu, Ji Liu
Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents.
no code implementations • 27 May 2021 • Runzhe Wan, Sheng Zhang, Chengchun Shi, Shikai Luo, Rui Song
Order dispatch is one of the central problems to ride-sharing platforms.
no code implementations • 27 May 2021 • Sheng Zhang, Puhan Zhang, Gia-Wei Chern
With the aid of modern machine learning methods, we demonstrate the first-ever large-scale kinetic Monte Carlo simulations of the phase separation process for the Falicov-Kimball model, which is one of the canonical strongly correlated electron systems.
no code implementations • 5 May 2021 • Yongbiao Chen, Sheng Zhang, Fangxin Liu, Zhigang Chang, Mang Ye, Zhengwei Qi
Until now, deep hashing for the image retrieval community has been dominated by convolutional neural network architectures, e.g., ResNet.
1 code implementation • 12 Apr 2021 • Elias Stengel-Eskin, Kenton Murray, Sheng Zhang, Aaron Steven White, Benjamin Van Durme
While numerous attempts have been made to jointly parse syntax and semantics, high performance in one domain typically comes at the price of performance in the other.
no code implementations • 11 Feb 2021 • Jiahao Xie, Sheng Zhang, Jianwei Lu, Ye Luo
Coarse-to-fine models and cascade segmentation architectures are widely adopted to solve the problem of large scale variations in medical image segmentation.
no code implementations • 21 Jan 2021 • Sheng Zhang
The main result is a submetric characterization of the class of Banach spaces admitting an equivalent norm with Rolewicz's property ($\beta$).
Functional Analysis
no code implementations • 21 Jan 2021 • Yunfei Pu, Sheng Zhang, Yukai Wu, Nan Jiang, Wei Chang, Chang Li, Luming Duan
The experimental realization of entanglement connection of two quantum repeater segments with efficient memory-enhanced scaling demonstrates a key advantage of the quantum repeater protocol, and lays a cornerstone for future large-scale quantum networks.
Quantum Physics
no code implementations • 1 Jan 2021 • Sheng Zhang, Rui Song, Wenbin Lu
In a number of experiments on benchmark datasets, we show that the proposed GraphCGAN outperforms the baseline methods by a significant margin.
1 code implementation • EMNLP 2020 • Ye Liu, Sheng Zhang, Rui Song, Suo Feng, Yanghua Xiao
Effectively filtering out noisy articles as well as bad answers is the key to improving extraction accuracy.
1 code implementation • 23 Sep 2020 • Sheng Zhang, Xin Zhang, Weiming Zhang, Anders Søgaard
Multi-task transfer learning based on pre-trained language encoders achieves state-of-the-art performance across a range of tasks.
no code implementations • 3 Sep 2020 • Sheng Zhang, Xiu Yang, Samy Tindel, Guang Lin
We prove that under certain conditions, the observable and its derivatives of any order are governed by a single Gaussian random field, which is the aforementioned AGRF.
Statistics Theory, Probability
22 code implementations • 12 May 2020 • Ivan Perov, Daiheng Gao, Nikolay Chervoniy, Kunlin Liu, Sugasa Marangonda, Chris Umé, Mr. Dpfks, Carl Shift Facenheim, Luis RP, Jian Jiang, Sheng Zhang, Pingyu Wu, Bo Zhou, Weiming Zhang
Deepfake defense requires not only research on detection but also efforts on generation methods.
Ranked #1 on Face Swapping on FaceForensics++
no code implementations • WS 2019 • Simon Ostermann, Sheng Zhang, Michael Roth, Peter Clark
This paper reports on the results of the shared tasks of the COIN workshop at EMNLP-IJCNLP 2019.
no code implementations • ACL 2020 • Elias Stengel-Eskin, Aaron Steven White, Sheng Zhang, Benjamin Van Durme
We introduce a transductive model for parsing into Universal Decompositional Semantics (UDS) representations, which jointly learns to map natural language utterances into UDS graph structures and annotate the graph with decompositional semantic attribute scores.
1 code implementation • LREC 2020 • Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We present the Universal Decompositional Semantics (UDS) dataset (v1.0), which is bundled with the Decomp toolkit (v0.1).
no code implementations • IJCNLP 2019 • Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme
We unify different broad-coverage semantic parsing tasks under a transduction paradigm, and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations.
Ranked #2 on UCCA Parsing on SemEval 2019 Task 1
no code implementations • 5 Sep 2019 • Chang Li, Nan Jiang, Yukai Wu, Wei Chang, Yunfei Pu, Sheng Zhang, Lu-Ming Duan
The use of multiplexed atomic quantum memories (MAQM) can significantly enhance the efficiency to establish entanglement in a quantum network.
Quantum Physics
no code implementations • 17 Jul 2019 • Sheng Zhang, Guang Lin
We demonstrate how to use our algorithm step by step and compare our algorithm with threshold sparse Bayesian regression (TSBR) for the discovery of differential equations.
no code implementations • 2 Jul 2019 • Shanshan Liu, Xin Zhang, Sheng Zhang, Hui Wang, Weiming Zhang
Machine reading comprehension (MRC), which requires a machine to answer questions based on a given context, has attracted increasing attention with the incorporation of various deep-learning techniques over the past few years.
1 code implementation • 6 Jun 2019 • Yuliang Liu, Sheng Zhang, Lianwen Jin, Lele Xie, Yaqiang Wu, Zhepeng Wang
Scene text in the wild is commonly presented with high variant characteristics.
Ranked #1 on Scene Text Detection on IC19-ReCTs (using extra training data)
1 code implementation • 27 May 2019 • Zaiwei Chen, Sheng Zhang, Thinh T. Doan, John-Paul Clarke, Siva Theja Maguluri
To demonstrate the generality of our theoretical results on Markovian SA, we use it to derive the finite-sample bounds of the popular $Q$-learning with linear function approximation algorithm, under a condition on the behavior policy.
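Q-learning with linear function approximation, the algorithm whose finite-sample bounds this entry derives, can be illustrated on a toy one-step MDP where one-hot features make the linear approximator exactly tabular. This sketch shows the update rule only, not the paper's Markovian stochastic approximation analysis:

```python
import numpy as np

# Toy deterministic one-step MDP: a single state s0 with two terminal
# actions. Action 0 yields reward 1, action 1 yields reward 0. With
# one-hot features over (state, action), the linear approximator is
# exactly tabular, so Q-learning should converge to Q = [1, 0].
phi = np.eye(2)          # phi[a] = feature vector for (s0, a)
w = np.zeros(2)          # linear weights parameterizing Q(s, a) = phi @ w
alpha = 0.5              # step size

for step in range(100):
    a = step % 2                    # behavior policy: alternate the actions
    r = 1.0 if a == 0 else 0.0
    target = r                      # next state is terminal: no bootstrap term
    td_error = target - phi[a] @ w
    w = w + alpha * td_error * phi[a]
```

After 50 updates per action the weight for action 0 has converged to its true value 1 geometrically, while the weight for action 1 stays at 0.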
1 code implementation • ACL 2019 • Sheng Zhang, Xutai Ma, Kevin Duh, Benjamin Van Durme
Our experimental results outperform all previously reported SMATCH scores, on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0 (70.2% F1 on LDC2014T12).
Ranked #1 on AMR Parsing on LDC2014T12
no code implementations • NAACL 2019 • Shuohang Wang, Sheng Zhang, Yelong Shen, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Jing Jiang
Commonsense reasoning is fundamental to natural language understanding.
Ranked #3 on Natural Language Understanding on PDP60
no code implementations • 20 Nov 2018 • Chao Chen, Sheng Zhang, Cuibing Du
Change detection has been a challenging visual task due to the dynamic nature of real-world scenes.
no code implementations • 30 Oct 2018 • Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, Benjamin Van Durme
We present a large-scale dataset, ReCoRD, for machine reading comprehension requiring commonsense reasoning.
no code implementations • EMNLP 2018 • Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, Benjamin Van Durme
We introduce the task of cross-lingual decompositional semantic parsing: mapping content provided in a source language into a decompositional semantic analysis based on a target language.
no code implementations • WS 2018 • Kaiyin Zhou, Sheng Zhang, Xiangyu Meng, Qi Luo, Yuxing Wang, Ke Ding, Yukun Feng, Mo Chen, Kevin Cohen, Jingbo Xia
Sequence labeling of biomedical entities, e.g., side effects or phenotypes, has been a long-standing task in the BioNLP and MedNLP communities.
no code implementations • SEMEVAL 2018 • Hongyuan Mei, Sheng Zhang, Kevin Duh, Benjamin Van Durme
Cross-lingual information extraction (CLIE) is an important and challenging task, especially in low resource scenarios.
no code implementations • 21 Apr 2018 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
We introduce the task of cross-lingual semantic parsing: mapping content provided in a source language into a meaning representation based on a target language.
1 code implementation • EMNLP 2018 • Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, Benjamin Van Durme
We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call "Neural-Davidsonian": predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence.
1 code implementation • SEMEVAL 2018 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
Fine-grained entity typing is the task of assigning fine-grained semantic types to entity mentions.
no code implementations • 28 Mar 2018 • Yuechao Gao, Nianhong Liu, Sheng Zhang
It is a challenging task to deploy computationally and memory-intensive state-of-the-art deep neural networks (DNNs) on embedded systems with limited hardware resources and power budgets.
1 code implementation • 23 Jan 2018 • Yuechao Gao, Nianhong Liu, Sheng Zhang
To address memory and computation resource limitations for hardware-oriented acceleration of deep convolutional neural networks (CNNs), we present a computation flow, stacked filters stationary flow (SFS), and a corresponding data encoding format, relative indexed compressed sparse filter format (CSF), to make the best of data sparsity, and simplify data handling at execution time.
no code implementations • 2 Jan 2018 • Jiashu Zhang, Sheng Zhang, Defang Li
Over the last decade, both the neural network and kernel adaptive filter have successfully been used for nonlinear signal processing.
no code implementations • 12 Nov 2017 • Sheng Zhang, Yuliang Liu, Lianwen Jin, Canjie Luo
In this paper, we propose a refined scene text detector with a novel Feature Enhancement Network (FEN) for Region Proposal and Text Detection Refinement.
no code implementations • IJCNLP 2017 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
Cross-lingual open information extraction is the task of distilling facts from the source language into representations in the target language.
no code implementations • SEMEVAL 2017 • Sheng Zhang, Jiajun Cheng, Hui Wang, Xin Zhang, Pei Li, Zhaoyun Ding
We describe deep neural network frameworks to address the community question answering (cQA) ranking task (SemEval-2017 Task 3).
no code implementations • EACL 2017 • Sheng Zhang, Kevin Duh, Benjamin Van Durme
Conventional pipeline solutions decompose the task as machine translation followed by information extraction (or vice versa).
3 code implementations • WS 2019 • Adrian Benton, Huda Khayrallah, Biman Gujral, Dee Ann Reisinger, Sheng Zhang, Raman Arora
We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other.
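The linear special case of generalized CCA can be written in a few lines: the shared representation spans the top eigenvectors of the summed per-view projection matrices, and DGCCA replaces each linear view map with a neural network. The ridge term and function name below are assumptions for numerical stability and illustration, not the paper's implementation:

```python
import numpy as np

def linear_gcca(views, r=1, eps=1e-8):
    """Linear GCCA: find an n x r shared representation G that is
    maximally correlated with every view.

    views: list of (n_samples, d_j) matrices, assumed column-centered.
    G spans the top-r eigenvectors of the sum of per-view column-space
    projection matrices (with a small ridge term for stability).
    """
    n = views[0].shape[0]
    M = np.zeros((n, n))
    for X in views:
        # Projection onto the column space of this view.
        M += X @ np.linalg.solve(X.T @ X + eps * np.eye(X.shape[1]), X.T)
    _, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    G = vecs[:, -r:]                     # shared representation
    # Per-view linear maps from view space onto G (least squares).
    maps = [np.linalg.lstsq(X, G, rcond=None)[0] for X in views]
    return G, maps
```

When every view is a linear transform of the same latent signal, the recovered shared representation correlates almost perfectly with that signal, which is the behavior DGCCA extends to nonlinear view maps.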
no code implementations • TACL 2017 • Sheng Zhang, Rachel Rudinger, Kevin Duh, Benjamin Van Durme
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly.
no code implementations • 7 Jun 2015 • Yi-Lun Wang, Sheng Zhang, Junjie Zheng, Heng Chen, Huafu Chen
In this paper, we focus on how to locate the relevant or discriminative brain regions related to external stimulus or certain mental disease, which is also called support identification, based on neuroimaging data.
1 code implementation • 12 Feb 2015 • Sheng Zhang, Brendan Harding
We establish a new method, the Discrete Weierstrass Fourier Transform, a faster and more general variant of the Discrete Fourier Transform, to approximate discrete data.
Numerical Analysis
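For reference, the classical transform that the proposed method generalizes can be written naively in O(n^2) form (this is background, not the paper's algorithm):

```python
import numpy as np

def dft(x):
    """Naive O(n^2) discrete Fourier transform:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n / N)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    # Full DFT matrix; FFT algorithms factor this product to O(n log n).
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x
```

A unit impulse transforms to a flat spectrum, and the naive matrix product agrees with `np.fft.fft` on any input.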
no code implementations • 17 Oct 2014 • Yi-Lun Wang, Junjie Zheng, Sheng Zhang, Xujun Duan, Huafu Chen
In this paper, we consider voxel selection for functional Magnetic Resonance Imaging (fMRI) brain data with the aim of finding a more complete set of probably correlated discriminative voxels, thus improving interpretation of the discovered potential biomarkers.