no code implementations • BioNLP (ACL) 2022 • Avisha Das, Salih Selek, Alia R. Warner, Xu Zuo, Yan Hu, Vipina Kuttichi Keloth, Jianfu Li, W. Jim Zheng, Hua Xu
Through quantitative evaluation of the linguistic quality, we observe that the dialogue generation model DialoGPT (345M), with transfer learning on video data, attains scores similar to a human response baseline.
no code implementations • 24 Dec 2024 • Xingjian Zhang, Ziyang Xiong, Shixuan Liu, Yutong Xie, Tolga Ergen, Dongsub Shim, Hua Xu, Honglak Lee, Qiaozhu Mei
Low-dimensional visualizations, or "projection maps" of datasets, are widely used across scientific research and creative industries as effective tools for interpreting large-scale and complex information.
1 code implementation • 20 Dec 2024 • Jiaming Ji, Jiayi Zhou, Hantao Lou, Boyuan Chen, Donghai Hong, Xuyao Wang, Wenqi Chen, Kaile Wang, Rui Pan, Jiahao Li, Mohan Wang, Josef Dai, Tianyi Qiu, Hua Xu, Dong Li, WeiPeng Chen, Jun Song, Bo Zheng, Yaodong Yang
In this work, we make the first attempt to fine-tune all-modality models (i.e., models that take input and produce output in any modality, also called any-to-any models) using human preference data across all modalities (including text, image, audio, and video), ensuring their behavior aligns with human intentions.
no code implementations • 30 Nov 2024 • Yan Wang, Jimin Huang, Huan He, Vincent Zhang, Yujia Zhou, Xubing Hao, Pritham Ram, Lingfei Qian, Qianqian Xie, Ruey-Ling Weng, Fongci Lin, Yan Hu, Licong Cui, Xiaoqian Jiang, Hua Xu, Na Hong
We propose CDEMapper, a large language model (LLM) powered mapping tool designed to assist in mapping local data elements to NIH CDEs.
no code implementations • 27 Nov 2024 • Yutong Xie, Yijun Pan, Hua Xu, Qiaozhu Mei
Artificial Intelligence has proven to be a transformative tool for advancing scientific research across a wide range of disciplines.
1 code implementation • 15 Nov 2024 • Yan Hu, Xu Zuo, Yujia Zhou, Xueqing Peng, Jimin Huang, Vipina K. Keloth, Vincent J. Zhang, Ruey-Ling Weng, Qingyu Chen, Xiaoqian Jiang, Kirk E. Roberts, Hua Xu
On unseen i2b2 data, LLaMA-3-70B outperformed BERT by 7% (F1) on NER and 4% on RE.
no code implementations • 6 Nov 2024 • Yiming Li, Fang Li, Kirk Roberts, Licong Cui, Cui Tao, Hua Xu
Evaluation metrics included token-level analysis (BLEU, ROUGE-1, ROUGE-2, ROUGE-L) and semantic similarity scores between model-generated summaries and physician-written gold standards.
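As an illustration of the token-level metrics above, ROUGE-L scores a generated summary against a gold standard by the longest common subsequence (LCS) of their tokens. A minimal sketch follows; the function names are ours, and real evaluations typically add tokenization, stemming, and multi-reference handling:

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    # Token-level ROUGE-L: F1 over the LCS of candidate and reference tokens.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

BLEU and the other ROUGE variants follow the same shape, swapping the LCS for n-gram overlap counts.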
no code implementations • 26 Oct 2024 • Rachael Fleurence, Xiaoyan Wang, Jiang Bian, Mitchell K. Higashi, Turgay Ayer, Hua Xu, Dalia Dawoud, Jagpreet Chhatwal
The article discusses strategies to improve the accuracy of these AI tools.
no code implementations • 12 Oct 2024 • Rongbin Li, Wenbo Chen, Jinbo Li, Hanwen Xing, Hua Xu, Zhao Li, W. Jim Zheng
By leveraging GPT-4 for ontology narration, we developed GPTON to infuse structured knowledge into LLMs through verbalized ontology terms, achieving accurate text and ontology annotations for over 68% of gene sets in the top five predictions.
no code implementations • 8 Oct 2024 • Qian Qiu, Liang Zhang, Mohan Wu, Qichun Sun, Xiaogang Li, Da-Chuang Li, Hua Xu
Quantum ant colony optimization (QACO) has drawn much attention because it combines the advantages of quantum computing and the ant colony optimization (ACO) algorithm, overcoming some limitations of the traditional ACO algorithm.
no code implementations • 1 Oct 2024 • Aidan Gilson, Xuguang Ai, Qianqian Xie, Sahana Srinivasan, Krithi Pushpanathan, Maxwell B. Singer, Jimin Huang, Hyunjae Kim, Erping Long, Peixing Wan, Luciano V. Del Priore, Lucila Ohno-Machado, Hua Xu, Dianbo Liu, Ron A. Adelman, Yih-Chung Tham, Qingyu Chen
In external validations, LEME excelled in long-form QA with a Rouge-L of 0.19 (all p<0.0001), ranked second in MCQ accuracy (0.68; all p<0.0001), and scored highest in EHR summarization and clinical QA (ranging from 4.24 to 4.83 out of 5 for correctness, completeness, and readability).
no code implementations • 28 Sep 2024 • Betina Idnay, Zihan Xu, William G. Adams, Mohammad Adibuzzaman, Nicholas R. Anderson, Neil Bahroos, Douglas S. Bell, Cody Bumgardner, Thomas Campion, Mario Castro, James J. Cimino, I. Glenn Cohen, David Dorr, Peter L Elkin, Jungwei W. Fan, Todd Ferris, David J. Foran, David Hanauer, Mike Hogarth, Kun Huang, Jayashree Kalpathy-Cramer, Manoj Kandpal, Niranjan S. Karnik, Avnish Katoch, Albert M. Lai, Christophe G. Lambert, Lang Li, Christopher Lindsell, Jinze Liu, Zhiyong Lu, Yuan Luo, Peter McGarvey, Eneida A. Mendonca, Parsa Mirhaji, Shawn Murphy, John D. Osborne, Ioannis C. Paschalidis, Paul A. Harris, Fred Prior, Nicholas J. Shaheen, Nawar Shara, Ida Sim, Umberto Tachinardi, Lemuel R. Waitman, Rosalind J. Wright, Adrian H. Zai, Kai Zheng, Sandra Soo-Jin Lee, Bradley A. Malin, Karthik Natarajan, W. Nicholson Price II, Rui Zhang, Yiye Zhang, Hua Xu, Jiang Bian, Chunhua Weng, Yifan Peng
This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the Clinical and Translational Science Award (CTSA) Program led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States.
no code implementations • 28 Sep 2024 • Zhixiao Xiong, Fangyu Zong, Huigen Ye, Hua Xu
Machine Learning (ML) optimization frameworks have gained attention for their ability to accelerate the optimization of large-scale Quadratically Constrained Quadratic Programs (QCQPs) by learning shared problem structures.
no code implementations • 27 Sep 2024 • Zehan Li, Yan Hu, Scott Lane, Salih Selek, Lokesh Shahani, Rodrigo Machado-Vieira, Jair Soares, Hua Xu, Hongfang Liu, Ming Huang
We evaluated the performance of four BERT-based models using two fine-tuning strategies (multiple single-label and single multi-label) for detecting coexisting suicidal events from 500 annotated psychiatric evaluation notes.
no code implementations • 20 Sep 2024 • Aidan Gilson, Xuguang Ai, Thilaka Arunachalam, Ziyou Chen, Ki Xiong Cheong, Amisha Dave, Cameron Duic, Mercy Kibe, Annette Kaminaka, Minali Prasad, Fares Siddig, Maxwell Singer, Wendy Wong, Qiao Jin, Tiarnan D. L. Keenan, Xia Hu, Emily Y. Chew, Zhiyong Lu, Hua Xu, Ron A. Adelman, Yih-Chung Tham, Qingyu Chen
The evaluation focuses on factuality of evidence, selection and ranking of evidence, attribution of evidence, and answer accuracy and completeness.
no code implementations • 29 Jul 2024 • Liang Zhang, Yin Xu, Mohan Wu, Liang Wang, Hua Xu
Quantum computing combined with machine learning (ML) is an extremely promising research area, with numerous studies demonstrating that quantum machine learning (QML) is expected to solve scientific problems more effectively than classical ML.
no code implementations • 9 Jul 2024 • Rachael Fleurence, Jiang Bian, Xiaoyan Wang, Hua Xu, Dalia Dawoud, Mitch Higashi, Jagpreet Chhatwal
This review introduces the transformative potential of generative Artificial Intelligence (AI) and foundation models, including large language models (LLMs), for health technology assessment (HTA).
no code implementations • 26 Jun 2024 • Yiming Li, Deepthi Viswaroopan, William He, Jianfu Li, Xu Zuo, Hua Xu, Cui Tao
This study aims to evaluate the effectiveness of LLMs and traditional deep learning models in AE extraction, and to assess the impact of ensembling these models on performance.
1 code implementation • 21 Jun 2024 • Tianyu Liu, Yijia Xiao, Xiao Luo, Hua Xu, W. Jim Zheng, Hongyu Zhao
The applications of large language models (LLMs) are promising for biomedical and healthcare research.
no code implementations • 15 Jun 2024 • Mingchen Li, Jiatan Huang, Jeremy Yeung, Anne Blaes, Steven Johnson, Hongfang Liu, Hua Xu, Rui Zhang
Medical Large Language Models (LLMs) such as ClinicalCamel 70B and Llama3-OpenBioLLM 70B have demonstrated impressive performance on a wide variety of medical NLP tasks. However, there is still no large language model (LLM) designed specifically for the cancer domain.
1 code implementation • 15 Jun 2024 • Yu Yin, Hyunjae Kim, Xiao Xiao, Chih Hsuan Wei, Jaewoo Kang, Zhiyong Lu, Hua Xu, Meng Fang, Qingyu Chen
Specifically, our models consistently outperformed the baseline models in six out of eight entity types, achieving an average improvement of 0.9% over the best baseline performance across eight entities.
no code implementations • 28 May 2024 • Shengnan Wang, Youhui Bai, Lin Zhang, Pingyi Zhou, Shixiong Zhao, Gong Zhang, Sen Wang, Renhai Chen, Hua Xu, Hongwei Sun
Under the XL3M framework, the input context is first decomposed into multiple short sub-contexts, where each sub-context contains an independent segment and a common "question", i.e., a few tokens taken from the end of the original context.
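The decomposition step described above can be sketched as follows; `question_len` and `segment_len` are illustrative hyperparameters of our own choosing, not the paper's settings:

```python
def decompose_context(tokens, question_len=32, segment_len=512):
    # Take the trailing "question" tokens from the end of the long context,
    # then pair each fixed-length segment of the remaining body with that
    # question, so every sub-context can be processed independently.
    question = tokens[-question_len:]
    body = tokens[:-question_len]
    sub_contexts = []
    for start in range(0, len(body), segment_len):
        segment = body[start:start + segment_len]
        sub_contexts.append(segment + question)
    return sub_contexts
```

Each sub-context then fits within the model's native window, which is what lets a short-window LLM reason over a context far longer than its training length.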
1 code implementation • 21 May 2024 • Hanlei Zhang, Hua Xu, Fei Long, Xin Wang, Kai Gao
UMC shows remarkable improvements of 2-6% in clustering metrics over state-of-the-art methods, marking the first successful endeavor in this domain.
1 code implementation • 1 May 2024 • Mingchen Li, Halil Kilicoglu, Hua Xu, Rui Zhang
Large Language Models (LLMs) have swiftly emerged as vital resources for different applications in the biomedical and healthcare domains; however, these models encounter issues such as generating inaccurate information or hallucinations.
no code implementations • 8 Apr 2024 • Yiming Li, Xueqing Peng, Jianfu Li, Xu Zuo, Suyuan Peng, Donghong Pei, Cui Tao, Hua Xu, Na Hong
This study underscores the effectiveness of LLMs like GPT in extracting relations related to acupoint locations, with implications for accurately modeling acupuncture knowledge and promoting standard implementation in acupuncture training and practice.
1 code implementation • 16 Mar 2024 • Hanlei Zhang, Xin Wang, Hua Xu, Qianrui Zhou, Kai Gao, Jianhua Su, Jinyue Zhao, Wenrui Li, Yanting Chen
We believe that MIntRec2.0 will serve as a valuable resource, providing a pioneering foundation for research in human-machine conversational interactions, and significantly facilitating related applications.
1 code implementation • 20 Feb 2024 • Qianqian Xie, Qingyu Chen, Aokun Chen, Cheng Peng, Yan Hu, Fongci Lin, Xueqing Peng, Jimin Huang, Jeffrey Zhang, Vipina Keloth, Xinyu Zhou, Lingfei Qian, Huan He, Dennis Shung, Lucila Ohno-Machado, Yonghui Wu, Hua Xu, Jiang Bian
This work underscores the importance of domain-specific data in developing medical LLMs and addresses the high computational costs involved in training, highlighting a balance between pre-training and fine-tuning strategies.
no code implementations • 8 Jan 2024 • Gongbo Zhang, Yiliang Zhou, Yan Hu, Hua Xu, Chunhua Weng, Yifan Peng
On the PICO-Corpus, PICOX obtained higher recall and F1 scores than the baseline and improved the micro recall score from 56.66 to 67.33.
1 code implementation • 22 Dec 2023 • Qianrui Zhou, Hua Xu, Hao Li, Hanlei Zhang, Xiaohan Zhang, Yifan Wang, Kai Gao
To establish an optimal multimodal semantic environment for the text modality, we develop a modality-aware prompting module (MAP), which effectively aligns and fuses features from the text, video, and audio modalities through similarity-based modality alignment and a cross-modality attention mechanism.
Ranked #2 on Multimodal Intent Recognition on MIntRec
3 code implementations • 28 Nov 2023 • Rui Yang, Qingcheng Zeng, Keen You, Yujie Qiao, Lucas Huang, Chia-Chun Hsieh, Benjamin Rosand, Jeremy Goldwasser, Amisha D Dave, Tiarnan D. L. Keenan, Emily Y Chew, Dragomir Radev, Zhiyong Lu, Hua Xu, Qingyu Chen, Irene Li
This study introduces Ascle, a pioneering natural language processing (NLP) toolkit designed for medical text generation.
no code implementations • 30 Oct 2023 • Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O'Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen Mcaleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, Wen Gao
The former aims to make AI systems aligned via alignment training, while the latter aims to gain evidence about the systems' alignment and govern them appropriately to avoid exacerbating misalignment risks.
1 code implementation • 22 Jun 2023 • Cathy Shyr, Yan Hu, Paul A. Harris, Hua Xu
Despite this, ChatGPT achieved similar or higher accuracy for certain entities (i.e., rare diseases and signs) in the one-shot setting (F1 of 0.776 and 0.725).
no code implementations • 27 May 2023 • Huixue Zhou, Robin Austin, Sheng-Chieh Lu, Greg Silverman, Yuqi Zhou, Halil Kilicoglu, Hua Xu, Rui Zhang
Results: From the 198 unique concepts in CIHLex, 62.1% could be matched to at least one term in the UMLS.
1 code implementation • 10 May 2023 • Qingyu Chen, Yan Hu, Xueqing Peng, Qianqian Xie, Qiao Jin, Aidan Gilson, Maxwell B. Singer, Xuguang Ai, Po-Ting Lai, Zhizheng Wang, Vipina Kuttichi Keloth, Kalpana Raja, Jiming Huang, Huan He, Fongci Lin, Jingcheng Du, Rui Zhang, W. Jim Zheng, Ron A. Adelman, Zhiyong Lu, Hua Xu
The biomedical literature is rapidly expanding, posing a significant challenge for manual curation and knowledge discovery.
1 code implementation • 16 Apr 2023 • Hanlei Zhang, Hua Xu, Xin Wang, Fei Long, Kai Gao
New intent discovery is of great value to natural language processing, allowing for a better understanding of user needs and providing friendly services.
1 code implementation • 29 Mar 2023 • Yan Hu, Qingyu Chen, Jingcheng Du, Xueqing Peng, Vipina Kuttichi Keloth, Xu Zuo, Yujia Zhou, Zehan Li, Xiaoqian Jiang, Zhiyong Lu, Kirk Roberts, Hua Xu
Results: Using baseline prompts, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.634 and 0.804 on MTSamples, and 0.301 and 0.593 on VAERS.
no code implementations • 18 Nov 2022 • Huigen Ye, Hongyan Wang, Hua Xu, Chengming Wang, Yu Jiang
Integer programming problems (IPs) are challenging to solve efficiently due to their NP-hardness, especially large-scale IPs.
no code implementations • 12 Nov 2022 • Kaicheng Yang, Ruxuan Zhang, Hua Xu, Kai Gao
In this paper, a Self-Adjusting Fusion Representation Learning Model (SA-FRLM) is proposed to learn robust cross-modal fusion representations directly from unaligned text and audio sequences.
1 code implementation • 9 Sep 2022 • Hanlei Zhang, Hua Xu, Xin Wang, Qianrui Zhou, Shaojie Zhao, Jiayan Teng
This paper introduces a novel dataset for multimodal intent recognition (MIntRec) to address this issue.
Ranked #1 on Multimodal Intent Recognition on MIntRec
1 code implementation • 22 Aug 2022 • Yihe Liu, Ziqi Yuan, Huisheng Mao, Zhiyun Liang, Wanqiuyue Yang, Yuanzhe Qiu, Tie Cheng, Xiaoteng Li, Hua Xu, Kai Gao
The designed modality mixup module can be regarded as an augmentation, which mixes the acoustic and visual modalities from different videos.
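The mixing idea can be sketched as a batch-level interpolation between samples from different videos; the Beta-distributed coefficient and feature shapes below are our assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

def modality_mixup(acoustic, visual, rng, alpha=0.4):
    # Mix acoustic and visual features drawn from *different* videos:
    # each modality is interpolated with the same modality of a shuffled
    # batch, acting as a data augmentation. `alpha` controls how close
    # the mixing weight stays to 0 or 1.
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(acoustic))
    mixed_acoustic = lam * acoustic + (1 - lam) * acoustic[perm]
    mixed_visual = lam * visual + (1 - lam) * visual[perm]
    return mixed_acoustic, mixed_visual, lam
```

Because both modalities share the same permutation and mixing weight, the augmented sample remains a coherent (if synthetic) audio-visual pair.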
no code implementations • Findings (NAACL) 2022 • Zhijing Wu, Hua Xu, Jingliang Fang, Kai Gao
However, it is a great challenge to learn a new domain incrementally without catastrophically forgetting previous knowledge.
3 code implementations • ACL 2022 • Huisheng Mao, Ziqi Yuan, Hua Xu, Wenmeng Yu, Yihe Liu, Kai Gao
The platform features a fully modular video sentiment analysis framework consisting of data management, feature extraction, model training, and result analysis modules.
1 code implementation • 11 Mar 2022 • Hanlei Zhang, Hua Xu, Shaojie Zhao, Qianrui Zhou
To address these issues, this paper presents an original framework called DA-ADB, which successively learns distance-aware intent representations and adaptive decision boundaries for open intent detection.
1 code implementation • Findings (ACL) 2022 • Kang Zhao, Hua Xu, Jiangong Yang, Kai Gao
Specifically, supervised contrastive learning based on a memory bank is first used to train each new task so that the model can effectively learn the relation representation.
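A minimal NumPy sketch of supervised contrastive learning against a memory bank: for each query, bank entries with the same label act as positives and all others as negatives. The temperature and shapes here are illustrative, not the paper's configuration:

```python
import numpy as np

def supervised_contrastive_loss(queries, q_labels, bank, bank_labels, tau=0.1):
    # L2-normalize so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    logits = q @ b.T / tau                                   # (n_query, n_bank)
    # Log-softmax over the bank for each query.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    losses = []
    for i, label in enumerate(q_labels):
        positives = np.where(bank_labels == label)[0]        # same-label bank entries
        if len(positives):
            losses.append(-log_prob[i, positives].mean())
    return float(np.mean(losses))
```

Training pulls same-relation representations together and pushes different relations apart, which is what makes the stored bank useful for retaining earlier tasks.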
no code implementations • 20 Oct 2021 • Sijia Liu, Andrew Wen, LiWei Wang, Huan He, Sunyang Fu, Robert Miller, Andrew Williams, Daniel Harris, Ramakanth Kavuluru, Mei Liu, Noor Abu-el-rub, Dalton Schutte, Rui Zhang, Masoud Rouhizadeh, John D. Osborne, Yongqun He, Umit Topaloglu, Stephanie S Hong, Joel H Saltz, Thomas Schaffter, Emily Pfaff, Christopher G. Chute, Tim Duong, Melissa A. Haendel, Rafael Fuentes, Peter Szolovits, Hua Xu, Hongfang Liu, Natural Language Processing Subgroup, National COVID Cohort Collaborative
Although we use COVID-19 as a use case in this effort, our framework is general enough to be applied to other domains of interest in clinical NLP.
1 code implementation • 3 Oct 2021 • Laila Rasmy, Jie Zhu, Zhiheng Li, Xin Hao, Hong Thoai Tran, Yujia Zhou, Firat Tiryaki, Yang Xiang, Hua Xu, Degui Zhi
As a result, deep learning models developed for sequence modeling, such as recurrent neural networks (RNNs), are a common architecture for EHR-based clinical event prediction models.
2 code implementations • ACL 2021 • Hanlei Zhang, Xiaoteng Li, Hua Xu, Panpan Zhang, Kang Zhao, Kai Gao
It is composed of two main modules: open intent detection and open intent discovery.
no code implementations • 4 Aug 2021 • Greg M. Silverman, Raymond L. Finzel, Michael V. Heinz, Jake Vasilakes, Jacob C. Solinsky, Reed McEwan, Benjamin C. Knoll, Christopher J. Tignanelli, Hongfang Liu, Hua Xu, Xiaoqian Jiang, Genevieve B. Melton, Serguei VS Pakhomov
Our objective in this study is to investigate the behavior of Boolean operators on combining annotation output from multiple Natural Language Processing (NLP) systems across multiple corpora and to assess how filtering by aggregation of Unified Medical Language System (UMLS) Metathesaurus concepts affects system performance for Named Entity Recognition (NER) of UMLS concepts.
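The Boolean combination itself reduces to set operations over annotated (start, end, type) spans; a minimal sketch, where exact span matching is an assumption (real system comparisons often also allow relaxed overlap matching):

```python
def combine_annotations(system_outputs, mode="OR"):
    # Combine NER span sets from multiple NLP systems with Boolean operators:
    # OR keeps spans found by any system (favoring recall), AND keeps only
    # spans agreed on by all systems (favoring precision).
    sets = [set(spans) for spans in system_outputs]
    if mode == "OR":
        return set.union(*sets)
    if mode == "AND":
        return set.intersection(*sets)
    raise ValueError(f"unknown mode: {mode}")
```

Filtering by UMLS concept aggregation would then amount to dropping spans whose type falls outside a chosen set of Metathesaurus semantic groups.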
no code implementations • 23 Jul 2021 • Yu Jing, Xiaogang Li, Yang Yang, Chonghang Wu, Wenbing Fu, Wei Hu, Yuanyuan Li, Hua Xu
With the rapid growth of qubit numbers and coherence times in quantum hardware technology, implementing shallow neural networks on the so-called Noisy Intermediate-Scale Quantum (NISQ) devices has attracted a lot of interest.
no code implementations • 24 Jun 2021 • Dalton Schutte, Jake Vasilakes, Anu Bompelli, Yuqi Zhou, Marcelo Fiszman, Hua Xu, Halil Kilicoglu, Jeffrey R. Bishop, Terrence Adam, Rui Zhang
MATERIALS AND METHODS: We created SemRepDS (an extension of SemRep), capable of extracting semantic relations from abstracts by leveraging a DS-specific terminology (iDISK) containing 28,884 DS terms not found in the UMLS.
1 code implementation • 8 May 2021 • Kang Zhao, Hua Xu, Yue Cheng, Xiaoteng Li, Kai Gao
Joint entity and relation extraction is an essential task in information extraction, which aims to extract all relational triples from unstructured text.
Ranked #2 on Relation Extraction on SemEval-2010 Task-8
2 code implementations • 9 Feb 2021 • Wenmeng Yu, Hua Xu, Ziqi Yuan, Jiele Wu
On MOSI and MOSEI datasets, our method surpasses the current state-of-the-art methods.
1 code implementation • 18 Dec 2020 • Hanlei Zhang, Hua Xu, Ting-En Lin
In this paper, we propose a post-processing method to learn the adaptive decision boundary (ADB) for open intent classification.
Ranked #1 on Open Intent Detection on StackOverFlow (75% known)
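The boundary-based rejection rule behind ADB can be sketched as follows; here the class centroids and radii are supplied directly for illustration, whereas ADB learns the boundaries during training:

```python
import numpy as np

def classify_open_intent(x, centroids, radii):
    # Assign x to the nearest known-intent centroid, but reject it as
    # "open" (unknown intent, returned as -1) when its distance exceeds
    # that class's decision-boundary radius.
    dists = np.linalg.norm(centroids - x, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] <= radii[k] else -1
```

Because the radii adapt per class, dense intents can keep tight boundaries while diffuse intents get looser ones, which is the core idea the post-processing method exploits.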
2 code implementations • 16 Dec 2020 • Hanlei Zhang, Hua Xu, Ting-En Lin, Rui Lyu
In this work, we propose an effective method, Deep Aligned Clustering, to discover new intents with the aid of the limited known intent data.
Ranked #2 on Open Intent Discovery on CLINC150
1 code implementation • ACM Multimedia 2020 • Kaicheng Yang, Hua Xu, Kai Gao
In this paper, we propose the Cross-Modal BERT (CM-BERT), which relies on the interaction of text and audio modality to fine-tune the pre-trained BERT model.
Ranked #4 on Multimodal Sentiment Analysis on MOSI
no code implementations • 13 Jul 2020 • Jingqi Wang, Noor Abu-el-rub, Josh Gray, Huy Anh Pham, Yujia Zhou, Frank Manion, Mei Liu, Xing Song, Hua Xu, Masoud Rouhizadeh, Yaoyun Zhang
To this end, this study aims at adapting the existing CLAMP natural language processing tool to quickly build COVID-19 SignSym, which can extract COVID-19 signs/symptoms and their 8 attributes (body location, severity, temporal expression, subject, condition, uncertainty, negation, and course) from clinical text.
1 code implementation • ACL 2020 • Wenmeng Yu, Hua Xu, Fanyang Meng, Yilin Zhu, Yixiao Ma, Jiele Wu, Jiyun Zou, Kai-Cheng Yang
Previous studies in multimodal sentiment analysis have used limited datasets, which only contain unified multimodal annotations.
no code implementations • Knowledge-Based Systems, 105916, 2020 • Yan Zhang, Hua Xu, Yunfeng Xu, Junhui Deng, Juan Gu, Rui Ma, Jie Lai, Jiangtao Hu, Xiaoshuai Yu, Lei Hou, Lidong Gu, Yanling Wei, Yichao Xiao, Junhao Lu
In this paper, we give a more intuitive and detailed definition of the structural hole spanner based on existing work, and propose a novel algorithm to identify structural hole spanners based on a community forest model and diminishing marginal utility.
no code implementations • 13 Apr 2020 • Hong Guan, Jianfu Li, Hua Xu, Murthy Devarakonda
Background: Identifying relationships between clinical events and temporal expressions is a key challenge in meaningfully analyzing clinical text for use in advanced AI applications.
1 code implementation • 7 Mar 2020 • Ting-En Lin, Hua Xu
In this paper, we propose SofterMax and deep novelty detection (SMDN), a simple yet effective post-processing method for detecting unknown intent in dialogue systems based on pre-trained deep neural network classifiers.
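The post-processing idea can be sketched as a temperature-scaled softmax with a confidence threshold; the values below are illustrative, and the full SMDN method additionally calibrates the temperature and combines this score with deep novelty detection:

```python
import numpy as np

def softermax_detect(logits, temperature=1.5, threshold=0.5):
    # Soften the classifier's softmax with a calibration temperature,
    # then reject examples whose top probability falls below a confidence
    # threshold as unknown intent (returned as -1).
    z = logits / temperature
    z = z - z.max()                       # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    k = int(np.argmax(probs))
    return k if probs[k] >= threshold else -1
```

Raising the temperature flattens overconfident predictions, so genuinely novel inputs are more likely to fall under the threshold and be flagged.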
1 code implementation • 20 Nov 2019 • Ting-En Lin, Hua Xu, Hanlei Zhang
Identifying new user intents is an essential task in the dialogue system.
Ranked #1 on Open Intent Discovery on ATIS
no code implementations • 9 Aug 2019 • Zongcheng Ji, Qiang Wei, Hua Xu
Developing high-performance entity normalization algorithms that can alleviate the term variation problem is of great interest to the biomedical community.
1 code implementation • ACL 2019 • Ting-En Lin, Hua Xu
With margin loss, we can learn discriminative deep features by forcing the network to maximize inter-class variance and to minimize intra-class variance.
Ranked #1 on Open Intent Detection on SNIPS (25% known)
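One way to realize such a variance-based margin objective is to penalize features that stray too far from their class centroid (intra-class variance) and centroids that sit too close together (inter-class variance). This is a hedged sketch with illustrative margins, not the paper's exact loss:

```python
import numpy as np

def margin_loss(features, labels, m_intra=1.0, m_inter=4.0):
    # Pull each feature to within m_intra of its class centroid, and push
    # centroids of different classes at least m_inter apart.
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = 0.0
    for c, mu in zip(classes, centroids):
        d = np.linalg.norm(features[labels == c] - mu, axis=1)
        intra += np.maximum(0.0, d - m_intra).mean()   # only penalize beyond margin
    inter = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            gap = np.linalg.norm(centroids[i] - centroids[j])
            inter += max(0.0, m_inter - gap)           # only penalize crowded pairs
    return intra + inter
```

When clusters are tight and well separated the loss is zero, which is exactly the feature geometry that makes unknown-intent rejection easier downstream.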
no code implementations • NAACL 2019 • Jiatong Li, Kai Zheng, Hua Xu, Qiaozhu Mei, Yue Wang
When developing topic classifiers for real-world applications, we begin by defining a set of meaningful topic labels.
no code implementations • 22 Feb 2019 • Yuqi Si, Jingqi Wang, Hua Xu, Kirk Roberts
We explore a battery of embedding methods consisting of traditional word embeddings and contextual embeddings, and compare these on four concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015.
Ranked #1 on Clinical Concept Extraction on 2010 i2b2/VA
no code implementations • Knowledge-Based Systems, 109, 2016 • Yunfeng Xu, Hua Xu, Dongwen Zhang, Yan Zhang
Overlapping community detection is key to discovering and exploring social networks.