Search Results for author: Jiaxin Huang

Found 27 papers, 14 papers with code

Practical Applications of Advanced Cloud Services and Generative AI Systems in Medical Image Analysis

no code implementations26 Mar 2024 Jingyu Xu, Binbin Wu, Jiaxin Huang, Yulu Gong, Yifan Zhang, Bo Liu

With the explosive growth and diversification of medical data, as well as the continuous improvement of medical needs and challenges, artificial intelligence technology is playing an increasingly important role in the medical field.

Anomaly Detection Image-to-Image Translation

Dynamic Resource Allocation for Virtual Machine Migration Optimization using Machine Learning

no code implementations20 Mar 2024 Yulu Gong, Jiaxin Huang, Bo Liu, Jingyu Xu, Binbin Wu, Yifan Zhang

Machine learning offers an effective approach to the resource allocation and virtual machine migration challenges that arise in cloud computing.

Cloud Computing

Application analysis of AI technology combined with spiral CT scanning in early lung cancer screening

no code implementations26 Jan 2024 Shulin Li, Liqiang Yu, Bo Liu, Qunwei Lin, Jiaxin Huang

However, at present, there are few studies on the diagnosis of early lung cancer by AI technology combined with SCT scanning.

Organ Segmentation

Enhancing Essay Scoring with Adversarial Weights Perturbation and Metric-specific AttentionPooling

no code implementations6 Jan 2024 Jiaxin Huang, Xinyu Zhao, Chang Che, Qunwei Lin, Bo Liu

To address the specific needs of ELLs, we propose the use of DeBERTa, a state-of-the-art neural language model, for improving automated feedback tools.

Automated Essay Scoring Language Modelling +2

Ontology Enrichment for Effective Fine-grained Entity Typing

no code implementations11 Oct 2023 Siru Ouyang, Jiaxin Huang, Pranav Pillai, Yunyi Zhang, Yu Zhang, Jiawei Han

In this study, we propose OnEFET, where we (1) enrich each node in the ontology structure with two types of extra information: instance information for training sample augmentation and topic information to relate types to contexts, and (2) develop a coarse-to-fine typing algorithm that exploits the enriched information by training an entailment model with contrasting topics and instance-based augmented training samples.

Entity Typing

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning

1 code implementation6 Nov 2022 Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, Jiawei Han

In this work, we study few-shot learning with PLMs from a different perspective: We first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large amount of novel training samples which augment the original training set.

Few-Shot Learning
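The tune-then-generate recipe described above can be sketched as follows. This is a toy illustration, not the paper's implementation: the real method fine-tunes an autoregressive PLM on the few-shot samples, whereas `toy_generator` here is a hypothetical stand-in for that tuned model.

```python
import random

def augment_with_generator(few_shot, generator, n_new, seed=0):
    """Expand a small labeled set with synthetic samples from a generator.

    few_shot:  list of (text, label) pairs, the original few-shot training set
    generator: callable(label) -> synthetic text for that label
    n_new:     number of synthetic samples to add
    """
    rng = random.Random(seed)
    labels = sorted({label for _, label in few_shot})
    # Sample labels uniformly and ask the generator for a novel example of each.
    synthetic = [(generator(lbl), lbl)
                 for lbl in (rng.choice(labels) for _ in range(n_new))]
    return few_shot + synthetic  # original set augmented with novel samples

# Toy generator standing in for a fine-tuned autoregressive PLM.
def toy_generator(label):
    templates = {"pos": "a delightful film", "neg": "a tedious film"}
    return templates[label]

train = [("great movie", "pos"), ("awful movie", "neg")]
augmented = augment_with_generator(train, toy_generator, n_new=4)
print(len(augmented))  # 6
```

In the paper's setting the augmented set is then used to train the downstream few-shot classifier.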

Large Language Models Can Self-Improve

no code implementations20 Oct 2022 Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han

We show that our approach improves the general reasoning ability of a 540B-parameter LLM (74.4%->82.1% on GSM8K, 78.2%->83.0% on DROP, 90.0%->94.4% on OpenBookQA, and 63.4%->67.9% on ANLI-A3) and achieves state-of-the-art-level performance, without any ground-truth labels.

Arithmetic Reasoning Common Sense Reasoning +3

Few-Shot Fine-Grained Entity Typing with Automatic Label Interpretation and Instance Generation

1 code implementation28 Jun 2022 Jiaxin Huang, Yu Meng, Jiawei Han

We study the problem of few-shot Fine-grained Entity Typing (FET), where only a few annotated entity mentions with contexts are given for each entity type.

Entity Typing Language Modelling +1

Mitigating barren plateaus of variational quantum eigensolvers

no code implementations26 May 2022 Xia Liu, Geng Liu, Jiaxin Huang, Hao-Kai Zhang, Xin Wang

Variational quantum algorithms (VQAs) are expected to establish valuable applications on near-term quantum computers.

All Birds with One Stone: Multi-task Text Classification for Efficient Inference with One Forward Pass

no code implementations22 May 2022 Jiaxin Huang, Tianqi Liu, Jialu Liu, Adam D. Lelkes, Cong Yu, Jiawei Han

Multi-Task Learning (MTL) models have shown their robustness, effectiveness, and efficiency for transferring learned knowledge across tasks.

Multi-Task Learning text-classification +1

Generating Training Data with Language Models: Towards Zero-Shot Language Understanding

1 code implementation9 Feb 2022 Yu Meng, Jiaxin Huang, Yu Zhang, Jiawei Han

Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are well known for their superior text generation capabilities; bidirectional PLMs (e.g., BERT) have been the prominent choice for natural language understanding (NLU) tasks.

Few-Shot Learning MNLI-m +5

Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations

1 code implementation9 Feb 2022 Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Jiawei Han

Interestingly, there have been no standard approaches to deploying PLMs for topic discovery as better alternatives to topic models.

Clustering Language Modelling +1

Distantly-Supervised Named Entity Recognition with Noise-Robust Learning and Language Model Augmented Self-Training

1 code implementation EMNLP 2021 Yu Meng, Yunyi Zhang, Jiaxin Huang, Xuan Wang, Yu Zhang, Heng Ji, Jiawei Han

We study the problem of training named entity recognition (NER) models using only distantly-labeled data, which can be automatically obtained by matching entity mentions in the raw text with entity types in a knowledge base.

Language Modelling named-entity-recognition +2

Few-Shot Named Entity Recognition: A Comprehensive Study

2 code implementations29 Dec 2020 Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, Jiawei Han

This paper presents a comprehensive study of how to efficiently build named entity recognition (NER) systems when only a small amount of in-domain labeled data is available.

Few-Shot Learning named-entity-recognition +2

Text Classification Using Label Names Only: A Language Model Self-Training Approach

2 code implementations EMNLP 2020 Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, Jiawei Han

In this paper, we explore the potential of only using the label name of each class to train classification models on unlabeled data, without using any labeled documents.

Document Classification General Classification +6
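The label-names-only idea can be caricatured in a few lines. This is a deliberately simplified stand-in: the actual method expands each label name into a category vocabulary with a masked language model and then self-trains, whereas here plain word overlap plays that role.

```python
def pseudo_label(doc, label_names):
    """Assign the label whose seed words overlap the document most.

    label_names maps each label to a list of seed words derived from the
    label's name; this toy overlap score stands in for the paper's
    MLM-based category-vocabulary matching.
    """
    tokens = set(doc.lower().split())
    scores = {label: len(tokens & set(words))
              for label, words in label_names.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None  # abstain when nothing matches

label_names = {
    "sports": ["game", "team", "score"],
    "politics": ["election", "vote", "senate"],
}
print(pseudo_label("the team won the game", label_names))  # sports
```

Pseudo-labels produced this way would then seed a self-training loop over the unlabeled corpus.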

CoRel: Seed-Guided Topical Taxonomy Construction by Concept Learning and Relation Transferring

1 code implementation13 Oct 2020 Jiaxin Huang, Yiqing Xie, Yu Meng, Yunyi Zhang, Jiawei Han

Taxonomy is not only a fundamental form of knowledge representation, but also crucial to vast knowledge-rich applications, such as question answering and web search.

Question Answering Relation

Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding

1 code implementation18 Jul 2020 Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Chao Zhang, Jiawei Han

Mining a set of meaningful topics organized into a hierarchy is intuitively appealing since topic correlations are ubiquitous in massive text corpora.

text-classification Topic Models

Minimally Supervised Categorization of Text with Metadata

1 code implementation1 May 2020 Yu Zhang, Yu Meng, Jiaxin Huang, Frank F. Xu, Xuan Wang, Jiawei Han

Then, based on the same generative process, we synthesize training samples to address the bottleneck of label scarcity.

Document Classification

Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion

1 code implementation27 Jan 2020 Jiaxin Huang, Yiqing Xie, Yu Meng, Jiaming Shen, Yunyi Zhang, Jiawei Han

Given a small set of seed entities (e.g., "USA", "Russia"), corpus-based set expansion aims to induce an extensive set of entities which share the same semantic class (Country in this example) from a given corpus.
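The task definition above can be sketched with a minimal distributional-similarity ranker. This is a simplified illustration under assumed inputs, not the paper's auxiliary-set co-expansion framework: `entity_contexts` is a hypothetical precomputed map from each entity to the context features it occurs with.

```python
from collections import Counter

def expand_set(seeds, entity_contexts, k=2):
    """Rank candidates by how many context features they share with the seeds.

    entity_contexts: dict mapping entity -> set of context features
                     (e.g. skip-gram patterns) extracted from a corpus.
    Returns the top-k non-seed entities most similar to the seed set.
    """
    seed_contexts = Counter()
    for s in seeds:
        seed_contexts.update(entity_contexts[s])
    candidates = [e for e in entity_contexts if e not in seeds]
    scored = sorted(
        candidates,
        key=lambda e: sum(seed_contexts[c] for c in entity_contexts[e]),
        reverse=True,
    )
    return scored[:k]

contexts = {
    "USA": {"capital of __", "__ president", "visited __"},
    "Russia": {"capital of __", "__ president"},
    "France": {"capital of __", "__ president"},
    "Python": {"__ interpreter"},
}
print(expand_set(["USA", "Russia"], contexts, k=1))  # ['France']
```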

Spherical Text Embedding

1 code implementation NeurIPS 2019 Yu Meng, Jiaxin Huang, Guangyuan Wang, Chao Zhang, Honglei Zhuang, Lance Kaplan, Jiawei Han

While text embeddings are typically learned in the Euclidean space, directional similarity is often more effective in tasks such as word similarity and document clustering, which creates a gap between the training stage and usage stage of text embedding.

Clustering Riemannian optimization +1
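The gap between Euclidean training and directional usage mentioned above is easy to see in a toy example: once vectors are projected onto the unit sphere, only their direction matters. This sketch only illustrates that property; it is not the paper's Riemannian optimization procedure.

```python
import math

def normalize(v):
    """Project a vector onto the unit sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    """Cosine similarity; assumes unit-norm inputs, so it is just a dot product."""
    return sum(a * b for a, b in zip(u, v))

# Two vectors pointing the same direction but with very different magnitudes:
a, b = [1.0, 2.0], [10.0, 20.0]
ua, ub = normalize(a), normalize(b)
print(round(cosine(ua, ub), 6))  # 1.0 -- directional similarity ignores magnitude
```

Under Euclidean distance, `a` and `b` look far apart; on the sphere they coincide, which is why tasks like word similarity and document clustering favor directional measures.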

Discriminative Topic Mining via Category-Name Guided Text Embedding

1 code implementation20 Aug 2019 Yu Meng, Jiaxin Huang, Guangyuan Wang, Zihan Wang, Chao Zhang, Yu Zhang, Jiawei Han

We propose a new task, discriminative topic mining, which leverages a set of user-provided category names to mine discriminative topics from text corpora.

Document Classification General Classification +3
