Search Results for author: Jieyu Zhang

Found 33 papers, 18 papers with code

AcTune: Uncertainty-Based Active Self-Training for Active Fine-Tuning of Pretrained Language Models

1 code implementation NAACL 2022 Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang

We develop AcTune, a new framework that improves the label efficiency of active PLM fine-tuning by unleashing the power of unlabeled data via self-training.

Active Learning text-classification +1

EcoAssistant: Using LLM Assistant More Affordably and Accurately

1 code implementation 3 Oct 2023 Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang

Today, users ask large language models (LLMs) as assistants to answer queries that require external knowledge: they ask about the weather in a specific city, about stock prices, and even about where specific locations are within their neighborhood.

NLPBench: Evaluating Large Language Models on Solving NLP Problems

1 code implementation 27 Sep 2023 Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, Irene Li

Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP).


When to Learn What: Model-Adaptive Data Augmentation Curriculum

1 code implementation ICCV 2023 Chengkai Hou, Jieyu Zhang, Tianyi Zhou

Unlike previous work, MADAug selects augmentation operators for each input image by a model-adaptive policy varying between training stages, producing a data augmentation curriculum optimized for better generalization.

Data Augmentation Fairness +1

Subclass-balancing Contrastive Learning for Long-tailed Recognition

1 code implementation ICCV 2023 Chengkai Hou, Jieyu Zhang, Haonan Wang, Tianyi Zhou

We overcome these drawbacks with a novel "subclass-balancing contrastive learning" (SBCL) approach that clusters each head class into multiple subclasses of sizes similar to the tail classes and enforces representations to capture the two-layer class hierarchy between the original classes and their subclasses.

Contrastive Learning Representation Learning

Taming Small-sample Bias in Low-budget Active Learning

no code implementations 19 Jun 2023 Linxin Song, Jieyu Zhang, Xiaotian Lu, Tianyi Zhou

Instead of tuning the coefficient for each query round, which is sensitive and time-consuming, we propose the curriculum Firth bias reduction (CHAIN) that can automatically adjust the coefficient to be adaptive to the training process.

Active Learning

MaskSearch: Querying Image Masks at Scale

no code implementations 3 May 2023 Dong He, Jieyu Zhang, Maureen Daum, Alexander Ratner, Magdalena Balazinska

Machine learning tasks over image databases often generate masks that annotate image content (e.g., saliency maps, segmentation maps) and enable a variety of applications (e.g., determining whether a model is learning spurious correlations or whether an image was maliciously modified to mislead a model).

Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph

1 code implementation 20 Nov 2022 Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang, Kenji Kawaguchi, Xiaokui Xiao

To answer this question, we theoretically study the concentration property of features obtained by neighborhood aggregation on homophilic and heterophilic graphs, introduce the single-pass augmentation-free graph contrastive learning loss based on the property, and provide performance guarantees for the minimizer of the loss on downstream tasks.

Contrastive Learning

Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision

2 code implementations 6 Oct 2022 Jieyu Zhang, Linxin Song, Alexander Ratner

In particular, it is built on a mixture of Bayesian label models, each corresponding to a global pattern of correlation, and the coefficients of the mixture components are predicted by a Gaussian Process classifier based on instance features.

Variational Inference

Adaptive Ranking-based Sample Selection for Weakly Supervised Class-imbalanced Text Classification

2 code implementations 6 Oct 2022 Linxin Song, Jieyu Zhang, Tianxiang Yang, Masayuki Goto

To obtain a large amount of training labels inexpensively, researchers have recently adopted the weak supervision (WS) paradigm, which leverages labeling rules to synthesize training labels rather than using individual annotations to achieve competitive results for natural language processing (NLP) tasks.

text-classification Text Classification

Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach

1 code implementation 15 Sep 2022 Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang

Large Language Models have demonstrated remarkable few-shot performance, but the performance can be sensitive to the selection of few-shot instances.

Language Modelling Text Classification

Binary Classification with Positive Labeling Sources

no code implementations 2 Aug 2022 Jieyu Zhang, Yujing Wang, Yaming Yang, Yang Luo, Alexander Ratner

Thus, in this work, we study the application of WS on binary classification tasks with positive labeling sources only.

Benchmarking Binary Classification +1

Learning Hyper Label Model for Programmatic Weak Supervision

1 code implementation 27 Jul 2022 Renzhi Wu, Shen-En Chen, Jieyu Zhang, Xu Chu

We train the model on synthetic data generated in a way that ensures the model approximates the analytical optimal solution, and build the model upon a Graph Neural Network (GNN) to ensure that its predictions are invariant (or equivariant) to permutations of the LFs (or data points).

Frustratingly Easy Regularization on Representation Can Boost Deep Reinforcement Learning

no code implementations CVPR 2023 Qiang He, Huangyuan Su, Jieyu Zhang, Xinwen Hou

In this work, we demonstrate that the learned representation of the $Q$-network and its target $Q$-network should, in theory, satisfy a favorable distinguishable representation property.

Continuous Control reinforcement-learning +2

Understanding Programmatic Weak Supervision via Source-aware Influence Function

no code implementations 25 May 2022 Jieyu Zhang, Haonan Wang, Cheng-Yu Hsieh, Alexander Ratner

Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model.

Augmentation-Free Graph Contrastive Learning with Performance Guarantee

no code implementations 11 Apr 2022 Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang

Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.

Contrastive Learning Self-Supervised Learning

A Survey on Deep Graph Generation: Methods and Applications

no code implementations 13 Mar 2022 Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, Shu Wu

In this paper, we conduct a comprehensive review on the existing literature of deep graph generation from a variety of emerging methods to its wide application areas.

Graph Generation Graph Learning

Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming

1 code implementation 2 Mar 2022 Cheng-Yu Hsieh, Jieyu Zhang, Alexander Ratner

Weak Supervision (WS) techniques allow users to efficiently create large training datasets by programmatically labeling data with heuristic sources of supervision.

A Survey on Programmatic Weak Supervision

1 code implementation 11 Feb 2022 Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, Alexander Ratner

Labeling training data has become one of the major roadblocks to using machine learning.

TaxoEnrich: Self-Supervised Taxonomy Completion via Structure-Semantic Representations

no code implementations 10 Feb 2022 Minhao Jiang, Xiangchen Song, Jieyu Zhang, Jiawei Han

Taxonomies are fundamental to many real-world applications in various domains, serving as structural representations of knowledge.

Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD

no code implementations NeurIPS 2021 Bohan Wang, Huishuai Zhang, Jieyu Zhang, Qi Meng, Wei Chen, Tie-Yan Liu

We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance when both the prior and the posterior are jointly optimized.

Generalization Bounds

WRENCH: A Comprehensive Benchmark for Weak Supervision

1 code implementation 23 Sep 2021 Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, Alexander Ratner

To address these problems, we introduce a benchmark platform, WRENCH, for thorough and standardized evaluation of WS approaches.

Who Should Go First? A Self-Supervised Concept Sorting Model for Improving Taxonomy Expansion

no code implementations 8 Apr 2021 Xiangchen Song, Jiaming Shen, Jieyu Zhang, Jiawei Han

Taxonomies have been widely used in various machine learning and text mining systems to organize knowledge and facilitate downstream tasks.

Taxonomy Expansion

A Survey on Graph Structure Learning: Progress and Opportunities

no code implementations 4 Mar 2021 Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, Shu Wu

Specifically, we first formulate a general pipeline of GSL and review state-of-the-art methods classified by how they model graph structures, followed by applications of GSL across domains.

Graph structure learning

Taxonomy Completion via Triplet Matching Network

1 code implementation 6 Jan 2021 Jieyu Zhang, Xiangchen Song, Ying Zeng, Jiaze Chen, Jiaming Shen, Yuning Mao, Lei Li

Previous approaches focus on taxonomy expansion, i.e., finding an appropriate hypernym concept in the taxonomy for a new query concept.

Taxonomy Expansion

Relation Learning on Social Networks with Multi-Modal Graph Edge Variational Autoencoders

no code implementations 4 Nov 2019 Carl Yang, Jieyu Zhang, Haonan Wang, Sha Li, Myungwan Kim, Matt Walker, Yiou Xiao, Jiawei Han

While node semantics have been extensively explored in social networks, little research attention has been paid to profiling edge semantics, i.e., social relations.

Neural Embedding Propagation on Heterogeneous Networks

1 code implementation 29 Sep 2019 Carl Yang, Jieyu Zhang, Jiawei Han

While it generalizes LP as a simple special case, NEP is far more powerful in its natural awareness of different types of objects and links, and in its ability to automatically capture their important interaction patterns.

Network Embedding