Search Results for author: Jieyu Zhang

Found 57 papers, 35 papers with code

AcTune: Uncertainty-Based Active Self-Training for Active Fine-Tuning of Pretrained Language Models

1 code implementation • NAACL 2022 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang

We develop AcTune, a new framework that improves the label efficiency of active PLM fine-tuning by unleashing the power of unlabeled data via self-training.

Active Learning Text Classification +1
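
In rough terms, each round queries human labels for the most uncertain unlabeled examples and self-trains on confident pseudo-labels from the rest. A minimal sketch of that split, assuming a classifier with a `predict_proba` method (the helper names are ours, not the paper's API):

```python
import numpy as np

def actune_round(model, unlabeled_texts, budget, conf_threshold=0.9):
    """One round of uncertainty-based active self-training (sketch)."""
    probs = model.predict_proba(unlabeled_texts)          # (n, num_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Send the most uncertain examples to human annotators.
    query_idx = np.argsort(entropy)[-budget:]

    # Self-train on confidently predicted examples from the remainder.
    confident = np.max(probs, axis=1) >= conf_threshold
    confident[query_idx] = False
    pseudo_labels = np.argmax(probs, axis=1)[confident]
    return query_idx, np.where(confident)[0], pseudo_labels
```

The paper's full method adds refinements beyond this basic query/self-train partition; the sketch only shows the core uncertainty-based split.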

Generate Any Scene: Evaluating and Improving Text-to-Vision Generation with Scene Graph Programming

no code implementations • 11 Dec 2024 • Ziqi Gao, Weikai Huang, Jieyu Zhang, Aniruddha Kembhavi, Ranjay Krishna

We introduce Generate Any Scene, a framework that systematically enumerates scene graphs representing a vast array of visual scenes, spanning realistic to imaginative compositions.

Text to 3D

Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training

1 code implementation • 11 Dec 2024 • Shijian Wang, Linxin Song, Jieyu Zhang, Ryotaro Shimizu, Ao Luo, Li Yao, Cunjian Chen, Julian McAuley, Hanqian Wu

Models tuned on our augmented dataset achieve the best overall performance compared with same-scale MLMs tuned on datasets up to 75 times larger, highlighting the importance of instruction templates in MLM training.

Language Modelling

TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action

1 code implementation • 7 Dec 2024 • Zixian Ma, Jianguo Zhang, Zhiwei Liu, Jieyu Zhang, Juntao Tan, Manli Shu, Juan Carlos Niebles, Shelby Heinecke, Huan Wang, Caiming Xiong, Ranjay Krishna, Silvio Savarese

While open-source multi-modal language models perform well on simple question answering tasks, they often fail on complex questions that require multiple capabilities, such as fine-grained recognition, visual grounding, and reasoning, and that demand multi-step solutions.

Depth Estimation Mathematical Reasoning +4

EcoAct: Economic Agent Determines When to Register What Action

no code implementations • 3 Nov 2024 • Shaokun Zhang, Jieyu Zhang, Dujian Ding, Mirian Hipolito Garcia, Ankur Mallick, Daniel Madrigal, Menglin Xia, Victor Rühle, Qingyun Wu, Chi Wang

Recent advancements have enabled Large Language Models (LLMs) to function as agents that can perform actions using external tools.

Language Model Preference Evaluation with Multiple Weak Evaluators

1 code implementation • 14 Oct 2024 • Zhengyu Hu, Jieyu Zhang, Zhihan Xiong, Alexander Ratner, Hui Xiong, Ranjay Krishna

To improve model-based preference evaluation, we introduce GED (Preference Graph Ensemble and Denoise), a novel approach that leverages multiple model-based evaluators to construct preference graphs, and then ensemble and denoise these graphs for better, non-contradictory evaluation results.

Denoising Language Modeling +1
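
Concretely, "ensemble and denoise" can be pictured as: sum each evaluator's pairwise verdicts into a weighted preference digraph, then drop the weakest edges until no cycles remain, so a consistent ranking exists. A toy sketch of that pipeline (our simplification, not the paper's exact algorithm):

```python
from collections import defaultdict

def ensemble_graph(evaluator_votes):
    """Sum (winner, loser) verdicts from several evaluators into net edge weights."""
    raw = defaultdict(int)
    for votes in evaluator_votes:
        for winner, loser in votes:
            raw[(winner, loser)] += 1
    # Keep only the net direction for each pair of items.
    return {(a, b): w - raw.get((b, a), 0)
            for (a, b), w in raw.items() if w > raw.get((b, a), 0)}

def denoise(graph):
    """Greedily keep heavy edges that do not close a cycle (acyclic by construction)."""
    kept = defaultdict(set)

    def reaches(src, dst):
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(kept[node])
        return False

    for (a, b), _ in sorted(graph.items(), key=lambda kv: -kv[1]):
        if not reaches(b, a):        # adding a->b is safe iff b cannot reach a
            kept[a].add(b)
    return dict(kept)

# Three evaluators, one of them contradicting the majority on (A, B):
votes = [[("A", "B"), ("B", "C")], [("A", "B"), ("A", "C")], [("B", "A"), ("B", "C")]]
print(denoise(ensemble_graph(votes)))   # a cycle-free preference graph over A, B, C
```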

Explaining Length Bias in LLM-Based Preference Evaluations

no code implementations • 1 Jul 2024 • Zhengyu Hu, Linxin Song, Jieyu Zhang, Zheyuan Xiao, Tianfu Wang, Zhengyu Chen, Nicholas Jing Yuan, Jianxun Lian, Kaize Ding, Hui Xiong

The use of large language models (LLMs) as judges, particularly in preference comparisons, has become widespread, but these judgments exhibit a notable bias towards longer responses, undermining the reliability of such evaluations.

Language Modelling Large Language Model

Biomedical Visual Instruction Tuning with Clinician Preference Alignment

1 code implementation • 19 Jun 2024 • Hejie Cui, Lingjun Mao, Xin Liang, Jieyu Zhang, Hui Ren, Quanzheng Li, Xiang Li, Carl Yang

In this work, we propose a data-centric framework, Biomedical Visual Instruction Tuning with Clinician Preference Alignment (BioMed-VITAL), that incorporates clinician preferences into both stages of generating and selecting instruction data for tuning biomedical multimodal foundation models.

Instruction Following Visual Question Answering (VQA)

Task Me Anything

1 code implementation • 17 Jun 2024 • Jieyu Zhang, Weikai Huang, Zixian Ma, Oscar Michel, Dong He, Tanmay Gupta, Wei-Chiu Ma, Ali Farhadi, Aniruddha Kembhavi, Ranjay Krishna

As a result, when a developer wants to identify which models to use for their application, they are overwhelmed by the number of benchmarks and remain uncertain about which benchmark's results are most reflective of their specific use case.

Attribute +3

Adaptive In-conversation Team Building for Language Model Agents

no code implementations • 29 May 2024 • Linxin Song, Jiale Liu, Jieyu Zhang, Shaokun Zhang, Ao Luo, Shijian Wang, Qingyun Wu, Chi Wang

Leveraging multiple large language model (LLM) agents has been shown to be a promising approach for tackling complex tasks, yet the effective design of multiple agents for a particular application remains an art.

Diversity Language Modeling +3

m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks

2 code implementations • 17 Mar 2024 • Zixian Ma, Weikai Huang, Jieyu Zhang, Tanmay Gupta, Ranjay Krishna

With m&m's, we evaluate 10 popular LLMs with 2 planning strategies (multi-step vs. step-by-step planning), 2 plan formats (JSON vs. code), and 3 types of feedback (parsing/verification/execution).

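The difference between the two plan formats is easiest to see side by side. Below is a hypothetical two-step task expressed both ways (the tool names are illustrative stand-ins, not necessarily tools from m&m's inventory):

```python
# JSON-format plan: an explicit list of tool calls with wired-up arguments.
json_plan = [
    {"id": 1, "tool": "image_captioning", "args": {"image": "photo.jpg"}},
    {"id": 2, "tool": "text_summarization", "args": {"text": "<output of step 1>"}},
]

# Code-format plan: the same two steps as executable code.
code_plan = '''
caption = image_captioning(image="photo.jpg")
summary = text_summarization(text=caption)
'''
```

Roughly, multi-step planning emits such a plan in one shot, while step-by-step planning chooses one call at a time after observing feedback (our gloss of the two strategies).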

Offline Training of Language Model Agents with Functions as Learnable Weights

1 code implementation • 17 Feb 2024 • Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, Qingyun Wu

Researchers and practitioners have recently reframed powerful Large Language Models (LLMs) as agents, enabling them to automate complex tasks largely via the use of specialized functions.

Language Modelling

Leveraging Large Language Models for Structure Learning in Prompted Weak Supervision

1 code implementation • 2 Feb 2024 • Jinyan Su, Peilin Yu, Jieyu Zhang, Stephen H. Bach

We propose a Structure Refining Module, a simple yet effective first approach that exploits the intrinsic structure of the embedding space through the similarities of the prompts.
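
One concrete reading of "similarities of the prompts" is: embed each labeling prompt, compute pairwise cosine similarity, and treat strongly similar prompts as dependent supervision sources. A minimal sketch under that reading (the embedding step is abstracted away; this is not the authors' implementation):

```python
import numpy as np

def prompt_dependency_pairs(prompt_embeddings, threshold=0.8):
    """Link prompts whose embeddings are highly similar (sketch).

    prompt_embeddings: (num_prompts, dim) array, one row per prompt.
    """
    X = np.asarray(prompt_embeddings, dtype=float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize rows
    sim = X @ X.T                                   # cosine similarities
    n = len(X)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sim[i, j] >= threshold]
```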

EHRAgent: Code Empowers Large Language Models for Few-shot Complex Tabular Reasoning on Electronic Health Records

1 code implementation • 13 Jan 2024 • Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Jieyu Zhang, Hang Wu, Yuanda Zhu, Joyce Ho, Carl Yang, May D. Wang

Large language models (LLMs) have demonstrated exceptional capabilities in planning and tool utilization as autonomous agents, but few have been developed for medical problem-solving.

Code Generation Few-Shot Learning +1

EcoAssistant: Using LLM Assistant More Affordably and Accurately

1 code implementation • 3 Oct 2023 • Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang

Today, users rely on large language models (LLMs) as assistants to answer queries that require external knowledge: they ask about the weather in a specific city, about stock prices, and even about where specific locations are within their neighborhood.

NLPBench: Evaluating Large Language Models on Solving NLP Problems

1 code implementation • 27 Sep 2023 • Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, Irene Li

Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP).

Benchmarking Math

Uncovering Neural Scaling Laws in Molecular Representation Learning

2 code implementations • NeurIPS 2023 • Dingshuo Chen, Yanqiao Zhu, Jieyu Zhang, Yuanqi Du, Zhixun Li, Qiang Liu, Shu Wu, Liang Wang

Molecular Representation Learning (MRL) has emerged as a powerful tool for drug and materials discovery in a variety of tasks such as virtual screening and inverse design.

molecular representation Representation Learning

When to Learn What: Model-Adaptive Data Augmentation Curriculum

1 code implementation • ICCV 2023 • Chengkai Hou, Jieyu Zhang, Tianyi Zhou

Unlike previous work, MADAug selects augmentation operators for each input image by a model-adaptive policy varying between training stages, producing a data augmentation curriculum optimized for better generalization.

Data Augmentation Fairness +1
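
Read literally, the policy scores candidate augmentation operators per input, and a stage-dependent schedule shifts which operators are favored as training progresses. A toy rendering of that idea (the operator list and scoring interface are invented for illustration, not MADAug's architecture):

```python
import random

AUGMENT_OPS = ["rotate", "color_jitter", "cutout", "shear"]   # illustrative operators

def sample_op(policy_scores, epoch, total_epochs):
    """Sample one operator for an image from per-image policy scores (sketch).

    policy_scores: one non-negative score per operator for this image,
    e.g. produced by a small policy network. Early in training the mix
    stays close to uniform; later it follows the learned scores.
    """
    progress = epoch / total_epochs
    weights = [(1 - progress) + progress * s for s in policy_scores]
    return random.choices(AUGMENT_OPS, weights=weights, k=1)[0]
```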

SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models

1 code implementation • 20 Jul 2023 • Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang

Most of the existing Large Language Model (LLM) benchmarks on scientific problem reasoning focus on problems grounded in high-school subjects and are confined to elementary algebraic operations.

Benchmarking Language Modeling +3

Subclass-balancing Contrastive Learning for Long-tailed Recognition

1 code implementation • ICCV 2023 • Chengkai Hou, Jieyu Zhang, Haonan Wang, Tianyi Zhou

We overcome these drawbacks with a novel "subclass-balancing contrastive learning (SBCL)" approach that clusters each head class into multiple subclasses of sizes similar to the tail classes and enforces representations to capture the two-layer class hierarchy between the original classes and their subclasses.

Contrastive Learning Representation Learning
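
The subclass construction can be approximated with off-the-shelf clustering: split each head class into roughly tail-sized clusters and use the cluster IDs as subclass labels for the contrastive loss. A sketch with scikit-learn (our simplification, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_subclass_labels(features, labels, tail_size):
    """Cluster every class into subclasses of roughly `tail_size` samples."""
    subclass = np.zeros(len(labels), dtype=int)
    next_id = 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = max(1, len(idx) // tail_size)     # head classes get more subclasses
        ids = KMeans(n_clusters=k, n_init=10).fit_predict(features[idx])
        subclass[idx] = ids + next_id
        next_id += k
    return subclass
```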

SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality

2 code implementations • NeurIPS 2023 • Cheng-Yu Hsieh, Jieyu Zhang, Zixian Ma, Aniruddha Kembhavi, Ranjay Krishna

In the last year alone, a surge of new benchmarks to measure compositional understanding of vision-language models has permeated the machine learning ecosystem.

Taming Small-sample Bias in Low-budget Active Learning

no code implementations • 19 Jun 2023 • Linxin Song, Jieyu Zhang, Xiaotian Lu, Tianyi Zhou

Instead of tuning the coefficient for each query round, which is sensitive and time-consuming, we propose the curriculum Firth bias reduction (CHAIN) that can automatically adjust the coefficient to be adaptive to the training process.

Active Learning
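
The objective in question is a Firth-penalized training loss with a time-varying coefficient. One plausible rendering, following the classical Firth correction rather than a formula quoted from the paper:

```latex
\max_{\theta}\;\; \ell(\theta) \;+\; \lambda_t \,\log\det F(\theta)
```

where \(\ell\) is the log-likelihood, \(F(\theta)\) the Fisher information matrix, and \(\lambda_t\) the coefficient (classical Firth fixes \(\lambda = 1/2\)); CHAIN's contribution is scheduling \(\lambda_t\) adaptively over training rather than re-tuning it at every query round.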

On the Trade-off of Intra-/Inter-class Diversity for Supervised Pre-training

no code implementations • NeurIPS 2023 • Jieyu Zhang, Bohan Wang, Zhengyu Hu, Pang Wei Koh, Alexander Ratner

Pre-training datasets are critical for building state-of-the-art machine learning models, motivating rigorous study on their impact on downstream tasks.

Diversity

MaskSearch: Querying Image Masks at Scale

no code implementations • 3 May 2023 • Dong He, Jieyu Zhang, Maureen Daum, Alexander Ratner, Magdalena Balazinska

Machine learning tasks over image databases often generate masks that annotate image content (e.g., saliency maps, segmentation maps, depth maps) and enable a variety of applications (e.g., determining if a model is learning spurious correlations or if an image was maliciously modified to mislead a model).

Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph

1 code implementation • 20 Nov 2022 • Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang, Kenji Kawaguchi, Xiaokui Xiao

To answer this question, we theoretically study the concentration property of features obtained by neighborhood aggregation on homophilic and heterophilic graphs, introduce the single-pass augmentation-free graph contrastive learning loss based on the property, and provide performance guarantees for the minimizer of the loss on downstream tasks.

Contrastive Learning

Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision

2 code implementations • 6 Oct 2022 • Jieyu Zhang, Linxin Song, Alexander Ratner

In particular, it is built on a mixture of Bayesian label models, each corresponding to a global pattern of correlation, and the coefficients of the mixture components are predicted by a Gaussian Process classifier based on instance features.

Variational Inference
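
In pseudocode terms: each mixture component is a label model producing a posterior over classes, and a Gaussian Process classifier over instance features supplies per-instance mixing weights. A toy sketch with scikit-learn (component assignments are taken as given here, whereas the paper infers them, e.g., via variational inference):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def mixture_aggregate(features, component_ids, component_posteriors):
    """Blend label-model posteriors with GP-predicted mixture weights (sketch).

    features: (n, d) instance features.
    component_ids: (n,) training-time component assignment per instance.
    component_posteriors: (num_components, n, num_classes) label posteriors.
    """
    gp = GaussianProcessClassifier().fit(features, component_ids)
    weights = gp.predict_proba(features)                  # (n, num_components)
    mixed = np.einsum("nk,knc->nc", weights, component_posteriors)
    return mixed / mixed.sum(axis=1, keepdims=True)
```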

Adaptive Ranking-based Sample Selection for Weakly Supervised Class-imbalanced Text Classification

2 code implementations • 6 Oct 2022 • Linxin Song, Jieyu Zhang, Tianxiang Yang, Masayuki Goto

To obtain large quantities of training labels inexpensively, researchers have recently adopted the weak supervision (WS) paradigm, which synthesizes training labels from labeling rules rather than individual annotations, achieving competitive results on natural language processing (NLP) tasks.

Text Classification

Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach

1 code implementation • 15 Sep 2022 • Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang

Large language models have demonstrated remarkable few-shot performance, but their performance can be sensitive to the selection of few-shot instances.

Diversity Language Modeling +2

Binary Classification with Positive Labeling Sources

no code implementations • 2 Aug 2022 • Jieyu Zhang, Yujing Wang, Yaming Yang, Yang Luo, Alexander Ratner

Thus, in this work, we study the application of WS on binary classification tasks with positive labeling sources only.

Benchmarking Binary Classification +1

Learning Hyper Label Model for Programmatic Weak Supervision

1 code implementation • 27 Jul 2022 • Renzhi Wu, Shen-En Chen, Jieyu Zhang, Xu Chu

We train the model on synthetic data generated in a way that ensures it approximates the analytical optimal solution, and we build the model upon a Graph Neural Network (GNN) so that its predictions are invariant (or equivariant) to permutations of the LFs (or data points).

Graph Neural Network

Frustratingly Easy Regularization on Representation Can Boost Deep Reinforcement Learning

no code implementations • CVPR 2023 • Qiang He, Huangyuan Su, Jieyu Zhang, Xinwen Hou

In this work, we demonstrate that the learned representation of the $Q$-network and its target $Q$-network should, in theory, satisfy a favorable distinguishable representation property.

Continuous Control Deep Reinforcement Learning +3

Understanding Programmatic Weak Supervision via Source-aware Influence Function

no code implementations • 25 May 2022 • Jieyu Zhang, Haonan Wang, Cheng-Yu Hsieh, Alexander Ratner

Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model.

Augmentation-Free Graph Contrastive Learning with Performance Guarantee

no code implementations • 11 Apr 2022 • Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang

Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.

Contrastive Learning Graph Neural Network +1

A Survey on Deep Graph Generation: Methods and Applications

no code implementations • 13 Mar 2022 • Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, Shu Wu

In this paper, we conduct a comprehensive review of the existing literature on deep graph generation, from a variety of emerging methods to its wide application areas.

Graph Generation Graph Learning +1

Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming

1 code implementation • 2 Mar 2022 • Cheng-Yu Hsieh, Jieyu Zhang, Alexander Ratner

Weak Supervision (WS) techniques allow users to efficiently create large training datasets by programmatically labeling data with heuristic sources of supervision.

A Survey on Programmatic Weak Supervision

1 code implementation • 11 Feb 2022 • Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, Alexander Ratner

Labeling training data has become one of the major roadblocks to using machine learning.

Survey

TaxoEnrich: Self-Supervised Taxonomy Completion via Structure-Semantic Representations

no code implementations • 10 Feb 2022 • Minhao Jiang, Xiangchen Song, Jieyu Zhang, Jiawei Han

Taxonomies are fundamental to many real-world applications in various domains, serving as structural representations of knowledge.

Position

Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD

no code implementations • NeurIPS 2021 • Bohan Wang, Huishuai Zhang, Jieyu Zhang, Qi Meng, Wei Chen, Tie-Yan Liu

We prove that with constraint to guarantee low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance if both the prior and the posterior are jointly optimized.

Generalization Bounds
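
As a formula, the stated result reads: among noise covariances satisfying the low-empirical-risk constraint, the bound is optimized (up to scaling) by the matrix square root of the expected gradient covariance,

```latex
\Sigma^{\ast} \;\propto\; \Big( \mathbb{E}\big[ \nabla_\theta \ell(\theta)\, \nabla_\theta \ell(\theta)^{\top} \big] \Big)^{1/2},
```

i.e., anisotropic noise rather than the isotropic Gaussian used in vanilla SGLD.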

WRENCH: A Comprehensive Benchmark for Weak Supervision

1 code implementation • 23 Sep 2021 • Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, Alexander Ratner

To address these problems, we introduce a benchmark platform, WRENCH, for thorough and standardized evaluation of WS approaches.

Who Should Go First? A Self-Supervised Concept Sorting Model for Improving Taxonomy Expansion

no code implementations • 8 Apr 2021 • Xiangchen Song, Jiaming Shen, Jieyu Zhang, Jiawei Han

Taxonomies have been widely used in various machine learning and text mining systems to organize knowledge and facilitate downstream tasks.

Taxonomy Expansion

A Survey on Graph Structure Learning: Progress and Opportunities

no code implementations • 4 Mar 2021 • Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, Shu Wu

Specifically, we first formulate a general pipeline of GSL and review state-of-the-art methods classified by the way of modeling graph structures, followed by applications of GSL across domains.

Graph structure learning Survey

Taxonomy Completion via Triplet Matching Network

1 code implementation • 6 Jan 2021 • Jieyu Zhang, Xiangchen Song, Ying Zeng, Jiaze Chen, Jiaming Shen, Yuning Mao, Lei Li

Previous approaches focus on taxonomy expansion, i.e., finding an appropriate hypernym concept from the taxonomy for a new query concept.

Taxonomy Expansion Triplet

Relation Learning on Social Networks with Multi-Modal Graph Edge Variational Autoencoders

no code implementations • 4 Nov 2019 • Carl Yang, Jieyu Zhang, Haonan Wang, Sha Li, Myungwan Kim, Matt Walker, Yiou Xiao, Jiawei Han

While node semantics have been extensively explored in social networks, little research attention has been paid to profiling edge semantics, i.e., social relations.

Relation

Neural Embedding Propagation on Heterogeneous Networks

1 code implementation • 29 Sep 2019 • Carl Yang, Jieyu Zhang, Jiawei Han

While it generalizes LP as a simple instance, NEP is far more powerful in its natural awareness of different types of objects and links and in its ability to automatically capture their important interaction patterns.

Network Embedding
