1 code implementation • NAACL 2022 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
We develop AcTune, a new framework that improves the label efficiency of active PLM fine-tuning by unleashing the power of unlabeled data via self-training.
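To make the setup concrete, below is a minimal sketch of one round of an active-learning-plus-self-training loop in the spirit described above; the least-confidence query rule, the confidence threshold, and the sklearn-style `model` interface are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def active_self_training_round(model, labeled, unlabeled, budget=16, conf=0.9):
    """One illustrative round: query the most uncertain unlabeled points for
    annotation, and self-train on confidently pseudo-labeled ones."""
    probs = model.predict_proba(unlabeled["x"])            # (n, n_classes)
    uncertainty = 1.0 - probs.max(axis=1)                  # least-confidence score
    query_idx = np.argsort(-uncertainty)[:budget]          # send these for labeling
    keep = probs.max(axis=1) >= conf                       # confident -> pseudo-label
    x = np.concatenate([labeled["x"], unlabeled["x"][keep]])
    y = np.concatenate([labeled["y"], probs[keep].argmax(axis=1)])
    model.fit(x, y)                                        # retrain on both sources
    return model, query_idx
```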
1 code implementation • 3 Oct 2023 • Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang
Today, users rely on large language models (LLMs) as assistants to answer queries that require external knowledge: they ask about the weather in a specific city, about stock prices, and even about where specific locations are within their neighborhood.
1 code implementation • 27 Sep 2023 • Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, Irene Li
Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP).
1 code implementation • ICCV 2023 • Chengkai Hou, Jieyu Zhang, Tianyi Zhou
Unlike previous work, MADAug selects augmentation operators for each input image via a model-adaptive policy that varies across training stages, producing a data augmentation curriculum optimized for better generalization.
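As a rough, stage-varying illustration (a toy heuristic; the learned policy in the paper is optimized, not scheduled like this):

```python
import random

OPS = ["identity", "flip", "rotate", "color_jitter", "cutout"]  # hypothetical pool

def select_ops(image_difficulty, epoch, total_epochs, k=2):
    """Pick k operators; harder images later in training draw from a stronger
    pool, giving an easy-to-hard augmentation curriculum."""
    strength = min(1.0, image_difficulty * (epoch + 1) / total_epochs)
    pool = OPS[: max(1, round(strength * len(OPS)))]
    return random.choices(pool, k=k)
```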
1 code implementation • 16 Aug 2023 • Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks.
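The open-source quickstart reduces to a two-agent conversation like the following; the model name and config values are placeholders to fill in.

```python
import autogen  # pip install pyautogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

# An LLM-backed assistant plus a proxy agent that can execute the code it writes.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The agents converse back and forth until the task is solved.
user_proxy.initiate_chat(assistant, message="Plot NVDA's year-to-date stock price change.")
```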
2 code implementations • 20 Jul 2023 • Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang
Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks.
1 code implementation • ICCV 2023 • Chengkai Hou, Jieyu Zhang, Haonan Wang, Tianyi Zhou
We overcome these drawbacks with a novel "subclass-balancing contrastive learning (SBCL)" approach that clusters each head class into multiple subclasses of sizes comparable to the tail classes and enforces representations that capture the two-layer class hierarchy between the original classes and their subclasses.
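As a sketch of the subclass-splitting step (illustrative only; the paper's balanced clustering and two-layer contrastive objective are not reproduced here):

```python
import numpy as np
from sklearn.cluster import KMeans

def split_head_class(features, tail_size):
    """Cluster one head class's features into subclasses roughly the size of
    the tail classes, yielding the subclass labels used by the hierarchy."""
    k = max(1, len(features) // tail_size)      # number of tail-sized subclasses
    return KMeans(n_clusters=k, n_init=10).fit_predict(features)

# toy usage: a head class of 1000 samples vs. tail classes of ~100
subclass_ids = split_head_class(np.random.randn(1000, 32), tail_size=100)
```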
no code implementations • 19 Jun 2023 • Linxin Song, Jieyu Zhang, Xiaotian Lu, Tianyi Zhou
Instead of tuning the coefficient for each query round, which is sensitive and time-consuming, we propose curriculum Firth bias reduction (CHAIN), which automatically adjusts the coefficient to adapt to the training process.
no code implementations • 3 May 2023 • Dong He, Jieyu Zhang, Maureen Daum, Alexander Ratner, Magdalena Balazinska
Machine learning tasks over image databases often generate masks that annotate image content (e.g., saliency maps, segmentation maps) and enable a variety of applications (e.g., determining whether a model is learning spurious correlations or whether an image was maliciously modified to mislead a model).
no code implementations • 30 Dec 2022 • Hong Guo, Yujing Wang, Jieyu Zhang, Zhengjie Lin, Yunhai Tong, Lei Yang, Luoxing Xiong, Congrui Huang
Time-series anomaly detection is an important task that has been widely applied in industry.
1 code implementation • 20 Nov 2022 • Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang, Kenji Kawaguchi, Xiaokui Xiao
To answer this question, we theoretically study the concentration property of features obtained by neighborhood aggregation on homophilic and heterophilic graphs, introduce a single-pass, augmentation-free graph contrastive learning loss based on this property, and provide performance guarantees for the minimizer of the loss on downstream tasks.
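A minimal sketch of what a single-pass, augmentation-free contrastive loss can look like, assuming node embeddings `h` and a normalized adjacency `adj`; this is an illustration of the idea, not the paper's exact loss or guarantees.

```python
import torch
import torch.nn.functional as F

def single_pass_contrastive_loss(h, adj, tau=0.5):
    """Pull each node toward its one-pass neighborhood aggregation and push it
    away from other nodes' aggregations; no augmented views are needed."""
    z = F.normalize(h, dim=1)
    agg = F.normalize(adj @ h, dim=1)                    # single aggregation pass
    logits = z @ agg.t() / tau                           # (n, n) similarities
    targets = torch.arange(h.size(0), device=h.device)   # positive: own aggregate
    return F.cross_entropy(logits, targets)
```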
2 code implementations • 6 Oct 2022 • Jieyu Zhang, Linxin Song, Alexander Ratner
In particular, it is built on a mixture of Bayesian label models, each corresponding to a global pattern of correlation, and the coefficients of the mixture components are predicted by a Gaussian Process classifier based on instance features.
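In pseudocode-level terms, the per-instance prediction combines the mixture components as below; `label_models` and `gp_classifier` are hypothetical stand-ins for fitted components with sklearn-style interfaces.

```python
import numpy as np

def mixture_soft_labels(instance_feats, votes, label_models, gp_classifier):
    """Weight K label models' posteriors by per-instance mixture coefficients
    predicted from instance features by a Gaussian Process classifier."""
    weights = gp_classifier.predict_proba(instance_feats)   # (n, K) coefficients
    posteriors = np.stack(
        [m.predict_proba(votes) for m in label_models], axis=1
    )                                                        # (n, K, C)
    return np.einsum("nk,nkc->nc", weights, posteriors)     # (n, C) soft labels
```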
2 code implementations • 6 Oct 2022 • Linxin Song, Jieyu Zhang, Tianxiang Yang, Masayuki Goto
To obtain large amounts of training labels inexpensively, researchers have recently adopted the weak supervision (WS) paradigm, which leverages labeling rules to synthesize training labels rather than using individual annotations, and achieves competitive results on natural language processing (NLP) tasks.
1 code implementation • 15 Sep 2022 • Yue Yu, Rongzhi Zhang, Ran Xu, Jieyu Zhang, Jiaming Shen, Chao Zhang
Large language models (LLMs) have demonstrated remarkable few-shot performance, but this performance can be sensitive to the selection of few-shot instances.
no code implementations • 2 Aug 2022 • Jieyu Zhang, Yujing Wang, Yaming Yang, Yang Luo, Alexander Ratner
Thus, in this work, we study the application of WS on binary classification tasks with positive labeling sources only.
1 code implementation • 27 Jul 2022 • Renzhi Wu, Shen-En Chen, Jieyu Zhang, Xu Chu
We train the model on synthetic data generated in a way that ensures the model approximates the analytical optimal solution, and we build the model upon a Graph Neural Network (GNN) to ensure that its predictions are invariant (or equivariant) to permutations of the LFs (or data points).
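A DeepSets-style stand-in illustrates the permutation-invariance requirement (hypothetical architecture, simpler than the paper's GNN): summing per-LF encodings makes the output independent of LF ordering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PermutationInvariantLabelModel(nn.Module):
    def __init__(self, n_classes, hidden=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_classes + 1, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, n_classes)

    def forward(self, votes):
        # votes: (n_points, n_lfs) LongTensor in {-1, 0, ..., C-1}; -1 = abstain
        onehot = F.one_hot(votes + 1, self.phi[0].in_features).float()
        return self.rho(self.phi(onehot).sum(dim=1))  # sum over LFs -> invariant
```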
no code implementations • CVPR 2023 • Qiang He, Huangyuan Su, Jieyu Zhang, Xinwen Hou
In this work, we demonstrate that the learned representation of the $Q$-network and its target $Q$-network should, in theory, satisfy a favorable distinguishable representation property.
no code implementations • 25 May 2022 • Jieyu Zhang, Haonan Wang, Cheng-Yu Hsieh, Alexander Ratner
Programmatic Weak Supervision (PWS) aggregates the votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model.
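The simplest instance of this label-model step is a soft majority vote over sources, sketched below with an abstain value of -1 (a common baseline, not the paper's contribution).

```python
import numpy as np

def soft_majority_vote(votes, n_classes):
    """votes: (n_points, n_sources) int array, -1 = abstain. Returns
    (n_points, n_classes) probabilistic training labels."""
    counts = np.stack([(votes == c).sum(axis=1) for c in range(n_classes)], axis=1)
    counts = counts.astype(float) + 1e-8   # all-abstain rows fall back to uniform
    return counts / counts.sum(axis=1, keepdims=True)
```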
no code implementations • 11 Apr 2022 • Haonan Wang, Jieyu Zhang, Qi Zhu, Wei Huang
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
no code implementations • 13 Mar 2022 • Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, Shu Wu
In this paper, we conduct a comprehensive review of the existing literature on deep graph generation, covering a variety of emerging methods and their wide range of application areas.
1 code implementation • 2 Mar 2022 • Cheng-Yu Hsieh, Jieyu Zhang, Alexander Ratner
Weak Supervision (WS) techniques allow users to efficiently create large training datasets by programmatically labeling data with heuristic sources of supervision.
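For example, here are two toy labeling functions for spam detection, the kind of heuristic supervision source such techniques aggregate (illustrative, not taken from the paper); their outputs would feed an aggregator like the majority vote sketched earlier.

```python
import re

def lf_contains_url(text):
    """1 = spam, 0 = ham, -1 = abstain."""
    return 1 if re.search(r"https?://", text) else -1

def lf_short_greeting(text):
    return 0 if len(text.split()) < 5 and text.lower().startswith("hi") else -1
```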
1 code implementation • 11 Feb 2022 • Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, Alexander Ratner
Labeling training data has become one of the major roadblocks to using machine learning.
no code implementations • 10 Feb 2022 • Minhao Jiang, Xiangchen Song, Jieyu Zhang, Jiawei Han
Taxonomies are fundamental to many real-world applications in various domains, serving as structural representations of knowledge.
1 code implementation • 16 Dec 2021 • Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
We propose AcTune, a new framework that leverages unlabeled data to improve the label efficiency of active PLM fine-tuning.
no code implementations • NeurIPS 2021 • Bohan Wang, Huishuai Zhang, Jieyu Zhang, Qi Meng, Wei Chen, Tie-Yan Liu
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance when the prior and the posterior are jointly optimized.
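In symbols (notation assumed here, since the listing omits it):

```latex
% With the prior and posterior jointly optimized under a low-empirical-risk
% constraint, the optimal noise covariance is the matrix square root of the
% expected gradient covariance:
\Sigma^{*} \propto \big( \mathbb{E}\big[\, g\, g^{\top} \big] \big)^{1/2},
\qquad g = \nabla_{\theta}\, \ell(\theta; x).
```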
no code implementations • ICLR 2022 • Jieyu Zhang, Bohan Wang, Xiangchen Song, Yujing Wang, Yaming Yang, Jing Bai, Alexander Ratner
Creating labeled training sets has become one of the major roadblocks in machine learning.
1 code implementation • 23 Sep 2021 • Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, Alexander Ratner
To address these problems, we introduce a benchmark platform, WRENCH, for thorough and standardized evaluation of WS approaches.
no code implementations • 8 Apr 2021 • Xiangchen Song, Jiaming Shen, Jieyu Zhang, Jiawei Han
Taxonomies have been widely used in various machine learning and text mining systems to organize knowledge and facilitate downstream tasks.
no code implementations • 4 Mar 2021 • Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, Shu Wu
Specifically, we first formulate a general pipeline of GSL and review state-of-the-art methods classified by how they model graph structures, followed by applications of GSL across domains.
1 code implementation • 6 Jan 2021 • Jieyu Zhang, Xiangchen Song, Ying Zeng, Jiaze Chen, Jiaming Shen, Yuning Mao, Lei Li
Previous approaches focus on taxonomy expansion, i.e., finding an appropriate hypernym concept in the existing taxonomy for a new query concept.
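A toy contrast between the two settings (hypothetical data structure, not the paper's method): expansion only attaches the query as a new leaf under an existing node, while completion also considers inserting it between a hypernym and one of its hyponyms.

```python
# Toy taxonomy as parent -> children.
taxonomy = {"science": ["biology", "physics"], "biology": ["genetics"]}
nodes = set(taxonomy) | {c for kids in taxonomy.values() for c in kids}

def expansion_positions():
    """Taxonomy expansion: attach the query as a new leaf under some node."""
    return [(parent, None) for parent in nodes]

def completion_positions():
    """Taxonomy completion: additionally allow inserting the query between
    an existing hypernym and one of its hyponyms."""
    between = [(p, c) for p, kids in taxonomy.items() for c in kids]
    return expansion_positions() + between
```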
no code implementations • 4 Nov 2019 • Carl Yang, Jieyu Zhang, Haonan Wang, Sha Li, Myungwan Kim, Matt Walker, Yiou Xiao, Jiawei Han
While node semantics have been extensively explored in social networks, little research attention has been paid to profiling edge semantics, i.e., social relations.
1 code implementation • 29 Sep 2019 • Carl Yang, Jieyu Zhang, Jiawei Han
While it generalizes LP as a simple instance, NEP is far more powerful in its natural awareness of different types of objects and links and in its ability to automatically capture their important interaction patterns.