Search Results for author: Jiacheng Liu

Found 24 papers, 12 papers with code

Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

2 code implementations • 10 Jan 2024 • Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu

Next, we discuss several key challenges to achieve intelligent, efficient and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.

DataCLUE: A Benchmark Suite for Data-centric NLP

1 code implementation • 16 Nov 2021 • Liang Xu, Jiacheng Liu, Xiang Pan, Xiaojing Lu, Xiaofeng Hou

However, we have not seen significant research progress in this field, especially in NLP.

NaturalProofs: Mathematical Theorem Proving in Natural Language

1 code implementation • 24 Mar 2021 • Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.

Automated Theorem Proving · Domain Generalization · +3

Generated Knowledge Prompting for Commonsense Reasoning

1 code implementation • ACL 2022 • Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.

Language Modelling · Open-Ended Question Answering
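
To make the idea behind the title concrete, the sketch below illustrates generated-knowledge prompting at a high level: one model produces free-text knowledge statements, a second answers the question once per statement, and the predictions are aggregated. Both model calls and the majority-vote aggregation are illustrative stand-ins, not the paper's implementation.

```python
# Minimal sketch of generated knowledge prompting (illustrative only).
# The two "model" functions below are stand-ins for real language-model calls.

def generate_knowledge(question: str, num_statements: int = 2) -> list[str]:
    """Stand-in for prompting an LM with few-shot demonstrations to
    produce free-text knowledge statements about the question."""
    return [f"Background fact {i + 1} related to: {question}" for i in range(num_statements)]

def answer_with_knowledge(question: str, knowledge: str, choices: list[str]) -> str:
    """Stand-in for an inference model that scores each answer choice
    conditioned on the knowledge plus the question."""
    return max(choices, key=lambda c: len(set(c.split()) & set(knowledge.split())))

def generated_knowledge_prompting(question: str, choices: list[str]) -> str:
    # 1) Generate several knowledge statements for the question.
    statements = generate_knowledge(question)
    # 2) Answer once per statement, then aggregate (majority vote, for illustration).
    votes = [answer_with_knowledge(question, k, choices) for k in statements]
    return max(set(votes), key=votes.count)

# Toy demo; the output is arbitrary because the model calls are stand-ins.
print(generated_knowledge_prompting("Where would you most likely find a penguin?",
                                     ["desert", "Antarctica", "rainforest"]))
```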

Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

3 code implementations • 21 Oct 2022 • Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample

In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.

Ranked #3 on Automated Theorem Proving on miniF2F-valid (Pass@100 metric)

Automated Theorem Proving · Language Modelling
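
For intuition, here is a minimal sketch of the Draft-Sketch-Prove loop as summarized above: draft an informal proof, map it to a formal sketch whose open holes are easier sub-problems, and hand those to an automated prover. All three helpers are hypothetical stand-ins for the language model and the prover, not the authors' code.

```python
# Illustrative sketch of the Draft, Sketch, Prove (DSP) pipeline described above.

def draft_informal_proof(statement: str) -> str:
    """Stand-in: ask a language model for an informal, natural-language proof."""
    return f"Informal proof of: {statement}"

def sketch_formal_proof(statement: str, informal_proof: str) -> list[str]:
    """Stand-in: map the informal proof to a formal proof sketch, i.e. a list of
    intermediate conjectures ("holes") that remain to be proved formally."""
    return [f"subgoal {i} of: {statement}" for i in range(1, 3)]

def automated_prover(subgoal: str) -> bool:
    """Stand-in: try to close one easier sub-problem with an automated prover."""
    return True

def draft_sketch_prove(statement: str) -> bool:
    informal = draft_informal_proof(statement)            # Draft
    subgoals = sketch_formal_proof(statement, informal)   # Sketch
    return all(automated_prover(g) for g in subgoals)     # Prove each easier sub-problem

print(draft_sketch_prove("the sum of two even numbers is even"))
```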

NaturalProver: Grounded Mathematical Proof Generation with Language Models

1 code implementation • 25 May 2022 • Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi

Theorem proving in natural mathematical language - the mixture of symbolic and natural language used by humans - plays a central role in mathematical advances and education, and tests aspects of reasoning that are core to intelligence.

Automated Theorem Proving · Language Modelling

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

1 code implementation • 6 Oct 2022 • Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi

Our work is the first to report that knowledge generated by models that are orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of commonsense knowledge elicited from GPT-3.

Question Answering · Reinforcement Learning (RL)

Phrase Grounding by Soft-Label Chain Conditional Random Field

1 code implementation • IJCNLP 2019 • Jiacheng Liu, Julia Hockenmaier

In this paper, we formulate phrase grounding as a sequence labeling task where we treat candidate regions as potential labels, and use neural chain Conditional Random Fields (CRFs) to model dependencies among regions for adjacent mentions.

Phrase Grounding · Structured Prediction
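
The abstract's sequence-labeling formulation can be sketched with a plain Viterbi decode: mentions are time steps, candidate regions are the label set, and a pairwise score couples the regions chosen for adjacent mentions. The random scores below are placeholders for the neural potentials; the paper's soft-label CRF training is not reproduced here.

```python
import numpy as np

# Phrase grounding as chain-structured sequence labeling: mentions are steps,
# candidate regions are labels, pairwise scores couple adjacent mentions.

def viterbi(unary: np.ndarray, pairwise: np.ndarray) -> list[int]:
    """unary: (num_mentions, num_regions); pairwise: (num_regions, num_regions)."""
    T, R = unary.shape
    score = unary[0].copy()
    backptr = np.zeros((T, R), dtype=int)
    for t in range(1, T):
        trans = score[:, None] + pairwise      # (R_prev, R_cur)
        backptr[t] = trans.argmax(axis=0)      # best previous region per current region
        score = trans.max(axis=0) + unary[t]
    # Backtrack the best region assignment for every mention.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
unary = rng.normal(size=(4, 5))      # 4 mentions, 5 candidate regions (placeholder scores)
pairwise = rng.normal(size=(5, 5))   # compatibility of regions for adjacent mentions
print(viterbi(unary, pairwise))      # one region index per mention
```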

Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements

1 code implementation • 5 May 2023 • Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi

Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures.

Crystal: Introspective Reasoners Reinforced with Self-Feedback

1 code implementation • 7 Oct 2023 • Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, Asli Celikyilmaz

Extensive work has shown that the performance and interpretability of commonsense reasoning can be improved via knowledge-augmented reasoning methods, where the knowledge that underpins the reasoning process is explicitly verbalized and utilized.

RHA-Net: An Encoder-Decoder Network with Residual Blocks and Hybrid Attention Mechanisms for Pavement Crack Segmentation

no code implementations • 28 Jul 2022 • Guijie Zhu, Zhun Fan, Jiacheng Liu, Duan Yuan, Peili Ma, Meihua Wang, Weihua Sheng, Kelvin C. P. Wang

In this paper, an efficient and effective end-to-end network for automatic pavement crack segmentation, called RHA-Net, is proposed to improve the pavement crack segmentation accuracy.

Crack Segmentation
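
As a rough illustration of the ingredients named in the title, the toy PyTorch network below combines residual blocks with a squeeze-and-excitation style channel gate (standing in for the hybrid attention mechanisms) in a small encoder-decoder that outputs a one-channel crack mask. It is not the RHA-Net architecture from the paper.

```python
import torch
import torch.nn as nn

# Toy encoder-decoder in the spirit of the title: residual blocks plus an
# attention gate, ending in a 1-channel crack-probability map.

class ResidualBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate, used here as a stand-in for the
    paper's hybrid attention mechanisms."""
    def __init__(self, ch: int, r: int = 4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
                                nn.Linear(ch // r, ch), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> channel weights
        return x * w[:, :, None, None]

class TinySegNet(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), ResidualBlock(ch),
                                     nn.MaxPool2d(2), ResidualBlock(ch), ChannelAttention(ch))
        self.decoder = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                                     ResidualBlock(ch), nn.Conv2d(ch, 1, 1))
    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))   # per-pixel crack probability

print(TinySegNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```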

Optimizing Investment Strategies with Lazy Factor and Probability Weighting: A Price Portfolio Forecasting and Mean-Variance Model with Transaction Costs Approach

no code implementations • 12 Jun 2023 • Shuo Han, Yinan Chen, Jiacheng Liu

Our approach bifurcates into a Price Portfolio Forecasting Model and a Mean-Variance Model with Transaction Costs, utilizing probability weights as the coefficients of laziness factors.

Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method

no code implementations • 20 Jun 2023 • Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao

The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component.

Attribute · Clustering · +5
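
A generic sketch of the three components listed above, under simplifying assumptions: the Transformer encoder here ignores graph structure, the decoder reconstructs the adjacency matrix by inner products, and the clustering component is plain k-means on the learned embeddings. This is not the GTAGC model itself.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class GraphTransformerAE(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 32):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)                    # node/graph embedding
        layer = nn.TransformerEncoderLayer(d_model=hid_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # Transformer encoder (no graph bias)
    def forward(self, x):
        z = self.encoder(self.embed(x).unsqueeze(0)).squeeze(0)    # (num_nodes, hid_dim)
        adj_recon = torch.sigmoid(z @ z.T)                         # inner-product decoder
        return z, adj_recon

x = torch.randn(100, 16)                      # 100 nodes with 16 attributes (toy data)
adj = (torch.rand(100, 100) > 0.9).float()    # toy adjacency matrix
model = GraphTransformerAE(16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                           # autoencoder objective: reconstruct adjacency
    z, adj_recon = model(x)
    loss = nn.functional.binary_cross_entropy(adj_recon, adj)
    opt.zero_grad(); loss.backward(); opt.step()

labels = KMeans(n_clusters=4, n_init=10).fit_predict(z.detach().numpy())  # clustering component
print(labels[:10])
```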

Generative Model for Models: Rapid DNN Customization for Diverse Tasks and Resource Constraints

no code implementations • 29 Aug 2023 • Wenxing Xu, Yuanchun Li, Jiacheng Liu, Yi Sun, Zhengyang Cao, Yixuan Li, Hao Wen, Yunxin Liu

Unlike cloud-based deep learning models that are often large and uniform, edge-deployed models usually demand customization for domain-specific tasks and resource-limited environments.

Image Classification · object-detection · +1

Don't throw away your value model! Making PPO even better via Value-Guided Monte-Carlo Tree Search decoding

no code implementations • 26 Sep 2023 • Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz

The key idea is not to throw out the value network, a byproduct of PPO training for evaluating partial output sequences, when decoding text out of the policy network.

Text Generation
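
The core idea, reusing the PPO value network to score partial outputs at decoding time, can be sketched as follows. For brevity this uses simple value-guided re-ranking of next tokens rather than the paper's Monte-Carlo tree search, and both networks are random stand-ins.

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def policy_logprobs(prefix: list[str]) -> dict[str, float]:
    """Stand-in for the PPO policy network's next-token log-probabilities."""
    logits = {tok: random.random() for tok in VOCAB}
    z = math.log(sum(math.exp(v) for v in logits.values()))
    return {tok: v - z for tok, v in logits.items()}

def value(prefix: list[str]) -> float:
    """Stand-in for the PPO value network scoring a partial output sequence."""
    return random.random()

def value_guided_decode(max_len: int = 8, alpha: float = 1.0) -> list[str]:
    prefix: list[str] = []
    for _ in range(max_len):
        lp = policy_logprobs(prefix)
        # Combine the policy log-prob with the value of the extended prefix.
        tok = max(VOCAB, key=lambda t: lp[t] + alpha * value(prefix + [t]))
        if tok == "<eos>":
            break
        prefix.append(tok)
    return prefix

random.seed(0)
print(value_guided_decode())
```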

Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens

no code implementations • 30 Jan 2024 • Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi

The $\infty$-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the $\infty$-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their language modeling perplexities.

Language Modelling
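
A toy version of the $\infty$-gram rule described above: back off to the longest suffix of the prompt that still occurs in the corpus and predict the next token from the counts of its continuations. The real infini-gram engine does this over trillions of tokens with suffix arrays; the naive scan here is only for intuition.

```python
from collections import Counter

def infty_gram_next_token(corpus: list[str], prompt: list[str]) -> tuple[str, int]:
    for start in range(len(prompt)):            # try the longest suffix first
        suffix = prompt[start:]
        continuations = Counter(
            corpus[i + len(suffix)]
            for i in range(len(corpus) - len(suffix))
            if corpus[i:i + len(suffix)] == suffix
        )
        if continuations:
            token, _ = continuations.most_common(1)[0]
            return token, len(suffix)           # prediction and effective n-gram order
    return "<unk>", 0

corpus = "the cat sat on the mat and the cat sat on the sofa".split()
print(infty_gram_next_token(corpus, "and the cat sat on the".split()))  # ('sofa', 6)
```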

Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification

no code implementations • 6 Feb 2024 • Soumya Sanyal, Tianyi Xiao, Jiacheng Liu, Wenya Wang, Xiang Ren

Finally, we use this model to filter out inconsistent model-generated rationales in self-consistency decoding, resulting in a 6% accuracy improvement on average across three MCQ datasets.

Benchmarking · Multiple-choice · +3
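
The filtering step mentioned in the snippet can be sketched as follows: sample several (rationale, answer) pairs as in self-consistency decoding, drop pairs whose rationale the entailment verifier scores as inconsistent, and majority-vote over the remainder. The sampler and verifier below are random stand-ins, not the trained models.

```python
from collections import Counter
import random

def sample_rationale_and_answer(question: str) -> tuple[str, str]:
    """Stand-in for sampling one chain-of-thought rationale and its answer."""
    ans = random.choice(["A", "B"])
    return f"because ... therefore {ans}", ans

def entailment_score(question: str, rationale: str, answer: str) -> float:
    """Stand-in for the trained entailment-verification model."""
    return random.random()

def filtered_self_consistency(question: str, n: int = 10, threshold: float = 0.5) -> str:
    samples = [sample_rationale_and_answer(question) for _ in range(n)]
    kept = [a for r, a in samples if entailment_score(question, r, a) >= threshold]
    votes = Counter(kept or [a for _, a in samples])   # fall back if everything is filtered out
    return votes.most_common(1)[0][0]

random.seed(0)
print(filtered_self_consistency("Which option follows from the premise?"))
```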

A Kalman Filter Based Framework for Monitoring the Performance of In-Hospital Mortality Prediction Models Over Time

no code implementations • 9 Feb 2024 • Jiacheng Liu, Lisa Kirkland, Jaideep Srivastava

Therefore, in this study, for binary classifiers running over a long time period, we propose to adjust these performance metrics for sample size and class distribution, so that a fair comparison can be made between two time periods.

Mortality Prediction
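
As one concrete example of adjusting a metric for class distribution (not necessarily the paper's exact procedure), precision can be re-expressed at a common reference prevalence from sensitivity and specificity, so that two monitoring periods with different mortality rates become comparable.

```python
def ppv_at_prevalence(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Precision (PPV) implied by sensitivity/specificity at a given class prevalence."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

sens, spec = 0.80, 0.90
for period, prevalence in [("period 1", 0.05), ("period 2", 0.12)]:
    raw = ppv_at_prevalence(sens, spec, prevalence)             # what was observed that period
    adjusted = ppv_at_prevalence(sens, spec, prevalence=0.08)   # standardized to an 8% reference prevalence
    print(f"{period}: raw PPV={raw:.3f}, prevalence-adjusted PPV={adjusted:.3f}")
```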

Explain Variance of Prediction in Variational Time Series Models for Clinical Deterioration Prediction

no code implementations • 9 Feb 2024 • Jiacheng Liu, Jaideep Srivastava

To achieve this goal, we propose variance SHAP with variational time series models, an application of the Shapley Additive Explanations (SHAP) algorithm to attribute epistemic prediction uncertainty.

Attribute · Decision Making · +2
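
A minimal sketch of the variance-SHAP idea, assuming the standard shap package: run a Shapley-value explainer on the model's predicted variance rather than its prediction, so each input feature receives an attribution for the epistemic uncertainty. The variance function below is a toy stand-in for a variational time-series model.

```python
import numpy as np
import shap  # pip install shap

rng = np.random.default_rng(0)

def predicted_variance(X: np.ndarray) -> np.ndarray:
    """Stand-in for the variance output of a variational model: uncertainty
    grows with the magnitude of the first two features."""
    return 0.1 + X[:, 0] ** 2 + 0.5 * np.abs(X[:, 1])

background = rng.normal(size=(50, 4))           # reference data for the explainer
explainer = shap.KernelExplainer(predicted_variance, background)
x = rng.normal(size=(1, 4))
attributions = explainer.shap_values(x)          # per-feature share of the predicted variance
print(attributions)
```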
