Search Results for author: Jiacheng Liu

Found 28 papers, 13 papers with code

Camera-Based Remote Physiology Sensing for Hundreds of Subjects Across Skin Tones

1 code implementation · 7 Apr 2024 · Jiankai Tang, Xinyi Li, Jiacheng Liu, Xiyuxing Zhang, Zeyu Wang, Yuntao Wang

Remote photoplethysmography (rPPG) emerges as a promising method for non-invasive, convenient measurement of vital signs, utilizing the widespread presence of cameras.
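
As a loose illustration of the measurement idea (not the authors' pipeline), the sketch below recovers a pulse rate from per-frame mean green-channel values by band-pass filtering and FFT peak picking; the frame rate, filter band, and synthetic trace are all assumed for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_pulse_rate(green_means, fps=30.0, low_hz=0.7, high_hz=4.0):
    """Estimate pulse rate (BPM) from a 1-D trace of per-frame mean green values.

    A toy stand-in for an rPPG pipeline: detrend, band-pass filter to the
    plausible heart-rate band, then take the dominant FFT frequency.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()

    # Band-pass Butterworth filter over the typical heart-rate band (0.7-4 Hz).
    b, a = butter(3, [low_hz, high_hz], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)

    # Dominant frequency via FFT -> beats per minute.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    power = np.abs(np.fft.rfft(filtered)) ** 2
    band = (freqs >= low_hz) & (freqs <= high_hz)
    peak_hz = freqs[band][np.argmax(power[band])]
    return 60.0 * peak_hz

if __name__ == "__main__":
    fps, seconds, true_bpm = 30.0, 20, 72
    t = np.arange(int(fps * seconds)) / fps
    # Synthetic trace: slow illumination drift + weak pulse + noise.
    trace = 0.5 * t + 0.05 * np.sin(2 * np.pi * (true_bpm / 60.0) * t) + 0.01 * np.random.randn(t.size)
    print(f"Estimated pulse: {estimate_pulse_rate(trace, fps):.1f} BPM")
```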

Ground-to-UAV 140 GHz channel measurement and modeling

no code implementations · 3 Apr 2024 · Da Li, Peian Li, Jiabiao Zhao, Jianjian Liang, Jiacheng Liu, Guohao Liu, Yuanshuai Lei, Wenbo Liu, Jianqin Deng, Fuyong Liu, Jianjun Ma

Employing experimental measurements through an unmodulated channel setup and a geometry-based stochastic model (GBSM) that integrates three-dimensional positional coordinates and beamwidth, this work evaluates the impact of UAV dynamic movements and antenna orientation on channel performance.
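
The paper's GBSM is not reproduced here; the following is only a geometric toy showing how 3-D positions, antenna orientation, and beamwidth could enter a link budget: free-space path loss at 140 GHz combined with a Gaussian-beam pointing loss. All gains, beamwidths, and positions are made-up values.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def free_space_path_loss_db(distance_m, freq_hz=140e9):
    """Friis free-space path loss in dB."""
    return 20 * np.log10(distance_m) + 20 * np.log10(freq_hz) + 20 * np.log10(4 * np.pi / C)

def pointing_loss_db(misalignment_deg, beamwidth_deg):
    """Gaussian beam-pattern loss for a given pointing error (illustrative model)."""
    return 12.0 * (misalignment_deg / beamwidth_deg) ** 2

def received_power_dbm(tx_dbm, tx_pos, uav_pos, boresight, beamwidth_deg=8.0,
                       tx_gain_dbi=25.0, rx_gain_dbi=25.0):
    """Toy link budget: FSPL at 140 GHz plus misalignment loss from UAV movement."""
    vec = np.asarray(uav_pos, float) - np.asarray(tx_pos, float)
    distance = np.linalg.norm(vec)
    cosang = np.dot(vec / distance, np.asarray(boresight, float))
    misalignment = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - free_space_path_loss_db(distance)
            - pointing_loss_db(misalignment, beamwidth_deg))

if __name__ == "__main__":
    # UAV hovering 100 m up and 50 m downrange; boresight aimed at it vs. pointed straight up.
    for boresight in ([0.452, 0.0, 0.892], [0.0, 0.0, 1.0]):
        p = received_power_dbm(10.0, tx_pos=[0, 0, 1.5], uav_pos=[50, 0, 100], boresight=boresight)
        print(f"boresight={boresight}: received power ~ {p:.1f} dBm")
```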

An Interpretable Power System Transient Stability Assessment Method with Expert Guiding Neural-Regression-Tree

no code implementations · 3 Apr 2024 · Hanxuan Wang, Na Lu, Zixuan Wang, Jiacheng Liu, Jun Liu

TSA-ENRT utilizes an expert-guiding nonlinear regression tree to approximate the neural network's predictions, so that the neural network can be explained by the interpretive rules generated by the tree model.

regression
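
The surrogate idea in this entry (a regression tree fitted to a neural network's outputs so that its rules approximately explain the network) can be sketched with scikit-learn; this toy omits the expert guiding and the transient-stability-specific features.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)

# 1. Train the black-box model to be explained.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)

# 2. Fit a shallow regression tree to the *network's* predictions (surrogate model).
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, net.predict(X))

# 3. The tree's if-then rules serve as an approximate explanation of the network.
print(export_text(surrogate, feature_names=["x0", "x1", "x2"]))
print("surrogate fidelity (R^2 vs. network):", surrogate.score(X, net.predict(X)))
```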

FineFake: A Knowledge-Enriched Dataset for Fine-Grained Multi-Domain Fake News Detection

no code implementations · 30 Mar 2024 · Ziyi Zhou, XiaoMing Zhang, Litian Zhang, Jiacheng Liu, Xi Zhang, Chaozhuo Li

Existing benchmarks for fake news detection have significantly contributed to the advancement of models in assessing the authenticity of news content.

Domain Adaptation · Fake News Detection

Explain Variance of Prediction in Variational Time Series Models for Clinical Deterioration Prediction

no code implementations · 9 Feb 2024 · Jiacheng Liu, Jaideep Srivastava

To achieve this goal, we propose variance SHAP with variational time series models, an application of the Shapley Additive Explanation (SHAP) algorithm to attributing epistemic prediction uncertainty.

Attribute · Decision Making +2
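
A rough sketch in the spirit of variance SHAP, with an ensemble standing in for the variational time series model: the function handed to Kernel SHAP returns the ensemble's prediction variance rather than its mean. It assumes the `shap` package is installed; the data and model are synthetic placeholders.

```python
import numpy as np
import shap  # assumed installed: pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + 2.0 * X[:, 1] * rng.normal(size=500)  # feature 1 drives uncertainty

# An ensemble as a stand-in for a variational model: spread across members ~ epistemic uncertainty.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def prediction_variance(X_query):
    """Variance across ensemble members' predictions for each query row."""
    per_tree = np.stack([t.predict(X_query) for t in forest.estimators_])
    return per_tree.var(axis=0)

# Explain the *variance* output with Kernel SHAP (small background set keeps it cheap).
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(prediction_variance, background)
shap_values = explainer.shap_values(X[:5], nsamples=200)
print("variance attributions for 5 test rows:\n", np.round(shap_values, 3))
```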

A Kalman Filter Based Framework for Monitoring the Performance of In-Hospital Mortality Prediction Models Over Time

no code implementations · 9 Feb 2024 · Jiacheng Liu, Lisa Kirkland, Jaideep Srivastava

Therefore, in this study, for binary classifiers running over a long time period, we propose adjusting these performance metrics for sample size and class distribution, so that a fair comparison can be made between two time periods.

Mortality Prediction
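
As a simplified illustration of the monitoring framework (a scalar random-walk state model, not necessarily the paper's), a Kalman filter can track a noisy monthly AUROC series, with the observation noise scaled by each month's sample size so that small-sample months are trusted less.

```python
import numpy as np

def kalman_track_metric(observed, n_samples, process_var=1e-4, base_obs_var=0.5):
    """Track a latent performance metric with a random-walk Kalman filter.

    observed  : noisy per-period metric values (e.g., monthly AUROC)
    n_samples : number of evaluation samples per period; larger samples
                => smaller observation variance => more weight on that period.
    """
    x, p = observed[0], 1.0          # initial state estimate and its variance
    estimates = []
    for z, n in zip(observed, n_samples):
        # Predict: random-walk state transition.
        p = p + process_var
        # Update: observation noise shrinks with sample size.
        obs_var = base_obs_var / max(n, 1)
        k = p / (p + obs_var)        # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_auc = np.linspace(0.85, 0.78, 24)     # slow degradation over 24 months
    n = rng.integers(80, 800, size=24)         # varying monthly sample sizes
    noisy = true_auc + rng.normal(0, np.sqrt(0.5 / n))
    smoothed = kalman_track_metric(noisy, n)
    print(np.round(np.c_[noisy, smoothed], 3))
```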

Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification

no code implementations · 6 Feb 2024 · Soumya Sanyal, Tianyi Xiao, Jiacheng Liu, Wenya Wang, Xiang Ren

Finally, we use this model to filter out inconsistent model-generated rationales in self-consistency decoding, resulting in a 6% accuracy improvement on average across three MCQ datasets.

Benchmarking · Multiple-choice +3
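
The filtering step can be sketched as follows; `sample_rationale_and_answer` and `entailment_score` are hypothetical stand-ins for the sampled chain-of-thought generator and the trained entailment verifier.

```python
from collections import Counter
import random

def sample_rationale_and_answer(question, choices):
    """Hypothetical stand-in for sampling one chain-of-thought + answer from an LLM."""
    answer = random.choice(choices)
    return f"Because ... therefore {answer}.", answer

def entailment_score(question, rationale, answer):
    """Hypothetical stand-in for a verifier scoring whether the rationale entails the answer."""
    return random.random()

def self_consistency_with_filtering(question, choices, num_samples=20, threshold=0.5):
    """Sample many rationales, drop those the verifier deems inconsistent, then majority-vote."""
    votes = []
    for _ in range(num_samples):
        rationale, answer = sample_rationale_and_answer(question, choices)
        if entailment_score(question, rationale, answer) >= threshold:
            votes.append(answer)
    if not votes:  # fall back to unfiltered voting if everything was rejected
        votes = [sample_rationale_and_answer(question, choices)[1] for _ in range(num_samples)]
    return Counter(votes).most_common(1)[0][0]

print(self_consistency_with_filtering("Which is heavier, a kilogram of iron or of feathers?",
                                      ["iron", "feathers", "equal"]))
```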

Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens

no code implementations · 30 Jan 2024 · Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi

Second, existing $n$-gram LMs use small $n$ which hinders their performance; we instead allow $n$ to be arbitrarily large, by introducing a new $\infty$-gram LM with backoff.

Language Modelling
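
A tiny in-memory sketch of the backoff idea: condition on the longest context suffix that actually occurs in the corpus instead of a fixed small $n$. The real infini-gram engine uses suffix arrays over trillions of tokens; a linear scan over a toy corpus stands in here.

```python
from collections import Counter

def next_token_distribution(corpus_tokens, context):
    """Infinity-gram-style backoff: use the longest suffix of `context`
    that appears in the corpus, and return the empirical next-token distribution."""
    for start in range(len(context)):  # try the longest suffix first
        suffix = tuple(context[start:])
        counts = Counter()
        for i in range(len(corpus_tokens) - len(suffix)):
            if tuple(corpus_tokens[i:i + len(suffix)]) == suffix:
                counts[corpus_tokens[i + len(suffix)]] += 1
        if counts:  # suffix found at least once: no further backoff
            total = sum(counts.values())
            return {tok: c / total for tok, c in counts.items()}, len(suffix)
    return {}, 0    # context never seen, even as a length-1 suffix

corpus = "the cat sat on the mat and the cat sat on the sofa".split()
dist, used_len = next_token_distribution(corpus, "i think the cat sat on the".split())
print(f"matched context length {used_len}: {dist}")
```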

Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

2 code implementations · 10 Jan 2024 · Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu

Next, we discuss several key challenges to achieve intelligent, efficient and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.

Crystal: Introspective Reasoners Reinforced with Self-Feedback

1 code implementation · 7 Oct 2023 · Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, Asli Celikyilmaz

Extensive work has shown that the performance and interpretability of commonsense reasoning can be improved via knowledge-augmented reasoning methods, where the knowledge that underpins the reasoning process is explicitly verbalized and utilized.

Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding

no code implementations · 26 Sep 2023 · Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz

The key idea is not to throw out the value network, a byproduct of PPO training for evaluating partial output sequences, when decoding text out of the policy network.

Text Generation
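
A much-simplified value-guided decoder in the same spirit (greedy re-ranking rather than full Monte-Carlo Tree Search): candidate tokens proposed by the policy are re-scored with a value estimate of the partial sequence. `policy_top_k` and `value_score` are hypothetical stand-ins for the PPO policy and value networks.

```python
import random

def policy_top_k(prefix, k=3):
    """Hypothetical policy network: propose k candidate next tokens with probabilities."""
    vocab = ["good", "bad", "okay", "great", "terrible", "."]
    return [(tok, random.random()) for tok in random.sample(vocab, k)]

def value_score(prefix):
    """Hypothetical PPO value head: estimate the eventual reward of a partial sequence."""
    return sum(1.0 for t in prefix if t in ("good", "great")) + random.random() * 0.1

def value_guided_decode(prompt, max_steps=8, k=3, mix=0.5):
    """At each step, re-rank the policy's top-k tokens by a mix of policy prob and value."""
    seq = list(prompt)
    for _ in range(max_steps):
        candidates = policy_top_k(seq, k)
        scored = [(mix * p + (1 - mix) * value_score(seq + [tok]), tok) for tok, p in candidates]
        best_tok = max(scored)[1]
        seq.append(best_tok)
        if best_tok == ".":
            break
    return " ".join(seq)

random.seed(0)
print(value_guided_decode(["the", "movie", "was"]))
```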

Generative Model for Models: Rapid DNN Customization for Diverse Tasks and Resource Constraints

no code implementations · 29 Aug 2023 · Wenxing Xu, Yuanchun Li, Jiacheng Liu, Yi Sun, Zhengyang Cao, Yixuan Li, Hao Wen, Yunxin Liu

Unlike cloud-based deep learning models that are often large and uniform, edge-deployed models usually demand customization for domain-specific tasks and resource-limited environments.

Image Classification · object-detection +1

Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method

no code implementations · 20 Jun 2023 · Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao

The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component.

Attribute Clustering +5

Optimizing Investment Strategies with Lazy Factor and Probability Weighting: A Price Portfolio Forecasting and Mean-Variance Model with Transaction Costs Approach

no code implementations · 12 Jun 2023 · Shuo Han, Yinan Chen, Jiacheng Liu

Our approach bifurcates into a Price Portfolio Forecasting Model and a Mean-Variance Model with Transaction Costs, utilizing probability weights as the coefficients of laziness factors.
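
Illustrative only (not the paper's model): a mean-variance objective with a proportional transaction-cost penalty on the change from current holdings, solved with scipy. The forecast returns, covariance, and cost rate are made-up numbers.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10, 0.06])          # forecast annual returns (illustrative)
cov = np.diag([0.04, 0.09, 0.06, 0.02])          # return covariance (illustrative)
w_current = np.array([0.25, 0.25, 0.25, 0.25])   # current portfolio weights
risk_aversion, cost_rate = 3.0, 0.002            # risk-aversion lambda and proportional cost

def objective(w):
    # Maximize return - lambda*variance - transaction costs  <=>  minimize the negative.
    ret = mu @ w
    var = w @ cov @ w
    costs = cost_rate * np.abs(w - w_current).sum()
    return -(ret - risk_aversion * var - costs)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 1.0)] * len(mu)                                  # long-only

result = minimize(objective, w_current, bounds=bounds, constraints=constraints)
print("optimal weights:", np.round(result.x, 3))
```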

Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements

1 code implementation · 5 May 2023 · Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi

Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures.

Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

3 code implementations · 21 Oct 2022 · Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample

In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.

Ranked #3 on Automated Theorem Proving on miniF2F-valid (Pass@100 metric)

Automated Theorem Proving · Language Modelling

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

1 code implementation · 6 Oct 2022 · Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi

Our work is the first to report that knowledge generated by models that are orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of commonsense knowledge elicited from GPT-3.

Question Answering · Reinforcement Learning (RL)

RHA-Net: An Encoder-Decoder Network with Residual Blocks and Hybrid Attention Mechanisms for Pavement Crack Segmentation

no code implementations · 28 Jul 2022 · Guijie Zhu, Zhun Fan, Jiacheng Liu, Duan Yuan, Peili Ma, Meihua Wang, Weihua Sheng, Kelvin C. P. Wang

In this paper, an efficient and effective end-to-end network for automatic pavement crack segmentation, called RHA-Net, is proposed to improve the pavement crack segmentation accuracy.

Crack Segmentation

NaturalProver: Grounded Mathematical Proof Generation with Language Models

1 code implementation · 25 May 2022 · Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi

Theorem proving in natural mathematical language - the mixture of symbolic and natural language used by humans - plays a central role in mathematical advances and education, and tests aspects of reasoning that are core to intelligence.

Automated Theorem Proving · Language Modelling

DataCLUE: A Benchmark Suite for Data-centric NLP

1 code implementation · 16 Nov 2021 · Liang Xu, Jiacheng Liu, Xiang Pan, Xiaojing Lu, Xiaofeng Hou

However, we have not seen significant research progress in this field, especially in NLP.

Generated Knowledge Prompting for Commonsense Reasoning

1 code implementation · ACL 2022 · Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.

Language Modelling · Open-Ended Question Answering
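
A schematic version of the two-stage prompting flow: prompt a model for relevant knowledge statements, then answer the question once per statement and keep the most confident prediction. `generate` and `answer_with_confidence` are hypothetical stand-ins for language model calls.

```python
def generate(prompt, num_return=3):
    """Hypothetical LM call: return a few generated knowledge statements."""
    return [f"Knowledge statement {i + 1} relevant to: {prompt[-60:]}" for i in range(num_return)]

def answer_with_confidence(context, question):
    """Hypothetical LM call: return (answer, confidence) given optional knowledge context."""
    return ("example answer", 0.5 + 0.1 * len(context) / 100.0)

def generated_knowledge_answer(question, num_knowledge=3):
    # Stage 1: elicit knowledge statements conditioned on the question.
    knowledge_prompt = f"Generate some knowledge about the question.\nQuestion: {question}\nKnowledge:"
    statements = generate(knowledge_prompt, num_return=num_knowledge)

    # Stage 2: answer once per knowledge statement (plus once with no knowledge); keep the most confident.
    candidates = [answer_with_confidence("", question)]
    candidates += [answer_with_confidence(k, question) for k in statements]
    best_answer, best_conf = max(candidates, key=lambda ac: ac[1])
    return best_answer, best_conf

print(generated_knowledge_answer("Can a penguin fly?"))
```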

NaturalProofs: Mathematical Theorem Proving in Natural Language

1 code implementation · 24 Mar 2021 · Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.

Automated Theorem Proving · Domain Generalization +3

Phrase Grounding by Soft-Label Chain Conditional Random Field

1 code implementation · IJCNLP 2019 · Jiacheng Liu, Julia Hockenmaier

In this paper, we formulate phrase grounding as a sequence labeling task where we treat candidate regions as potential labels, and use neural chain Conditional Random Fields (CRFs) to model dependencies among regions for adjacent mentions.

Phrase Grounding · Structured Prediction
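
A small Viterbi decoder over candidate-region labels illustrates the sequence-labeling formulation; the emission and transition scores below are random placeholders, whereas the paper derives them from a neural CRF with soft labels.

```python
import numpy as np

def viterbi(emission, transition):
    """Most likely label sequence under a linear-chain CRF score.

    emission   : (num_mentions, num_regions) per-mention scores for each candidate region
    transition : (num_regions, num_regions) compatibility between regions of adjacent mentions
    """
    T, R = emission.shape
    score = emission[0].copy()
    back = np.zeros((T, R), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transition + emission[t][None, :]  # indexed (prev_region, region)
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
num_mentions, num_regions = 4, 5  # 4 phrases, 5 candidate boxes
emission = rng.normal(size=(num_mentions, num_regions))
transition = rng.normal(size=(num_regions, num_regions))
print("region assigned to each mention:", viterbi(emission, transition))
```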
