Search Results for author: Yintong Huo

Found 5 papers, 2 papers with code

Face It Yourselves: An LLM-Based Two-Stage Strategy to Localize Configuration Errors via Logs

1 code implementation • 31 Mar 2024 • Shiwen Shan, Yintong Huo, Yuxin Su, Yichen Li, Dan Li, Zibin Zheng

Based on the insights gained from the preliminary study, we propose an LLM-based two-stage strategy for end-users to localize the root-cause configuration properties based on logs.
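A minimal sketch of what such a two-stage, log-based localization pipeline could look like (not the paper's actual implementation); the `ask_llm` callable, the keyword filter, and the prompt wording are illustrative placeholders.

```python
# Sketch of an LLM-based two-stage localization loop, assuming a generic
# chat-completion callable. Not the method from the paper.
from typing import Callable, List

def localize_config_error(
    log_lines: List[str],
    config_properties: List[str],
    ask_llm: Callable[[str], str],
) -> str:
    # Stage 1: pre-filter the raw logs so only error-relevant lines reach the
    # LLM, keeping the prompt within a typical context window.
    keywords = ("error", "fail", "exception", "invalid")
    suspicious = [line for line in log_lines if any(k in line.lower() for k in keywords)]

    # Stage 2: ask the LLM to pick the most likely root-cause property from the
    # user's configuration, grounded in the filtered log excerpt.
    prompt = (
        "The following log lines were produced by a misconfigured system:\n"
        + "\n".join(suspicious[:50])
        + "\n\nWhich of these configuration properties is the most likely root cause?\n"
        + "\n".join(config_properties)
    )
    return ask_llm(prompt).strip()
```

Passing `ask_llm` as a callable keeps the sketch independent of any particular LLM client.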

Enhancing LLM-Based Coding Tools through Native Integration of IDE-Derived Static Context

no code implementations • 6 Feb 2024 • Yichen Li, Yun Peng, Yintong Huo, Michael R. Lyu

We conducted preliminary experiments to validate the performance of IDECoder and observed that this synergy represents a promising trend for future exploration.

Code Completion
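As a rough illustration of feeding IDE-derived static context into a completion prompt, here is a hedged sketch; `collect_static_context` stands in for whatever language-server query an IDE would expose and is not IDECoder's API.

```python
# Illustrative sketch of combining IDE-derived static context with an LLM
# completion prompt. `collect_static_context` is a hypothetical placeholder.
from typing import Dict

def collect_static_context(file_path: str, cursor_offset: int) -> Dict[str, str]:
    """Placeholder: a real integration would ask the IDE's static analyzer for
    the signatures/types of symbols visible at the cursor."""
    return {}

def build_completion_prompt(code_prefix: str, context: Dict[str, str]) -> str:
    # Serialize the static context so the model sees cross-file signatures it
    # could not infer from the local code prefix alone.
    ctx_lines = "\n".join(f"{name}: {signature}" for name, signature in context.items())
    return (
        "Symbols in scope (from IDE static analysis):\n"
        f"{ctx_lines}\n\n"
        "Complete the following code:\n"
        f"{code_prefix}"
    )
```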

CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm

no code implementations • 12 Oct 2022 • Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, Dan Roth

We first align commonsense tasks with relevant knowledge from commonsense knowledge bases and ask humans to annotate whether the knowledge is sufficient.

Question Answering • Task 2

Learning Contextual Causality from Time-consecutive Images

1 code implementation • 13 Dec 2020 • Hongming Zhang, Yintong Huo, Xinran Zhao, Yangqiu Song, Dan Roth

Compared with pure text-based approaches, learning causality from visual signals has the following advantages: (1) causality is commonsense knowledge, which is rarely stated explicitly in text but is rich in videos; (2) most events in a video are naturally time-ordered, providing a rich resource for mining causality knowledge; (3) all objects in the video can serve as context for studying the contextual property of causal relations.
