Search Results for author: Mong-Li Lee

Found 10 papers, 4 papers with code

FormFactory: An Interactive Benchmarking Suite for Multimodal Form-Filling Agents

no code implementations · 2 Jun 2025 · Bobo Li, Yuheng Wang, Hao Fei, Juncheng Li, Wei Ji, Mong-Li Lee, Wynne Hsu

However, they struggle with the unique challenges of form filling, such as flexible layouts and the difficulty of aligning textual instructions with on-screen fields.

Benchmarking · Form

Probing then Editing Response Personality of Large Language Models

1 code implementation · 14 Apr 2025 · Tianjie Ju, Zhenyu Shao, Bowen Wang, Yujia Chen, Zhuosheng Zhang, Hao Fei, Mong-Li Lee, Wynne Hsu, Sufeng Duan, Gongshen Liu

We conduct probing experiments on 11 open-source LLMs over the PersonalityEdit benchmark and find that LLMs predominantly encode personality for responding in their middle and upper layers, with instruction-tuned models demonstrating a slightly clearer separation of personality traits.

MMLU

Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models

1 code implementation · 3 Mar 2025 · Tianjie Ju, Yi Hua, Hao Fei, Zhenyu Shao, Yubin Zheng, Haodong Zhao, Mong-Li Lee, Wynne Hsu, Zhuosheng Zhang, Gongshen Liu

Multi-Modal Large Language Models (MLLMs) have exhibited remarkable performance on various vision-language tasks such as Visual Question Answering (VQA).

Memorization · Question Answering +1

Investigating the Adaptive Robustness with Knowledge Conflicts in LLM-based Multi-Agent Systems

1 code implementation · 21 Feb 2025 · Tianjie Ju, Bowen Wang, Hao Fei, Mong-Li Lee, Wynne Hsu, Yun Li, Qianren Wang, Pengzhou Cheng, Zongru Wu, Zhuosheng Zhang, Gongshen Liu

Recent advances in Large Language Models (LLMs) have upgraded them from sophisticated text generators to autonomous agents capable of cooperation and tool use in multi-agent systems (MASs).

Aristotle: Mastering Logical Reasoning with A Logic-Complete Decompose-Search-Resolve Framework

no code implementations · 22 Dec 2024 · Jundong Xu, Hao Fei, Meng Luo, Qian Liu, Liangming Pan, William Yang Wang, Preslav Nakov, Mong-Li Lee, Wynne Hsu

In the context of large language models (LLMs), current advanced reasoning methods have made impressive strides in various reasoning tasks.

Logical Reasoning

PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis

no code implementations · 18 Aug 2024 · Meng Luo, Hao Fei, Bobo Li, Shengqiong Wu, Qian Liu, Soujanya Poria, Erik Cambria, Mong-Li Lee, Wynne Hsu

While existing Aspect-based Sentiment Analysis (ABSA) research has seen extensive effort and advancement, there are still gaps in defining a more holistic research target that seamlessly integrates multimodality, conversation context, and fine granularity, while also covering changing sentiment dynamics and cognitive causal rationales.

Aspect-Based Sentiment Analysis · Aspect-Based Sentiment Analysis (ABSA) +3

Faithful Logical Reasoning via Symbolic Chain-of-Thought

1 code implementation · 28 May 2024 · Jundong Xu, Hao Fei, Liangming Pan, Qian Liu, Mong-Li Lee, Wynne Hsu

Technically, building upon an LLM, SymbCoT 1) first translates the natural language context into the symbolic format, and then 2) derives a step-by-step plan to solve the problem with symbolic logical rules, 3) followed by a verifier to check the translation and reasoning chain.
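The three SymbCoT stages above can be sketched as a simple pipeline. This is a minimal illustration, not the authors' implementation; `call_llm` is a hypothetical stand-in for whatever chat-completion client is used, and the prompts are assumptions.

```python
# Hedged sketch of the SymbCoT three-stage loop: translate -> solve -> verify.
# `call_llm` is a hypothetical placeholder for an actual LLM client call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def symbcot(context: str, question: str, llm=call_llm) -> str:
    # 1) Translate the natural-language context into a symbolic format
    #    (e.g. first-order logic premises).
    symbolic = llm(f"Translate to first-order logic:\n{context}")
    # 2) Derive a step-by-step plan that solves the problem by applying
    #    symbolic logical rules to the translated premises.
    plan = llm(
        f"Premises:\n{symbolic}\nQuestion: {question}\n"
        "Solve step by step using symbolic logical rules."
    )
    # 3) A verifier checks both the translation and the reasoning chain;
    #    if it flags a problem, ask for a revised chain.
    verdict = llm(f"Check this translation and reasoning:\n{symbolic}\n{plan}")
    if "valid" in verdict.lower():
        return plan
    return llm(f"Revise the reasoning:\n{plan}\nIssues found: {verdict}")
```

The design point is that each stage is an ordinary LLM call, so the verifier can loop back on either the translation or the derivation without any external solver.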

Logical Reasoning

Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition

no code implementations · 7 May 2024 · Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Meishan Zhang, Mong-Li Lee, Wynne Hsu

Existing research of video understanding still struggles to achieve in-depth comprehension and reasoning in complex videos, primarily due to the under-exploration of two key bottlenecks: fine-grained spatial-temporal perceptive understanding and cognitive-level video scene comprehension.

Large Language Model · Multimodal Large Language Model +2

Towards Robust Out-of-Distribution Generalization Bounds via Sharpness

no code implementations · 11 Mar 2024 · Yingtian Zou, Kenji Kawaguchi, Yingnan Liu, Jiashuo Liu, Mong-Li Lee, Wynne Hsu

To bridge this gap between optimization and OOD generalization, we study the effect of sharpness on how well a model tolerates data change under domain shift, which is usually captured by "robustness" in generalization.

Generalization Bounds · Out-of-Distribution Generalization

Personalized Lab Test Response Prediction with Knowledge Augmentation

no code implementations · 29 Sep 2021 · Suman Bhoi, Mong-Li Lee, Wynne Hsu, Hao Sen Andrew Fang, Ngiap Chuan Tan

Further, we model the drug-lab interactions and diagnosis-lab interactions in the form of graphs and design a knowledge-augmented approach to predict patients’ response to a target lab result.

Prediction
