Search Results for author: An Yan

Found 21 papers, 9 papers with code

Bridging Language and Items for Retrieval and Recommendation

1 code implementation • 6 Mar 2024 • Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, Julian McAuley

This paper introduces BLaIR, a series of pretrained sentence embedding models specialized for recommendation scenarios.

Retrieval • Sentence • +2
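
The BLaIR abstract above describes pretrained sentence-embedding models used for retrieval in recommendation scenarios. Below is a minimal sketch of how such an encoder is typically applied to retrieval; the checkpoint id and the CLS-token pooling are assumptions for illustration, not necessarily what the paper uses.

```python
# Minimal retrieval sketch with a BLaIR-style sentence encoder via
# Hugging Face Transformers. Checkpoint id and CLS pooling are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "hyp1231/blair-roberta-base"  # assumed released checkpoint id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID).eval()

def embed(texts):
    """Encode texts into L2-normalized sentence embeddings."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state[:, 0]  # CLS token (assumed)
    return F.normalize(hidden, dim=-1)

query = embed(["a waterproof jacket for winter hiking"])
items = embed(["Men's insulated rain jacket", "Summer linen shorts"])
scores = query @ items.T       # cosine similarity (embeddings are normalized)
print(scores.argmax().item())  # index of the best-matching item
```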

GPT-4V(ision) as a Generalist Evaluator for Vision-Language Tasks

no code implementations • 2 Nov 2023 • Xinlu Zhang, Yujie Lu, Weizhi Wang, An Yan, Jun Yan, Lianke Qin, Heng Wang, Xifeng Yan, William Yang Wang, Linda Ruth Petzold

Automatically evaluating vision-language tasks is challenging, especially when it comes to reflecting human judgments, because of limitations in accounting for fine-grained details.

Image Generation

Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving

1 code implementation • 25 Oct 2023 • Jessica Echterhoff, An Yan, Kyungtae Han, Amr Abdelraouf, Rohit Gupta, Julian McAuley

In the context of human-assisted or autonomous driving, explainability models can improve user acceptance and understanding of the decisions made by an autonomous vehicle, and can be used to rationalize and explain driver or vehicle behavior.

Autonomous Driving

Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models

no code implementations • 4 Oct 2023 • An Yan, Yu Wang, Yiwu Zhong, Zexue He, Petros Karypis, Zihan Wang, Chengyu Dong, Amilcare Gentili, Chun-Nan Hsu, Jingbo Shang, Julian McAuley

Medical image classification is a critical problem for healthcare, with the potential to alleviate the workload of doctors and facilitate diagnoses of patients.

Image Classification • Language Modelling • +1

Learning Concise and Descriptive Attributes for Visual Recognition

1 code implementation • ICCV 2023 • An Yan, Yu Wang, Yiwu Zhong, Chengyu Dong, Zexue He, Yujie Lu, William Wang, Jingbo Shang, Julian McAuley

Recent advances in foundation models present new opportunities for interpretable visual recognition -- one can first query Large Language Models (LLMs) to obtain a set of attributes that describe each class, then apply vision-language models to classify images via these attributes.

Descriptive
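
The abstract above spells out a two-step recipe: query an LLM for attributes that describe each class, then score images against those attributes with a vision-language model. Here is a hedged sketch of that recipe using CLIP; the attribute lists are hard-coded placeholders standing in for LLM output, and averaging attribute scores per class is one simple aggregation, not necessarily the paper's.

```python
# Sketch of attribute-based zero-shot classification: score an image against
# per-class attribute descriptions with CLIP, then average within each class.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_attributes = {  # illustrative placeholders for LLM-generated attributes
    "sparrow": ["a small brown bird", "a short conical beak"],
    "heron": ["a long S-shaped neck", "long thin legs"],
}

image = Image.open("bird.jpg")
texts = [a for attrs in class_attributes.values() for a in attrs]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    sims = model(**inputs).logits_per_image[0]  # image-attribute similarities

# Average the attribute scores within each class to get a class score.
scores, i = {}, 0
for cls, attrs in class_attributes.items():
    scores[cls] = sims[i:i + len(attrs)].mean().item()
    i += len(attrs)
print(max(scores, key=scores.get))  # predicted class
```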

"Nothing Abnormal": Disambiguating Medical Reports via Contrastive Knowledge Infusion

no code implementations • 15 May 2023 • Zexue He, An Yan, Amilcare Gentili, Julian McAuley, Chun-Nan Hsu

Based on our analysis, we define a disambiguation rewriting task to regenerate an input to be unambiguous while preserving information about the original content.

Personalized Showcases: Generating Multi-Modal Explanations for Recommendations

no code implementations • 30 Jun 2022 • An Yan, Zhankui He, Jiacheng Li, Tianyang Zhang, Julian McAuley

In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations.

Contrastive Learning

Weakly Supervised Contrastive Learning for Chest X-Ray Report Generation

1 code implementation • Findings (EMNLP) 2021 • An Yan, Zexue He, Xing Lu, Jiang Du, Eric Chang, Amilcare Gentili, Julian McAuley, Chun-Nan Hsu

Radiology report generation aims at generating descriptive text from radiology images automatically, which may present an opportunity to improve radiology reporting and interpretation.

Contrastive Learning • Descriptive • +2

ImaginE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation

no code implementations • 10 Jun 2021 • Wanrong Zhu, Xin Eric Wang, An Yan, Miguel Eckstein, William Yang Wang

Automatic evaluations for natural language generation (NLG) conventionally rely on token-level or embedding-level comparisons with text references.

NLG Evaluation • Text Generation
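
For contrast with ImaginE's imagination-based metric, here is a short sketch of the two conventional reference-based comparisons the abstract mentions: token-level n-gram overlap and embedding-level similarity. The sacrebleu and sentence-transformers libraries and the MiniLM checkpoint are illustrative choices, not ones named by the paper.

```python
# Two conventional reference-based NLG scores: token-level (BLEU) and
# embedding-level (cosine similarity of sentence embeddings).
from sacrebleu.metrics import BLEU
from sentence_transformers import SentenceTransformer, util

hypothesis = "a man rides a horse on the beach"
reference = "a person is riding a horse along the shore"

# Token-level: n-gram overlap with the reference.
bleu = BLEU(effective_order=True)
print("BLEU:", bleu.sentence_score(hypothesis, [reference]).score)

# Embedding-level: cosine similarity between sentence embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode([hypothesis, reference], convert_to_tensor=True)
print("cosine:", util.cos_sim(emb[0], emb[1]).item())
```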

L2C: Describing Visual Differences Needs Semantic Understanding of Individuals

no code implementations • EACL 2021 • An Yan, Xin Eric Wang, Tsu-Jui Fu, William Yang Wang

Recent advances in language and vision push forward the research of captioning a single image to describing visual differences between image pairs.

Image Captioning

Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation

1 code implementation • EACL 2021 • Wanrong Zhu, Xin Eric Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang

Outdoor vision-and-language navigation (VLN) is a task in which an agent follows natural language instructions to navigate a real-life urban environment.

Ranked #4 on Vision and Language Navigation on Touchdown Dataset (using extra training data)

Style Transfer • Text Style Transfer • +1

Cross-Lingual Vision-Language Navigation

2 code implementations • 24 Oct 2019 • An Yan, Xin Eric Wang, Jiangtao Feng, Lei Li, William Yang Wang

Commanding a robot to navigate with natural language instructions is a long-term goal for grounded language understanding and robotics.

Domain Adaptation • Navigate • +2

FairST: Equitable Spatial and Temporal Demand Prediction for New Mobility Systems

no code implementations • 21 Jun 2019 • An Yan, Bill Howe

Emerging transportation modes, including car-sharing, bike-sharing, and ride-hailing, are transforming urban mobility but have been shown to reinforce socioeconomic inequities.

Fairness

Predicting Abandonment in Online Coding Tutorials

no code implementations • 13 Jul 2017 • An Yan, Michael J. Lee, Andrew J. Ko

Learners regularly abandon online coding tutorials when they get bored or frustrated, but there are few techniques for anticipating this abandonment in order to intervene.
