Search Results for author: Chun-Fu Yeh

Found 3 papers, 0 papers with code

AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model

no code implementations • 27 Sep 2023 • Seungwhan Moon, Andrea Madotto, Zhaojiang Lin, Tushar Nagarajan, Matt Smith, Shashank Jain, Chun-Fu Yeh, Prakash Murugesan, Peyman Heidari, Yue Liu, Kavya Srinet, Babak Damavandi, Anuj Kumar

We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e. text, image, video, audio, IMU motion sensor) and generates textual responses.
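The multimodal setup described above can be sketched in a minimal way: per-modality encoder features are projected into the language model's token-embedding space so they can be interleaved with text tokens. This is a rough illustrative sketch, not AnyMAL's actual implementation; the dimensions, modality names, and `to_language_space` function are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D_TEXT = 64  # hypothetical LLM token-embedding width

# One hypothetical linear projection per modality, mapping each
# encoder's feature space into the shared language-model space.
projections = {
    "image": rng.normal(size=(128, D_TEXT)),
    "audio": rng.normal(size=(32, D_TEXT)),
    "imu":   rng.normal(size=(16, D_TEXT)),
}

def to_language_space(modality, features):
    """Project modality-specific features (n_tokens, d_modality)
    into the LLM embedding space (n_tokens, D_TEXT), so they can
    be consumed alongside ordinary text token embeddings."""
    return features @ projections[modality]

# Example: 5 "visual tokens" from a hypothetical image encoder.
image_feats = rng.normal(size=(5, 128))
tokens = to_language_space("image", image_feats)
```

The design choice being illustrated is that the language model itself stays modality-agnostic; only the projection layers differ per modality.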

Language Modelling · Video Question Answering

Self-similarity Student for Partial Label Histopathology Image Segmentation

no code implementations • ECCV 2020 • Hsien-Tzu Cheng, Chun-Fu Yeh, Po-Chen Kuo, Andy Wei, Keng-Chi Liu, Mong-Chi Ko, Kuan-Hua Chao, Yu-Ching Peng, Tyng-Luh Liu

Following this similarity learning, our similarity ensemble merges the ensembled predictions of similar patches into a pseudo-label for a given patch, counteracting its noisy label.
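The similarity-ensemble step described above can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: the function name, the use of cosine similarity over patch features, and the k-nearest-neighbor selection are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def similarity_ensemble_pseudo_label(pred, feats, k=3):
    """For each patch, average the predictions of its k most
    similar patches (cosine similarity in feature space) to form
    a pseudo-label that smooths out the patch's own noisy label.

    pred:  (n_patches, n_classes) per-patch predictions
    feats: (n_patches, d) per-patch feature vectors
    """
    # Normalize features so dot products give cosine similarity.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T                     # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)    # exclude the patch itself
    pseudo = np.empty_like(pred)
    for i in range(len(pred)):
        nbrs = np.argsort(sim[i])[-k:]   # k most similar patches
        pseudo[i] = pred[nbrs].mean(axis=0)
    return pseudo
```

A usage note: with `k=1` each patch simply adopts the prediction of its single most similar neighbor, while larger `k` averages more neighbors and yields a smoother pseudo-label.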

Image Segmentation · Pseudo Label · +2
