no code implementations • 15 Nov 2024 • Laura Doval, Federico Echenique, Wanying Huang, Yi Xin
We study the allocation of deceased-donor lungs to patients in need of a transplant.
no code implementations • 4 Nov 2024 • Fan Wu, Yi Xin
We propose a two-step semiparametric maximum likelihood estimator to estimate the selection model and the potential outcome distribution.
1 code implementation • 14 Oct 2024 • Wenze Liu, Le Zhuo, Yi Xin, Sheng Xia, Peng Gao, Xiangyu Yue
We reveal that existing AR variants correspond to specific design choices of sequence order and output intervals within the SAR framework, with AR and Masked AR (MAR) as two extreme instances.
no code implementations • 26 Sep 2024 • Siyeop Yoon, Yujin Oh, Xiang Li, Yi Xin, Maurizio Cereda, Quanzheng Li
Acute respiratory distress syndrome (ARDS) is a severe condition characterized by lung inflammation and respiratory failure, with a high mortality rate of approximately 40%.
no code implementations • 17 Sep 2024 • Yongyang Pan, Xiaohong Liu, Siqi Luo, Yi Xin, Xiao Guo, Xiaoming Liu, Xiongkuo Min, Guangtao Zhai
Rapid advancements in multimodal large language models have enabled the creation of hyper-realistic images from textual descriptions.
1 code implementation • 11 Sep 2024 • Guimin Hu, Yi Xin, Weimin Lyu, Haojian Huang, Chang Sun, Zhihong Zhu, Lin Gui, Ruichu Cai, Erik Cambria, Hasti Seifi
The goal of this survey is to explore the current landscape of multimodal affective research, identify development trends, and highlight the similarities and differences across various tasks, offering a comprehensive report on the recent progress in multimodal affective computing from an NLP perspective.
no code implementations • 2 Sep 2024 • Siqi Luo, Yi Xin, Yuntao Du, Zhongwei Wan, Tao Tan, Guangtao Zhai, Xiaohong Liu
Furthermore, we propose a two-stage framework to tackle FS-TTA: (i) fine-tuning the pre-trained source model on the few-shot support set, with a feature diversity augmentation module to avoid overfitting, and (ii) performing test-time adaptation guided by a prototype memory bank to produce high-quality pseudo-labels for model adaptation.
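A generic ingredient of prototype-guided pseudo-labeling, sketched below, is assigning each test sample the class of its nearest prototype by cosine similarity; this is a minimal illustration only, and the actual FS-TTA memory-bank mechanism may differ.

```python
import numpy as np

def pseudo_labels(features, prototypes):
    """Nearest-prototype pseudo-labeling (illustrative sketch).

    features:   (N, D) test-sample embeddings
    prototypes: (C, D) one prototype vector per class
    """
    # L2-normalize so the dot product equals cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T                # (N, C) cosine similarities
    return sims.argmax(axis=1)    # each sample gets its nearest class

# Toy 2-class example with hypothetical embeddings:
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
feats = np.array([[0.9, 0.1], [0.2, 0.8]])
print(pseudo_labels(feats, protos))  # [0 1]
```

In a full test-time-adaptation loop these pseudo-labels would then drive a self-training or entropy-style update of the model.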
1 code implementation • 23 Aug 2024 • Qi Fan, Yutong Li, Yi Xin, Xinyu Cheng, Guanglai Gao, Miao Ma
The Multimodal Emotion Recognition Challenge (MER2024) focuses on recognizing emotions from audio, language, and visual signals.
no code implementations • 8 Aug 2024 • Luyang Luo, Mingxiang Wu, Mei Li, Yi Xin, Qiong Wang, Varut Vardhanabhuti, Winnie CW Chu, Zhenhui Li, Juan Zhou, Pranav Rajpurkar, Hao Chen
MOME exemplifies a discriminative, robust, scalable, and interpretable multimodal model, paving the way for noninvasive, personalized management of breast cancer patients based on multiparametric breast imaging data.
no code implementations • 1 Jul 2024 • Xuyang Liu, Ting Liu, Siteng Huang, Yi Xin, Yue Hu, Quanjun Yin, Donglin Wang, Honggang Chen
With M$^2$IST, standard transformer-based REC methods present competitive or even superior performance compared to full fine-tuning, while utilizing only 2.11% of the tunable parameters, 39.61% of the GPU memory, and 63.46% of the fine-tuning time required for full fine-tuning.
no code implementations • 18 Jun 2024 • Zhongwei Wan, Xinjian Wu, Yu Zhang, Yi Xin, Chaofan Tao, Zhihong Zhu, Xin Wang, Siqi Luo, Jing Xiong, Mi Zhang
Efficient inference in Large Language Models (LLMs) is impeded by the growing memory demands of key-value (KV) caching, especially for longer sequences.
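The memory pressure described here can be made concrete with a back-of-the-envelope estimate: a decoder caches one key and one value tensor per layer, so cache size grows linearly with sequence length. The model dimensions below are illustrative (a typical 7B-class configuration), not taken from any specific paper.

```python
# Rough KV-cache memory estimate for a transformer decoder.

def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len,
                   batch=1, bytes_per_elem=2):
    # Keys and values: 2 cached tensors per layer, each of shape
    # (batch, n_heads, seq_len, head_dim), in fp16 (2 bytes/element).
    return 2 * n_layers * batch * n_heads * seq_len * head_dim * bytes_per_elem

# Hypothetical 7B-scale model: 32 layers, 32 heads, head_dim 128, fp16.
gb = kv_cache_bytes(32, 32, 128, seq_len=32_768) / 2**30
print(f"{gb:.1f} GiB")  # 16.0 GiB -- and it doubles if seq_len doubles
```

This linear growth in `seq_len` is exactly what KV-cache compression and eviction methods aim to curb.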
no code implementations • 12 Jun 2024 • Jay Lu, Yao Luo, Kota Saito, Yi Xin
This paper proposes an empirical model of dynamic discrete choice to allow for non-separable time preferences, generalizing the well-known Rust (1987) model.
no code implementations • 24 May 2024 • Mingyang Yi, Aoxue Li, Yi Xin, Zhenguo Li
We conclude that in the earlier generation stage, the image is mostly determined by the special token [EOS] in the text prompt, and the information in the text prompt is already conveyed at this stage.
1 code implementation • 23 May 2024 • Ting Liu, Xuyang Liu, Siteng Huang, Liangtao Shi, Zunnan Xu, Yi Xin, Quanjun Yin, Xiaohong Liu
Parameter-efficient fine-tuning (PEFT) has emerged as a popular solution for adapting pre-trained Vision Transformer (ViT) models to downstream applications.
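To illustrate why PEFT is attractive, the sketch below counts the parameters of a LoRA-style low-rank adapter on a single linear layer; LoRA is one common PEFT technique, shown here only as a generic example (not necessarily the method used in this paper), and the dimensions are assumptions.

```python
# Illustrative parameter count for a LoRA-style adapter on one linear layer.

def lora_params(d_in, d_out, rank):
    # LoRA factorizes the weight update as B @ A, with
    # A: (rank, d_in) and B: (d_out, rank) trainable matrices.
    return d_in * rank + rank * d_out

d = 768                       # ViT-Base hidden size (assumed)
full = d * d                  # full-rank update: 589,824 params
lora = lora_params(d, d, 8)   # rank-8 adapter: 12,288 params
print(f"{lora / full:.2%}")   # 2.08% of the full layer's parameters
```

Freezing the pre-trained weights and training only such small adapters is what keeps the tunable-parameter budget to a few percent of the backbone.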
1 code implementation • 3 Feb 2024 • Yi Xin, Siqi Luo, Haodi Zhou, Junlong Du, Xiaohong Liu, Yue Fan, Qing Li, Yuntao Du
Large-scale pre-trained vision models (PVMs) have shown great potential for adaptability across various downstream vision tasks.
no code implementations • 14 Dec 2023 • Yi Xin, Junlong Du, Qiang Wang, Zhiwen Lin, Ke Yan
Extensive experiments on four dense scene understanding tasks demonstrate the superiority of VMT-Adapter(-Lite), achieving a 3.96% (1.34%) relative improvement compared to single-task full fine-tuning, while training merely ~1% (0.36%) of the pre-trained model's parameters.
no code implementations • 14 Dec 2023 • Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding
On the one hand, to maximize the complementarity of tasks with high similarity, we utilize a gradient-driven task grouping method that partitions tasks into several disjoint groups and assigns a group-shared MmAP to each group.
no code implementations • 1 Nov 2021 • Dazhou Guo, Jia Ge, Xianghua Ye, Senxiang Yan, Yi Xin, Yuchen Song, Bing-shen Huang, Tsung-Min Hung, Zhuotun Zhu, Ling Peng, Yanping Ren, Rui Liu, Gong Zhang, Mengyuan Mao, Xiaohua Chen, Zhongjie Lu, Wenxiang Li, Yuzhen Chen, Lingyun Huang, Jing Xiao, Adam P. Harrison, Le Lu, Chien-Yu Lin, Dakai Jin, Tsung-Ying Ho
Accurate organ-at-risk (OAR) segmentation is critical for reducing post-treatment complications in radiotherapy.
no code implementations • 11 Oct 2021 • Xianghua Ye, Dazhou Guo, Chen-Kan Tseng, Jia Ge, Tsung-Min Hung, Ping-Ching Pai, Yanping Ren, Lu Zheng, Xinli Zhu, Ling Peng, Ying Chen, Xiaohua Chen, Chen-Yu Chou, Danni Chen, Jiaze Yu, Yuzhen Chen, Feiran Jiao, Yi Xin, Lingyun Huang, Guotong Xie, Jing Xiao, Le Lu, Senxiang Yan, Dakai Jin, Tsung-Ying Ho
252 institution-1 patients had a treatment planning CT (pCT) and a pair of diagnostic FDG-PET/CT scans; 354 patients from 3 other institutions had only pCT.
no code implementations • 16 Oct 2020 • Sarah E. Gerard, Jacob Herrmann, Yi Xin, Kevin T. Martin, Emanuele Rezoagli, Davide Ippolito, Giacomo Bellani, Maurizio Cereda, Junfeng Guo, Eric A. Hoffman, David W. Kaczka, Joseph M. Reinhardt
Regional lobar analysis was performed using hierarchical clustering to identify radiographic subtypes of COVID-19.