Search Results for author: Meng Ye

Found 14 papers, 5 papers with code

Zero-Shot Classification With Discriminative Semantic Representation Learning

no code implementations • CVPR 2017 • Meng Ye, Yuhong Guo

The proposed approach aims to identify a set of common high-level semantic components across the two domains via non-negative sparse matrix factorization, while enforcing the representation vectors of the images in this common component-based space to be discriminatively aligned with the attribute-based label representation vectors.

Attribute Classification +4
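
To make the component-based formulation above concrete, here is a minimal sketch of the general recipe, assuming off-the-shelf scikit-learn NMF and a ridge-style alignment; the variable names, dimensions, and regularizer are illustrative placeholders, not the paper's exact objective.

    # Sketch: factor non-negative image features into shared semantic components,
    # then linearly align the per-image component codes with attribute-based
    # label vectors. All sizes below are placeholders.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    X = np.abs(rng.standard_normal((500, 256)))   # image features (non-negative)
    A = rng.standard_normal((500, 85))            # attribute vector of each image's class

    nmf = NMF(n_components=50, init="nndsvda", max_iter=500)
    H = nmf.fit_transform(X)                      # component-based representation of images

    # Ridge-style alignment of component codes with the attribute space
    W = np.linalg.solve(H.T @ H + 1e-2 * np.eye(H.shape[1]), H.T @ A)

    # A test image is encoded with the learned components, mapped by W, and
    # assigned to the unseen class whose attribute vector is closest.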

Deep Triplet Ranking Networks for One-Shot Recognition

1 code implementation • 19 Apr 2018 • Meng Ye, Yuhong Guo

Despite the breakthroughs achieved by deep learning models in conventional supervised learning scenarios, their dependence on sufficient labeled training data in each class prevents effective applications of these deep models in situations where labeled training instances for a subset of novel classes are very sparse -- in the extreme case only one instance is available for each class.

One-Shot Learning
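
Since the excerpt above only gives the motivation, a generic triplet ranking loss of the kind the title refers to is sketched below; the margin value and function names are assumptions, not necessarily the paper's exact formulation.

    # Sketch of a standard triplet ranking loss: the anchor embedding should be
    # closer to a same-class exemplar than to a different-class one by a margin.
    import torch.nn.functional as F

    def triplet_ranking_loss(anchor, positive, negative, margin=0.2):
        # anchor, positive, negative: (batch, dim) embeddings from a shared encoder
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

    # One-shot inference: embed the single labelled exemplar of each novel class
    # and assign a query image to the class of its nearest exemplar.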

Progressive Ensemble Networks for Zero-Shot Recognition

no code implementations • CVPR 2019 • Meng Ye, Yuhong Guo

The ensemble network is built by learning multiple image classification functions with a shared feature extraction network but different label embedding representations, which enhance the diversity of the classifiers and facilitate information transfer to unlabeled classes.

Generalized Zero-Shot Learning • Image Classification
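
A minimal PyTorch sketch of the ensemble structure described above, assuming a shared backbone and one linear compatibility head per label-embedding matrix; the class name, the bilinear scoring, and the averaging step are illustrative assumptions rather than the paper's exact architecture.

    # Sketch: one shared feature extractor, several classifiers that score images
    # against different label-embedding matrices, predictions averaged.
    import torch
    import torch.nn as nn

    class LabelEmbeddingEnsemble(nn.Module):
        def __init__(self, backbone, feat_dim, label_embeddings):
            super().__init__()
            self.backbone = backbone                  # shared feature extraction network
            self.label_embeddings = label_embeddings  # list of (num_classes, emb_dim) tensors
            self.heads = nn.ModuleList(
                nn.Linear(feat_dim, E.shape[1]) for E in label_embeddings
            )

        def forward(self, x):
            f = self.backbone(x)
            # each ensemble member scores classes through its own label embedding
            scores = [head(f) @ E.t() for head, E in zip(self.heads, self.label_embeddings)]
            return torch.stack(scores).mean(dim=0)    # ensemble prediction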

Multi-Label Zero-Shot Learning with Transfer-Aware Label Embedding Projection

no code implementations • 7 Aug 2018 • Meng Ye, Yuhong Guo

The approach projects the label embedding vectors into a low-dimensional space to induce better inter-label relationships and explicitly facilitate information transfer from seen labels to unseen labels, while simultaneously learning a max-margin multi-label classifier with the projected label embeddings.

Multi-Label Image Classification • Multi-label zero-shot learning +1
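
The projection-and-ranking idea above can be sketched as follows; the module, the ranking margin loss, and all dimensions are generic stand-ins rather than the paper's exact max-margin objective.

    # Sketch: project label embeddings (seen and unseen) into a low-dimensional
    # space shared with projected image features, and train so every positive
    # label of an image outranks every negative label by a margin.
    import torch.nn as nn
    import torch.nn.functional as F

    class ProjectedLabelScorer(nn.Module):
        def __init__(self, feat_dim, label_emb_dim, proj_dim):
            super().__init__()
            self.img_proj = nn.Linear(feat_dim, proj_dim)
            self.lbl_proj = nn.Linear(label_emb_dim, proj_dim, bias=False)

        def forward(self, feats, label_embs):
            # feats: (batch, feat_dim); label_embs: (num_labels, label_emb_dim)
            return self.img_proj(feats) @ self.lbl_proj(label_embs).t()

    def multilabel_margin_loss(scores, targets, margin=1.0):
        # targets: (batch, num_labels) in {0, 1}
        worst_pos = scores.masked_fill(targets == 0, float("inf")).min(dim=1).values
        best_neg = scores.masked_fill(targets == 1, float("-inf")).max(dim=1).values
        return F.relu(margin - (worst_pos - best_neg)).mean()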

PC-U Net: Learning to Jointly Reconstruct and Segment the Cardiac Walls in 3D from CT Data

no code implementations • 18 Aug 2020 • Meng Ye, Qiaoying Huang, Dong Yang, Pengxiang Wu, Jingru Yi, Leon Axel, Dimitris Metaxas

The 3D volumetric shape of the heart's left ventricle (LV) myocardium (MYO) wall provides important information for diagnosis of cardiac disease and invasive procedure navigation.

Image Segmentation • Segmentation +1

Modular Adaptation for Cross-Domain Few-Shot Learning

1 code implementation • 1 Apr 2021 • Xiao Lin, Meng Ye, Yunye Gong, Giedrius Buracas, Nikoletta Basiou, Ajay Divakaran, Yi Yao

Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples.

cross-domain few-shot learning • Representation Learning

DeepRecon: Joint 2D Cardiac Segmentation and 3D Volume Reconstruction via A Structure-Specific Generative Method

no code implementations • 14 Jun 2022 • Qi Chang, Zhennan Yan, Mu Zhou, Di Liu, Khalid Sawalha, Meng Ye, Qilong Zhangli, Mikael Kanski, Subhi Al Aref, Leon Axel, Dimitris Metaxas

Joint 2D cardiac segmentation and 3D volume reconstruction are fundamental to building statistical cardiac anatomy models and understanding functional mechanisms from motion patterns.

3D Reconstruction • 3D Shape Reconstruction +5

Neural Deformable Models for 3D Bi-Ventricular Heart Shape Reconstruction and Modeling from 2D Sparse Cardiac Magnetic Resonance Imaging

no code implementations • ICCV 2023 • Meng Ye, Dong Yang, Mikael Kanski, Leon Axel, Dimitris Metaxas

We model the bi-ventricular shape using blended deformable superquadrics, which are parameterized by a set of geometric parameter functions and are capable of deforming globally and locally.
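
For reference, the blended deformable superquadrics build on the standard superquadric surface; a plain superquadric (before the paper's geometric parameter functions and global/local deformations are added) is parameterized as

    % Standard superquadric surface; a_1, a_2, a_3 are axis lengths and
    % \epsilon_1, \epsilon_2 control the squareness, with the powers taken in
    % the sign-preserving sense.
    \mathbf{s}(u, v) =
    \begin{pmatrix}
      a_1 \cos^{\epsilon_1} u \, \cos^{\epsilon_2} v \\
      a_2 \cos^{\epsilon_1} u \, \sin^{\epsilon_2} v \\
      a_3 \sin^{\epsilon_1} u
    \end{pmatrix},
    \qquad
    -\tfrac{\pi}{2} \le u \le \tfrac{\pi}{2}, \quad -\pi \le v < \pi.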

Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction

1 code implementation • 25 Sep 2023 • Bingyu Xin, Meng Ye, Leon Axel, Dimitris N. Metaxas

Then, we extend the baseline model to a prompt-based learning approach, PromptMR, for all-in-one MRI reconstruction from different views, contrasts, adjacent types, and acceleration factors.

MRI Reconstruction
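
A schematic PyTorch sketch of the two-stage idea suggested by the title (complete the undersampled k-space, then refine in image space), with a learned prompt embedding selecting the acquisition setting; the class, sub-networks, and conditioning scheme are assumptions for illustration and not the PromptMR architecture itself.

    # Sketch: stage 1 fills missing k-space, acquired samples are kept (data
    # consistency), stage 2 refines the image; both stages see a prompt embedding
    # indexed by the view/contrast/acceleration setting.
    import torch
    import torch.nn as nn

    class FillAndRefine(nn.Module):
        def __init__(self, kspace_net, image_net, num_settings, prompt_dim=64):
            super().__init__()
            self.kspace_net = kspace_net    # any network predicting full k-space
            self.image_net = image_net      # any network refining the image
            self.prompts = nn.Embedding(num_settings, prompt_dim)

        def forward(self, masked_kspace, mask, setting_id):
            p = self.prompts(setting_id)                               # (batch, prompt_dim)
            filled = self.kspace_net(masked_kspace, p)                 # fill the k-space
            filled = torch.where(mask.bool(), masked_kspace, filled)   # keep measured lines
            image = torch.fft.ifft2(filled).abs()                      # back to image space
            return self.image_net(image, p)                            # refine the image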

A Video is Worth 10,000 Words: Training and Benchmarking with Diverse Captions for Better Long Video Retrieval

no code implementations • 30 Nov 2023 • Matthew Gwilliam, Michael Cogswell, Meng Ye, Karan Sikka, Abhinav Shrivastava, Ajay Divakaran

To provide a more thorough evaluation of the capabilities of long video retrieval systems, we propose a pipeline that leverages state-of-the-art large language models to carefully generate a diverse set of synthetic captions for long videos.

Benchmarking • Retrieval +2
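
As a rough illustration of how such a multi-caption benchmark can be scored, the sketch below computes text-to-video recall@k when every video has several diverse (synthetic) captions as queries; the similarity matrix stands in for any text-video retrieval model, and the toy numbers are placeholders.

    # Sketch: each caption is a query; recall@k counts how often the caption's
    # source video appears among the top-k ranked videos.
    import numpy as np

    def recall_at_k(sim, caption_to_video, k=5):
        # sim: (num_captions, num_videos) similarity scores
        topk = np.argsort(-sim, axis=1)[:, :k]
        hits = (topk == np.asarray(caption_to_video)[:, None]).any(axis=1)
        return hits.mean()

    # Toy example: 3 videos, 4 captions each, random similarities
    rng = np.random.default_rng(0)
    sim = rng.random((12, 3))
    print(recall_at_k(sim, caption_to_video=np.repeat(np.arange(3), 4), k=1))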
