Search Results for author: Yinbo Chen

Found 8 papers, 7 papers with code

Graph Transformer

no code implementations · ICLR 2019 · Yuan Li, Xiaodan Liang, Zhiting Hu, Yinbo Chen, Eric P. Xing

Graph neural networks (GNNs) have gained increasing research interest as a means toward the challenging goal of robust and universal graph learning.

Few-Shot Learning · General Classification · +3

Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning

10 code implementations · ICCV 2021 · Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang

The edge between these two lines of work has been underexplored, and the effectiveness of meta-learning in few-shot learning remains unclear.

Few-Shot Learning · General Classification

Learning Implicit Feature Alignment Function for Semantic Segmentation

1 code implementation · 17 Jun 2022 · Hanzhe Hu, Yinbo Chen, Jiarui Xu, Shubhankar Borse, Hong Cai, Fatih Porikli, Xiaolong Wang

As such, IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions.

Segmentation · Semantic Segmentation
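The IFA abstract above describes decoding segmentation at arbitrary resolutions by querying aligned multi-level features at continuous coordinates. Below is a minimal, hypothetical PyTorch sketch of that general idea; the class name, MLP design, and feature dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitSegDecoder(nn.Module):
    """Toy coordinate-based decoder in the spirit of IFA (hypothetical).

    For each query coordinate, features are bilinearly sampled from every
    encoder level and fed, together with the coordinate, to a shared MLP
    that predicts per-class logits. Querying a dense grid of any size
    yields a segmentation map of that resolution.
    """

    def __init__(self, feat_dims, num_classes, hidden=256):
        super().__init__()
        in_dim = sum(feat_dims) + 2  # concatenated features + (x, y) coordinate
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, feats, out_h, out_w):
        # feats: list of (B, C_i, H_i, W_i) maps at different levels.
        B = feats[0].shape[0]
        # Dense grid of query coordinates in [-1, 1] x [-1, 1].
        ys = torch.linspace(-1, 1, out_h)
        xs = torch.linspace(-1, 1, out_w)
        grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)
        grid = grid.flip(-1)  # grid_sample expects (x, y) order
        grid = grid.unsqueeze(0).expand(B, -1, -1, -1)  # (B, H, W, 2)
        # Sample each level at every query coordinate (implicit alignment).
        sampled = [
            F.grid_sample(f, grid, mode="bilinear", align_corners=False)
            for f in feats
        ]  # each: (B, C_i, H, W)
        x = torch.cat(sampled, dim=1).permute(0, 2, 3, 1)  # (B, H, W, sum C_i)
        logits = self.mlp(torch.cat([x, grid], dim=-1))
        return logits.permute(0, 3, 1, 2)  # (B, num_classes, H, W)
```

Calling `decoder(feats, 256, 256)` versus `decoder(feats, 64, 64)` changes only the coordinate grid, which is what makes arbitrary-resolution output possible in this formulation.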

Transformers as Meta-Learners for Implicit Neural Representations

1 code implementation · 4 Aug 2022 · Yinbo Chen, Xiaolong Wang

Motivated by a generalized formulation of gradient-based meta-learning, we propose a formulation that uses Transformers as hypernetworks for INRs, directly building the whole set of INR weights with Transformers specialized as a set-to-set mapping.

Meta-Learning
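The abstract above casts INR weight generation as a set-to-set mapping. The sketch below illustrates one way such a Transformer hypernetwork could look: learnable weight tokens attend jointly with data tokens, and each output weight token is decoded into one column of an INR layer. All names, token groupings, and dimensions here are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TransformerHypernet(nn.Module):
    """Hypothetical sketch of a Transformer-as-hypernetwork for INRs.

    Assumed INR: a 2-layer MLP mapping (x, y) -> RGB, biases omitted for
    brevity. One learnable token per column (output unit) of each weight
    matrix; the whole weight set is produced as a set-to-set mapping
    rather than by per-weight regression.
    """

    def __init__(self, dim=256, inr_in=2, inr_hidden=64, inr_out=3, layers=4):
        super().__init__()
        # (rows, cols) of each weight matrix W used as x @ W.
        self.shapes = [(inr_in, inr_hidden), (inr_hidden, inr_out)]
        n_tokens = sum(cols for _, cols in self.shapes)
        self.weight_tokens = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)
        enc = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=layers)
        self.heads = nn.ModuleList(nn.Linear(dim, rows) for rows, _ in self.shapes)

    def forward(self, data_tokens):
        # data_tokens: (B, N, dim), e.g. encoded image patches.
        B = data_tokens.shape[0]
        tokens = torch.cat(
            [self.weight_tokens.expand(B, -1, -1), data_tokens], dim=1
        )
        out = self.transformer(tokens)
        weights, i = [], 0
        for (rows, cols), head in zip(self.shapes, self.heads):
            w = head(out[:, i : i + cols])     # (B, cols, rows)
            weights.append(w.transpose(1, 2))  # (B, rows, cols)
            i += cols
        return weights  # per-sample INR weight matrices
```

A generated INR would then be evaluated per sample, e.g. `torch.relu(coords @ W1) @ W2` with `W1, W2 = hypernet(data_tokens)`, so a single forward pass specializes the whole INR to one input.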

Visual Reinforcement Learning with Self-Supervised 3D Representations

1 code implementation · 13 Oct 2022 · Yanjie Ze, Nicklas Hansen, Yinbo Chen, Mohit Jain, Xiaolong Wang

A prominent approach to visual Reinforcement Learning (RL) is to learn an internal state representation using self-supervised methods, which has the potential benefits of improved sample efficiency and generalization through additional learning signals and inductive biases.

Reinforcement Learning (RL) · +2
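The entry above describes training a visual RL agent on top of a self-supervised state representation. As a loose illustration of that general setup (a generic latent-dynamics objective standing in for the paper's 3D pretext task), the sketch below shares one encoder between a self-supervised auxiliary loss and a policy head; every name and architecture choice here is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelEncoder(nn.Module):
    """Small CNN state encoder (illustrative architecture, not the paper's)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, obs):  # obs: (B, 3, H, W) in [0, 1]
        return self.conv(obs)

# The same encoder feeds both the policy head and the self-supervised head.
encoder = PixelEncoder()
policy_head = nn.Linear(128, 4)  # e.g. a 4-dim continuous action
predictor = nn.Linear(128, 128)  # predicts the next latent state

def auxiliary_loss(obs, next_obs):
    """Self-supervised signal: predict the next frame's latent from the
    current one; the target branch is detached, as is common for such
    latent-prediction objectives."""
    z, z_next = encoder(obs), encoder(next_obs).detach()
    return F.mse_loss(predictor(z), z_next)

def actor_loss(obs, target_actions):
    """Placeholder imitation-style loss; a real agent would put an RL
    objective (e.g. an actor-critic loss) on top of the same encoder."""
    return F.mse_loss(policy_head(encoder(obs)), target_actions)
```

Training both losses through the shared encoder is what provides the "additional learning signal" the abstract refers to; the specific pretext task is the main design choice.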
