Search Results for author: Bingyi Kang

Found 40 papers, 29 papers with code

Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models

no code implementations18 Dec 2024 Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, Huaping Liu

These results firmly convince us of the need for VLAs and motivate a new family of VLAs, RoboVLMs, which requires very few manual designs and achieves new state-of-the-art performance across three simulation tasks and real-world experiments.

Representation Learning

Image Understanding Makes for A Good Tokenizer for Image Generation

1 code implementation7 Nov 2024 Luting Wang, Yang Zhao, Zijian Zhang, Jiashi Feng, Si Liu, Bingyi Kang

Currently, pixel reconstruction (e.g., VQGAN) dominates the training objective for image tokenizers.

Image Generation

Classification Done Right for Vision-Language Pre-Training

1 code implementation5 Nov 2024 Zilong Huang, Qinghao Ye, Bingyi Kang, Jiashi Feng, Haoqi Fan

Because it does not use text encodings as contrastive targets, SuperClass requires neither a text encoder nor the large batch sizes that CLIP does.

Classification
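The SuperClass snippet above describes casting vision-language pre-training as classification over caption tokens. A minimal numpy sketch of that idea, assuming a toy vocabulary and feature dimension (this is an illustration of the general recipe, not the paper's released code):

```python
import numpy as np

# Illustrative sketch (not the official SuperClass implementation): the
# caption is tokenized into subword IDs, and the image encoder's output is
# trained with a multi-label binary cross-entropy loss over the token
# vocabulary -- no text encoder and no contrastive batch of negatives.

VOCAB_SIZE = 16    # toy subword vocabulary size (assumption)
FEATURE_DIM = 8    # toy image-feature dimension (assumption)

def caption_to_targets(token_ids, vocab_size=VOCAB_SIZE):
    """Turn a caption's token IDs into a multi-hot label vector."""
    target = np.zeros(vocab_size)
    target[list(set(token_ids))] = 1.0
    return target

def bce_loss(logits, targets):
    """Binary cross-entropy over the token vocabulary."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

rng = np.random.default_rng(0)
W = rng.normal(size=(FEATURE_DIM, VOCAB_SIZE))   # classification head
image_feature = rng.normal(size=FEATURE_DIM)     # stand-in for encoder output

targets = caption_to_targets([3, 7, 7, 11])      # toy tokenized caption
loss = bce_loss(image_feature @ W, targets)
print(loss > 0.0)
```

Nothing here depends on caption length or batch size, which is the point: each image-text pair becomes an independent multi-label classification example.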

How Far is Video Generation from World Model: A Physical Law Perspective

no code implementations4 Nov 2024 Bingyi Kang, Yang Yue, Rui Lu, Zhijie Lin, Yang Zhao, Kaixin Wang, Gao Huang, Jiashi Feng

Our scaling experiments show perfect generalization within the distribution, measurable scaling behavior for combinatorial generalization, but failure in out-of-distribution scenarios.

Video Generation

Loong: Generating Minute-level Long Videos with Autoregressive Language Models

no code implementations3 Oct 2024 Yuqing Wang, Tianwei Xiong, Daquan Zhou, Zhijie Lin, Yang Zhao, Bingyi Kang, Jiashi Feng, Xihui Liu

Autoregressive large language models (LLMs) have achieved great success in generating coherent and long sequences of tokens in the domain of natural language processing, while the exploration of autoregressive LLMs for video generation is limited to generating short videos of several seconds.

Video Generation

Improving Token-Based World Models with Parallel Observation Prediction

1 code implementation8 Feb 2024 Lior Cohen, Kaixin Wang, Bingyi Kang, Shie Mannor

We incorporate POP in a novel TBWM agent named REM (Retentive Environment Model), showcasing a 15.4x faster imagination compared to prior TBWMs.

Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data

5 code implementations CVPR 2024 Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao

To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error.

Data Augmentation Monocular Depth Estimation +1

Harnessing Diffusion Models for Visual Perception with Meta Prompts

1 code implementation22 Dec 2023 Qiang Wan, Zilong Huang, Bingyi Kang, Jiashi Feng, Li Zhang

Our key insight is to introduce learnable embeddings (meta prompts) to the pre-trained diffusion models to extract proper features for perception.

Monocular Depth Estimation Pose Estimation +1

Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL

2 code implementations NeurIPS 2023 Yang Yue, Rui Lu, Bingyi Kang, Shiji Song, Gao Huang

We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL.

Attribute Offline RL

BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs

1 code implementation17 Jul 2023 Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, Bingyi Kang

Our experiments show that BuboGPT achieves impressive multi-modal understanding and visual grounding abilities during interactions with humans.

Instruction Following Sentence +1

Improving and Benchmarking Offline Reinforcement Learning Algorithms

1 code implementation1 Jun 2023 Bingyi Kang, Xiao Ma, Yirui Wang, Yang Yue, Shuicheng Yan

Recently, Offline Reinforcement Learning (RL) has achieved remarkable progress with the emergence of various algorithms and datasets.

Attribute Benchmarking +5

Efficient Diffusion Policies for Offline Reinforcement Learning

1 code implementation NeurIPS 2023 Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, Shuicheng Yan

2) It is incompatible with maximum likelihood-based RL algorithms (e.g., policy gradient methods) as the likelihood of diffusion models is intractable.

D4RL Offline RL +4

MADiff: Offline Multi-agent Learning with Diffusion Models

1 code implementation27 May 2023 Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, Weinan Zhang

Offline reinforcement learning (RL) aims to learn policies from pre-existing datasets without further interactions, making it a challenging task.

Offline RL Q-Learning +2

Boosting Offline Reinforcement Learning via Data Rebalancing

no code implementations17 Oct 2022 Yang Yue, Bingyi Kang, Xiao Ma, Zhongwen Xu, Gao Huang, Shuicheng Yan

Therefore, we propose a simple yet effective method to boost offline RL algorithms based on the observation that resampling a dataset keeps the distribution support unchanged.

D4RL Offline RL +3
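The snippet above notes that resampling a dataset keeps its distribution support unchanged. A minimal sketch of one such rebalancing scheme, weighting transitions by episode return (an assumption for illustration, not the paper's released code):

```python
import numpy as np

# Illustrative sketch: offline RL data rebalancing by resampling. Drawing
# transitions with probability proportional to their episode return
# re-weights the data toward high-return behavior, while the support of
# the dataset stays unchanged because only transitions already present
# in the buffer can ever be drawn.

rng = np.random.default_rng(0)

# Toy dataset: each transition carries the return of its source episode.
episode_returns = np.array([1.0, 1.0, 5.0, 5.0, 20.0, 20.0])

# Return-proportional sampling probabilities.
probs = episode_returns / episode_returns.sum()

# Resample a same-sized (here, larger) dataset with replacement.
indices = rng.choice(len(episode_returns), size=10_000, p=probs)

# Support is unchanged: only original indices appear, but high-return
# transitions are drawn far more often than low-return ones.
counts = np.bincount(indices, minlength=len(episode_returns))
print(counts)
```

Because resampling only changes sampling frequencies, any off-the-shelf offline RL algorithm can be layered on top without modification.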

Mutual Information Regularized Offline Reinforcement Learning

1 code implementation NeurIPS 2023 Xiao Ma, Bingyi Kang, Zhongwen Xu, Min Lin, Shuicheng Yan

In this work, we propose a novel MISA framework to approach offline RL from the perspective of Mutual Information between States and Actions in the dataset by directly constraining the policy improvement direction.

D4RL Offline RL +3

Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning

no code implementations25 Jun 2022 Yang Yue, Bingyi Kang, Zhongwen Xu, Gao Huang, Shuicheng Yan

Recently, visual representation learning has been shown to be effective and promising for boosting sample efficiency in RL.

Contrastive Learning Data Augmentation +6

Deep Long-Tailed Learning: A Survey

1 code implementation9 Oct 2021 Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, Jiashi Feng

Deep long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution.

Survey

DeepViT: Towards Deeper Vision Transformer

5 code implementations22 Mar 2021 Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng

In this paper, we show that, unlike convolutional neural networks (CNNs), which can be improved by stacking more convolutional layers, the performance of ViTs saturates quickly when scaled deeper.

Image Classification Representation Learning

Learning Safe Policies with Cost-sensitive Advantage Estimation

no code implementations1 Jan 2021 Bingyi Kang, Shie Mannor, Jiashi Feng

Reinforcement Learning (RL) with safety guarantees is critical for agents performing tasks in risky environments.

Reinforcement Learning (RL)

Exploring Balanced Feature Spaces for Representation Learning

no code implementations ICLR 2021 Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, Jiashi Feng

Motivated by this question, we conduct a series of studies on the performance of self-supervised contrastive learning and supervised learning methods over multiple datasets where training instance distributions vary from a balanced one to a long-tailed one.

Contrastive Learning Long-tail Learning +2

Regularization Matters in Policy Optimization - An Empirical Study on Continuous Control

1 code implementation ICLR 2021 Zhuang Liu, Xuanlin Li, Bingyi Kang, Trevor Darrell

In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.

continuous-control Continuous Control +1

Improving Generalization in Reinforcement Learning with Mixture Regularization

2 code implementations NeurIPS 2020 Kaixin Wang, Bingyi Kang, Jie Shao, Jiashi Feng

Deep reinforcement learning (RL) agents trained in a limited set of environments tend to suffer overfitting and fail to generalize to unseen testing environments.

Data Augmentation Deep Reinforcement Learning +3

Few-shot Classification via Adaptive Attention

1 code implementation6 Aug 2020 Zi-Hang Jiang, Bingyi Kang, Kuangqi Zhou, Jiashi Feng

To be specific, we devise a simple and efficient meta-reweighting strategy to adapt the sample representations and generate soft attention to refine the representation such that the relevant features from the query and support samples can be extracted for a better few-shot classification.

Classification Few-Shot Learning +1

The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation

1 code implementation ECCV 2020 Tao Wang, Yu Li, Bingyi Kang, Junnan Li, Junhao Liew, Sheng Tang, Steven Hoi, Jiashi Feng

Specifically, we systematically investigate performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset, and unveil that a major cause is the inaccurate classification of object proposals.

General Classification Instance Segmentation +4

Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax

2 code implementations CVPR 2020 Yu Li, Tao Wang, Bingyi Kang, Sheng Tang, Chunfeng Wang, Jintao Li, Jiashi Feng

Solving long-tail, large-vocabulary object detection with deep learning-based models is a challenging and demanding task that is, however, under-explored. In this work, we provide the first systematic analysis of the underperformance of state-of-the-art models on long-tail distributions.

Image Classification Instance Segmentation +5

Classification Calibration for Long-tail Instance Segmentation

1 code implementation29 Oct 2019 Tao Wang, Yu Li, Bingyi Kang, Junnan Li, Jun Hao Liew, Sheng Tang, Steven Hoi, Jiashi Feng

In this report, we investigate the performance drop of state-of-the-art two-stage instance segmentation models when processing extreme long-tail training data from the LVIS [5] dataset, and find that a major cause is the inaccurate classification of object proposals.

Classification General Classification +3

Regularization Matters in Policy Optimization

2 code implementations21 Oct 2019 Zhuang Liu, Xuanlin Li, Bingyi Kang, Trevor Darrell

In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks.

continuous-control Continuous Control +2

Exploring Simple and Transferable Recognition-Aware Image Processing

1 code implementation21 Oct 2019 Zhuang Liu, Hung-Ju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell

Interestingly, the processing model's ability to enhance recognition quality can transfer when evaluated on models of different architectures, recognized categories, tasks and training datasets.

Image Retrieval Recommendation Systems

Similarity R-C3D for Few-shot Temporal Activity Detection

no code implementations25 Dec 2018 Huijuan Xu, Bingyi Kang, Ximeng Sun, Jiashi Feng, Kate Saenko, Trevor Darrell

In this paper, we present a conceptually simple and general yet novel framework for few-shot temporal activity detection which detects the start and end time of the few-shot input activities in an untrimmed video.

Action Detection Activity Detection

Few-shot Object Detection via Feature Reweighting

4 code implementations ICCV 2019 Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, Trevor Darrell

The feature learner extracts meta features that are generalizable to detect novel object classes, using training data from base classes with sufficient samples.

Few-Shot Learning Few-Shot Object Detection +3

Policy Optimization with Demonstrations

no code implementations ICML 2018 Bingyi Kang, Zequn Jie, Jiashi Feng

Exploration remains a significant challenge to reinforcement learning methods, especially in environments where reward signals are sparse.

Policy Gradient Methods Reinforcement Learning +1

Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms

no code implementations ICLR 2018 Tom Zahavy, Bingyi Kang, Alex Sivak, Jiashi Feng, Huan Xu, Shie Mannor

As most deep learning algorithms are stochastic (e.g., Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor, and introduce a new approach, ensemble robustness, that concerns the robustness of a population of hypotheses.

Deep Learning
