Search Results for author: Liu Ren

Found 33 papers, 9 papers with code

InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation

no code implementations 25 Jun 2024 Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan

To address these challenges, this paper presents InFiConD, a novel framework that leverages visual concepts to implement the knowledge distillation process and enable subsequent no-code fine-tuning of student models.

Knowledge Distillation
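InFiConD's concept-based pipeline is not linked here, but the vanilla knowledge-distillation objective it builds on is standard. A minimal numpy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # the classic knowledge-distillation objective; scaled by T^2 so
    # gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean() * T * T)
```

A higher temperature softens the teacher's distribution so the student also learns from the relative ranking of wrong classes.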

TCLC-GS: Tightly Coupled LiDAR-Camera Gaussian Splatting for Autonomous Driving

no code implementations 3 Apr 2024 Cheng Zhao, Su Sun, Ruoyu Wang, Yuliang Guo, Jun-Jun Wan, Zhou Huang, Xinyu Huang, Yingjie Victor Chen, Liu Ren

Most 3D Gaussian Splatting (3D-GS) based methods for urban scenes initialize 3D Gaussians directly with 3D LiDAR points, which not only underutilizes LiDAR data capabilities but also overlooks the potential advantages of fusing LiDAR with camera data.

3D Reconstruction Autonomous Driving

LORD: Large Models based Opposite Reward Design for Autonomous Driving

no code implementations 27 Mar 2024 Xin Ye, Feng Tao, Abhirup Mallik, Burhaneddin Yaman, Liu Ren

Recently, large pretrained models have gained significant attention as zero-shot reward models for tasks specified with desired linguistic goals.

Autonomous Driving Imitation Learning +1

SUP-NeRF: A Streamlined Unification of Pose Estimation and NeRF for Monocular 3D Object Reconstruction

no code implementations 23 Mar 2024 Yuliang Guo, Abhinav Kumar, Cheng Zhao, Ruoyu Wang, Xinyu Huang, Liu Ren

While gradient-based optimization in a NeRF framework updates the initial pose, this paper highlights that scale-depth ambiguity in monocular object reconstruction causes failures when the initial pose deviates moderately from the true pose.

3D Object Reconstruction 3D Reconstruction +2

A Streamlined Approach to Multimodal Few-Shot Class Incremental Learning for Fine-Grained Datasets

2 code implementations 10 Mar 2024 Thang Doan, Sima Behpour, Xin Li, Wenbin He, Liang Gou, Liu Ren

Few-shot Class-Incremental Learning (FSCIL) poses the challenge of retaining prior knowledge while learning from limited new data streams, all without overfitting.

Few-Shot Class-Incremental Learning Incremental Learning

AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding

no code implementations 12 Jan 2024 Xiwei Xuan, Jorge Piazentin Ono, Liang Gou, Kwan-Liu Ma, Liu Ren

Data slice finding is an emerging technique for validating machine learning (ML) models by identifying and analyzing subgroups in a dataset that exhibit poor performance, often characterized by distinct feature sets or descriptive metadata.

Descriptive

InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates

no code implementations 6 Nov 2023 Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan

Deep learning models are widely used in critical applications, highlighting the need for pre-deployment model understanding and improvement.

Long-Distance Gesture Recognition using Dynamic Neural Networks

no code implementations 9 Aug 2023 Shubhang Bhatnagar, Sharath Gopal, Narendra Ahuja, Liu Ren

We demonstrate the performance of our method on the LD-ConGR long-distance dataset where it outperforms previous state-of-the-art methods on recognition accuracy and compute efficiency.

Gesture Recognition

CLIP-S$^4$: Language-Guided Self-Supervised Semantic Segmentation

no code implementations 1 May 2023 (also CVPR 2023) Wenbin He, Suphanut Jamonnak, Liang Gou, Liu Ren

To further improve the pixel embeddings and enable language-driven semantic segmentation, we design two types of consistency guided by vision-language models: 1) embedding consistency, aligning our pixel embeddings to the joint feature space of a pre-trained vision-language model, CLIP; and 2) semantic consistency, forcing our model to make the same predictions as CLIP over a set of carefully designed target classes with both known and unknown prototypes.

Contrastive Learning Language Modelling +4
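The abstract's "embedding consistency" term aligns learned pixel embeddings with a frozen CLIP feature space. A toy cosine-alignment loss consistent with that description (names and shapes are illustrative, not the paper's exact formulation):

```python
import numpy as np

def embedding_consistency_loss(pixel_emb, clip_emb):
    # Align pixel embeddings to a (frozen) vision-language feature space
    # by maximizing cosine similarity; both inputs are (N, D) arrays of
    # per-pixel features. Loss is 0 when directions match exactly.
    a = pixel_emb / np.linalg.norm(pixel_emb, axis=1, keepdims=True)
    b = clip_emb / np.linalg.norm(clip_emb, axis=1, keepdims=True)
    return float(1.0 - (a * b).sum(axis=1).mean())
```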

Self-supervised Semantic Segmentation Grounded in Visual Concepts

no code implementations 25 Mar 2022 Wenbin He, William Surmeier, Arvind Kumar Shekar, Liang Gou, Liu Ren

In this work, we propose a self-supervised pixel representation learning method for semantic segmentation by using visual concepts (i.e., groups of pixels with semantic meanings, such as parts, objects, and scenes) extracted from images.

Representation Learning Segmentation +2

Interactive Visual Pattern Search on Graph Data via Graph Representation Learning

no code implementations 18 Feb 2022 Huan Song, Zeng Dai, Panpan Xu, Liu Ren

GraphQ provides a visual query interface with a query editor and a multi-scale visualization of the results, as well as a user feedback mechanism for refining the results with additional constraints.

Graph Representation Learning

Unsupervised Discriminative Learning of Sounds for Audio Event Classification

no code implementations 19 May 2021 Sascha Hornauer, Ke Li, Stella X. Yu, Shabnam Ghaffarzadegan, Liu Ren

Recent progress in network-based audio event classification has shown the benefit of pre-training models on visual data such as ImageNet.

Classification Transfer Learning

Improving the Unsupervised Disentangled Representation Learning with VAE Ensemble

no code implementations 1 Jan 2021 Nanxiang Li, Shabnam Ghaffarzadegan, Liu Ren

We show both theoretically and experimentally, the VAE ensemble objective encourages the linear transformations connecting the VAEs to be trivial transformations, aligning the latent representations of different models to be "alike".

Disentanglement

VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection

no code implementations 27 Sep 2020 Liang Gou, Lincan Zou, Nanxiang Li, Michael Hofmann, Arvind Kumar Shekar, Axel Wendt, Liu Ren

In this work, we propose a visual analytics system, VATLD, equipped with a disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.

Autonomous Driving Decision Making +1

Visualizing Classification Structure of Large-Scale Classifiers

1 code implementation 12 Jul 2020 Bilal Alsallakh, Zhixin Yan, Shabnam Ghaffarzadegan, Zeng Dai, Liu Ren

We propose a measure to compute class similarity in large-scale classification based on prediction scores.

Classification General Classification
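The paper's exact measure is not reproduced in this listing. One plausible reading of "class similarity based on prediction scores" — correlating per-class scores across many samples — can be sketched as follows (an assumption for illustration, not the authors' formula):

```python
import numpy as np

def class_similarity(scores):
    # scores: (n_samples, n_classes) prediction scores from a classifier.
    # Two classes are treated as similar when the classifier assigns them
    # correlated scores across many samples, even on images of other classes.
    return np.corrcoef(scores.T)  # (n_classes, n_classes) similarity matrix
```

The resulting matrix can then be clustered or ordered to reveal groups of confusable classes.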

Improve Unsupervised Domain Adaptation with Mixup Training

1 code implementation 3 Jan 2020 Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, Liu Ren

Unsupervised domain adaptation studies the problem of utilizing a relevant source domain with abundant labels to build predictive modeling for an unannotated target domain.

Domain Generalization Human Activity Recognition +2
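The mixup operation the title refers to is standard: convex combinations of sample pairs. A minimal numpy sketch; treating the unannotated target domain via pseudo-labels is an assumption made here for illustration, not a claim about the paper's exact training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_src, y_src, x_tgt, y_tgt_pseudo, alpha=0.2):
    # Convex combination of source samples and (pseudo-labeled) target
    # samples; the mixing weight lam ~ Beta(alpha, alpha), as in
    # standard mixup training.
    lam = rng.beta(alpha, alpha)
    x = lam * x_src + (1 - lam) * x_tgt
    y = lam * y_src + (1 - lam) * y_tgt_pseudo
    return x, y, lam
```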

Disentangled Representation Learning with Sequential Residual Variational Autoencoder

no code implementations ICLR 2020 Nanxiang Li, Shabnam Ghaffarzadegan, Liu Ren

Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction.

Decoder Disentanglement
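The "augmented objective function" family the abstract mentions (beta-VAE and relatives) weights the KL term against reconstruction. A generic sketch of that trade-off, not the paper's Sequential Residual VAE itself:

```python
import numpy as np

def vae_objective(x, x_hat, mu, logvar, beta=4.0):
    # Squared-error reconstruction term plus a beta-weighted KL divergence
    # to a unit Gaussian prior; beta > 1 trades reconstruction quality
    # for disentanglement pressure on the latent code.
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    kl = np.mean(0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))
    return recon + beta * kl
```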

Controlling the Amount of Verbatim Copying in Abstractive Summarization

1 code implementation 23 Nov 2019 Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu

In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones.

Abstractive Text Summarization Language Modelling

Interpretable and Steerable Sequence Learning via Prototypes

2 code implementations 23 Jul 2019 Yao Ming, Panpan Xu, Huamin Qu, Liu Ren

The prediction is obtained by comparing the inputs to a few prototypes, which are exemplar cases in the problem domain.

Sentiment Analysis
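The "comparing the inputs to a few prototypes" step can be sketched as distances to exemplar embeddings turned into weights; a toy version (the scoring function and names are assumptions, not the paper's architecture):

```python
import numpy as np

def prototype_predict(embedding, prototypes):
    # Compare an input's embedding to exemplar prototypes; closer
    # prototypes receive higher weight via a softmax over negative
    # Euclidean distances. prototypes: (K, D), embedding: (D,).
    d = np.linalg.norm(prototypes - embedding, axis=1)
    w = np.exp(-d)
    return w / w.sum()  # similarity weights over the K prototypes
```

Because each prototype corresponds to a real exemplar case, the weights themselves serve as the model's explanation.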

An Incremental Dimensionality Reduction Method for Visualizing Streaming Multidimensional Data

no code implementations 10 May 2019 Takanori Fujiwara, Jia-Kai Chou, Shilpika, Panpan Xu, Liu Ren, Kwan-Liu Ma

We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data.

Dimensionality Reduction
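The paper's specific enhancements are not detailed in this snippet. As background, a baseline streaming PCA — the kind of incremental method such work extends — can maintain a running mean and scatter matrix (Welford-style updates) and re-derive principal axes on demand:

```python
import numpy as np

class StreamingPCA:
    # Minimal incremental PCA sketch (illustrative, not the paper's method):
    # keep a running mean and scatter matrix, updated one sample at a time,
    # and recompute principal axes whenever a visualization refresh is needed.
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))

    def partial_fit(self, batch):
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n            # Welford mean update
            self.scatter += np.outer(delta, x - self.mean)

    def components(self, k=2):
        cov = self.scatter / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)
        return vecs[:, np.argsort(vals)[::-1][:k]]  # top-k principal axes
```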

Do Convolutional Neural Networks Learn Class Hierarchy?

no code implementations 17 Oct 2017 Bilal Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren

We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data.

Image Classification
