Search Results for author: Andrew Lizarraga

Found 6 papers, 2 papers with code

Latent Plan Transformer: Planning as Latent Variable Inference

no code implementations • 7 Feb 2024 • Deqian Kong, Dehong Xu, Minglu Zhao, Bo Pang, Jianwen Xie, Andrew Lizarraga, Yuhao Huang, Sirui Xie, Ying Nian Wu

We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent space to connect a Transformer-based trajectory generator and the final return.
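The abstract describes a latent variable connecting a trajectory generator and the final return. As a loose, hypothetical illustration of that planning-as-inference idea (not the LPT architecture itself), the sketch below optimizes a latent "plan" vector against a toy linear return predictor and then decodes a trajectory from it; all weights and shapes here are invented for the example.

```python
import numpy as np

# Toy sketch (assumed, not the paper's model): a latent plan z is
# optimized so that a return predictor r(z) = w_ret @ z is maximized,
# then a trajectory is decoded from z by a fixed linear "generator".
rng = np.random.default_rng(0)
W_dec = rng.normal(size=(8, 4))   # hypothetical trajectory decoder weights
w_ret = rng.normal(size=4)        # hypothetical return-head weights

z = np.zeros(4)
for _ in range(100):              # gradient ascent on predicted return
    z += 0.1 * w_ret              # d(w_ret @ z)/dz = w_ret

trajectory = W_dec @ z            # decode a trajectory from the plan z
predicted_return = float(w_ret @ z)
```

In the toy setup the ascent direction is constant, so the predicted return grows with each step; the real model would replace both linear maps with a Transformer generator and a learned return model.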

SDSRA: A Skill-Driven Skill-Recombination Algorithm for Efficient Policy Learning

1 code implementation • 6 Dec 2023 • Eric H. Jiang, Andrew Lizarraga

In this paper, we introduce the Skill-Driven Skill Recombination Algorithm (SDSRA), a novel framework that significantly improves the efficiency of achieving maximum entropy in reinforcement learning tasks.

reinforcement-learning
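The abstract refers to the maximum-entropy objective in reinforcement learning. As a small, standard sketch of that objective (not SDSRA itself), the entropy-regularized "soft" value of a state is V(s) = α · log Σ_a exp(Q(s, a) / α), with the corresponding policy a softmax over Q / α:

```python
import numpy as np

# Standard max-entropy RL quantities (a sketch, not the SDSRA algorithm):
# soft value V(s) = alpha * log sum_a exp(Q(s, a) / alpha), and the
# matching Boltzmann policy pi(a|s) proportional to exp(Q(s, a) / alpha).
def soft_value(q, alpha=1.0):
    q = np.asarray(q, dtype=float)
    m = q.max()                                   # stabilize log-sum-exp
    return m + alpha * np.log(np.exp((q - m) / alpha).sum())

def soft_policy(q, alpha=1.0):
    q = np.asarray(q, dtype=float)
    p = np.exp((q - q.max()) / alpha)
    return p / p.sum()
```

The temperature α trades off reward against policy entropy: large α pushes the policy toward uniform, small α recovers the greedy max.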

Differentiable VQ-VAE's for Robust White Matter Streamline Encodings

1 code implementation • 10 Nov 2023 • Andrew Lizarraga, Brandon Taraku, Edouardo Honig, Ying Nian Wu, Shantanu H. Joshi

Given the complex geometry of white matter streamlines, autoencoders have been proposed as a dimension-reduction tool to simplify the analysis of streamlines in a low-dimensional latent space.

Dimensionality Reduction
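The core discretization step inside any VQ-VAE is vector quantization: each encoder output vector is snapped to its nearest entry in a learned codebook. A minimal sketch of that step (generic VQ, not this paper's differentiable variant or its streamline encoder):

```python
import numpy as np

# Generic VQ step (a sketch): map each encoder output to the nearest
# codebook vector under squared Euclidean distance.
def quantize(z, codebook):
    # z: (n, d) encoder outputs; codebook: (k, d) learned code vectors
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k)
    idx = d2.argmin(axis=1)          # nearest code index per vector
    return codebook[idx], idx        # quantized vectors and their indices
```

In training, gradients must flow through this non-differentiable argmin (e.g. via a straight-through estimator); making that step differentiable is the aspect the paper's title highlights.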

StreamNet: A WAE for White Matter Streamline Analysis

no code implementations • 3 Sep 2022 • Andrew Lizarraga, Katherine L. Narr, Kirsten A. Donald, Shantanu H. Joshi

This proposed framework takes advantage of geometry-preserving properties of the Wasserstein-1 metric in order to achieve direct encoding and reconstruction of entire bundles of streamlines.
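The abstract leans on the Wasserstein-1 metric. For intuition, in one dimension with two equal-size samples the empirical Wasserstein-1 distance reduces to the mean absolute difference of the sorted samples, because the optimal transport plan matches order statistics. A minimal sketch of that special case (not StreamNet's WAE objective):

```python
import numpy as np

# Empirical Wasserstein-1 distance between two equal-size 1-D samples
# (a sketch of the metric, not the paper's streamline encoder): sort
# both samples and average the pointwise absolute differences.
def wasserstein_1(x, y):
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "sketch assumes equal-size samples"
    return np.abs(x - y).mean()
```

The geometry-preserving property the abstract mentions comes from this being a true metric on distributions that respects the underlying ground distance, unlike e.g. KL divergence.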

SrvfNet: A Generative Network for Unsupervised Multiple Diffeomorphic Shape Alignment

no code implementations • 27 Apr 2021 • Elvis Nunez, Andrew Lizarraga, Shantanu H. Joshi

We present SrvfNet, a generative deep learning framework for the joint multiple alignment of large collections of functional data comprising square-root velocity functions (SRVF) to their templates.
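The square-root velocity function (SRVF) named in the abstract is a standard transform from shape analysis: q(t) = f′(t) / √|f′(t)|. A minimal numerical sketch using finite differences (the transform only, not the SrvfNet alignment network):

```python
import numpy as np

# Square-root velocity function (SRVF) of a sampled curve f(t):
#   q(t) = f'(t) / sqrt(|f'(t)|),
# with f'(t) approximated by finite differences. A sketch of the
# transform itself, not the paper's alignment framework.
def srvf(f, t):
    df = np.gradient(np.asarray(f, float), t)    # approximate f'(t)
    return df / np.sqrt(np.abs(df) + 1e-12)      # guard against f' = 0
```

Under this transform, the elastic distance between curves becomes the ordinary L2 distance between their SRVFs, which is why alignment methods work in SRVF space.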
