no code implementations • 7 Feb 2024 • Deqian Kong, Dehong Xu, Minglu Zhao, Bo Pang, Jianwen Xie, Andrew Lizarraga, Yuhao Huang, Sirui Xie, Ying Nian Wu
We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent space to connect a Transformer-based trajectory generator and the final return.
1 code implementation • 6 Dec 2023 • Eric H. Jiang, Andrew Lizarraga
In this paper, we introduce the Skill-Driven Skill Recombination Algorithm (SDSRA), a novel framework that significantly improves the efficiency of achieving maximum entropy in reinforcement learning tasks.
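SDSRA operates in the maximum-entropy reinforcement learning setting, where the objective augments expected reward with the entropy of the policy. As a minimal illustration (not the paper's algorithm), the sketch below computes the entropy bonus for a discrete policy; the uniform policy attains the maximum, log of the number of actions:

```python
import numpy as np

def policy_entropy(probs):
    """Shannon entropy H(pi) = -sum_a pi(a) log pi(a) of a discrete policy."""
    probs = np.asarray(probs, dtype=float)
    return -np.sum(probs * np.log(probs))

# Max-entropy RL objective (schematically): J = E[reward] + alpha * H(pi).
# A uniform policy over 4 actions maximizes the entropy term: H = log(4).
uniform = np.array([0.25, 0.25, 0.25, 0.25])
peaked = np.array([0.97, 0.01, 0.01, 0.01])
```

Here `alpha` denotes the usual temperature weighting the entropy term; the peaked policy has much lower entropy, which the objective penalizes.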
1 code implementation • 10 Nov 2023 • Andrew Lizarraga, Brandon Taraku, Edouardo Honig, Ying Nian Wu, Shantanu H. Joshi
Given the complex geometry of white matter streamlines, Autoencoders have been proposed as a dimension-reduction tool to simplify the analysis of streamlines in a low-dimensional latent space.
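The idea of compressing streamlines into a low-dimensional latent space can be sketched with a linear stand-in for the autoencoder: flatten each streamline's 3-D point sequence into a vector and learn a low-rank encode/decode map via PCA. The data shapes and latent dimension below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "streamlines": 200 curves, each sampled at 50 3-D points,
# flattened into 150-dimensional vectors.
streamlines = rng.normal(size=(200, 50 * 3))

mean = streamlines.mean(axis=0)
X = streamlines - mean

# Linear encoder/decoder via SVD (PCA) with latent dimension 10.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
latent = X @ Vt[:k].T            # encode: (200, 10) latent codes
recon = latent @ Vt[:k] + mean   # decode back to streamline space
```

A trained nonlinear autoencoder replaces the two matrix products with learned encoder and decoder networks, but the encode-to-latent / decode-to-streamline structure is the same.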
no code implementations • 3 Sep 2022 • Andrew Lizarraga, Katherine L. Narr, Kirsten A. Donald, Shantanu H. Joshi
This proposed framework takes advantage of geometry-preserving properties of the Wasserstein-1 metric in order to achieve direct encoding and reconstruction of entire bundles of streamlines.
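In one dimension the Wasserstein-1 metric has a closed form that makes its geometry-preserving character easy to see: for two empirical distributions with equal sample counts, W1 is the mean absolute difference of the sorted samples. This sketch illustrates that special case only, not the paper's streamline encoding:

```python
import numpy as np

def wasserstein1(u, v):
    """W1 between two 1-D empirical distributions with equal sample counts:
    the mean absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# All mass at 0 versus all mass at 1: every unit of mass moves distance 1,
# so W1 = 1. Shifting a distribution by c shifts W1 by exactly |c|,
# which is the geometry-preserving behavior exploited above.
a = np.zeros(100)
b = np.ones(100)
```

For point clouds in higher dimensions (such as streamlines), W1 is instead computed as an optimal-transport problem, but the interpretation as minimal mass-moving cost is identical.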
no code implementations • 8 Aug 2021 • Andrew Lizarraga, David Lee, Antoni Kubicki, Ashish Sahib, Elvis Nunez, Katherine Narr, Shantanu H. Joshi
We present a geometric framework for aligning white matter fiber tracts.
no code implementations • 27 Apr 2021 • Elvis Nunez, Andrew Lizarraga, Shantanu H. Joshi
We present SrvfNet, a generative deep learning framework for the joint multiple alignment of large collections of functional data, represented as square-root velocity functions (SRVFs), to their templates.
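The square-root velocity function of a curve f is defined as q(t) = f'(t) / sqrt(|f'(t)|); a useful sanity check is that the squared L2 norm of q recovers the arc length of f. A minimal numerical sketch (finite differences on a toy function, not the SrvfNet pipeline):

```python
import numpy as np

def srvf(f, t):
    """Square-root velocity function q(t) = f'(t) / sqrt(|f'(t)|),
    computed with finite differences."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

t = np.linspace(0.0, 1.0, 101)
f = t**2          # f'(t) = 2t, so q(t) = sqrt(2t)
q = srvf(f, t)

# Key property: integral of q^2 equals integral of |f'|, the "length" of f.
# Here: integral of 2t over [0, 1] = 1.
length = np.trapz(q**2, t)
```

Working with q instead of f turns reparameterization of the curve into an isometry of L2, which is what makes joint alignment to a template well-posed.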