Search Results for author: Sacha Morin

Found 6 papers, 1 paper with code

Spectral Temporal Contrastive Learning

no code implementations · 1 Dec 2023 · Sacha Morin, Somjit Nath, Samira Ebrahimi Kahou, Guy Wolf

This work is concerned with the temporal contrastive learning (TCL) setting, where positive pairs are defined using the sequential structure of the data rather than data augmentations, a setting more common in RL and robotics contexts.

Contrastive Learning · Self-Supervised Learning
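
As a concrete illustration of the temporal positive-pair construction described in the abstract, here is a minimal InfoNCE-style TCL sketch. It is not the authors' implementation; the encoder, temperature, and adjacent-frame pairing are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def temporal_infonce(encoder, trajectory, temperature=0.1):
    """InfoNCE loss with temporally adjacent frames as positive pairs.

    trajectory: (T, ...) tensor of sequential observations.
    Positives are (x_t, x_{t+1}); other frames in the batch act as negatives.
    """
    z = F.normalize(encoder(trajectory), dim=1)    # (T, d) unit-norm embeddings
    anchors, positives = z[:-1], z[1:]             # adjacent frames form positive pairs
    logits = anchors @ positives.T / temperature   # (T-1, T-1) similarity matrix
    labels = torch.arange(logits.size(0))          # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```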

ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning

no code implementations · 28 Sep 2023 · Qiao Gu, Alihusein Kuwajerwala, Sacha Morin, Krishna Murthy Jatavallabhula, Bipasha Sen, Aditya Agarwal, Corban Rivera, William Paul, Kirsty Ellis, Rama Chellappa, Chuang Gan, Celso Miguel de Melo, Joshua B. Tenenbaum, Antonio Torralba, Florian Shkurti, Liam Paull

We demonstrate the utility of this representation on a number of downstream planning tasks that are specified through abstract (language) prompts and require complex reasoning over spatial and semantic concepts.
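
The paper's title names an open-vocabulary 3D scene graph as the underlying representation. The sketch below is an illustrative guess at such a data structure, not the authors' code: nodes hold 3D geometry plus an open-vocabulary feature (e.g., a CLIP embedding), edges hold spatial relations, and a language prompt is answered by embedding similarity. All names and fields are hypothetical.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectNode:
    """One object in the scene: 3D geometry plus an open-vocabulary feature."""
    centroid: np.ndarray        # (3,) object position in the scene
    feature: np.ndarray         # e.g. a unit-normalized CLIP image embedding
    caption: str = ""           # language description of the object

@dataclass
class SceneGraph:
    nodes: list[ObjectNode] = field(default_factory=list)
    edges: dict = field(default_factory=dict)  # (i, j) -> relation, e.g. "on top of"

    def query(self, text_feature: np.ndarray, k: int = 1) -> list[int]:
        """Return indices of the k nodes best matching an embedded language prompt."""
        sims = [float(node.feature @ text_feature) for node in self.nodes]
        return sorted(range(len(sims)), key=lambda i: -sims[i])[:k]
```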

StepMix: A Python Package for Pseudo-Likelihood Estimation of Generalized Mixture Models with External Variables

2 code implementations · 7 Apr 2023 · Sacha Morin, Robin Legault, Félix Laliberté, Zsuzsa Bakk, Charles-Édouard Giguère, Roxane de la Sablonnière, Éric Lacourse

StepMix is an open-source Python package for the pseudo-likelihood estimation (one-, two- and three-step approaches) of generalized finite mixture models (latent profile and latent class analysis) with external variables (covariates and distal outcomes).
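
Since this is the one paper in the list with code, a short usage sketch may help. It follows the scikit-learn-style API from the package documentation as best I recall it; the exact model strings ("bernoulli", "gaussian_unit") and the synthetic data are assumptions, not material from the paper.

```python
import numpy as np
from stepmix.stepmix import StepMix

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 6)).astype(float)  # binary indicators (measurement model)
Y = rng.normal(size=(500, 1))                        # distal outcome (structural model)

# Three-step pseudo-likelihood estimation of a 3-class latent class model
# with an external (structural) variable.
model = StepMix(n_components=3, measurement="bernoulli",
                structural="gaussian_unit", n_steps=3, random_state=42)
model.fit(X, Y)
classes = model.predict(X)  # most likely latent class per observation
```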

Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers

no code implementations · 7 Mar 2022 · Miguel Saavedra-Ruiz, Sacha Morin, Liam Paull

In this work, we consider the problem of learning a perception model for monocular robot navigation using few annotated images.

Image Segmentation · Robot Navigation · +2
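
The few-annotation setting the abstract describes is commonly handled by freezing a self-supervised ViT and training only a small head on its patch features. The sketch below follows that pattern using the public DINO checkpoint; the linear head, the two-class output, and the frozen-backbone setup are assumptions consistent with the abstract, not the authors' code.

```python
import torch
import torch.nn as nn

# Frozen self-supervised ViT-S/16 backbone from DINO (via torch.hub);
# only the small linear head below would be trained on the annotated images.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(384, 2)  # ViT-S patch features -> e.g. {traversable, obstacle}

def segment(images):
    """Coarse per-patch segmentation logits for a batch of images."""
    with torch.no_grad():
        feats = backbone.get_intermediate_layers(images, n=1)[0]  # (B, 1+N, 384)
    patch_tokens = feats[:, 1:, :]   # drop the [CLS] token, keep the N patch tokens
    return head(patch_tokens)        # (B, N, 2) logits, one per 16x16 image patch
```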

Extendable and invertible manifold learning with geometry regularized autoencoders

no code implementations · 14 Jul 2020 · Andrés F. Duque, Sacha Morin, Guy Wolf, Kevin R. Moon

Our regularization is based on the diffusion potential distances from the recently proposed PHATE visualization method. It encourages the learned latent representation to follow the intrinsic data geometry, similar to manifold learning algorithms, while still enabling faithful extension to new data and reconstruction of data in the original feature space from latent coordinates.

Representation Learning
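
The regularization described above translates directly into a two-term autoencoder loss: reconstruction plus a penalty tying latent codes to a precomputed PHATE embedding of the same points. A minimal sketch, assuming an MSE form for both terms and a placeholder weight:

```python
import torch.nn.functional as F

def grae_loss(encoder, decoder, x, phate_embedding, lam=10.0):
    """Geometry-regularized autoencoder loss.

    x:               (B, d) input batch
    phate_embedding: (B, k) precomputed PHATE coordinates of the same points
    lam:             placeholder regularization weight
    """
    z = encoder(x)                              # latent codes, shape (B, k)
    recon = F.mse_loss(decoder(z), x)           # faithful reconstruction / inversion
    geometry = F.mse_loss(z, phate_embedding)   # latent follows intrinsic geometry
    return recon + lam * geometry
```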
