Search Results for author: Maria Attarian

Found 5 papers, 1 paper with code

Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers

no code implementations19 Mar 2024 Vidhi Jain, Maria Attarian, Nikhil J Joshi, Ayzaan Wahid, Danny Driess, Quan Vuong, Pannag R Sanketi, Pierre Sermanet, Stefan Welker, Christine Chan, Igor Gilitschenski, Yonatan Bisk, Debidatta Dwibedi

Vid2Robot uses cross-attention transformer layers between video features and the current robot state to produce actions that perform the same task as shown in the video.
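The conditioning scheme described above can be sketched roughly as follows. This is an illustrative minimal example, not the paper's actual architecture: the class name, dimensions, and the 7-dimensional action output are all assumptions; the only idea taken from the summary is that robot-state tokens attend over prompt-video features via cross-attention to produce actions.

```python
import torch
import torch.nn as nn

class VideoConditionedPolicy(nn.Module):
    """Hypothetical sketch: robot state queries attend over video features."""

    def __init__(self, dim=64, num_heads=4, action_dim=7):
        super().__init__()
        # Cross-attention: query = robot state, key/value = video features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.action_head = nn.Linear(dim, action_dim)  # assumed 7-DoF action

    def forward(self, state_tokens, video_features):
        attended, _ = self.cross_attn(state_tokens, video_features, video_features)
        # Pool attended state tokens and decode an action vector.
        return self.action_head(attended.mean(dim=1))

policy = VideoConditionedPolicy()
state = torch.randn(2, 1, 64)    # (batch, state tokens, dim)
video = torch.randn(2, 16, 64)   # (batch, video tokens, dim)
actions = policy(state, video)   # (batch, action_dim)
```

The key design point illustrated is that the demonstration video is injected as the key/value stream of the attention layer, so the policy's output is conditioned on the task shown in the video rather than on a language instruction.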

Transforming Neural Network Visual Representations to Predict Human Judgments of Similarity

no code implementations13 Oct 2020 Maria Attarian, Brett D. Roads, Michael C. Mozer

Deep-learning vision models have shown intriguing similarities and differences with respect to human vision.

Combining Learned Lyrical Structures and Vocabulary for Improved Lyric Generation

no code implementations12 Nov 2018 Pablo Samuel Castro, Maria Attarian

The use of language models for generating lyrics and poetry has received an increased interest in the last few years.

