Search Results for author: Aditya Sanghi

Found 18 papers, 6 papers with code

Make-A-Shape: a Ten-Million-scale 3D Shape Model

no code implementations • 20 Jan 2024 • Ka-Hei Hui, Aditya Sanghi, Arianna Rampini, Kamal Rahimi Malekshan, Zhengzhe Liu, Hooman Shayani, Chi-Wing Fu

We then make the representation generatable by a diffusion model by devising a subband coefficient packing scheme that lays out the representation in a low-resolution grid.
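
An illustrative sketch of what such subband packing could look like, using PyWavelets on a toy voxel grid; the wavelet choice, the single decomposition level, and the channel layout are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
import pywt  # PyWavelets

# Toy stand-in for a high-resolution shape representation (e.g. a distance-field grid).
volume = np.random.randn(64, 64, 64).astype(np.float32)

# One level of a 3D discrete wavelet transform yields 8 subbands
# ('aaa' is the coarse approximation, the rest hold detail coefficients),
# each at half the spatial resolution of the input.
subbands = pywt.dwtn(volume, wavelet="haar")

# Pack the subband coefficients as channels of a single low-resolution grid,
# i.e. an (8, 32, 32, 32) tensor that a grid-based diffusion model could consume.
packed = np.stack([subbands[key] for key in sorted(subbands)], axis=0)
print(packed.shape)  # (8, 32, 32, 32)
```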

SLiMe: Segment Like Me

1 code implementation • 6 Sep 2023 • Aliasghar Khani, Saeid Asgari Taghanaki, Aditya Sanghi, Ali Mahdavi Amiri, Ghassan Hamarneh

Then, using the extracted attention maps, the text embeddings of Stable Diffusion are optimized so that each of them learns about a single segmented region from the training image.

3D Shape Generation · Segmentation
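
A rough PyTorch-style sketch of the kind of optimization loop the SLiMe entry describes; the `extract_attention_maps` callable is a hypothetical placeholder for attention extraction from a frozen Stable Diffusion model, and the loss choice is an assumption, not the authors' code:

```python
import torch
import torch.nn.functional as F

def optimize_text_embeddings(extract_attention_maps, image, mask, text_embeddings,
                             steps=100, lr=1e-3):
    """Tune one text embedding per segment so its attention map covers that segment.

    extract_attention_maps: hypothetical callable returning (K, H, W) attention maps
    from a frozen Stable Diffusion model for K text embeddings on one image.
    mask: (H, W) integer segmentation mask with values in [0, K).
    """
    text_embeddings = text_embeddings.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([text_embeddings], lr=lr)
    for _ in range(steps):
        attn = extract_attention_maps(image, text_embeddings)        # (K, H, W)
        # Per-pixel cross-entropy against the mask encourages each embedding
        # to attend to exactly one segmented region.
        loss = F.cross_entropy(attn.unsqueeze(0), mask.long().unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return text_embeddings.detach()
```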

TExplain: Explaining Learned Visual Features via Pre-trained (Frozen) Language Models

no code implementations • 1 Sep 2023 • Saeid Asgari Taghanaki, Aliasghar Khani, Amir Khasahmadi, Aditya Sanghi, Karl D. D. Willis, Ali Mahdavi-Amiri

These sentences are then used to extract the most frequent words, providing a comprehensive understanding of the learned features and patterns within the classifier.

Decision Making
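
The final step described in the TExplain entry (collecting the most frequent words from the generated sentences) is simple to illustrate; the sentences below are made up, standing in for output from the frozen language model:

```python
import re
from collections import Counter

# Hypothetical sentences generated by the frozen language model for one visual feature.
sentences = [
    "a small brown dog sitting on grass",
    "a dog running across a grassy field",
    "a brown dog playing with a ball",
]

stopwords = {"a", "the", "on", "with", "across"}
words = [w for s in sentences for w in re.findall(r"[a-z]+", s.lower())
         if w not in stopwords]

# The most frequent words summarize what the visual feature appears to encode.
print(Counter(words).most_common(5))  # e.g. [('dog', 3), ('brown', 2), ...]
```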

Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation

no code implementations • 8 Jul 2023 • Aditya Sanghi, Pradeep Kumar Jayaraman, Arianna Rampini, Joseph Lambourne, Hooman Shayani, Evan Atherton, Saeid Asgari Taghanaki

Significant progress has recently been made in creative applications of large pre-trained models for downstream tasks in 3D vision, such as text-to-shape generation.

3D Shape Generation · Text-to-Shape Generation

Reconstructing editable prismatic CAD from rounded voxel models

no code implementations • 2 Sep 2022 • Joseph G. Lambourne, Karl D. D. Willis, Pradeep Kumar Jayaraman, Longfei Zhang, Aditya Sanghi, Kamal Rahimi Malekshan

Reverse engineering a CAD shape from other representations is an important geometric processing step for many downstream applications.

SolidGen: An Autoregressive Model for Direct B-rep Synthesis

no code implementations • 26 Mar 2022 • Pradeep Kumar Jayaraman, Joseph G. Lambourne, Nishkrit Desai, Karl D. D. Willis, Aditya Sanghi, Nigel J. W. Morris

Key to achieving this is our Indexed Boundary Representation that references B-rep vertices, edges and faces in a well-defined hierarchy to capture the geometric and topological relations suitable for use with machine learning.
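
A minimal sketch of what an indexed, hierarchical B-rep encoding along these lines might look like; the exact field layout is an assumption for illustration, not SolidGen's actual format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class IndexedBRep:
    """Vertices store coordinates; edges index vertices; faces index edges.

    This mirrors the vertex -> edge -> face hierarchy described above, so an
    autoregressive model can predict each level conditioned on the previous one.
    """
    vertices: List[Tuple[float, float, float]]  # 3D points
    edges: List[Tuple[int, int]]                # pairs of vertex indices
    faces: List[List[int]]                      # lists of edge indices

# A single square face as a tiny example.
brep = IndexedBRep(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
    faces=[[0, 1, 2, 3]],
)
```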

UNIST: Unpaired Neural Implicit Shape Translation Network

no code implementations • CVPR 2022 • Qimin Chen, Johannes Merz, Aditya Sanghi, Hooman Shayani, Ali Mahdavi-Amiri, Hao Zhang

We introduce UNIST, the first deep neural implicit model for general-purpose, unpaired shape-to-shape translation, in both 2D and 3D domains.

Position · Style Transfer +1

Group-disentangled Representation Learning with Weakly-Supervised Regularization

no code implementations • 23 Oct 2021 • Linh Tran, Amir Hosein Khasahmadi, Aditya Sanghi, Saeid Asgari

Learning interpretable and human-controllable representations that uncover factors of variation in data remains an ongoing key challenge in representation learning.

Disentanglement · Transfer Learning

UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps

1 code implementation • ICCV 2021 • Peter Meltzer, Hooman Shayani, Amir Khasahmadi, Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph Lambourne

Boundary Representations (B-Reps) are the industry standard in 3D Computer Aided Design/Manufacturing (CAD/CAM) and industrial design due to their fidelity in representing stylistic details.

Computational Efficiency · Unsupervised Few-Shot Learning

CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly

no code implementations • CVPR 2022 • Fenggen Yu, Zhiqin Chen, Manyi Li, Aditya Sanghi, Hooman Shayani, Ali Mahdavi-Amiri, Hao Zhang

We introduce CAPRI-Net, a neural network for learning compact and interpretable implicit representations of 3D computer-aided design (CAD) models, in the form of adaptive primitive assemblies.

CAD Reconstruction
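
An illustrative sketch of the general idea behind a primitive assembly: simple implicit primitives combined with CSG-style min/max operations into a compact implicit shape. The primitives and operations below are hand-picked for illustration, not CAPRI-Net's learned assembly:

```python
import numpy as np

def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center, axis=-1) - radius

def box_sdf(p, center, half_size):
    q = np.abs(p - center) - half_size
    return np.linalg.norm(np.maximum(q, 0), axis=-1) + np.minimum(q.max(axis=-1), 0)

def assembly_sdf(p):
    # Union (min) and difference (max with negation) assemble primitives
    # into a compact, interpretable implicit shape: a box with a spherical hole.
    body = box_sdf(p, np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5, 0.5]))
    hole = sphere_sdf(p, np.array([0.0, 0.0, 0.0]), 0.4)
    return np.maximum(body, -hole)

points = np.random.uniform(-1.5, 1.5, size=(5, 3))
print(assembly_sdf(points))  # negative inside the shape, positive outside
```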

UV-Net: Learning from Boundary Representations

1 code implementation • CVPR 2021 • Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph G. Lambourne, Karl D. D. Willis, Thomas Davies, Hooman Shayani, Nigel Morris

We introduce UV-Net, a novel neural network architecture and representation designed to operate directly on Boundary representation (B-rep) data from 3D CAD models.

Vector Graphics
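
A minimal sketch of the kind of per-face input such a network could consume: a regular grid of 3D points sampled in a surface's UV parameter domain, arranged as image-like channels for a CNN. The parametric surface and grid size below are assumptions for illustration:

```python
import numpy as np

def uv_grid_features(surface_fn, n=10):
    """Sample an n x n grid in the UV domain of one B-rep face.

    surface_fn maps (u, v) in [0, 1]^2 to a 3D point; the resulting
    (3, n, n) array can be fed to a 2D CNN like an image.
    """
    u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    pts = np.stack([surface_fn(ui, vi) for ui, vi in zip(u.ravel(), v.ravel())])
    return pts.reshape(n, n, 3).transpose(2, 0, 1)  # channels-first

# Toy parametric patch (gently curved) standing in for a real B-rep face.
patch = lambda u, v: np.array([u, v, 0.2 * np.sin(np.pi * u) * np.sin(np.pi * v)])
print(uv_grid_features(patch).shape)  # (3, 10, 10)
```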

Info3D: Representation Learning on 3D Objects using Mutual Information Maximization and Contrastive Learning

no code implementations • ECCV 2020 • Aditya Sanghi

We show that we can maximize the mutual information between 3D objects and their "chunks" to improve the representations in aligned datasets.

Clustering · Contrastive Learning +3
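
A hedged sketch of the contrastive objective the Info3D entry alludes to: an InfoNCE-style loss that pulls an object's global embedding toward embeddings of its own chunks and away from other objects' chunks. The encoders, batch construction, and temperature are placeholders, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def info_nce(object_emb, chunk_emb, temperature=0.07):
    """object_emb, chunk_emb: (B, D) embeddings, where row i of chunk_emb comes
    from a chunk of object i. Maximizing agreement on the diagonal of the
    similarity matrix is a standard lower bound on their mutual information."""
    z1 = F.normalize(object_emb, dim=-1)
    z2 = F.normalize(chunk_emb, dim=-1)
    logits = z1 @ z2.t() / temperature                 # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random stand-ins for encoder outputs.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```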

Jigsaw-VAE: Towards Balancing Features in Variational Autoencoders

no code implementations • 12 May 2020 • Saeid Asgari Taghanaki, Mohammad Havaei, Alex Lamb, Aditya Sanghi, Ara Danielyan, Tonya Custis

The latent variables learned by VAEs have seen considerable interest as an unsupervised way of extracting features, which can then be used for downstream tasks.

How Powerful Are Randomly Initialized Pointcloud Set Functions?

no code implementations • 11 Mar 2020 • Aditya Sanghi, Pradeep Kumar Jayaraman

We study random embeddings produced by untrained neural set functions and show that they are powerful representations that capture the input features well for downstream tasks such as classification and are often linearly separable.

General Classification
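
A minimal sketch of the experiment this entry describes: a PointNet-style set function with random, untrained weights produces embeddings, and only a linear classifier is trained on top. The layer sizes, toy data, and use of scikit-learn are assumptions for illustration:

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# Untrained (randomly initialized) permutation-invariant set function:
# a shared per-point MLP followed by max pooling over the points.
phi = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256))

@torch.no_grad()
def random_embedding(points):              # points: (B, N, 3)
    return phi(points).max(dim=1).values   # (B, 256)

# Toy point clouds and labels stand in for a real dataset (e.g. ModelNet).
train_pts, train_y = torch.randn(100, 128, 3), torch.randint(0, 4, (100,))
test_pts, test_y = torch.randn(20, 128, 3), torch.randint(0, 4, (20,))

# Training only a linear classifier on the frozen random embeddings tests
# whether they are (approximately) linearly separable.
clf = LogisticRegression(max_iter=1000).fit(
    random_embedding(train_pts).numpy(), train_y.numpy())
print(clf.score(random_embedding(test_pts).numpy(), test_y.numpy()))
```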
