Search Results for author: Brian Belgodere

Found 4 papers, 1 paper with code

Do Large Scale Molecular Language Representations Capture Important Structural Information?

no code implementations • 17 Jun 2021 • Jerret Ross, Brian Belgodere, Vijil Chenthamarakshan, Inkit Padhi, Youssef Mroueh, Payel Das

Various representation learning methods in a supervised setting, including features extracted using graph neural networks, have emerged for such tasks.

Drug Discovery · Molecular Property Prediction · +1

Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge

1 code implementation • 21 Dec 2020 • Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff, Richard A. Young, Brian Belgodere

Image captioning has recently demonstrated impressive progress, largely owing to the introduction of neural network algorithms trained on curated datasets like MS-COCO.

Image Captioning

P2L: Predicting Transfer Learning for Images and Semantic Relations

no code implementations • 20 Aug 2019 • Bishwaranjan Bhattacharjee, John R. Kender, Matthew Hill, Parijat Dube, Siyu Huo, Michael R. Glass, Brian Belgodere, Sharath Pankanti, Noel Codella, Patrick Watson

We use this measure, which we call "Predict To Learn" ("P2L"), in the two very different domains of images and semantic relations, where it predicts, from a set of "source" models, the one model most likely to produce effective transfer for training a given "target" model.

Transfer Learning
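
Below is a minimal, hedged sketch of the selection step described in the P2L abstract above: given a pool of candidate source models, pick the one whose predicted transfer effectiveness for a target task is highest. The scoring function and all names here are placeholders for illustration; the paper's actual transferability predictor is not reproduced.

```python
# Hypothetical sketch of P2L-style source-model selection.
# Assumption: some score_fn predicts how well a source model will transfer
# to the target data; the real P2L measure is defined in the paper.
from typing import Callable, Dict


def select_source_model(
    source_models: Dict[str, object],
    target_data: object,
    score_fn: Callable[[object, object], float],
) -> str:
    """Return the name of the source model with the highest predicted
    transferability score for the given target data."""
    scores = {
        name: score_fn(model, target_data)
        for name, model in source_models.items()
    }
    return max(scores, key=scores.get)
```

In use, `score_fn` would be the learned predictor, and the selected source model's weights would then initialize training on the target task.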

Automatic Labeling of Data for Transfer Learning

no code implementations • 24 Mar 2019 • Parijat Dube, Bishwaranjan Bhattacharjee, Siyu Huo, Patrick Watson, John Kender, Brian Belgodere

Transfer learning uses trained weights from a source model as the initial weights for training on a target dataset.

Transfer Learning
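
The abstract snippet above describes the standard transfer-learning setup: initialize from source-model weights, then fine-tune on the target data. A minimal sketch follows, assuming PyTorch/torchvision as the framework and a ResNet-18 backbone, neither of which is prescribed by the paper; the target-class count and hyperparameters are placeholders.

```python
# Minimal transfer-learning setup sketch (assumed framework: PyTorch/torchvision).
import torch
import torch.nn as nn
from torchvision import models

# Load a source model whose trained weights serve as the initialization.
source_model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the classifier head to match the (hypothetical) target dataset.
num_target_classes = 10  # placeholder value
source_model.fc = nn.Linear(source_model.fc.in_features, num_target_classes)

# Fine-tune on the target dataset (dataloader and training loop omitted;
# this only shows the weight-reuse setup).
optimizer = torch.optim.SGD(source_model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```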
