Search Results for author: Binh X. Nguyen

Found 8 papers, 8 papers with code

Deep Federated Learning for Autonomous Driving

1 code implementation • 12 Oct 2021 • Anh Nguyen, Tuong Do, Minh Tran, Binh X. Nguyen, Chien Duong, Tu Phan, Erman Tjiputra, Quang D. Tran

We design a new Federated Autonomous Driving network (FADNet) that can improve model stability, ensure convergence, and handle imbalanced data distributions while being trained with federated learning methods (a rough federated-averaging sketch follows this entry).

Autonomous Driving • Federated Learning
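FADNet itself is described in the paper; purely as an illustration of the federated training setup the abstract refers to, below is a minimal federated-averaging sketch in which simulated clients fit a shared linear model on their local data. The names local_update and fed_avg are illustrative, not from the paper.

# Minimal federated-averaging sketch (illustrative only; not FADNet itself).
# Each client runs a few local SGD steps on its own data, and the server
# averages the returned weights, weighted by local dataset size.
import numpy as np

def local_update(weights, X, y, lr=0.05, steps=5):
    """Run a few local SGD steps of linear regression on one client's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)    # MSE gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):                               # four simulated clients
    X = rng.normal(size=(40, 3))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    clients.append((X, y))

global_w = np.zeros(3)
for round_id in range(30):                       # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print("recovered weights:", np.round(global_w, 2))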

Coarse-to-Fine Reasoning for Visual Question Answering

2 code implementations • 6 Oct 2021 • Binh X. Nguyen, Tuong Do, Huy Tran, Erman Tjiputra, Quang D. Tran, Anh Nguyen

Bridging the semantic gap between image and question is an important step to improve the accuracy of the Visual Question Answering (VQA) task.

Question Answering • Visual Question Answering

Multiple Meta-model Quantifying for Medical Visual Question Answering

2 code implementations • 19 May 2021 • Tuong Do, Binh X. Nguyen, Erman Tjiputra, Minh Tran, Quang D. Tran, Anh Nguyen

However, most of the existing medical VQA methods rely on external data for transfer learning, while the meta-data within the dataset is not fully utilized.

Medical Visual Question Answering • Meta-Learning • +3

Graph-based Person Signature for Person Re-Identifications

1 code implementation • 14 Apr 2021 • Binh X. Nguyen, Binh D. Nguyen, Tuong Do, Erman Tjiputra, Quang D. Tran, Anh Nguyen

In this paper, we propose a new method that aggregates detailed person descriptions (attribute labels) and visual features (body parts and global features) into a graph, namely the Graph-based Person Signature, and utilizes Graph Convolutional Networks to learn the topological structure of a person's visual signature (a toy graph-convolution sketch follows this entry).

Attribute • Multi-Task Learning • +1
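The paper's actual graph construction and training are not reproduced here; purely as an illustration of the graph-convolution operation it builds on, the following is a minimal single-layer GCN propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), over a toy graph whose nodes could stand for attribute and body-part features. The adjacency, dimensions, and node layout are made up for the example.

# Minimal single-layer GCN propagation sketch (illustrative; not the paper's model).
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step with a symmetric-normalized adjacency."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)       # ReLU activation

rng = np.random.default_rng(0)
num_nodes, in_dim, out_dim = 5, 8, 4             # e.g. 5 attribute/part nodes
A = np.array([[0, 1, 1, 0, 0],                   # toy symmetric adjacency
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(num_nodes, in_dim))         # initial node features
W = rng.normal(size=(in_dim, out_dim))           # layer weights (learned in practice)

H_next = gcn_layer(A, H, W)
print("updated node features:", H_next.shape)    # (5, 4)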

Deep Metric Learning Meets Deep Clustering: A Novel Unsupervised Approach for Feature Embedding

1 code implementation • 9 Sep 2020 • Binh X. Nguyen, Binh D. Nguyen, Gustavo Carneiro, Erman Tjiputra, Quang D. Tran, Thanh-Toan Do

Based on pseudo labels, we propose a novel unsupervised metric loss which enforces the positive concentration and negative separation of samples in the embedding space (an illustrative sketch follows this entry).

Benchmarking • Clustering • +2
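As a rough illustration of the general idea (pseudo labels from clustering driving a metric loss), here is a small sketch that assigns pseudo labels with plain k-means and then computes a contrastive-style loss that concentrates same-cluster pairs and separates different-cluster pairs beyond a margin. This is not the paper's exact formulation; the function names and margin value are illustrative.

# Illustrative pseudo-label metric loss sketch (not the paper's exact loss).
import numpy as np

def kmeans_pseudo_labels(emb, k, iters=10, seed=0):
    """Assign a pseudo label to each embedding via plain k-means."""
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(len(emb), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(emb[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = emb[labels == c].mean(axis=0)
    return labels

def pseudo_label_metric_loss(emb, labels, margin=1.0):
    """Pull same-pseudo-label pairs together, push others past a margin."""
    dists = np.linalg.norm(emb[:, None] - emb[None], axis=-1)
    same = labels[:, None] == labels[None]
    np.fill_diagonal(same, False)                      # skip self-pairs
    pos = dists[same] ** 2                             # positive concentration
    neg = np.maximum(margin - dists[~same], 0.0) ** 2  # negative separation
    return pos.mean() + neg.mean()

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(32, 16))                 # toy embedding batch
pseudo = kmeans_pseudo_labels(embeddings, k=4)
print("loss:", round(pseudo_label_metric_loss(embeddings, pseudo), 3))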

Overcoming Data Limitation in Medical Visual Question Answering

2 code implementations • 26 Sep 2019 • Binh D. Nguyen, Thanh-Toan Do, Binh X. Nguyen, Tuong Do, Erman Tjiputra, Quang D. Tran

Traditional approaches for Visual Question Answering (VQA) require a large amount of labeled data for training.

Ranked #13 on Medical Visual Question Answering on VQA-RAD (using extra training data)

Denoising • Medical Visual Question Answering • +3
