Search Results for author: Tuan Le

Found 5 papers, 1 paper with code

A Chain-of-Thought Prompting Approach with LLMs for Evaluating Students' Formative Assessment Responses in Science

no code implementations • 21 Mar 2024 • Clayton Cohn, Nicole Hutchins, Tuan Le, Gautam Biswas

This paper explores the use of large language models (LLMs) to score and explain short-answer assessments in K-12 science.

Tasks: Active Learning, Math

Navigating the Design Space of Equivariant Diffusion-Based Generative Models for De Novo 3D Molecule Generation

no code implementations • 29 Sep 2023 • Tuan Le, Julian Cremer, Frank Noé, Djork-Arné Clevert, Kristof Schütt

To further strengthen the applicability of diffusion models to limited training data, we investigate the transferability of EQGAT-diff trained on the large PubChem3D dataset with implicit hydrogen atoms to target different data distributions.

Tasks: 3D Molecule Generation, Drug Discovery

Equivariant Graph Attention Networks for Molecular Property Prediction

no code implementations • 20 Feb 2022 • Tuan Le, Frank Noé, Djork-Arné Clevert

Learning and reasoning about 3D molecular structures of varying size is an emerging and important challenge in machine learning, especially in drug discovery.

Tasks: Drug Discovery, Graph Attention, +2
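As a hedged illustration of why equivariant architectures suit 3D molecules (this is a toy sketch, not the paper's EQGAT model): features built from pairwise atomic distances are invariant to rotations and translations of the molecule, so predictions do not depend on an arbitrary choice of coordinate frame.

```python
import numpy as np

def pairwise_distances(coords):
    # Pairwise atomic distances: unchanged by any rotation or translation
    # of the input coordinates, making them rotation-invariant features.
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# A random 5-atom "molecule" and a random orthogonal matrix (via QR).
rng = np.random.default_rng(0)
coords = rng.normal(size=(5, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
rotated = coords @ q.T + np.array([1.0, -2.0, 0.5])  # rotate + translate

# The distance matrix is identical for both poses.
assert np.allclose(pairwise_distances(coords), pairwise_distances(rotated))
```

Equivariant networks such as graph attention variants extend this idea: invariant scalars and equivariant vector features are mixed so that outputs transform predictably with the input geometry.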

Unsupervised Learning of Group Invariant and Equivariant Representations

no code implementations • 15 Feb 2022 • Robin Winter, Marco Bertolini, Tuan Le, Frank Noé, Djork-Arné Clevert

In this work, we extend group invariant and equivariant representation learning to the field of unsupervised deep learning.

Tasks: Representation Learning

Parameterized Hypercomplex Graph Neural Networks for Graph Classification

1 code implementation • 30 Mar 2021 • Tuan Le, Marco Bertolini, Frank Noé, Djork-Arné Clevert

Despite recent advances in representation learning in hypercomplex (HC) space, this subject is still vastly unexplored in the context of graphs.

Tasks: General Classification, Graph Classification, +1
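Hypercomplex representation learning builds on algebras such as the quaternions, whose multiplication rule couples the components of two vectors. As a hedged sketch of the underlying algebra only (not the paper's parameterized hypercomplex layers, which learn the multiplication rule from data), the quaternion Hamilton product looks like this:

```python
import numpy as np

def hamilton_product(p, q):
    # Quaternion multiplication: (a, b, c, d) represents a + bi + cj + dk,
    # with i^2 = j^2 = k^2 = ijk = -1.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 0.0, 1.0])

# i * j = k, while j * i = -k: the product is non-commutative.
assert np.allclose(hamilton_product(i, j), k)
assert np.allclose(hamilton_product(j, i), -k)
```

Because one such product shares its coefficient structure across all four components, layers built on it use roughly a quarter of the parameters of an unconstrained linear map, which is the weight-sharing effect hypercomplex networks exploit.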
