no code implementations • 5 Jul 2022 • Thong Nguyen, Cong-Duy Nguyen, Xiaobao Wu, Anh Tuan Luu
Inheriting the spirit of Transfer Learning, research in vision-and-language (V&L) has devised multiple pretraining techniques on large-scale datasets in order to enhance performance on downstream tasks.
1 code implementation • 16 Jun 2022 • Vijay Prakash Dwivedi, Ladislav Rampášek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, Dominique Beaini
Graph Neural Networks (GNNs) that are based on the message passing (MP) paradigm exchange information between 1-hop neighbors to build node representations at each layer.
Ranked #1 on Node Classification on PascalVOC-SP
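To make the message-passing (MP) scheme described above concrete, here is a minimal NumPy sketch of a single MP layer with mean aggregation over 1-hop neighbors; the weight names and the mean/ReLU choices are illustrative assumptions, not any one paper's exact layer.

```python
import numpy as np

def message_passing_layer(A, H, W_self, W_neigh):
    """One generic message-passing (MP) layer: each node aggregates
    features from its 1-hop neighbors, then combines them with its own.
    A: (n, n) adjacency matrix; H: (n, d) node features;
    W_self, W_neigh: (d, d_out) learnable weights."""
    # Mean-aggregate neighbor messages (row-normalize A).
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    neigh = (A @ H) / deg
    # Combine self and neighbor information, apply ReLU.
    return np.maximum(H @ W_self + neigh @ W_neigh, 0.0)

# Toy 3-node path graph: 0-1-2.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
W1, W2 = np.random.randn(4, 8), np.random.randn(4, 8)
print(message_passing_layer(A, H, W1, W2).shape)  # (3, 8)
```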
1 code implementation • 25 May 2022 • Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, Dominique Beaini
We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks.
Ranked #1 on Graph Regression on ZINC-500k
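A minimal sketch of the GPS idea of running a local message-passing branch and a global attention branch in parallel within one layer. For clarity this uses full softmax attention, which is quadratic in the number of nodes; the paper's recipe achieves linear complexity by substituting a linear attention mechanism, and all helper names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gps_layer(A, H, W_mp, W_q, W_k, W_v):
    """Sketch of a GPS-style layer: a local message-passing branch and a
    global self-attention branch computed in parallel, then summed."""
    # Local branch: 1-hop neighbor aggregation.
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    local = np.maximum(((A @ H) / deg) @ W_mp, 0.0)
    # Global branch: self-attention over all nodes (quadratic here;
    # the real recipe swaps in a linear attention module).
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return local + attn @ V

n, d = 5, 8
A = (np.random.rand(n, n) < 0.4).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)  # symmetric toy graph
H = np.random.randn(n, d)
Ws = [np.random.randn(d, d) for _ in range(4)]
print(gps_layer(A, H, *Ws).shape)  # (5, 8)
```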
no code implementations • 24 May 2022 • Haiteng Zhao, Chang Ma*, Xinshuai Dong, Anh Tuan Luu, Zhi-Hong Deng, Hanwang Zhang
Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples.
1 code implementation • NeurIPS 2021 • Thong Nguyen, Anh Tuan Luu
Recent empirical studies show that adversarial topic models (ATM) can successfully capture semantic patterns of a document by distinguishing it from another, dissimilar sample.
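One way to read "distinguishing a document from a dissimilar sample" is as a triplet-style contrastive objective over document representations; the sketch below is a hypothetical illustration of that idea, not the paper's exact formulation.

```python
import numpy as np

def contrastive_topic_loss(z_doc, z_pos, z_neg, margin=1.0):
    """Triplet-style objective: the topic representation of a document
    should be closer to a similar sample than to a dissimilar one."""
    d_pos = np.linalg.norm(z_doc - z_pos)  # distance to similar sample
    d_neg = np.linalg.norm(z_doc - z_neg)  # distance to dissimilar sample
    return max(0.0, margin + d_pos - d_neg)
```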
1 code implementation • ICLR 2022 • Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
An approach to tackle this issue is to introduce Positional Encoding (PE) of nodes, and inject it into the input layer, like in Transformers.
Ranked #5 on Graph Regression on ZINC-500k
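A common instantiation of node PE, sketched here in NumPy, takes the low-frequency eigenvectors of the normalized graph Laplacian and concatenates them to the input features the way Transformers add positional encodings; the paper goes further and learns positional representations, so treat this as illustrative background.

```python
import numpy as np

def laplacian_pe(A, k=2):
    """Laplacian-eigenvector positional encodings: the k nontrivial
    eigenvectors of the normalized graph Laplacian, one k-dim PE per node.
    Note: eigenvector signs are arbitrary."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)           # ascending eigenvalues
    return vecs[:, 1:k + 1]                  # skip the trivial eigenvector

# Inject the PE at the input layer, as in Transformers:
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
H0 = np.concatenate([X, laplacian_pe(A, k=2)], axis=1)
```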
no code implementations • 29 Sep 2021 • Cong-Duy T Nguyen, Anh Tuan Luu, Tho Quan
However, this approach has two main drawbacks: (i) the whole image usually contains more objects and background than the sentence describes, so matching them directly confuses the grounded model; (ii) a CNN extracts only the features of the image, not the relationships between the objects within it, limiting the grounded model's ability to learn complicated contexts.
no code implementations • EMNLP 2021 • Thong Nguyen, Anh Tuan Luu, Truc Lu, Tho Quan
Recently, Transformer-based models have been proven effective in the abstractive summarization task by creating fluent and informative summaries.
1 code implementation • ICLR 2021 • Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, Hong Liu
Robustness against word substitutions has a well-defined and widely accepted form, i.e., using semantically similar words as substitutions, and thus it is considered a fundamental stepping-stone towards broader robustness in natural language processing.
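The "semantically similar substitutions" threat model can be sketched as follows: a prediction counts as robust on a sentence if no single synonym swap flips it. The synonym table and the `predict` interface below are hypothetical placeholders.

```python
# Hypothetical synonym table; real attacks draw substitutions from
# word embeddings or lexical resources such as WordNet.
SYNONYMS = {"good": ["great", "fine"], "movie": ["film"]}

def substitution_neighbors(tokens):
    """Yield sentences obtained by swapping one word for a semantically
    similar one -- the perturbation set a robust model must be stable on."""
    for i, tok in enumerate(tokens):
        for sub in SYNONYMS.get(tok, []):
            yield tokens[:i] + [sub] + tokens[i + 1:]

def is_robust(predict, tokens):
    """Empirical robustness check: the prediction is unchanged under
    every single-word substitution."""
    y = predict(tokens)
    return all(predict(t) == y for t in substitution_neighbors(tokens))
```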
3 code implementations • 17 Feb 2021 • Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu
Recent works have demonstrated reasonable success of representation learning in hypercomplex space.
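The best-known hypercomplex case is the 4D quaternion algebra, whose Hamilton product reuses one set of components across four outputs; this paper generalizes further by learning the multiplication rules from data, so the fixed rule below is only the classical special case.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions p, q given as (r, i, j, k).
    Quaternion layers reuse one set of weights across the four components,
    giving hypercomplex representation learning with roughly 1/4 the
    parameters of a real-valued layer of the same width."""
    r1, i1, j1, k1 = p
    r2, i2, j2, k2 = q
    return np.array([
        r1*r2 - i1*i2 - j1*j2 - k1*k2,
        r1*i2 + i1*r2 + j1*k2 - k1*j2,
        r1*j2 - i1*k2 + j1*r2 + k1*i2,
        r1*k2 + i1*j2 - j1*i2 + k1*r2,
    ])
```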
no code implementations • ACL 2020 • Yi Tay, Donovan Ong, Jie Fu, Alvin Chan, Nancy Chen, Anh Tuan Luu, Chris Pal
Understanding human preferences, along with cultural and social nuances, lives at the heart of natural language understanding.
12 code implementations • 2 Mar 2020 • Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
Ranked #1 on Link Prediction on COLLAB
no code implementations • NeurIPS 2019 • Yi Tay, Anh Tuan Luu, Aston Zhang, Shuohang Wang, Siu Cheung Hui
Attentional models are distinctly characterized by their ability to learn relative importance, i.e., to assign a different weight to each input value.
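That notion of learned relative importance reduces to scoring inputs against a query and normalizing with a softmax; a minimal sketch (dot-product scoring is an illustrative choice, not this paper's specific mechanism):

```python
import numpy as np

def attention_pool(values, query):
    """Attentional pooling: score each input value against a query,
    softmax the scores into weights that sum to 1 (relative importance),
    and return the weighted sum of values."""
    scores = values @ query                 # (n,) one score per value
    w = np.exp(scores - scores.max())
    w = w / w.sum()                         # relative importance weights
    return w @ values, w

values = np.random.randn(6, 4)
query = np.random.randn(4)
pooled, weights = attention_pool(values, query)
print(weights.sum())  # 1.0
```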
1 code implementation • AAAI 2019 • Yi Tay, Shuai Zhang, Anh Tuan Luu, Siu Cheung Hui, Lina Yao, Tran Dang Quang Vinh
Factorization Machines (FMs) are a class of popular algorithms that have been widely adopted for collaborative filtering and recommendation tasks.
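For reference, a second-order FM scores a feature vector as y(x) = w0 + ⟨w, x⟩ + Σ_{i<j} ⟨v_i, v_j⟩ x_i x_j, and the pairwise term can be computed in O(nk) with the standard reformulation; a minimal NumPy sketch:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order Factorization Machine score. The pairwise interaction
    term sum_{i<j} <v_i, v_j> x_i x_j is computed in O(n k) using
    0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]."""
    linear = w0 + w @ x
    vx = V.T @ x                                  # (k,)
    pairwise = 0.5 * np.sum(vx**2 - (V**2).T @ (x**2))
    return linear + pairwise

n, k = 6, 3
x = np.random.rand(n)
w0, w, V = 0.1, np.random.randn(n), np.random.randn(n, k)
print(fm_score(x, w0, w, V))
```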
no code implementations • 12 Nov 2018 • Anran Wang, Anh Tuan Luu, Chuan-Sheng Foo, Hongyuan Zhu, Yi Tay, Vijay Chandrasekhar
In this paper, we present the Holistic Multi-modal Memory Network (HMMN) framework which fully considers the interactions between different input sources (multi-modal context, question) in each hop.
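The hop-wise interaction idea can be illustrated with a generic memory-network hop: attend over (multi-modal) memory slots with the current query and fold the read-out back into the query. Slot contents and the update rule here are hypothetical, not the exact HMMN architecture.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def memory_hop(memory, query):
    """One hop over a memory: attend to memory slots with the current
    query, then update the query with the read-out, so later hops can
    model interactions between context and question."""
    w = softmax(memory @ query)    # attention over slots
    read = w @ memory              # read-out vector
    return query + read            # updated query for the next hop

M = np.random.randn(10, 16)        # hypothetical fused text/image slots
q = np.random.randn(16)            # question encoding
for _ in range(3):                 # three hops
    q = memory_hop(M, q)
```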
no code implementations • EMNLP 2018 • Yi Tay, Anh Tuan Luu, Siu Cheung Hui
Sequence encoders are crucial components in many neural architectures for learning to read and comprehend.
Ranked #7 on Question Answering on NarrativeQA
no code implementations • EMNLP 2018 • Yi Tay, Anh Tuan Luu, Siu Cheung Hui, Jian Su
This paper proposes a new neural architecture that exploits readily available sentiment lexicon resources.
no code implementations • 29 May 2018 • Yi Tay, Anh Tuan Luu, Siu Cheung Hui
Our approach, CoupleNet, is an end-to-end deep-learning-based estimator that analyzes the social profiles of two users and then performs a similarity match between them.
1 code implementation • 14 Dec 2017 • Yi Tay, Anh Tuan Luu, Siu Cheung Hui
Our novel model, Aspect Fusion LSTM (AF-LSTM), learns to attend based on associative relationships between sentence words and the aspect, which allows the model to adaptively focus on the correct words given an aspect term.
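Such associative relationships are typically realized with a holographic associative-memory operator such as circular correlation between each word representation and the aspect embedding; a minimal sketch, where the scoring vector `w` is an assumed learnable parameter:

```python
import numpy as np

def circular_correlation(a, b):
    """Circular correlation, an associative-memory operator used to fuse
    each word representation with the aspect before attention."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def aspect_attention(H, aspect, w):
    """Score each word by correlating its hidden state with the aspect,
    softmax into attention weights, and pool the hidden states."""
    fused = np.array([circular_correlation(h, aspect) for h in H])
    scores = fused @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ H   # aspect-aware sentence representation

H = np.random.randn(7, 16)          # LSTM hidden states, one per word
aspect = np.random.randn(16)        # aspect embedding
w = np.random.randn(16)             # learnable scoring vector (assumed)
print(aspect_attention(H, aspect, w).shape)  # (16,)
```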
1 code implementation • 17 Jul 2017 • Yi Tay, Anh Tuan Luu, Siu Cheung Hui
Our model, LRML (Latent Relational Metric Learning), is a novel metric learning approach for recommendation.
Ranked #1 on Recommendation Systems on Netflix (nDCG@10 metric)
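The gist of LRML can be sketched as attending over a memory of latent relation vectors and scoring a user-item pair with a translation-style distance; the memory sizes and the Hadamard pair representation below follow that general design but are simplified.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def lrml_score(p, q, M_keys, M_vals):
    """LRML-style scoring: induce a latent relation vector r from a
    key-value memory conditioned on the (user, item) pair, then score
    with the translated distance ||p + r - q||."""
    joint = p * q                         # pair representation (Hadamard)
    a = softmax(M_keys @ joint)           # attention over relation memory
    r = a @ M_vals                        # latent relation vector
    return -np.linalg.norm(p + r - q)     # higher = better match

d, m = 8, 5                               # embedding dim, memory slots
p, q = np.random.randn(d), np.random.randn(d)   # user, item embeddings
M_keys, M_vals = np.random.randn(m, d), np.random.randn(m, d)
print(lrml_score(p, q, M_keys, M_vals))
```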