1 code implementation • 15 Nov 2022 • Jessica Y. Bo, Hen-Wei Huang, Alvin Chan, Giovanni Traverso
Medical datasets often face the problem of data scarcity, as ground truth labels must be generated by medical professionals.
no code implementations • 9 May 2022 • Alvin Chan, Yew-Soon Ong, Clement Tan
Model robustness is vital for the reliable deployment of machine learning models in real-world applications.
no code implementations • 8 May 2022 • Zhenghua Chen, Min Wu, Alvin Chan, XiaoLi Li, Yew-Soon Ong
We believe that this technical review can help promote the sustainable development of AI R&D activities in the research community.
1 code implementation • 28 Nov 2021 • Bill Tuck Weng Pung, Alvin Chan
The ability to reason with multiple hierarchical structures is a desirable property of sequential inductive biases for natural language processing.
Moreover, we show that the FASTTREES module can be applied to enhance Transformer models, achieving performance gains on three sequence transduction tasks (machine translation, subject-verb agreement and mathematical language understanding), paving the way for modular tree induction modules.
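As a rough illustration of what a parallel tree-induction gate can look like, here is a minimal sketch in the spirit of ordered-neuron ("cumax") gating; the module name TreeGate, the layer sizes, and its placement inside a Transformer block are assumptions, not the paper's FASTTREES definition:

```python
import torch
import torch.nn as nn

class TreeGate(nn.Module):
    """Hypothetical ordered-neuron-style gate: a cumulative softmax ("cumax")
    induces a soft hierarchy over feature dimensions for each token."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # cumsum(softmax(z)) rises monotonically from ~0 to 1, so
        # low-indexed dims "close" first -- a soft notion of tree depth.
        gate = torch.cumsum(torch.softmax(self.proj(x), dim=-1), dim=-1)
        return gate * x

# Usage: e.g. wrap around the feed-forward sublayer of a Transformer block.
x = torch.randn(2, 16, 64)
print(TreeGate(64)(x).shape)  # torch.Size([2, 16, 64])
```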
no code implementations • ACL 2021 • Aston Zhang, Alvin Chan, Yi Tay, Jie Fu, Shuohang Wang, Shuai Zhang, Huajie Shao, Shuochao Yao, Roy Ka-Wei Lee
Orthogonality constraints encourage weight matrices to remain (near-)orthogonal, which improves numerical stability during training.
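A common way to impose such a constraint is a soft penalty added to the training loss; a minimal sketch (the function name and the choice of Frobenius-norm penalty are illustrative, not taken from the paper):

```python
import torch

def orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality regularizer: penalize ||W^T W - I||_F^2.
    Added to the task loss, it nudges W's columns toward orthonormality."""
    d = W.shape[1]
    gram = W.t() @ W
    return ((gram - torch.eye(d, device=W.device)) ** 2).sum()

W = torch.randn(128, 64, requires_grad=True)
loss = orthogonality_penalty(W)
loss.backward()  # gradients flow, so it works as an auxiliary loss term
```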
1 code implementation • NeurIPS 2021 • Alvin Chan, Ali Madani, Ben Krause, Nikhil Naik
Attribute extrapolation in sample generation is challenging for deep neural networks operating beyond the training distribution.
2 code implementations • 7 Mar 2021 • Alvin Chan, Anna Korsakova, Yew-Soon Ong, Fernaldo Richtia Winnerdy, Kah Wai Lim, Anh Tuan Phan
In the case of alternative splicing prediction, DCEN models mRNA transcript probabilities through the energy values of their constituent splice junctions.
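One way to read that sentence as code, as a hypothetical sketch only: score each transcript by summing its junction energies and normalize across competing transcripts. The Boltzmann-style softmax over negated energy sums is an assumption, not DCEN's exact output head:

```python
import torch

def transcript_probs(junction_energies: list[torch.Tensor]) -> torch.Tensor:
    """Each transcript's score is the sum of the energies of its constituent
    splice junctions; a softmax over competing transcripts yields usage
    probabilities (lower energy -> higher probability, by assumption)."""
    scores = torch.stack([e.sum() for e in junction_energies])
    return torch.softmax(-scores, dim=0)

# Three competing transcripts with 2, 3, and 2 junctions respectively.
energies = [torch.tensor([0.5, 1.2]),
            torch.tensor([0.3, 0.4, 0.2]),
            torch.tensor([2.0, 1.5])]
print(transcript_probs(energies))  # probabilities summing to 1
```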
3 code implementations • 17 Feb 2021 • Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu
Recent works have demonstrated reasonable success of representation learning in hypercomplex space.
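For context, one concrete form such hypercomplex-style layers can take is a weight matrix built from a sum of Kronecker products; a minimal sketch (the initialization scale and dimensions are assumptions):

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Sketch of a parameterized hypercomplex multiplication (PHM) layer:
    the weight is a sum of Kronecker products, W = sum_i kron(A_i, S_i),
    cutting parameters roughly by a factor of n versus a dense layer."""
    def __init__(self, n: int, in_features: int, out_features: int):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        self.S = nn.Parameter(torch.randn(n, out_features // n, in_features // n) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assemble W from n Kronecker products, then apply as a linear map.
        W = sum(torch.kron(self.A[i], self.S[i]) for i in range(self.A.shape[0]))
        return x @ W.t()

layer = PHMLinear(n=4, in_features=64, out_features=32)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 32])
```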
no code implementations • 1 Jan 2021 • Yi Tay, Yikang Shen, Alvin Chan, Aston Zhang, Shuai Zhang
This paper explores an intriguing idea of recursively parameterizing recurrent nets.
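Purely as a way to picture the idea, an illustrative toy and not the paper's model: an outer recurrent cell whose mixing gate is emitted at each step by a smaller inner RNN. Every name and size below is invented for illustration:

```python
import torch
import torch.nn as nn

class RecursiveRNNCell(nn.Module):
    """Toy 'recursively parameterized' recurrence: a small inner GRU acts as
    a controller that produces the outer cell's gate at every step."""
    def __init__(self, d: int, d_inner: int = 8):
        super().__init__()
        self.inner = nn.GRUCell(d, d_inner)      # inner controller network
        self.to_gate = nn.Linear(d_inner, d)     # emits a per-step gate
        self.cand = nn.Linear(2 * d, d)          # candidate state update

    def forward(self, x, h, h_inner):
        h_inner = self.inner(x, h_inner)
        g = torch.sigmoid(self.to_gate(h_inner))  # inner net controls mixing
        c = torch.tanh(self.cand(torch.cat([x, h], dim=-1)))
        return g * h + (1 - g) * c, h_inner

cell = RecursiveRNNCell(16)
h, hi = torch.zeros(4, 16), torch.zeros(4, 8)
for t in range(5):
    h, hi = cell(torch.randn(4, 16), h, hi)
print(h.shape)  # torch.Size([4, 16])
```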
no code implementations • ICLR 2021 • Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Hui, Jie Fu
Recent works have demonstrated reasonable success of representation learning in hypercomplex space.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang
This paper demonstrates a fatal vulnerability in natural language inference (NLI) and text classification systems.
no code implementations • 5 Sep 2020 • Alvin Chan, Martin D. Levine, Mehrsan Javan
We propose an end-to-end trainable ResNet+LSTM network, with a residual network (ResNet) base and a long short-term memory (LSTM) layer, to discover spatio-temporal features of jersey numbers and learn long-term dependencies.
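A minimal sketch of the described pipeline: per-frame ResNet features, an LSTM over time, and a classifier on the final state. The ResNet-18 choice, hidden size, and 100-way head (numbers 0-99) are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class JerseyNumberNet(nn.Module):
    """ResNet backbone extracts per-frame features; an LSTM aggregates them
    over time; a linear head classifies the jersey number."""
    def __init__(self, num_classes: int = 100, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop fc head
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clips.shape                    # (batch, time, C, H, W)
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, 512)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                     # classify from last step

model = JerseyNumberNet()
print(model(torch.randn(2, 8, 3, 224, 224)).shape)  # torch.Size([2, 100])
```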
no code implementations • ACL 2020 • Yi Tay, Donovan Ong, Jie Fu, Alvin Chan, Nancy Chen, Anh Tuan Luu, Chris Pal
Understanding human preferences, along with cultural and social nuances, lies at the heart of natural language understanding.
1 code implementation • ICLR 2021 • Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, Jie Fu
While there are studies that seek to control high-level attributes (such as sentiment and topic) of generated text, there is still a lack of more precise control over its content at the word and phrase level.
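As a naive point of comparison only, and not the paper's method: word-level content can be crudely steered at decode time by boosting the logits of target token ids. The function name and bonus value are invented for illustration:

```python
import torch

def boost_content_logits(logits, target_ids, bonus=4.0):
    """Naive content-control baseline: add a bonus to the logits of desired
    token ids so the decoder is steered toward emitting specific content."""
    logits = logits.clone()
    logits[..., target_ids] += bonus
    return logits

vocab = 50_000
logits = torch.randn(1, vocab)
steered = boost_content_logits(logits, target_ids=[1234, 987])
print(torch.topk(steered, 2).indices)  # target ids are now far more likely
```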
no code implementations • ICLR 2020 • Yi Tay, Yikang Shen, Alvin Chan, Yew Soon Ong
This paper proposes Metagross (Meta Gated Recursive Controller), a new neural sequence modeling unit.
1 code implementation • ICLR 2020 • Alvin Chan, Yi Tay, Yew Soon Ong, Jie Fu
Adversarial examples are crafted with imperceptible perturbations intended to fool neural networks.
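One classic recipe for crafting such perturbations is the fast gradient sign method (FGSM); a minimal sketch in which the toy model and epsilon value are purely illustrative:

```python
import torch

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: take a single step of size eps in the
    direction that increases the loss; for small eps the change stays
    visually imperceptible."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps
```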
2 code implementations • CVPR 2020 • Alvin Chan, Yi Tay, Yew-Soon Ong
Learned weights of models robust to such perturbations have previously been found to be transferable across different tasks, but only when the source and target tasks share the same model architecture.
no code implementations • 19 Nov 2019 • Alvin Chan, Yew-Soon Ong
Existing defenses are effective under certain conditions, such as a small poison-pattern size, knowledge of the ratio of poisoned training samples, or the availability of a validated clean dataset.
no code implementations • 25 Sep 2019 • Yi Tay, Aston Zhang, Shuai Zhang, Alvin Chan, Luu Anh Tuan, Siu Cheung Hui
We propose R2D2 layers, a new neural block for training efficient NLP models.
no code implementations • 7 Sep 2018 • Alvin Chan, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Yang Liu, Yew Soon Ong
Deep neural networks (DNNs), while becoming the driving force behind many novel technologies and achieving tremendous success in many cutting-edge applications, are still vulnerable to adversarial attacks.