no code implementations • 5 May 2024 • Wenyu Zhang, Li Shen, Chuan-Sheng Foo
Despite having diverse features important for generalization, the pre-trained feature extractor can overfit to the source data distribution during source training and forget relevant target domain knowledge.
no code implementations • 17 Apr 2024 • Chaoyue Song, Jiacheng Wei, Chuan-Sheng Foo, Guosheng Lin, Fayao Liu
In this paper, we address the challenge of reconstructing general articulated 3D objects from a single video.
no code implementations • 17 Mar 2024 • Wenyu Zhang, Qingmu Liu, Felix Ong Wei Cong, Mohamed Ragab, Chuan-Sheng Foo
UniSSDA is at the intersection of Universal Domain Adaptation (UniDA) and Semi-Supervised Domain Adaptation (SSDA): the UniDA setting does not allow for fine-grained categorization of target private classes not represented in the source domain, while SSDA focuses on the restricted closed-set setting where source and target label spaces match exactly.
no code implementations • 14 Mar 2024 • Cheng Chen, Xiaofeng Yang, Fan Yang, Chengzeng Feng, Zhoujie Fu, Chuan-Sheng Foo, Guosheng Lin, Fayao Liu
In this paper, we present a new framework Sculpt3D that equips the current pipeline with explicit injection of 3D priors from retrieved reference objects without re-training the 2D diffusion model.
no code implementations • 1 Oct 2023 • Jingtan Wang, Xinyang Lu, Zitong Zhao, Zhongxiang Dai, Chuan-Sheng Foo, See-Kiong Ng, Bryan Kian Hsiang Low
The impressive performances of large language models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the intellectual property (IP) of their training data.
no code implementations • 14 Jul 2023 • Dapeng Hu, Jian Liang, Xinchao Wang, Chuan-Sheng Foo
The conventional in-domain calibration method, temperature scaling (TempScal), encounters challenges due to domain distribution shifts and the absence of labeled target domain data.
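For context, temperature scaling itself is a standard post-hoc calibration technique; the following is a minimal PyTorch sketch of in-domain TempScal (the variable names and the choice of Adam rather than L-BFGS are illustrative, not this paper's method):

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Fit a single temperature T on held-out (logits, labels) by minimizing NLL."""
    log_t = torch.zeros(1, requires_grad=True)          # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Usage: T = fit_temperature(val_logits, val_labels)
#        calibrated_probs = F.softmax(test_logits / T, dim=1)
```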
1 code implementation • 14 Jul 2023 • Mohamed Ragab, Emadeldeen Eldele, Min Wu, Chuan-Sheng Foo, XiaoLi Li, Zhenghua Chen
The existing SFDA methods that are mainly designed for visual applications may fail to handle the temporal dynamics in time series, leading to impaired adaptation performance.
1 code implementation • 9 Jun 2023 • Xiaoqiang Lin, Xinyi Xu, See-Kiong Ng, Chuan-Sheng Foo, Bryan Kian Hsiang Low
In collaborative learning with streaming data, nodes (e.g., organizations) jointly and continuously learn a machine learning (ML) model by sharing the latest model updates computed from their latest streaming data.
1 code implementation • 24 Mar 2023 • Weide Liu, Zhonghua Wu, Yang Zhao, Yuming Fang, Chuan-Sheng Foo, Jun Cheng, Guosheng Lin
Current methods for few-shot segmentation (FSSeg) have mainly focused on improving the performance of novel classes while neglecting the performance of base classes.
no code implementations • 31 Jan 2023 • Kiran Krishnamachari, See-Kiong Ng, Chuan-Sheng Foo
Building on this result, we propose a general measure of any differentiable model's Fourier-sensitivity based on the unitary Fourier transform of its input-gradient.
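The exact measure is defined in the paper; the sketch below only illustrates the ingredients named in the abstract, under the assumption that one takes the unitary 2D FFT of the input-gradient of a class score and aggregates its magnitude:

```python
import torch

def fourier_sensitivity(model, x, target_class):
    """Rough sketch: magnitude of the unitary Fourier transform of the input-gradient.

    Assumes x is a single image tensor of shape (1, C, H, W); the aggregation over
    channels is illustrative and not the paper's exact measure.
    """
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    grad, = torch.autograd.grad(score, x)
    spectrum = torch.fft.fft2(grad, norm="ortho")    # unitary 2D FFT per channel
    return spectrum.abs().mean(dim=1).squeeze(0)     # (H, W) map of spectral sensitivity
```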
no code implementations • ICCV 2023 • Wenyu Zhang, Li Shen, Chuan-Sheng Foo
We propose to distil useful target domain information through a co-learning strategy to improve target pseudo-label quality for fine-tuning the source model.
1 code implementation • 29 Sep 2022 • Manas Gupta, Efe Camci, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Chuan-Sheng Foo, Wu Min, Lin Jie
Surprisingly, we find that vanilla Global MP performs very well against the SOTA techniques.
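Vanilla global magnitude pruning (Global MP) is a well-known baseline; a minimal PyTorch sketch of the idea, with the layer-selection heuristic as an assumption, is:

```python
import torch

def global_magnitude_prune(model, sparsity=0.9):
    """Vanilla Global MP: zero the smallest-magnitude weights across all layers
    jointly, using a single global threshold."""
    weights = [p for p in model.parameters() if p.dim() > 1]   # skip biases / norm params
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values                      # k-th smallest magnitude overall
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())              # keep only weights above threshold
```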
no code implementations • 16 Jun 2022 • Wenyu Zhang, Mohamed Ragab, Chuan-Sheng Foo
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains and without access to target domain samples during training.
1 code implementation • 30 May 2022 • Wenyu Zhang, Li Shen, Wanyue Zhang, Chuan-Sheng Foo
Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data to mitigate such performance degradation.
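One common form of such batch-norm updating (not necessarily the exact procedure in this paper) is to re-estimate BN statistics from the unlabelled target stream, roughly as follows:

```python
import torch

def adapt_bn_statistics(model, target_stream):
    """Sketch of test-time BN adaptation: re-estimate batch-norm running statistics
    on unlabelled target batches (one common variant; methods differ in details)."""
    model.train()                                   # BN layers use and update batch statistics
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None                       # None => cumulative moving average
    with torch.no_grad():
        for x in target_stream:                     # streaming unlabelled target mini-batches
            model(x)
    model.eval()
    return model
```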
1 code implementation • 18 May 2022 • Xun Xu, Manh Cuong Nguyen, Yasin Yazici, Kangkang Lu, Hlaing Min, Chuan-Sheng Foo
In this work, we propose SemiCurv, a semi-supervised learning (SSL) framework for curvilinear structure segmentation that is able to utilize such unlabelled data to reduce the labelling burden.
1 code implementation • 15 Mar 2022 • Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li
Our evaluation includes adapting state-of-the-art visual domain adaptation methods to time series data as well as the recent methods specifically developed for time series data.
1 code implementation • 11 Dec 2021 • Wanyue Zhang, Xun Xu, Fayao Liu, Chuan-Sheng Foo
Data augmentation is an important technique to reduce overfitting and improve learning performance, but existing works on data augmentation for 3D point cloud data are based on heuristics.
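For illustration, the kind of hand-crafted point cloud augmentation the paper contrasts with learned augmentation might look like the sketch below (the specific transforms and parameter values are illustrative):

```python
import math
import torch

def heuristic_augment(points, jitter_std=0.01, scale_range=(0.8, 1.25)):
    """Typical heuristic point cloud augmentations: random up-axis rotation,
    uniform scaling, and Gaussian jitter. `points` has shape (N, 3)."""
    theta = torch.rand(()).item() * 2 * math.pi
    c, s = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotate about z (up) axis
    scale = torch.empty(()).uniform_(*scale_range).item()
    points = points @ rot.t() * scale
    return points + jitter_std * torch.randn_like(points)
```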
Ranked #1 on 3D Point Cloud Data Augmentation on ModelNet40
1 code implementation • 9 Nov 2021 • Chaitanya K. Joshi, Fayao Liu, Xu Xun, Jie Lin, Chuan-Sheng Foo
Past work on distillation for GNNs proposed the Local Structure Preserving loss (LSP), which matches local structural relationships defined over edges across the student and teacher's node embeddings.
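A rough sketch of an LSP-style loss, with the similarity kernel and the dense-adjacency formulation as assumptions rather than the original definition:

```python
import torch
import torch.nn.functional as F

def lsp_style_loss(h_student, h_teacher, adj):
    """Hedged sketch: for each node, form a softmax distribution over its neighbours
    from embedding similarity, and match student to teacher with KL divergence.
    `adj` is a dense (N, N) 0/1 adjacency matrix; assumes every node has a neighbour."""
    def neighbour_dist(h):
        sim = h @ h.t()                                   # unnormalised dot-product similarity
        sim = sim.masked_fill(adj == 0, float("-inf"))    # restrict to graph edges
        return F.softmax(sim, dim=1)
    p_teacher = neighbour_dist(h_teacher).detach()
    p_student = neighbour_dist(h_student)
    kl = p_teacher * (p_teacher.clamp_min(1e-12).log() - p_student.clamp_min(1e-12).log())
    return kl.sum(dim=1).mean()
```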
no code implementations • 29 Sep 2021 • Cuong Manh Nguyen, Le Zhang, Arun Raja, Xun Xu, Balagopal Unnikrishnan, Kangkang Lu, Chuan-Sheng Foo
Label collection is costly in many applications, which poses the need for label-efficient learning.
no code implementations • 29 Sep 2021 • Wenyu Zhang, Li Shen, Chuan-Sheng Foo, Wanyue Zhang
Test-time adaptation of pre-trained source models with streaming unlabelled target data is an attractive setting that protects the privacy of source data, but it imposes mini-batch size and class-distribution requirements on the streaming data which may not hold in practice.
no code implementations • 29 Sep 2021 • Wenyu Zhang, Chuan-Sheng Foo, Mohamed Ragab
Domain generalization aims to learn models robust to domain shift, with limited source domains at training and without any access to target domain samples except at test time.
no code implementations • 29 Sep 2021 • Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee Kwoh, XiaoLi Li
Our evaluation includes adaptations of state-of-the-art visual domain adaptation methods to time series data in addition to recent methods specifically developed for time series data.
1 code implementation • 29 Sep 2021 • Manas Gupta, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Efe Camci, Chuan-Sheng Foo, Jie Lin
We show that magnitude-based pruning, specifically global magnitude pruning (GP), is sufficient to achieve SOTA performance on a range of neural network architectures.
no code implementations • 29 Sep 2021 • Kiran Chari, Chuan-Sheng Foo, See-Kiong Ng
The ability to generalize to out-of-distribution data is a major challenge for modern deep neural networks.
1 code implementation • 23 Sep 2021 • Astha Garg, Wenyu Zhang, Jules Samaran, Savitha Ramasamy, Chuan-Sheng Foo
Several techniques for multivariate time series anomaly detection have been proposed recently, but a systematic comparison on a common set of datasets and metrics is lacking.
no code implementations • 4 Aug 2021 • Fayao Liu, Guosheng Lin, Chuan-Sheng Foo, Chaitanya K. Joshi, Jie Lin
In this work we propose PointDisc, a point discriminative learning method that leverages self-supervision for data-efficient 3D point cloud classification and segmentation.
no code implementations • ICLR 2021 • Kangkang Lu, Cuong Manh Nguyen, Xun Xu, Kiran Chari, Yu Jing Goh, Chuan-Sheng Foo
In this paper, we propose ARMOURED, an adversarially robust training method based on semi-supervised learning that consists of two components.
no code implementations • 25 Jun 2020 • Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Vijay Chandrasekhar
We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective.
no code implementations • 16 Apr 2020 • Saisubramaniam Gopalakrishnan, Pranshu Ranjan Singh, Yasin Yazici, Chuan-Sheng Foo, Vijay Chandrasekhar, ArulMurugan Ambikapathi
Utilization of classification latent space information for downstream reconstruction and generation is an intriguing and relatively unexplored area.
1 code implementation • 13 Feb 2020 • Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Chuan-Sheng Foo, Rio Yokota
Large-scale distributed training of deep neural networks results in models with worse generalization performance, owing to the increase in effective mini-batch size.
2 code implementations • 22 Dec 2019 • Wei-Hong Li, Chuan-Sheng Foo, Hakan Bilen
Recent semi-supervised learning methods have been shown to achieve results comparable to their supervised counterparts on image classification tasks while using only a small portion of labels, thanks to their regularization strategies.
no code implementations • ICLR 2019 • Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, Georgios Piliouras
Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond.
1 code implementation • 9 Feb 2019 • Yasin Yazici, Bruno Lecouat, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, Vijay Chandrasekhar
We propose a GAN design which models multiple distributions effectively and discovers their commonalities and particularities.
1 code implementation • 19 Dec 2018 • Bruno Lecouat, Ken Chang, Chuan-Sheng Foo, Balagopal Unnikrishnan, James M. Brown, Houssam Zenati, Andrew Beers, Vijay Chandrasekhar, Jayashree Kalpathy-Cramer, Pavitra Krishnaswamy
Supervised deep learning algorithms have enabled significant performance gains in medical image classification tasks.
no code implementations • 15 Nov 2018 • Leo Laugier, Daniil Bash, Jose Recatala, Hong Kuan Ng, Savitha Ramasamy, Chuan-Sheng Foo, Vijay R. Chandrasekhar, Kedar Hippalgaonkar
We introduce the use of Crystal Graph Convolutional Neural Networks (CGCNN), Fully Connected Neural Networks (FCNN) and XGBoost to predict thermoelectric properties.
no code implementations • 12 Nov 2018 • Anran Wang, Anh Tuan Luu, Chuan-Sheng Foo, Hongyuan Zhu, Yi Tay, Vijay Chandrasekhar
In this paper, we present the Holistic Multi-modal Memory Network (HMMN) framework which fully considers the interactions between different input sources (multi-modal context, question) in each hop.
1 code implementation • ICLR 2019 • Bruno Lecouat, Chuan-Sheng Foo, Houssam Zenati, Vijay Chandrasekhar
Generative Adversarial Networks are powerful generative models that are able to model the manifold of natural images.
1 code implementation • ICLR 2019 • Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, Vijay Chandrasekhar
We examine two different techniques for parameter averaging in GAN training.
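Parameter averaging in GAN training typically means maintaining a running average of generator weights for evaluation; a minimal sketch of an exponential moving average variant (the decay value is illustrative) is:

```python
import copy
import torch

def make_ema_copy(generator):
    """Create a frozen copy of the generator whose weights track an EMA of training weights."""
    ema_g = copy.deepcopy(generator)
    for p in ema_g.parameters():
        p.requires_grad_(False)
    return ema_g

@torch.no_grad()
def ema_update(ema_g, generator, decay=0.999):
    """Call after each generator step: ema <- decay * ema + (1 - decay) * current."""
    for p_ema, p in zip(ema_g.parameters(), generator.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)
```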
2 code implementations • 23 May 2018 • Bruno Lecouat, Chuan-Sheng Foo, Houssam Zenati, Vijay R. Chandrasekhar
GANs are powerful generative models that are able to model the manifold of natural images.
no code implementations • NeurIPS 2007 • Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng
Using multiple regularization hyperparameters is an effective method for managing model complexity in problems where input features have varying amounts of noise.
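As a concrete, hedged illustration of this setting (not necessarily the paper's exact formulation): each feature group k receives its own regularization hyperparameter \lambda_k in the training objective,

```latex
\min_{w}\; \ell(w;\,\mathcal{D}_{\text{train}}) \;+\; \sum_{k=1}^{K} \lambda_k \,\lVert w_{(k)} \rVert_2^2 ,
```

and the \lambda_k are then tuned jointly, for example by optimizing held-out performance, rather than by grid search over a single shared penalty.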