Search Results for author: Yew Soon Ong

Found 19 papers, 7 papers with code

Make Me Happier: Evoking Emotions Through Image Diffusion Models

no code implementations • 13 Mar 2024 • Qing Lin, Jingfeng Zhang, Yew Soon Ong, Mengmi Zhang

For the first time, we present a novel challenge of emotion-evoked image generation, aiming to synthesize images that evoke target emotions while retaining the semantics and structures of the original scenes.

Image Generation

LIST: Learning to Index Spatio-Textual Data for Embedding based Spatial Keyword Queries

no code implementations • 12 Mar 2024 • Ziqi Yin, Shanshan Feng, Shang Liu, Gao Cong, Yew Soon Ong, Bin Cui

With the proliferation of spatio-textual data, Top-k KNN spatial keyword queries (TkQs), which return a list of objects based on a ranking function that evaluates both spatial and textual relevance, have found many real-life applications.

Pseudo Label

MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation

1 code implementation • 22 Sep 2023 • Jiahao Xie, Wei Li, Xiangtai Li, Ziwei Liu, Yew Soon Ong, Chen Change Loy

We present MosaicFusion, a simple yet effective diffusion-based data augmentation approach for large vocabulary instance segmentation.

Data Augmentation Instance Segmentation +1

Towards Building Voice-based Conversational Recommender Systems: Datasets, Potential Solutions, and Prospects

1 code implementation • 14 Jun 2023 • Xinghua Qu, Hongyang Liu, Zhu Sun, Xiang Yin, Yew Soon Ong, Lu Lu, Zejun Ma

Conversational recommender systems (CRSs) have become a crucial emerging research topic in the field of RSs, thanks to their natural advantages of explicitly acquiring user preferences via interactive conversations and revealing the reasons behind recommendations.

Recommendation Systems

A Survey of Learning on Small Data: Generalization, Optimization, and Challenge

no code implementations • 29 Jul 2022 • Xiaofeng Cao, Weixin Bu, Shengjun Huang, MinLing Zhang, Ivor W. Tsang, Yew Soon Ong, James T. Kwok

Looking ahead, learning on small data that approximates the generalization ability of big data is one of the ultimate goals of AI, requiring machines to recognize objectives and scenarios from small data, as humans do.

Active Learning Contrastive Learning +4

Masked Frequency Modeling for Self-Supervised Visual Pre-Training

3 code implementations • 15 Jun 2022 • Jiahao Xie, Wei Li, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, Chen Change Loy

We present Masked Frequency Modeling (MFM), a unified frequency-domain-based approach for self-supervised pre-training of visual models.

Image Classification Image Restoration +2
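The core operation behind frequency-domain masked pre-training can be sketched in a few lines of NumPy: transform an image to the frequency domain, zero out a band of frequencies, and invert the transform to obtain the corrupted input the model must reconstruct. The circular low-pass mask and its radius below are illustrative assumptions, not the paper's exact masking recipe:

```python
import numpy as np

def low_pass_mask(shape, radius):
    # Boolean mask keeping frequencies within `radius` of the spectrum center.
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h / 2, w / 2
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

def mask_frequencies(img, radius=8, keep_low=True):
    # 2-D FFT, shift the zero frequency to the center, zero out one frequency
    # band, then invert the transform to get the frequency-masked image.
    spec = np.fft.fftshift(np.fft.fft2(img))
    mask = low_pass_mask(img.shape, radius)
    if not keep_low:
        mask = ~mask
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real

img = np.random.rand(32, 32)
corrupted = mask_frequencies(img, radius=8, keep_low=True)
```

Masking low frequencies forces a model to recover coarse structure from fine detail, while masking high frequencies does the opposite; the low- and high-pass components sum back to the original image.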

Unsupervised Object-Level Representation Learning from Scene Images

1 code implementation • NeurIPS 2021 • Jiahao Xie, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, Chen Change Loy

Extensive experiments on COCO show that ORL significantly improves the performance of self-supervised learning on scene images, even surpassing supervised ImageNet pre-training on several downstream tasks.

Object Representation Learning +2

An Improved Transfer Model: Randomized Transferable Machine

no code implementations • 27 Nov 2020 • Pengfei Wei, Xinghua Qu, Yew Soon Ong, Zejun Ma

Existing studies usually assume that the learned new feature representation is \emph{domain-invariant}, and thus train a transfer model $\mathcal{M}$ on the source domain.

Transfer Learning

Delving into Inter-Image Invariance for Unsupervised Visual Representations

2 code implementations • 26 Aug 2020 • Jiahao Xie, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, Chen Change Loy

In this work, we present a comprehensive empirical study to better understand the role of inter-image invariance learning from three main constituting components: pseudo-label maintenance, sampling strategy, and decision boundary design.

Contrastive Learning Pseudo Label +1

Jacobian Adversarially Regularized Networks for Robustness

1 code implementation • ICLR 2020 • Alvin Chan, Yi Tay, Yew Soon Ong, Jie Fu

Adversarial examples are crafted with imperceptible perturbations with the intent to fool neural networks.

Automatic Construction of Multi-layer Perceptron Network from Streaming Examples

no code implementations • 8 Oct 2019 • Mahardhika Pratama, Choiru Za'in, Andri Ashfahani, Yew Soon Ong, Weiping Ding

The advantages of NADINE, namely its elastic structure and online learning trait, are numerically validated using nine data stream classification and regression problems, where it demonstrates performance improvements over prominent algorithms in all problems.

General Classification Regression

DEVDAN: Deep Evolving Denoising Autoencoder

no code implementations • 8 Oct 2019 • Andri Ashfahani, Mahardhika Pratama, Edwin Lughofer, Yew Soon Ong

The Denoising Autoencoder (DAE) enhances the flexibility of the data stream method in exploiting unlabeled samples.

Denoising

Autonomous Deep Learning: Incremental Learning of Denoising Autoencoder for Evolving Data Streams

no code implementations • 24 Sep 2018 • Mahardhika Pratama, Andri Ashfahani, Yew Soon Ong, Savitha Ramasamy, Edwin Lughofer

The generative learning phase of the Autoencoder (AE) and its successor, the Denoising Autoencoder (DAE), enhances the flexibility of the data stream method in exploiting unlabelled samples.

Denoising Incremental Learning

Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer

no code implementations • 7 Sep 2018 • Alvin Chan, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Yang Liu, Yew Soon Ong

Deep neural networks (DNNs), while becoming the driving force of many novel technologies and achieving tremendous success in many cutting-edge applications, are still vulnerable to adversarial attacks.

Question Answering Relation

Addressing Expensive Multi-objective Games with Postponed Preference Articulation via Memetic Co-evolution

no code implementations • 17 Nov 2017 • Adam Żychowski, Abhishek Gupta, Jacek Mańdziuk, Yew Soon Ong

This paper presents algorithmic and empirical contributions demonstrating that the convergence characteristics of a co-evolutionary approach to tackle Multi-Objective Games (MOGs) with postponed preference articulation can often be hampered due to the possible emergence of the so-called Red Queen effect.

MIML-FCN+: Multi-instance Multi-label Learning via Fully Convolutional Networks with Privileged Information

no code implementations • CVPR 2017 • Hao Yang, Joey Tianyi Zhou, Jianfei Cai, Yew Soon Ong

As the proposed PI loss is convex and SGD-compatible, and the framework itself is a fully convolutional network, MIML-FCN+ can be easily integrated with state-of-the-art deep learning networks.

Image Captioning Multi-Label Learning +1
