Search Results for author: Jaemin Yoo

Found 16 papers, 10 papers with code

End-To-End Self-tuning Self-supervised Time Series Anomaly Detection

no code implementations • 3 Apr 2024 • Boje Deforce, Meng-Chieh Lee, Bart Baesens, Estefanía Serral Asensio, Jaemin Yoo, Leman Akoglu

A two-fold challenge for TSAD is to build a model that is both versatile and unsupervised: one that can detect various types of time series anomalies (spikes, discontinuities, trend shifts, etc.) without labeled data.

Anomaly Detection · Data Augmentation · +2
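The anomaly types named in this abstract are easy to make concrete. Below is a minimal sketch of injecting synthetic spikes, level shifts (discontinuities), and trend shifts into a clean series, the kind of augmentation a self-supervised TSAD model might train against; all function names and magnitudes here are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: inject the anomaly types named in the abstract
# (spike, discontinuity/level shift, trend shift) into a clean series.
import numpy as np

def inject_spike(x, idx, magnitude=5.0):
    """Add a single-point spike at position idx."""
    y = x.copy()
    y[idx] += magnitude * x.std()
    return y

def inject_level_shift(x, start, magnitude=3.0):
    """Add a sudden, persistent offset (a discontinuity) from `start` on."""
    y = x.copy()
    y[start:] += magnitude * x.std()
    return y

def inject_trend_shift(x, start, slope=0.05):
    """Add a gradually growing drift (a trend change) from `start` on."""
    y = x.copy()
    y[start:] += slope * np.arange(len(x) - start)
    return y

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
augmented = inject_trend_shift(inject_spike(series, 120), 300)
```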

Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities

no code implementations • 28 Aug 2023 • Leman Akoglu, Jaemin Yoo

Self-supervised learning (SSL) is a rapidly growing paradigm that has recently transformed machine learning and many of its real-world applications by learning on massive amounts of unlabeled data via self-generated supervisory signals.

Data Augmentation · Density Estimation · +3

DSV: An Alignment Validation Loss for Self-supervised Outlier Model Selection

1 code implementation • 13 Jul 2023 • Jaemin Yoo, Yue Zhao, Lingxiao Zhao, Leman Akoglu

DSV captures the alignment between an augmentation function and the anomaly-generating mechanism with two surrogate losses, which approximate the discordance and separability of test data, respectively.

Data Augmentation · Model Selection · +2
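The snippet names two surrogate quantities, discordance and separability, without defining them. The sketch below is one plausible reading under our own assumptions: discordance as the mean distance from each test embedding to its nearest augmented training embedding, and separability as the gap between the upper and lower halves of sorted detector scores. These are hypothetical stand-ins, not the paper's actual losses.

```python
# Hypothetical surrogates, not DSV's actual definitions.
import numpy as np

def discordance(test_emb, aug_emb):
    # For each test embedding, distance to the nearest augmented
    # training embedding; small values mean the augmentation mimics test data.
    d = np.linalg.norm(test_emb[:, None, :] - aug_emb[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def separability(test_scores):
    # Gap between the upper and lower halves of sorted detector scores;
    # a large gap suggests the scores split test data into two clear groups.
    s = np.sort(test_scores)
    k = len(s) // 2
    return s[k:].mean() - s[:k].mean()

def dsv_like_score(test_emb, aug_emb, test_scores):
    # Lower discordance and higher separability -> better-aligned augmentation.
    return separability(test_scores) - discordance(test_emb, aug_emb)
```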

End-to-End Augmentation Hyperparameter Tuning for Self-Supervised Anomaly Detection

no code implementations • 21 Jun 2023 • Jaemin Yoo, Lingxiao Zhao, Leman Akoglu

The first is a new unsupervised validation loss that quantifies the alignment between the augmented training data and the (unlabeled) test data.

Data Augmentation · Self-Supervised Anomaly Detection · +2
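One well-known way to quantify alignment between two unlabeled samples is maximum mean discrepancy (MMD). The sketch below uses an RBF-kernel MMD between augmented training embeddings and test embeddings as an assumed stand-in for the validation loss described above; the paper's actual loss may differ.

```python
# Assumed stand-in for an unsupervised alignment loss: squared MMD with an
# RBF kernel between augmented training and unlabeled test embeddings.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(aug_train, test, gamma=1.0):
    """Squared MMD: approaches 0 when the two embedding distributions match."""
    kxx = rbf_kernel(aug_train, aug_train, gamma).mean()
    kyy = rbf_kernel(test, test, gamma).mean()
    kxy = rbf_kernel(aug_train, test, gamma).mean()
    return kxx + kyy - 2 * kxy

# Tuning sketch: among candidate augmentation hyperparameters, keep the one
# whose augmented training embeddings give the lowest mmd2 against test data.
```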

Classification of Edge-dependent Labels of Nodes in Hypergraphs

1 code implementation • 5 Jun 2023 • Minyoung Choe, Sunwoo Kim, Jaemin Yoo, Kijung Shin

Interestingly, many real-world systems modeled as hypergraphs contain edge-dependent node labels, i.e., node labels that vary depending on hyperedges.

Classification · Node Clustering
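Edge-dependent node labels are easy to represent directly: the label lives on a (hyperedge, node) pair rather than on the node alone. A minimal sketch, using co-authorship as the example domain (our choice of illustration):

```python
# Edge-dependent node labels: the same node can carry different labels in
# different hyperedges, so labels are keyed by (hyperedge, node) pairs.
hyperedges = {
    "paper1": ["alice", "bob", "carol"],
    "paper2": ["bob", "carol"],
}
# "bob" is a middle author in paper1 but the first author in paper2.
edge_dependent_label = {
    ("paper1", "alice"): "first_author",
    ("paper1", "bob"): "middle_author",
    ("paper1", "carol"): "last_author",
    ("paper2", "bob"): "first_author",
    ("paper2", "carol"): "last_author",
}
```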

Towards Deep Attention in Graph Neural Networks: Problems and Remedies

1 code implementation • 4 Jun 2023 • Soo Yong Lee, Fanchen Bu, Jaemin Yoo, Kijung Shin

AERO-GNN provably mitigates the proposed problems of deep graph attention, which is further demonstrated empirically with (a) its adaptive and less smooth attention functions and (b) higher performance at deep layers (up to 64 layers).

Deep Attention · Graph Attention · +1
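The smoothing problem of deep graph attention can be demonstrated in a few lines. The sketch below (an illustrative experiment, not AERO-GNN itself) stacks dot-product attention layers on a random graph and tracks the mean pairwise cosine similarity of node features, which drifts toward 1 as depth grows, i.e., the features over-smooth.

```python
# Illustrative over-smoothing experiment, not the AERO-GNN architecture.
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 8
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)          # undirected
np.fill_diagonal(adj, 1.0)            # self-loops
x = rng.standard_normal((n, d))

def attention_layer(x, adj):
    # Dot-product attention restricted to graph edges.
    logits = x @ x.T
    logits[adj == 0] = -np.inf
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)
    return att @ x

def mean_pairwise_cosine(x):
    z = x / np.linalg.norm(x, axis=1, keepdims=True)
    return (z @ z.T).mean()

for layer in range(64):
    x = attention_layer(x, adj)
    if layer in (0, 7, 31, 63):
        print(f"layer {layer + 1}: mean cosine = {mean_pairwise_cosine(x):.3f}")
```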

NetEffect: Discovery and Exploitation of Generalized Network Effects

1 code implementation • 31 Dec 2022 • Meng-Chieh Lee, Shubhranshu Shekhar, Jaemin Yoo, Christos Faloutsos

Given a large graph with few node labels, how can we (a) identify whether generalized network effects (GNE) exist, (b) estimate GNE to explain the interrelations among node classes, and (c) exploit GNE efficiently to improve performance on downstream tasks?

Graph Mining · Node Classification
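A crude way to make "interrelations among node classes" concrete is a class-to-class affinity matrix estimated from edges between labeled nodes: a strongly diagonal matrix indicates homophily, while heavy off-diagonal mass signals other network effects. The count-based sketch below is our simplification, not NetEffect's estimator.

```python
# Simplified count-based class affinity estimate, not NetEffect's method.
import numpy as np

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # toy graph
labels = {0: 0, 1: 0, 2: 1, 3: 1, 4: 0}            # in practice, few labels
num_classes = 2

counts = np.zeros((num_classes, num_classes))
for u, v in edges:
    if u in labels and v in labels:                # use only labeled endpoints
        counts[labels[u], labels[v]] += 1
        counts[labels[v], labels[u]] += 1

affinity = counts / counts.sum(axis=1, keepdims=True)  # row-normalize
print(affinity)   # off-diagonal mass hints at non-homophilous effects
```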

Less is More: SlimG for Accurate, Robust, and Interpretable Graph Mining

1 code implementation • 8 Oct 2022 • Jaemin Yoo, Meng-Chieh Lee, Shubhranshu Shekhar, Christos Faloutsos

Graph neural networks (GNNs) have succeeded in many graph mining tasks, but their generalizability to various graph scenarios is limited by the difficulty of training, hyperparameter tuning, and model selection itself.

Graph Mining · Node Classification

Data Augmentation is a Hyperparameter: Cherry-picked Self-Supervision for Unsupervised Anomaly Detection is Creating the Illusion of Success

1 code implementation • 16 Aug 2022 • Jaemin Yoo, Tiancheng Zhao, Leman Akoglu

Self-supervised learning (SSL) has emerged as a promising alternative for creating supervisory signals for real-world problems, avoiding the extensive cost of manual labeling.

Data Augmentation · Self-Supervised Anomaly Detection · +2

Accurate Node Feature Estimation with Structured Variational Graph Autoencoder

1 code implementation • 9 Jun 2022 • Jaemin Yoo, Hyunsik Jeon, Jinhong Jung, U Kang

Given a graph with partial observations of node features, how can we estimate the missing features accurately?

Variational Inference
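A classic baseline for this missing-feature problem (not the paper's structured variational autoencoder) is feature propagation: iteratively replace each missing feature vector with the average of its neighbors' while holding observed features fixed. A minimal sketch:

```python
# Feature propagation baseline; the paper's SVGA model is more sophisticated.
import numpy as np

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0],
              [0.0, 0.0],     # missing, to be estimated
              [0.0, 1.0],
              [0.0, 0.0]])    # missing, to be estimated
observed = np.array([True, False, True, False])

deg = adj.sum(axis=1, keepdims=True)
for _ in range(50):
    propagated = (adj @ x) / deg                     # neighborhood average
    x = np.where(observed[:, None], x, propagated)   # keep observed rows fixed

print(x)
```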

Transition Matrix Representation of Trees with Transposed Convolutions

1 code implementation • 22 Feb 2022 • Jaemin Yoo, Lee Sael

How can we effectively find the best structures in tree models?

Model-Agnostic Augmentation for Accurate Graph Classification

1 code implementation • 21 Feb 2022 • Jaemin Yoo, Sooyeon Shim, U Kang

Then, we propose NodeSam (Node Split and Merge) and SubMix (Subgraph Mix), two model-agnostic approaches for graph augmentation that satisfy all desired properties with different motivations.

Graph Classification
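To give a feel for what a "node split" augmentation does, here is a toy version in the spirit of NodeSam: a node is split into two connected copies that partition its edges at random. The paper's actual split-and-merge rules are more careful than this random partition; the sketch is illustrative only.

```python
# Toy node-split augmentation; NodeSam's actual rules differ.
import networkx as nx
import random

def node_split(g, node, rng=random):
    """Split `node` into two connected copies, partitioning its edges."""
    g = g.copy()
    neighbors = list(g.neighbors(node))
    twin = max(g.nodes) + 1
    g.add_node(twin)
    g.add_edge(node, twin)                 # keep the two halves connected
    for nb in neighbors:
        if rng.random() < 0.5:             # move roughly half the edges
            g.remove_edge(node, nb)
            g.add_edge(twin, nb)
    return g

g = nx.karate_club_graph()
augmented = node_split(g, node=0)
print(g.number_of_nodes(), "->", augmented.number_of_nodes())
```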

Generalizing Tree Models for Improving Prediction Accuracy

no code implementations • 1 Jan 2021 • Jaemin Yoo, Lee Sael

In this work, we propose Decision Transformer Network (DTN), a highly accurate and interpretable tree model based on our generalized framework of tree models, called decision transformers.

Signed Graph Diffusion Network

no code implementations • 28 Dec 2020 • Jinhong Jung, Jaemin Yoo, U Kang

In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.

Link Sign Prediction · Network Embedding
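As a rough intuition for signed diffusion (a toy sketch, not SGDNet itself): propagate random node embeddings through a signed adjacency matrix, so friend edges pull embeddings together and foe edges push them apart, then score a candidate edge's sign by the dot product of its endpoint embeddings.

```python
# Toy signed-diffusion sketch; SGDNet is a trained end-to-end GNN.
import numpy as np

n = 5
signed_adj = np.zeros((n, n))
for u, v, s in [(0, 1, +1), (1, 2, +1), (2, 3, -1), (3, 4, +1), (0, 4, -1)]:
    signed_adj[u, v] = signed_adj[v, u] = s          # +1 friend, -1 foe

rng = np.random.default_rng(0)
z = rng.standard_normal((n, 4))
for _ in range(10):
    z = 0.5 * z + 0.5 * (signed_adj @ z)             # simple signed propagation
    z /= np.linalg.norm(z, axis=1, keepdims=True) + 1e-9

def predict_sign(u, v):
    return np.sign(z[u] @ z[v])

print(predict_sign(0, 2))   # predicted sign for an unobserved pair
```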

Knowledge Extraction with No Observable Data

1 code implementation • NeurIPS 2019 • Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang

Knowledge distillation transfers the knowledge of a large neural network into a smaller one; it has been shown to be effective especially when the amount of training data is limited or the student model is very small.

Data-free Knowledge Distillation
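For context on the objective being transferred here, standard distillation (Hinton et al.) minimizes the KL divergence between temperature-softened teacher and student outputs; in this paper's data-free setting, that loss would have to be driven by generated rather than observed inputs. A minimal sketch of the standard loss:

```python
# Standard distillation loss for context; the paper's data-free method
# additionally synthesizes the inputs that drive this loss.
import numpy as np

def softmax(logits, t):
    z = np.exp((logits - logits.max(axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, t=4.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean() * t * t   # scale by t^2, as in the original formulation

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.0, 0.2, -0.5]])
print(distillation_loss(teacher, student))
```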

Efficient Learning of Bounded-Treewidth Bayesian Networks from Complete and Incomplete Data Sets

no code implementations • 7 Feb 2018 • Mauro Scanagatta, Giorgio Corani, Marco Zaffalon, Jaemin Yoo, U Kang

We present k-MAX, a novel anytime algorithm for this task, which scales up to thousands of variables.

Imputation
