1 code implementation • 27 Mar 2025 • Minjun Kim, Jaehyeon Choi, SeungJoo Lee, Jinhong Jung, U Kang
In this paper, we propose AugWard (Augmentation-Aware Training with Graph Distance and Consistency Regularization), a novel graph representation learning framework that carefully considers the diversity introduced by graph augmentation.
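As a rough illustration of the idea (not AugWard's actual loss), a consistency term can tie the gap between the original and augmented graph representations to how far the augmentation moved the graph; the encoder outputs, the graph-distance value, and the weighting below are all assumed for the sketch.

```python
import torch
import torch.nn.functional as F

def augmentation_aware_loss(z_orig, z_aug, graph_dist, task_loss, lam=0.5):
    """Illustrative consistency regularizer: the representation gap between the
    original and augmented graphs is encouraged to track the graph distance
    introduced by the augmentation, on top of the ordinary task loss."""
    rep_gap = 1.0 - F.cosine_similarity(z_orig, z_aug, dim=-1).mean()
    consistency = (rep_gap - graph_dist) ** 2  # align representation gap with graph distance
    return task_loss + lam * consistency

# toy usage with random embeddings standing in for a GNN encoder
z_o, z_a = torch.randn(32, 64), torch.randn(32, 64)
loss = augmentation_aware_loss(z_o, z_a, graph_dist=torch.tensor(0.3),
                               task_loss=torch.tensor(0.8))
```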
1 code implementation • 20 May 2024 • Junghun Kim, Ka Hyun Park, Hoyoung Yoon, U Kang
Given an edge-incomplete graph, how can we accurately find the missing links?
no code implementations • 27 Jan 2024 • Seungcheol Park, Jaehyeon Choi, Sojin Lee, U Kang
How can we compress language models without sacrificing accuracy?
1 code implementation • 5 Oct 2023 • Hyunsik Jeon, Jong-eun Lee, Jeongin Yun, U Kang
To estimate the user-bundle relationship more accurately, CoHeat addresses the highly skewed distribution of bundle interactions through a popularity-based coalescence approach, which incorporates historical and affiliation information based on the bundle's popularity.
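A minimal sketch of what a popularity-based coalescence could look like: popular bundles rely more on their abundant interaction history, while cold bundles lean on affiliation information. The sigmoid weighting, the temperature parameter, and the score names are assumptions for illustration, not CoHeat's exact formulation.

```python
import numpy as np

def coalesced_score(hist_score, affil_score, popularity, temperature=20.0):
    """Blend a history-based score and an affiliation-based score with a weight
    that grows with the bundle's popularity (illustrative weighting only)."""
    w = 1.0 / (1.0 + np.exp(-(popularity - temperature) / temperature))
    return w * hist_score + (1.0 - w) * affil_score

# a bundle with only 3 recorded interactions is scored mostly by its affiliation signal
print(coalesced_score(hist_score=0.9, affil_score=0.4, popularity=3))
```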
no code implementations • ICCV 2023 • Huiwen Xu, U Kang
It is important to have a general method for measuring transferability that can be applied in a variety of situations, such as selecting the best self-supervised pre-trained models that do not have classifiers, and selecting the best transferring layer for a target task.
1 code implementation • 7 Aug 2023 • Seungcheol Park, Hojun Choi, U Kang
As a result, K-prune shows significant accuracy improvements, achieving up to 58.02%p higher F1 score than existing retraining-free pruning algorithms under a high compression rate of 80% on the SQuAD benchmark, without any retraining process.
1 code implementation • 28 May 2023 • Jun-Gi Jang, Jeongyoung Lee, Yong-chan Park, U Kang
Although real-time analysis is necessary in the dual-way streaming setting, static PARAFAC2 decomposition methods fail to work efficiently in it since they re-run PARAFAC2 decomposition on the accumulated tensors whenever new data arrive.
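To make that inefficiency concrete, the sketch below re-runs a static PARAFAC2 decomposition over everything accumulated so far each time new slices arrive, which is exactly the redundant work a dual-way streaming method avoids. It assumes TensorLy's parafac2 as a stand-in for a static method; the slice sizes and rank are arbitrary.

```python
import numpy as np
from tensorly.decomposition import parafac2

# an irregular dense tensor: slices share 30 columns but differ in row counts
slices = [np.random.rand(np.random.randint(40, 60), 30) for _ in range(10)]

# static PARAFAC2 in a streaming setting: every new slice forces a full
# re-decomposition of all accumulated data (the redundancy a streaming method avoids)
for _ in range(3):
    slices.append(np.random.rand(50, 30))
    decomposition = parafac2(slices, rank=5, n_iter_max=50)
```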
1 code implementation • 17 Dec 2022 • Jun-Gi Jang, Sooyeon Shim, Vladimir Egay, Jeeyong Lee, Jongmin Park, Suhyun Chae, U Kang
How can we accurately identify new memory workloads while classifying known memory workloads?
1 code implementation • 19 Oct 2022 • Hyunsik Jeon, Jun-Gi Jang, Taehun Kim, U Kang
BundleMage effectively mixes a user's preferences for items and bundles using an adaptive gate technique to achieve high accuracy in bundle matching.
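A minimal sketch of an adaptive gate that mixes item-level and bundle-level preference vectors; the module layout and dimensions are assumptions for illustration, not BundleMage's architecture.

```python
import torch
import torch.nn as nn

class AdaptiveGate(nn.Module):
    """Learn an element-wise gate from both preference vectors and use it to mix them."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, u_item, u_bundle):
        g = torch.sigmoid(self.gate(torch.cat([u_item, u_bundle], dim=-1)))
        return g * u_item + (1.0 - g) * u_bundle  # adaptive mixing of the two preferences

mixed = AdaptiveGate(64)(torch.randn(8, 64), torch.randn(8, 64))
```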
1 code implementation • 19 Oct 2022 • Jongjin Kim, Hyunsik Jeon, Jaeri Lee, U Kang
However, it is challenging to tackle aggregate-level diversity with matrix factorization (MF), one of the most common recommendation models, since skewed real-world data lead to skewed recommendation results from MF.
1 code implementation • 12 Aug 2022 • Hyunsik Jeon, Jongjin Kim, Hoyoung Yoon, Jaeri Lee, U Kang
SmartSense then summarizes users' action sequences in a query-attentive manner, considering the queried context, to extract query-related patterns from the sequential actions.
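One way to read "query-attentive" summarization is attention pooling in which the queried context scores each past action; the sketch below shows that pattern with assumed shapes and names, not SmartSense's actual module.

```python
import torch
import torch.nn.functional as F

def query_attentive_summary(query_ctx, action_seq):
    """Summarize a user's action sequence as an attention-weighted sum, where the
    attention weights come from the queried context. query_ctx: (d,), action_seq: (T, d)."""
    scores = action_seq @ query_ctx      # relevance of each past action to the query
    weights = F.softmax(scores, dim=0)   # attention distribution over the sequence
    return weights @ action_seq          # (d,) query-related summary

summary = query_attentive_summary(torch.randn(32), torch.randn(50, 32))
```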
1 code implementation • 9 Jun 2022 • Jaemin Yoo, Hyunsik Jeon, Jinhong Jung, U Kang
Given a graph with partial observations of node features, how can we estimate the missing features accurately?
no code implementations • 24 Mar 2022 • Jun-Gi Jang, U Kang
In this paper, we propose DPar2, a fast and scalable PARAFAC2 decomposition method for irregular dense tensors.
1 code implementation • 21 Feb 2022 • Jaemin Yoo, Sooyeon Shim, U Kang
Then, we propose NodeSam (Node Split and Merge) and SubMix (Subgraph Mix), two model-agnostic approaches for graph augmentation that satisfy all desired properties with different motivations.
no code implementations • 1 Jan 2021 • Seongmin Lee, Hyunsik Jeon, U Kang
Given multiple source datasets with labels, how can we train a target model with no labeled data?
no code implementations • 28 Dec 2020 • Jinhong Jung, Jaemin Yoo, U Kang
In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.
no code implementations • 19 Dec 2020 • JaeHun Jung, Jinhong Jung, U Kang
However, most of the existing models for TKG completion extend static KG embeddings that do not fully exploit TKG structure, thus lacking in 1) accounting for temporally relevant events already residing in the local neighborhood of a query, and 2) path-based inference that facilitates multi-hop reasoning and better interpretability.
no code implementations • 16 Dec 2020 • Dawon Ahn, Jun-Gi Jang, U Kang
The essential problems of how to exploit the temporal property in tensor decomposition and how to handle the sparsity of time slices remain unresolved.
no code implementations • 30 Sep 2020 • Hyun Dong Lee, Seongmin Lee, U Kang
How can we effectively regularize BERT?
no code implementations • 30 Sep 2020 • Ikhyun Cho, U Kang
PTP is a KD-specialized initialization method, which can act as a good initial guide for the student.
no code implementations • 29 Sep 2020 • Seongmin Lee, Hyunsik Jeon, U Kang
Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels.
no code implementations • 28 Sep 2020 • Ikhyun Cho, U Kang
SPS is a new parameter sharing method that allows greater model complexity for the student model.
no code implementations • 28 Aug 2020 • Yong-chan Park, Jun-Gi Jang, U Kang
In this paper, we propose a fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range where the coefficients should be computed.
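As a reference point for what PFT computes, the naive partial DFT below evaluates only the requested consecutive coefficients directly in O(N·k) time; the paper's contribution is doing this faster via a modified Cooley-Tukey recursion, which this sketch does not implement.

```python
import numpy as np

def partial_dft(x, start, count):
    """Directly compute DFT coefficients start .. start+count-1 (naive O(N*count) baseline)."""
    n = len(x)
    ks = np.arange(start, start + count)
    basis = np.exp(-2j * np.pi * np.outer(ks, np.arange(n)) / n)
    return basis @ x

x = np.random.rand(1024)
# matches the corresponding slice of the full FFT
np.testing.assert_allclose(partial_dft(x, 10, 5), np.fft.fft(x)[10:15], atol=1e-8)
```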
no code implementations • 23 Dec 2019 • Seungcheol Park, Huiwen Xu, Taehun Kim, Inhwan Hwang, Kyung-Jun Kim, U Kang
We address the problem of measuring transferability between source and target datasets, where the source and the target have different feature spaces and distributions.
1 code implementation • NeurIPS 2019 • Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang
Knowledge distillation transfers the knowledge of a large neural network into a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.
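For context, the standard soft-target distillation loss (Hinton et al.) is sketched below; it is the generic recipe, not the specific method proposed in this paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard knowledge-distillation objective: match the student's softened
    predictions to the teacher's (KL at temperature T), plus cross-entropy on labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = kd_loss(torch.randn(16, 10), torch.randn(16, 10), torch.randint(0, 10, (16,)))
```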
no code implementations • 25 Sep 2019 • Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang
A promising direction is based on depthwise separable convolution, which replaces a standard convolution with a depthwise convolution followed by a pointwise convolution.
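The replacement is easy to see in code: a minimal PyTorch sketch (layer sizes assumed) comparing a standard 3x3 convolution with its depthwise-plus-pointwise counterpart.

```python
import torch
import torch.nn as nn

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)             # standard 3x3 convolution

depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64)  # per-channel 3x3 filter
pointwise = nn.Conv2d(64, 128, kernel_size=1)                        # 1x1 conv mixes channels

x = torch.randn(1, 64, 32, 32)
assert standard(x).shape == pointwise(depthwise(x)).shape  # same output shape, far fewer parameters
```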
no code implementations • 25 Sep 2019 • Huiwen Xu, U Kang
In this paper, we define the problem of unsupervised domain adaptation under a double-blind constraint, where either the source or the target domain cannot observe the data in the other domain, but data from both domains are used for training.
no code implementations • 25 Sep 2019 • Jun-Gi Jang, Chun Quan, Hyun Dong Lee, U Kang
By exploiting the knowledge of a trained standard model and carefully determining the order of depthwise separable convolution via GEP, FALCON achieves accuracy close to that of the trained standard model.
no code implementations • 22 Aug 2019 • Hyunsik Jeon, Bonhun Koo, U Kang
Given a sparse rating matrix and an auxiliary matrix of users or items, how can we accurately predict missing ratings considering different data contexts of entities?
no code implementations • 7 Feb 2018 • Mauro Scanagatta, Giorgio Corani, Marco Zaffalon, Jaemin Yoo, U Kang
We present k-MAX, a novel anytime algorithm for this task that scales to thousands of variables.