no code implementations • ECCV 2020 • Yujun Cai, Lin Huang, Yiwei Wang, Tat-Jen Cham, Jianfei Cai, Junsong Yuan, Jun Liu, Xu Yang, Yiheng Zhu, Xiaohui Shen, Ding Liu, Jing Liu, Nadia Magnenat Thalmann
Lastly, to incorporate a general motion space for high-quality prediction, we build a memory-based dictionary that preserves the global motion patterns in the training data to guide the predictions.
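A minimal sketch of how such a memory-based dictionary could be read, assuming an attention-style lookup over learned keys and stored motion patterns; the function and variable names (`read_motion_memory`, `keys`, `values`) are illustrative and not taken from the paper.

```python
import numpy as np

def read_motion_memory(query, keys, values, temperature=1.0):
    """Attention-style read from a motion-pattern dictionary.

    query:  (d,)   encoding of the observed motion
    keys:   (m, d) learned dictionary keys
    values: (m, d) stored global motion patterns
    Returns a convex combination of the stored patterns.
    """
    scores = keys @ query / temperature      # similarity to each dictionary entry
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over dictionary slots
    return weights @ values                  # retrieved motion prior

# toy usage
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=16), rng.normal(size=(32, 16)), rng.normal(size=(32, 16))
prior = read_motion_memory(q, K, V)
print(prior.shape)  # (16,)
```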
no code implementations • 24 May 2023 • Fei Wang, Wenjie Mo, Yiwei Wang, Wenxuan Zhou, Muhao Chen
Meanwhile, our in-context intervention effectively reduces the knowledge conflicts between parametric knowledge and contextual knowledge in GPT-3.5 and improves the F1 score by 9.14 points on a challenging test set derived from Re-TACRED.
1 code implementation • 22 May 2023 • Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen
In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context.
1 code implementation • 29 Nov 2022 • Yuxuan Liang, Yutong Xia, Songyu Ke, Yiwei Wang, Qingsong Wen, Junbo Zhang, Yu Zheng, Roger Zimmermann
Air pollution is a crucial issue that affects human health and livelihoods, and it is one of the barriers to economic and social growth.
no code implementations • 17 Sep 2022 • Yiwei Wang, Bryan Hooi, Yozen Liu, Tong Zhao, Zhichun Guo, Neil Shah
However, HadamardMLP lacks scalability for retrieving the top-scoring neighbors on large graphs since, to the best of our knowledge, no algorithm exists that retrieves the top-scoring neighbors for HadamardMLP decoders in sublinear complexity.
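A minimal sketch of a Hadamard-product MLP link decoder, illustrating why retrieving the top-scoring neighbors of a query node requires scoring every candidate (one decoder evaluation per node, i.e. linear in the graph size); the class name and dimensions are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HadamardMLPDecoder(nn.Module):
    """Scores a link (u, v) from the element-wise product of the node embeddings."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z_u, z_v):
        return self.mlp(z_u * z_v).squeeze(-1)   # Hadamard product, then MLP

# Retrieving the top-k neighbors of one query node requires scoring it
# against every candidate, i.e. O(N) decoder evaluations:
N, d = 10_000, 32
Z = torch.randn(N, d)
dec = HadamardMLPDecoder(d)
query = Z[0].expand(N, d)          # broadcast the query embedding over all candidates
scores = dec(query, Z)             # one score per candidate node
topk = scores.topk(10).indices
```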
no code implementations • Findings (NAACL) 2022 • Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Bryan Hooi
GRAPHCACHE aggregates the features from sentences in the whole dataset to learn global representations of properties, and uses them to augment the local features within individual sentences.
1 code implementation • NAACL 2022 • Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, Bryan Hooi
In this paper, we propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method that guides the RE models to focus on the main effects of textual context without losing the entity information.
no code implementations • Findings (NAACL) 2022 • Juncheng Liu, Zequn Sun, Bryan Hooi, Yiwei Wang, Dayiheng Liu, Baosong Yang, Xiaokui Xiao, Muhao Chen
We study dangling-aware entity alignment in knowledge graphs (KGs), which is an underexplored but important problem.
no code implementations • 19 Apr 2022 • Justin Baker, Hedi Xia, Yiwei Wang, Elena Cherkaev, Akil Narayan, Long Chen, Jack Xin, Andrea L. Bertozzi, Stanley J. Osher, Bao Wang
Learning neural ODEs often requires solving very stiff ODE systems, primarily using explicit adaptive step size ODE solvers.
1 code implementation • NeurIPS 2021 • Juncheng Liu, Kenji Kawaguchi, Bryan Hooi, Yiwei Wang, Xiaokui Xiao
Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN), to efficiently capture very long-range dependencies.
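A minimal sketch of an implicit (fixed-point) graph layer iterated to convergence, which conveys the "infinite depth" idea under a stated contraction assumption; EIGNN itself derives its own tractable solution, so this is only a generic illustration.

```python
import numpy as np

def implicit_graph_layer(A_hat, X, W, gamma=0.8, tol=1e-6, max_iter=1000):
    """Iterate Z <- gamma * A_hat @ Z @ W + X to a fixed point.

    A_hat: (n, n) normalized adjacency; X: (n, d) input features;
    W: (d, d) weight scaled so the map is a contraction.
    The fixed point mixes information from arbitrarily distant nodes.
    """
    Z = X.copy()
    for _ in range(max_iter):
        Z_next = gamma * A_hat @ Z @ W + X
        if np.linalg.norm(Z_next - Z) < tol:
            break
        Z = Z_next
    return Z

# toy usage on a random undirected graph
n, d = 50, 8
rng = np.random.default_rng(0)
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                                # undirected adjacency
deg = np.maximum(A.sum(1), 1.0)
A_hat = A / np.sqrt(deg[:, None] * deg[None, :])              # symmetric normalization
W = rng.normal(size=(d, d)); W *= 0.9 / np.linalg.norm(W, 2)  # enforce contraction
Z = implicit_graph_layer(A_hat, rng.normal(size=(n, d)), W)
print(Z.shape)  # (50, 8)
```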
no code implementations • 18 Dec 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Bryan Hooi
In this work, we propose the TNS (Time-aware Neighbor Sampling) method: TNS learns from temporal information to provide an adaptive receptive neighborhood for every node at any time.
no code implementations • NeurIPS 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Siddharth Bhatia, Bryan Hooi
To address this issue, our idea is to transform the temporal graphs using data augmentation (DA) with adaptive magnitudes, so as to effectively augment the input features and preserve the essential semantic information.
no code implementations • 1 Dec 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Wei Wang, Henghui Ding, Muhao Chen, Jing Tang, Bryan Hooi
Representing a label distribution as a one-hot vector is a common practice in training node classification models.
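For concreteness, a small sketch contrasting the one-hot convention with a generic smoothed label distribution (standard label smoothing, not the method proposed in this paper):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Standard one-hot targets: all probability mass on the observed class."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def label_smoothing(labels, num_classes, eps=0.1):
    """Generic smoothed alternative: spread eps of the mass uniformly over all classes."""
    return one_hot(labels, num_classes) * (1.0 - eps) + eps / num_classes

y = np.array([0, 2, 1])
print(one_hot(y, 3))
print(label_smoothing(y, 3))
```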
no code implementations • 21 Nov 2021 • Yindong Chen, Yiwei Wang, Lulu Kang, Chun Liu
We propose a novel deterministic sampling method to approximate a target distribution $\rho^*$ by minimizing the kernel discrepancy, also known as the Maximum Mean Discrepancy (MMD).
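A minimal sketch of the quantity being minimized, assuming an RBF kernel: the (biased) squared-MMD estimate between a particle set and samples from the target distribution; the bandwidth and estimator choice are illustrative simplifications.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def squared_mmd(X, Y, bandwidth=1.0):
    """Biased estimator of MMD^2 between particle set X and target samples Y."""
    Kxx = rbf_kernel(X, X, bandwidth).mean()
    Kyy = rbf_kernel(Y, Y, bandwidth).mean()
    Kxy = rbf_kernel(X, Y, bandwidth).mean()
    return Kxx + Kyy - 2 * Kxy

rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 2))          # candidate particles to be optimized
target = rng.normal(loc=2.0, size=(500, 2))   # samples from the target rho*
print(squared_mmd(particles, target))
```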
1 code implementation • 29 Jun 2021 • Siddharth Bhatia, Yiwei Wang, Bryan Hooi, Tanmoy Chakraborty
Specifically, the generative model learns to approximate the distribution of anomalous samples from the candidate set of graph snapshots, and the discriminative model detects whether the sampled snapshot is from the ground-truth or not.
no code implementations • CVPR 2021 • Yuxing Tang, Zhenjie Cao, Yanbo Zhang, Zhicheng Yang, Zongcheng Ji, Yiwei Wang, Mei Han, Jie Ma, Jing Xiao, Peng Chang
Starting with a fully supervised model trained on the data with pixel-level masks, the proposed framework iteratively refines the model itself using the entire weakly labeled data (image-level soft label) in a self-training fashion.
1 code implementation • 1 Jun 2021 • Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Bryan Hooi
In this work, we propose the Mixup methods for two fundamental tasks in graph learning: node and graph classification.
Ranked #15 on Node Classification on Pubmed
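A minimal sketch of what mixup looks like on node features and one-hot labels, assuming PyTorch; the graph-specific construction in the paper above is not reproduced here.

```python
import torch

def node_mixup(x, y_onehot, alpha=0.2):
    """Mix node features and one-hot labels with a Beta-distributed coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

x = torch.randn(100, 16)                                              # node features
y = torch.nn.functional.one_hot(torch.randint(0, 7, (100,)), 7).float()
x_mix, y_mix = node_mixup(x, y)
```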
no code implementations • 27 Jan 2021 • Chun Liu, Yiwei Wang, Teng-Fei Zhang
In this paper, we study a new micro-macro model for a reactive polymeric fluid, which was recently derived in [Y. Wang, T.-F. Zhang, and C. Liu, J. Non-Newton.
Analysis of PDEs • 35A01, 35A15, 76A10, 76M30, 82D60
no code implementations • ICCV 2021 • Yujun Cai, Yiwei Wang, Yiheng Zhu, Tat-Jen Cham, Jianfei Cai, Junsong Yuan, Jun Liu, Chuanxia Zheng, Sijie Yan, Henghui Ding, Xiaohui Shen, Ding Liu, Nadia Magnenat Thalmann
Notably, by considering this problem as a conditional generation process, we estimate a parametric distribution of the missing regions based on the input conditions, from which we sample and synthesize the full motion series.
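A minimal sketch of one way such a conditional generation step could look, assuming the missing region is modeled as a diagonal Gaussian and sampled with the reparameterization trick; the module name and dimensions are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionalGaussianHead(nn.Module):
    """Predicts mean and log-variance of the missing motion given a condition code."""
    def __init__(self, cond_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(cond_dim, out_dim)
        self.logvar = nn.Linear(cond_dim, out_dim)

    def forward(self, cond):
        mu, logvar = self.mu(cond), self.logvar(cond)
        eps = torch.randn_like(mu)
        sample = mu + eps * torch.exp(0.5 * logvar)   # reparameterized sample
        return sample, mu, logvar

head = ConditionalGaussianHead(cond_dim=128, out_dim=51)   # e.g. 17 joints x 3 coords
cond = torch.randn(4, 128)                                 # encoded visible frames
missing, mu, logvar = head(cond)
```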
1 code implementation • 13 Dec 2020 • Juncheng Liu, Yiwei Wang, Bryan Hooi, Renchi Yang, Xiaokui Xiao
We argue that the representation power in unlabelled nodes can be useful for active learning and can further improve the performance of active learning for node classification.
no code implementations • 22 Sep 2020 • Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Bryan Hooi
We present a new method to regularize graph neural networks (GNNs) for better generalization in graph classification.
1 code implementation • 14 Apr 2020 • Yiwei Wang, Jiuhai Chen, Chun Liu, Lulu Kang
Using the EVI framework, we can derive many existing Particle-based Variational Inference (ParVI) methods, including the popular Stein Variational Gradient Descent (SVGD) approach.
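For reference, a minimal sketch of a standard SVGD update with an RBF kernel (one of the ParVI methods mentioned above); the fixed bandwidth and step size are illustrative simplifications, not the paper's EVI derivation.

```python
import numpy as np

def svgd_step(particles, grad_log_p, bandwidth=1.0, step=0.1):
    """One SVGD update: phi(x_i) = mean_j[ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    X = particles
    diff = X[:, None, :] - X[None, :, :]                 # diff[i, j] = x_i - x_j
    d2 = (diff ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))               # (n, n) RBF kernel matrix
    grad = grad_log_p(X)                                 # (n, d) score at each particle
    # kernel-weighted score term plus the repulsive term from the kernel gradient
    phi = (K @ grad + (K[:, :, None] * diff).sum(1) / bandwidth ** 2) / X.shape[0]
    return X + step * phi

# toy target: standard normal, so grad log p(x) = -x
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, size=(100, 2))
for _ in range(200):
    X = svgd_step(X, lambda x: -x)
print(X.mean(0))   # drifts toward 0
```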
1 code implementation • 28 Feb 2020 • Yuxuan Liang, Kun Ouyang, Yiwei Wang, Ye Liu, Junbo Zhang, Yu Zheng, David S. Rosenblum
This framework consists of three parts: 1) a local feature extraction module to learn representations for each region; 2) a global context module to extract global contextual priors and upsample them to generate the global features; and 3) a region-specific predictor based on tensor decomposition to provide customized predictions for each region, which is very parameter-efficient compared to previous methods.
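A minimal sketch of how a parameter-efficient, region-specific predictor based on a tensor decomposition might look, assuming each region's weights are a low-rank mixture of shared basis matrices; the class name and shapes are illustrative, not the paper's design.

```python
import torch
import torch.nn as nn

class RegionalTensorPredictor(nn.Module):
    """Region-specific linear predictors whose weights are a low-rank combination
    of shared basis matrices, so parameters grow with rank rather than regions."""
    def __init__(self, num_regions, in_dim, out_dim, rank=8):
        super().__init__()
        self.coeff = nn.Parameter(torch.randn(num_regions, rank))      # per-region mixing
        self.basis = nn.Parameter(torch.randn(rank, in_dim, out_dim))  # shared bases

    def forward(self, x):                                    # x: (num_regions, in_dim)
        W = torch.einsum('nr,rio->nio', self.coeff, self.basis)   # per-region weights
        return torch.einsum('ni,nio->no', x, W)

pred = RegionalTensorPredictor(num_regions=100, in_dim=32, out_dim=1)
out = pred(torch.randn(100, 32))    # one customized prediction per region, shape (100, 1)
```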
no code implementations • 28 Jan 2019 • Yi Ren, Steven Elliott, Yiwei Wang, Yezhou Yang, Wenlong Zhang
While intelligence of autonomous vehicles (AVs) has significantly advanced in recent years, accidents involving AVs suggest that these autonomous systems lack gracefulness in driving when interacting with human drivers.
Robotics • Computer Science and Game Theory