1 code implementation • 12 Jun 2025 • Kangwei Liu, Siyuan Cheng, Bozhong Tian, Xiaozhuan Liang, Yuyang Yin, Meng Han, Ningyu Zhang, Bryan Hooi, Xi Chen, Shumin Deng
In addition, we propose a knowledge-augmented baseline that integrates both human-annotated knowledge rules and implicit knowledge from large language models, enabling smaller models to achieve performance comparable to state-of-the-art LLMs.
1 code implementation • 3 Jun 2025 • Tri Cao, Bennett Lim, Yue Liu, Yuan Sui, Yuexin Li, Shumin Deng, Lin Lu, Nay Oo, Shuicheng Yan, Bryan Hooi
Each test case is a variant of a web platform, designed to be interactive, deployed in a realistic environment, and containing a visually embedded malicious prompt.
1 code implementation • 29 May 2025 • James Xu Zhao, Jimmy Z. J. Liu, Bryan Hooi, See-Kiong Ng
Large language models (LLMs) are widely used for long-form text generation.
1 code implementation • 26 May 2025 • Hui Chen, Miao Xiong, Yujie Lu, Wei Han, Ailin Deng, Yufei He, Jiaying Wu, Yibo Li, Yue Liu, Bryan Hooi
Recent advancements in AI agents have demonstrated their growing potential to drive and support scientific discovery.
1 code implementation • 26 May 2025 • Ruihan Gong, Yue Liu, Wenjie Qu, Mingzhe Du, Yufei He, Yingwei Ma, Yulin Chen, Xiang Liu, Yi Wen, Xinfeng Li, Ruidong Wang, Xinzhong Zhu, Bryan Hooi, Jiaheng Zhang
Inspired by UTT, we propose a new reasoning paradigm, termed Chain of Unconscious Thought (CoUT), to improve the token efficiency of LRMs by guiding them to mimic human unconscious thought and internalize reasoning processes.
no code implementations • 21 May 2025 • Jiaying Wu, Fanxiao Li, Min-Yen Kan, Bryan Hooi
The real-world impact of misinformation stems from the underlying misleading narratives that creators seek to convey.
1 code implementation • 16 May 2025 • Yue Liu, Shengfang Zhai, Mingzhe Du, Yulin Chen, Tri Cao, Hongcheng Gao, Cheng Wang, Xinfeng Li, Kun Wang, Junfeng Fang, Jiaheng Zhang, Bryan Hooi
Then, based on it, we cold-start our model's reasoning ability via SFT.
1 code implementation • 15 May 2025 • Zhiyuan Hu, Yibo Wang, Hanze Dong, Yuhui Xu, Amrita Saha, Caiming Xiong, Bryan Hooi, Junnan Li
Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning.
1 code implementation • ICML 2025 • Yuan Li, Jun Hu, Zemin Liu, Bryan Hooi, Jia Chen, Bingsheng He
To address this, Graph Condensation (GC) methods aim to compress large graphs into smaller, synthetic ones that are more manageable for GNN training.
no code implementations • 24 Apr 2025 • Cheng Wang, Yue Liu, Baolong Bi, Duzhen Zhang, Zhongzhi Li, Junfeng Fang, Bryan Hooi
Large Reasoning Models (LRMs) have exhibited extraordinary prowess in tasks like mathematics and coding, leveraging their advanced reasoning capabilities.
no code implementations • 22 Apr 2025 • Zhiyuan Hu, Shiyun Xiong, Yifan Zhang, See-Kiong Ng, Anh Tuan Luu, Bo An, Shuicheng Yan, Bryan Hooi
Recent advancements in visual language models (VLMs) have notably enhanced their capabilities in handling complex Graphical User Interface (GUI) interaction tasks.
1 code implementation • 21 Apr 2025 • Hongcheng Gao, Yue Liu, Yufei He, Longxu Dou, Chao Du, Zhijie Deng, Bryan Hooi, Min Lin, Tianyu Pang
This paper proposes a query-level meta-agent named FlowReasoner to automate the design of query-level multi-agent systems, i.e., one system per user query.
no code implementations • 10 Apr 2025 • Tianyi Wu, Zhiwei Xue, Yue Liu, Jiaheng Zhang, Bryan Hooi, See-Kiong Ng
Despite achieving promising attack success rates under dictionary-based evaluation, existing jailbreak attack methods fail to produce detailed content that satisfies the harmful request, leading to poor performance under GPT-based evaluation.
1 code implementation • 3 Apr 2025 • Sifan Li, Yujun Cai, Bryan Hooi, Nanyun Peng, Yiwei Wang
Traditional Chinese Medicine (TCM) has seen increasing adoption in healthcare, with specialized Large Language Models (LLMs) emerging to support clinical applications.
no code implementations • 2 Apr 2025 • Zhaochen Wang, Yujun Cai, Zi Huang, Bryan Hooi, Yiwei Wang, Ming-Hsuan Yang
Vision-language models (VLMs) have advanced rapidly in processing multimodal information, but their ability to reconcile conflicting signals across modalities remains underexplored.
no code implementations • 1 Apr 2025 • Chunxue Xu, Yiwei Wang, Bryan Hooi, Yujun Cai, Songze Li
Visual Language Models (VLMs) have become foundational models for document understanding tasks, widely used in the processing of complex multimodal documents across domains such as finance, law, and academia.
no code implementations • 31 Mar 2025 • Nuo Chen, Zhiyuan Hu, Qingyun Zou, Jiaying Wu, Qian Wang, Bryan Hooi, Bingsheng He
The rise of Large Language Models (LLMs) as evaluators offers a scalable alternative to human annotation, yet existing Supervised Fine-Tuning (SFT) approaches for judges often fall short in domains requiring complex reasoning.
1 code implementation • 29 Mar 2025 • Yue Liu, Jiaying Wu, Yufei He, Hongcheng Gao, Hongyu Chen, Baolong Bi, Jiaheng Zhang, Zhiqi Huang, Bryan Hooi
Large Reasoning Models (LRMs) significantly improve the reasoning ability of Large Language Models (LLMs) by learning to reason, exhibiting promising performance in complex task-solving.
no code implementations • 27 Mar 2025 • Cheng Wang, Yiwei Wang, Yujun Cai, Bryan Hooi
Retrieval-augmented generation (RAG) systems enhance large language models by incorporating external knowledge, addressing issues like outdated internal knowledge and hallucination.
no code implementations • 25 Mar 2025 • Yu Cui, Bryan Hooi, Yujun Cai, Yiwei Wang
Recent reasoning large language models (LLMs) have demonstrated remarkable improvements in mathematical reasoning capabilities through long Chain-of-Thought.
1 code implementation • 25 Mar 2025 • Yuan Li, Jun Hu, Jiaxin Jiang, Zemin Liu, Bryan Hooi, Bingsheng He
Recent advances in graph learning have paved the way for innovative retrieval-augmented generation (RAG) systems that leverage the inherent relational structures in graph data.
Ranked #1 on Modality completion on Amazon Baby
no code implementations • 24 Mar 2025 • Wenhao You, Bryan Hooi, Yiwei Wang, Youke Wang, Zong Ke, Ming-Hsuan Yang, Zi Huang, Yujun Cai
While safety mechanisms have significantly progressed in filtering harmful text inputs, MLLMs remain vulnerable to multimodal jailbreaks that exploit their cross-modal reasoning capabilities.
no code implementations • 14 Mar 2025 • Shuyang Hao, Yiwei Wang, Bryan Hooi, Jun Liu, Muhao Chen, Zi Huang, Yujun Cai
However, we identify a critical limitation: not every adversarial optimization step leads to a positive outcome, and indiscriminately accepting optimization results at each step may reduce the overall attack success rate.
1 code implementation • CVPR 2025 • Ailin Deng, Tri Cao, Zhirui Chen, Bryan Hooi
Vision-Language Models (VLMs) excel in integrating visual and textual information for vision-centric tasks, but their handling of inconsistencies between modalities is underexplored.
no code implementations • 27 Feb 2025 • Yuan Sui, Yufei He, Tri Cao, Simeng Han, Bryan Hooi
Large Language Models (LLMs) increasingly rely on prolonged reasoning chains to solve complex tasks.
no code implementations • 20 Feb 2025 • Jiaxi Li, Yiwei Wang, Kai Zhang, Yujun Cai, Bryan Hooi, Nanyun Peng, Kai-Wei Chang, Jin Lu
Large language models (LLMs) have been widely adopted in various downstream task domains.
no code implementations • 17 Feb 2025 • Tianyi Wu, Jingwei Ni, Bryan Hooi, Jiaheng Zhang, Elliott Ash, See-Kiong Ng, Mrinmaya Sachan, Markus Leippold
Instruction Fine-tuning (IFT) can enhance the helpfulness of Large Language Models (LLMs), but it may lower their truthfulness.
1 code implementation • 16 Feb 2025 • Haoming Xu, Ningyuan Zhao, Liming Yang, Sendong Zhao, Shumin Deng, Mengru Wang, Bryan Hooi, Nay Oo, Huajun Chen, Ningyu Zhang
Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities.
1 code implementation • 16 Feb 2025 • Yufei He, Yuexin Li, Jiaying Wu, Yuan Sui, Yulin Chen, Bryan Hooi
As large language models (LLMs) continue to evolve, ensuring their alignment with human goals and values remains a pressing challenge.
no code implementations • 12 Feb 2025 • Zhen Xiong, Yujun Cai, Bryan Hooi, Nanyun Peng, Zhecheng Li, Yiwei Wang
Large Language Models (LLMs) have demonstrated strong generalization capabilities across a wide range of natural language processing (NLP) tasks.
no code implementations • 5 Feb 2025 • Wenhao You, Bryan Hooi, Yiwei Wang, Euijin Choo, Ming-Hsuan Yang, Junsong Yuan, Zi Huang, Yujun Cai
Recent advancements in diffusion models have driven the growth of text-guided image editing tools, enabling precise and iterative modifications of synthesized content.
1 code implementation • 2 Feb 2025 • Yufei He, Yuan Sui, Xiaoxin He, Yue Liu, Yifei Sun, Bryan Hooi
Multimodal graphs (MMGs) represent such graphs where each node is associated with features from different modalities, while the edges capture the relationships between these entities.
1 code implementation • 30 Jan 2025 • Yue Liu, Hongcheng Gao, Shengfang Zhai, Jun Xia, Tianyi Wu, Zhiwei Xue, Yulin Chen, Kenji Kawaguchi, Jiaheng Zhang, Bryan Hooi
Then, we introduce reasoning SFT to unlock the reasoning capability of guard models.
1 code implementation • 28 Jan 2025 • Jinlan Fu, Shenzhen Huangfu, Hao Fei, Xiaoyu Shen, Bryan Hooi, Xipeng Qiu, See-Kiong Ng
To address these challenges, we propose Cross-modal Hierarchical Direct Preference Optimization (CHiP).
no code implementations • 15 Jan 2025 • Adam Goodge, Wee Siong Ng, Bryan Hooi, See Kiong Ng
Foundation models have revolutionized artificial intelligence, setting new benchmarks in performance and enabling transformative capabilities across a wide range of vision and language tasks.
1 code implementation • 15 Jan 2025 • Zhi Zheng, Zhuoliang Xie, Zhenkun Wang, Bryan Hooi
Handcrafting heuristics for solving complex optimization tasks (e.g., route planning and task allocation) is a common practice but requires extensive domain knowledge.
1 code implementation • 18 Dec 2024 • Jun Hu, Bryan Hooi, Bingsheng He, Yinwei Wei
Our results indicate that the optimal $K$ for certain modalities on specific datasets can be as low as 1 or 2, which may restrict the GNNs' capacity to capture global information.
Ranked #1 on Multi-modal Recommendation on Amazon Clothing
no code implementations • CVPR 2025 • Shuyang Hao, Bryan Hooi, Jun Liu, Kai-Wei Chang, Zi Huang, Yujun Cai
Despite inheriting security measures from underlying language models, Vision-Language Models (VLMs) may still be vulnerable to safety alignment issues.
1 code implementation • 27 Nov 2024 • Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang
Recent studies reveal that while LLMs can detect unanswerable questions, they struggle to assist users in reformulating these questions.
no code implementations • 21 Nov 2024 • Tri Cao, Minh-Huy Trinh, Ailin Deng, Quoc-Nam Nguyen, Khoa Duong, Ngai-Man Cheung, Bryan Hooi
However, existing models primarily operate in a binary setting, and the anomaly scores they produce are usually based on the deviation of data points from normal data, which may not accurately reflect practical severity.
no code implementations • 16 Nov 2024 • Wei Zhuo, Zemin Liu, Bryan Hooi, Bingsheng He, Guang Tan, Rizal Fathony, Jia Chen
Label imbalance and homophily-heterophily mixture are the fundamental problems encountered when applying Graph Neural Networks (GNNs) to Graph Fraud Detection (GFD) tasks.
no code implementations • 26 Oct 2024 • Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Naifan Cheung, Nanyun Peng, Kai-Wei Chang
To answer this question, we fully explore the potential of large language models on the cross-lingual summarization task for low-resource languages through our four-step zero-shot method: Summarization, Improvement, Translation, and Refinement (SITR), with correspondingly designed prompts.
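The four chained prompting stages of SITR can be sketched as a simple sequential pipeline. This is a generic sketch, not the paper's exact prompts: the stage templates and the `call_llm` stand-in below are assumptions, and `call_llm` must be replaced with a real LLM API call.

```python
# Four-step chained prompting in the spirit of SITR:
# each stage's output becomes the next stage's input.
STEPS = [
    ("summarize", "Summarize the following document:\n{text}"),
    ("improve",   "Improve this summary for fluency and coverage:\n{text}"),
    ("translate", "Translate this summary into {lang}:\n{text}"),
    ("refine",    "Refine this {lang} summary for naturalness:\n{text}"),
]

def call_llm(prompt: str) -> str:
    # Placeholder stub: echoes the input text back.
    # Replace with an actual chat-completion endpoint.
    return prompt.splitlines()[-1]

def sitr(document: str, target_lang: str) -> str:
    """Run the four prompting stages sequentially."""
    text = document
    for _name, template in STEPS:
        text = call_llm(template.format(text=text, lang=target_lang))
    return text
```

With the echo stub, the pipeline simply passes the text through all four stages; with a real model, each call transforms it.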
no code implementations • 26 Oct 2024 • Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Zhen Xiong, Nanyun Peng, Kai-Wei Chang
Text classification involves categorizing a given text, such as determining its sentiment or identifying harmful content.
no code implementations • 10 Oct 2024 • Yuan Sui, Yufei He, Zifeng Ding, Bryan Hooi
Recent works integrating Knowledge Graphs (KGs) have led to promising improvements in enhancing reasoning accuracy of Large Language Models (LLMs).
2 code implementations • 2 Oct 2024 • Yue Liu, Xiaoxin He, Miao Xiong, Jinlan Fu, Shumin Deng, Bryan Hooi
Second, we verify LLMs' strong ability to perform the text-flipping task, and then develop four variants that guide LLMs to denoise, understand, and execute harmful behaviors accurately.
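The text-flipping operations referenced above can be illustrated with two elementary transforms. This is an illustrative sketch only; the paper's actual flipping variants may differ from these two.

```python
def flip_chars(text: str) -> str:
    """Reverse the character order of the entire string."""
    return text[::-1]

def flip_words(text: str) -> str:
    """Reverse the word order while keeping each word intact."""
    return " ".join(text.split()[::-1])
```

Either transform is trivially invertible, which is what lets a capable LLM recover ("denoise") the original instruction from its flipped form.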
no code implementations • 26 Sep 2024 • Shen Li, Jianqing Xu, Jiaying Wu, Miao Xiong, Ailin Deng, Jiazhen Ji, Yuge Huang, Wenjie Feng, Shouhong Ding, Bryan Hooi
This equivalence motivates an ID-preserving sampling algorithm, which operates over an adjusted gradient vector field, enabling the generation of fake face recognition datasets that approximate the distribution of real-world faces.
1 code implementation • 5 Sep 2024 • Cheng Wang, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang
The training data in large language models is key to their success, but it also presents privacy and security risks, as it may contain sensitive information.
1 code implementation • 31 Aug 2024 • Zhiyuan Hu, Yuliang Liu, Jinman Zhao, Suyuchen Wang, Yan Wang, Wei Shen, Qing Gu, Anh Tuan Luu, See-Kiong Ng, Zhiwei Jiang, Bryan Hooi
Large language models (LLMs) face significant challenges in handling long-context tasks because of their limited effective context window size during pretraining, which restricts their ability to generalize over extended sequences.
1 code implementation • 12 Aug 2024 • Jehyun Lee, Peiyuan Lim, Bryan Hooi, Dinil Mon Divakaran
In this work, we take steps to study the efficacy of large language models (LLMs), in particular the multimodal LLMs, in detecting phishing webpages.
no code implementations • 24 Jul 2024 • Adam Goodge, Bryan Hooi, Wee Siong Ng
Contrastive Language-Image Pre-training (CLIP) achieves remarkable performance in various downstream tasks through the alignment of image and text input embeddings and holds great promise for anomaly detection.
Ranked #3 on Anomaly Detection on One-class CIFAR-10
no code implementations • 4 Jul 2024 • Yufei He, Zhenyu Hou, Yukuo Cen, Feng He, Xu Cheng, Bryan Hooi
Extensive experiments demonstrate that our framework can perform pre-training on real-world web-scale graphs with over 540 million nodes and 12 billion edges, and that it generalizes effectively to unseen new graphs with different downstream tasks.
1 code implementation • 19 Jun 2024 • Rishabh Anand, Chaitanya K. Joshi, Alex Morehead, Arian R. Jamasb, Charles Harris, Simon V. Mathis, Kieran Didi, Rex Ying, Bryan Hooi, Pietro Liò
We introduce RNA-FrameFlow, the first generative model for 3D RNA backbone design.
no code implementations • 7 Jun 2024 • Juncheng Liu, Chenghao Liu, Gerald Woo, Yiwei Wang, Bryan Hooi, Caiming Xiong, Doyen Sahoo
However, existing Transformer models often fall short of capturing intricate dependencies across both the variate and temporal dimensions in MTS data.
no code implementations • 22 May 2024 • Yuan Sui, Yufei He, Nian Liu, Xiaoxin He, Kun Wang, Bryan Hooi
A distinctive feature of our approach is its blend of natural language planning with beam search to optimize the selection of reasoning paths.
1 code implementation • 4 Mar 2024 • Yuexin Li, Chengyu Huang, Shumin Deng, Mei Lin Lock, Tri Cao, Nay Oo, Hoon Wei Lim, Bryan Hooi
Phishing attacks have inflicted substantial losses on individuals and businesses alike, necessitating the development of robust and efficient automated phishing detection approaches.
2 code implementations • 23 Feb 2024 • Ailin Deng, Zhirui Chen, Bryan Hooi
Large Vision-Language Models (LVLMs) are susceptible to object hallucinations, an issue in which their generated text contains non-existent objects, greatly limiting their reliability and practicality.
1 code implementation • 21 Feb 2024 • Yufei He, Yuan Sui, Xiaoxin He, Bryan Hooi
However, graph learning has predominantly focused on single-graph models, tailored to specific tasks or datasets, lacking the ability to transfer learned knowledge to different domains.
1 code implementation • 19 Feb 2024 • Jihai Zhang, Xiang Lan, Xiaoye Qu, Yu Cheng, Mengling Feng, Bryan Hooi
Self-Supervised Contrastive Learning has proven effective in deriving high-quality representations from unlabeled data.
2 code implementations • 12 Feb 2024 • Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V. Chawla, Thomas Laurent, Yann Lecun, Xavier Bresson, Bryan Hooi
Given a graph with textual attributes, we enable users to `chat with their graph': that is, to ask questions about the graph using a conversational interface.
1 code implementation • 5 Feb 2024 • Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, Bryan Hooi
In the face of uncertainty, the ability to *seek information* is of fundamental importance.
no code implementations • 30 Nov 2023 • Yifan Zhang, Bryan Hooi
The learned adaptors empower these diffusion models to generate high-quality images in just a single step.
1 code implementation • 15 Nov 2023 • Shumin Deng, Ningyu Zhang, Nay Oo, Bryan Hooi
Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have broadened the scope for improving multi-step reasoning capabilities.
1 code implementation • 23 Oct 2023 • Jun Hu, Bryan Hooi, Bingsheng He
To achieve low information loss, we introduce a Relation-wise Neighbor Collection component with an Even-odd Propagation Scheme, which aims to collect information from neighbors in a finer-grained way.
Ranked #1 on Heterogeneous Node Classification on OAG-L1-Field
1 code implementation • 20 Oct 2023 • Yiwei Wang, Yujun Cai, Muhao Chen, Yuxuan Liang, Bryan Hooi
We have two main findings: i) ChatGPT's decision is sensitive to the order of labels in the prompt; ii) ChatGPT is clearly more likely to select labels at earlier positions as the answer.
no code implementations • 16 Oct 2023 • Haoran Li, Yulin Chen, Jinglong Luo, Jiecong Wang, Hao Peng, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, Zenglin Xu, Bryan Hooi, Yangqiu Song
The advancement of large language models (LLMs) has significantly enhanced the ability to effectively tackle various downstream NLP tasks and unify these tasks into generative pipelines.
1 code implementation • 16 Oct 2023 • Jiaying Wu, Jiafeng Guo, Bryan Hooi
To address this, we introduce SheepDog, a style-robust fake news detector that prioritizes content over style in determining news veracity.
1 code implementation • 15 Oct 2023 • Xu Liu, Junfeng Hu, Yuan Li, Shizhe Diao, Yuxuan Liang, Bryan Hooi, Roger Zimmermann
To address these issues, we propose UniTime for effective cross-domain time series learning.
Ranked #5 on Time Series Forecasting on ETTh1 (336) Multivariate
1 code implementation • 11 Oct 2023 • Minji Yoon, Jing Yu Koh, Bryan Hooi, Ruslan Salakhutdinov
We study three research questions raised by MMGL: (1) how can we infuse information from multiple neighbors into the pretrained LMs while avoiding scalability issues?
1 code implementation • 3 Oct 2023 • Jintian Zhang, Xin Xu, Ningyu Zhang, Ruibo Liu, Bryan Hooi, Shumin Deng
This paper probes the collaboration mechanisms among contemporary NLP systems by melding practical experiments with theoretical insights.
1 code implementation • 28 Sep 2023 • Jiaying Wu, Shen Li, Ailin Deng, Miao Xiong, Bryan Hooi
Despite considerable advances in automated fake news detection, due to the timely nature of news, it remains a critical open question how to effectively predict the veracity of news articles based on limited fact-checks.
no code implementations • 16 Sep 2023 • Zhiyuan Hu, Yue Feng, Yang Deng, Zekun Li, See-Kiong Ng, Anh Tuan Luu, Bryan Hooi
Recently, the development of large language models (LLMs) has significantly enhanced question answering and dialogue generation, making them increasingly popular in practical scenarios.
1 code implementation • 26 Aug 2023 • Zemin Liu, Yuan Li, Nan Chen, Qian Wang, Bryan Hooi, Bingsheng He
However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce, thereby leading to biased learning outcomes.
1 code implementation • 22 Jun 2023 • Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, Bryan Hooi
To better break down the problem, we define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency.
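The sampling and aggregation components described above can be sketched with a minimal self-consistency aggregator: sample several answers, then score confidence as agreement with the majority answer. This is one simple aggregation choice for illustration, not necessarily the exact technique the paper evaluates.

```python
from collections import Counter

def consistency_confidence(responses):
    """Given multiple sampled answers to the same question, return the
    majority answer and a confidence score equal to the fraction of
    samples that agree with it (self-consistency aggregation)."""
    counts = Counter(responses)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(responses)
```

For example, four samples of which three agree yield a confidence of 0.75 for the majority answer.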
1 code implementation • 16 Jun 2023 • Zhiyuan Hu, Yue Feng, Anh Tuan Luu, Bryan Hooi, Aldo Lipani
This approach uses an LLM as an annotation-free user simulator to assess dialogue responses, combining it with smaller fine-tuned end-to-end TOD models.
1 code implementation • NeurIPS 2023 • Xu Liu, Yutong Xia, Yuxuan Liang, Junfeng Hu, Yiwei Wang, Lei Bai, Chao Huang, Zhenguang Liu, Bryan Hooi, Roger Zimmermann
To mitigate these limitations, we introduce the LargeST benchmark dataset.
1 code implementation • 14 Jun 2023 • Zhiyuan Hu, Chumin Liu, Yue Feng, Anh Tuan Luu, Bryan Hooi
Controllable text generation is a challenging and meaningful field in natural language generation (NLG).
1 code implementation • NeurIPS 2023 • Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, Bryan Hooi
We examine the problem over 504 pretrained ImageNet models and observe that: 1) Proximity bias exists across a wide variety of model architectures and sizes; 2) Transformer-based models are relatively more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after performing popular calibration algorithms like temperature scaling; 4) Models tend to overfit more heavily on low proximity samples than on high proximity samples.
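Temperature scaling, the calibration baseline mentioned in finding 3), fits a single scalar T that rescales logits to minimize negative log-likelihood on held-out data. The sketch below uses a dependency-free grid search for clarity; in practice T is usually fit with an optimizer such as LBFGS.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature T > 0 minimizing the negative log-likelihood
    of softmax(logits / T) against the true labels (temperature scaling)."""
    best_t, best_nll = 1.0, np.inf
    n = len(labels)
    for t in grid:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(n), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

Because T is a single global scalar, it cannot correct a bias that varies with sample proximity, which is consistent with the finding that proximity bias persists after this calibration step.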
3 code implementations • 31 May 2023 • Xiaoxin He, Xavier Bresson, Thomas Laurent, Adam Perold, Yann Lecun, Bryan Hooi
With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs.
Ranked #3 on Node Property Prediction on ogbn-arxiv (using extra training data)
1 code implementation • 30 May 2023 • Yuwen Li, Miao Xiong, Bryan Hooi
Label errors have been found to be prevalent in popular text, vision, and audio datasets, which heavily influence the safe development and evaluation of machine learning algorithms.
1 code implementation • 23 May 2023 • Shumin Deng, Shengyu Mao, Ningyu Zhang, Bryan Hooi
Event-centric structured prediction involves predicting structured outputs of events.
1 code implementation • 22 May 2023 • Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen
In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context.
2 code implementations • 10 May 2023 • Mingqi Yang, Wenjie Feng, Yanming Shen, Bryan Hooi
Proposing an effective and flexible matrix to represent a graph is a fundamental challenge that has been explored from multiple perspectives, e.g., filtering in Graph Fourier Transforms.
Ranked #8 on Graph Regression on ZINC
no code implementations • 2 May 2023 • Ailin Deng, Miao Xiong, Bryan Hooi
To overcome this incoherence issue, we design a \emph{neighborhood agreement measure} between latent spaces and find that this agreement is surprisingly well-correlated with the reliability of a model's predictions.
1 code implementation • 17 Apr 2023 • Baixiang Huang, Bryan Hooi, Kai Shu
To bridge this gap, we have constructed a real-world graph-based Traffic Accident Prediction (TAP) data repository, along with two representative tasks: accident occurrence prediction and accident severity prediction.
1 code implementation • 25 Feb 2023 • Aashish Kolluri, Sarthak Choudhary, Bryan Hooi, Prateek Saxena
We present RETEXO, the first framework which eliminates the severe communication bottleneck in distributed GNN training while respecting any given data partitioning configuration.
1 code implementation • 6 Feb 2023 • Ailin Deng, Shen Li, Miao Xiong, Zhirui Chen, Bryan Hooi
Trustworthy machine learning is of primary importance to the practical deployment of deep learning models.
no code implementations • 30 Jan 2023 • Xu Liu, Yuxuan Liang, Chao Huang, Hengchang Hu, Yushi Cao, Bryan Hooi, Roger Zimmermann
Spatio-temporal graph neural networks (STGNN) have become the most popular solution to traffic forecasting.
no code implementations • CVPR 2023 • Jianqing Xu, Shen Li, Ailin Deng, Miao Xiong, Jiaying Wu, Jiaxiang Wu, Shouhong Ding, Bryan Hooi
Mean ensemble (i.e., averaging predictions from multiple models) is a commonly used technique in machine learning that improves on the performance of each individual model.
3 code implementations • 27 Dec 2022 • Xiaoxin He, Bryan Hooi, Thomas Laurent, Adam Perold, Yann Lecun, Xavier Bresson
First, they capture long-range dependency and mitigate the issue of over-squashing as demonstrated on Long Range Graph Benchmark and TreeNeighbourMatch datasets.
Ranked #6 on Graph Regression on Peptides-struct
1 code implementation • 29 Nov 2022 • Miao Xiong, Shen Li, Wenjie Feng, Ailin Deng, Jihai Zhang, Bryan Hooi
How do we know when the predictions made by a classifier can be trusted?
1 code implementation • NeurIPS 2023 • Yifan Zhang, Daquan Zhou, Bryan Hooi, Kai Wang, Jiashi Feng
Specifically, GIF conducts data imagination by optimizing the latent features of the seed data in the semantically meaningful space of the prior model, resulting in the creation of photo-realistic images with new content.
no code implementations • 24 Oct 2022 • Kaixin Wang, Kuangqi Zhou, Jiashi Feng, Bryan Hooi, Xinchao Wang
In Reinforcement Learning (RL), Laplacian Representation (LapRep) is a task-agnostic state representation that encodes the geometry of the environment.
1 code implementation • 15 Oct 2022 • Juncheng Liu, Bryan Hooi, Kenji Kawaguchi, Xiaokui Xiao
Recently, implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
1 code implementation • 30 Sep 2022 • Shumin Deng, Chengming Wang, Zhoubo Li, Ningyu Zhang, Zelin Dai, Hehong Chen, Feiyu Xiong, Ming Yan, Qiang Chen, Mosha Chen, Jiaoyan Chen, Jeff Z. Pan, Bryan Hooi, Huajun Chen
We release all the open resources (OpenBG benchmarks) derived from it for the community and report experimental results of KG-centric tasks.
no code implementations • 25 Sep 2022 • Nicholas Lim, Bryan Hooi, See-Kiong Ng, Yong Liang Goh
Sparsity of the User-POI matrix is a well established problem for next POI recommendation, which hinders effective learning of user preferences.
1 code implementation • 19 Sep 2022 • Jiaying Wu, Bryan Hooi
As social media becomes a hotbed for the spread of misinformation, the crucial task of rumor detection has witnessed promising advances fostered by open-source benchmark datasets.
no code implementations • 17 Sep 2022 • Yiwei Wang, Bryan Hooi, Yozen Liu, Tong Zhao, Zhichun Guo, Neil Shah
However, HadamardMLP lacks scalability for retrieving the top-scoring neighbors on large graphs, since, to the best of our knowledge, no algorithm exists to retrieve the top-scoring neighbors for HadamardMLP decoders in sublinear complexity.
no code implementations • 23 Aug 2022 • Shen Li, Bryan Hooi
Without exploiting any label information, the principal components recovered store the most informative elements in their \emph{leading} dimensions and leave the negligible in the \emph{trailing} ones, allowing for clear performance improvements of $5\%$-$10\%$ in downstream tasks.
no code implementations • 18 Aug 2022 • Adrien Benamira, Tristan Guérand, Thomas Peyrin, Trevor Yap, Bryan Hooi
We propose $\mathcal{T}$ruth $\mathcal{T}$able net ($\mathcal{TT}$net), a novel Convolutional Neural Network (CNN) architecture that addresses, by design, the open challenges of interpretability, formal verification, and logic gate conversion.
1 code implementation • 15 Jun 2022 • Adam Goodge, Bryan Hooi, See Kiong Ng, Wee Siong Ng
However, the anomaly scoring function is not adaptive to the natural variation in reconstruction error across the range of normal samples, which hinders their ability to detect real anomalies.
no code implementations • Findings (NAACL) 2022 • Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Bryan Hooi
GRAPHCACHE aggregates the features from sentences in the whole dataset to learn global representations of properties, and uses them to augment the local features within individual sentences.
1 code implementation • NAACL 2022 • Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, Bryan Hooi
In this paper, we propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method that guides the RE models to focus on the main effects of textual context without losing the entity information.
1 code implementation • 6 May 2022 • Aashish Kolluri, Teodora Baluta, Bryan Hooi, Prateek Saxena
In this paper, we present a new neural network architecture called LPGNet for training on graphs with privacy-sensitive edges.
no code implementations • Findings (NAACL) 2022 • Juncheng Liu, Zequn Sun, Bryan Hooi, Yiwei Wang, Dayiheng Liu, Baosong Yang, Xiaokui Xiao, Muhao Chen
We study dangling-aware entity alignment in knowledge graphs (KGs), which is an underexplored but important problem.
2 code implementations • 5 Apr 2022 • Jun Hu, Bryan Hooi, Shengsheng Qian, Quan Fang, Changsheng Xu
Based on a Markov process that trades off two types of distances, we present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models.
Ranked #4 on Multi-modal Recommendation on Amazon Sports
1 code implementation • NeurIPS 2021 • Juncheng Liu, Kenji Kawaguchi, Bryan Hooi, Yiwei Wang, Xiaokui Xiao
Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN), to efficiently capture very long-range dependencies.
3 code implementations • 16 Feb 2022 • Shumin Deng, Yubo Ma, Ningyu Zhang, Yixin Cao, Bryan Hooi
Information Extraction (IE) seeks to derive structured information from unstructured texts, often facing challenges in low-resource scenarios due to data scarcity and unseen classes.
no code implementations • 30 Jan 2022 • Kaixin Wang, Navdeep Kumar, Kuangqi Zhou, Bryan Hooi, Jiashi Feng, Shie Mannor
The key to this perspective is to decompose the value space, in a state-wise manner, into unions of hypersurfaces.
no code implementations • 18 Dec 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Bryan Hooi
In this work, we propose the TNS (Time-aware Neighbor Sampling) method: TNS learns from temporal information to provide an adaptive receptive neighborhood for every node at any time.
1 code implementation • 10 Dec 2021 • Adam Goodge, Bryan Hooi, See Kiong Ng, Wee Siong Ng
This allows us to introduce learnability into local outlier methods, in the form of a neural network, for greater flexibility and expressivity: specifically, we propose LUNAR, a novel, graph neural network-based anomaly detection method.
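The local-outlier signal that such a learnable scorer consumes can be illustrated by the standard k-nearest-neighbor distance features. This is a simplified sketch of the input featurization only, not LUNAR's actual graph neural network architecture.

```python
import numpy as np

def knn_distance_features(X, queries, k=5):
    """For each query point, return its sorted distances to the k nearest
    points in the reference set X. These per-point distance vectors are
    the kind of local-neighborhood input a learnable anomaly scorer
    (e.g., a small neural network) can be trained on."""
    feats = []
    for q in queries:
        d = np.sort(np.linalg.norm(X - q, axis=1))[:k]
        feats.append(d)
    return np.array(feats)
```

A point far from all reference data produces uniformly large distance features, while an inlier's features are small, giving the downstream network a flexible, learnable alternative to a fixed scoring rule.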
no code implementations • 2 Dec 2021 • Shen Li, Jianqing Xu, Bryan Hooi
This paper proposes a probabilistic contrastive loss function for self-supervised learning.
no code implementations • NeurIPS 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Siddharth Bhatia, Bryan Hooi
To address this issue, our idea is to transform the temporal graphs using data augmentation (DA) with adaptive magnitudes, so as to effectively augment the input features and preserve the essential semantic information.
no code implementations • 1 Dec 2021 • Yiwei Wang, Yujun Cai, Yuxuan Liang, Wei Wang, Henghui Ding, Muhao Chen, Jing Tang, Bryan Hooi
Representing a label distribution as a one-hot vector is a common practice in training node classification models.
no code implementations • 11 Nov 2021 • Shubhranshu Shekhar, Dhivya Eswaran, Bryan Hooi, Jonathan Elmer, Christos Faloutsos, Leman Akoglu
Given a cardiac-arrest patient being monitored in the ICU (intensive care unit) for brain activity, how can we predict their health outcomes as early as possible?
1 code implementation • NeurIPS 2021 • Koki Kawabata, Siddharth Bhatia, Rui Liu, Mohit Wadhwa, Bryan Hooi
In general, given a data stream of events with seasonal patterns that innovate over time, how can we effectively and efficiently forecast future events?
1 code implementation • 9 Oct 2021 • Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, Jiashi Feng
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution.
no code implementations • 29 Sep 2021 • Adrien Benamira, Thomas Peyrin, Bryan Hooi
Moreover, the corresponding SAT conversion method intrinsically leads to formulas with a large number of variables and clauses, impeding interpretability as well as formal verification scalability.
Explainable Artificial Intelligence (XAI), Explanation Generation, +1
1 code implementation • 26 Aug 2021 • Xu Liu, Yuxuan Liang, Chao Huang, Yu Zheng, Bryan Hooi, Roger Zimmermann
In view of this, one may ask: can we leverage the additional signals from contrastive learning to alleviate data scarcity, so as to benefit STG forecasting?
2 code implementations • 20 Jul 2021 • Yifan Zhang, Bryan Hooi, Lanqing Hong, Jiashi Feng
Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution.
Ranked #9 on Long-tail Learning on iNaturalist 2018
1 code implementation • 12 Jul 2021 • Kaixin Wang, Kuangqi Zhou, Qixin Zhang, Jie Shao, Bryan Hooi, Jiashi Feng
It enables learning high-quality Laplacian representations that faithfully approximate the ground truth.
1 code implementation • 29 Jun 2021 • Siddharth Bhatia, Yiwei Wang, Bryan Hooi, Tanmoy Chakraborty
Specifically, the generative model learns to approximate the distribution of anomalous samples from the candidate set of graph snapshots, and the discriminative model detects whether the sampled snapshot is from the ground-truth or not.
1 code implementation • CVPR 2021 • Shen Li, Jianqing Xu, Xiaqing Xu, Pengcheng Shen, Shaoxin Li, Bryan Hooi
Probabilistic Face Embeddings (PFE) is the first attempt to address this dilemma.
2 code implementations • 13 Jun 2021 • Ailin Deng, Bryan Hooi
Given high-dimensional time series data (e.g., sensor data), how can we detect anomalous events, such as system faults and attacks?
Ranked #8 on Unsupervised Anomaly Detection on SMAP
1 code implementation • 8 Jun 2021 • Siddharth Bhatia, Mohit Wadhwa, Kenji Kawaguchi, Neil Shah, Philip S. Yu, Bryan Hooi
This higher-order sketch has the useful property of preserving the dense subgraph structure (dense subgraphs in the input turn into dense submatrices in the data structure).
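The dense-submatrix property can be demonstrated with a tiny two-dimensional sketch in which an edge (u, v) is counted in cell (h(u), h(v)). This is an illustrative toy (the hash function and width `W` are arbitrary choices), not the paper's data structure:

```python
import numpy as np

W = 8  # sketch width (number of hash buckets); toy value

def h(node, seed=0x9E3779B9):
    # simple deterministic hash into [0, W)
    return (node * seed) % (2**32) % W

sketch = np.zeros((W, W), dtype=int)

def add_edge(u, v):
    """Count edge (u, v) in cell (h(u), h(v)).

    Because both endpoints are hashed, a dense subgraph on a few nodes
    concentrates its edge counts into a small submatrix of the sketch,
    which is the property dense-subgraph detectors exploit.
    """
    sketch[h(u), h(v)] += 1

# A near-clique on nodes {1, 2, 3} plus one background edge
for u in (1, 2, 3):
    for v in (1, 2, 3):
        if u != v:
            add_edge(u, v)
add_edge(10, 20)
```

The six clique edges land inside one small submatrix, while the background edge falls elsewhere.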
1 code implementation • 7 Jun 2021 • Siddharth Bhatia, Arjit Jain, Shivin Srivastava, Kenji Kawaguchi, Bryan Hooi
Given a stream of entries over time in a multi-dimensional data setting where concept drift is present, how can we detect anomalous activities?
1 code implementation • 1 Jun 2021 • Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Bryan Hooi
In this work, we propose the Mixup methods for two fundamental tasks in graph learning: node and graph classification.
Ranked #16 on Node Classification on Pubmed
2 code implementations • 4 Apr 2021 • Rui Liu, Siddharth Bhatia, Bryan Hooi
Isconna does not actively explore or maintain pattern snippets; it instead measures the consecutive presence and absence of edge records.
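Tracking "consecutive presence and absence" can be sketched with per-edge streak counters that reset whenever the pattern flips. This is a minimal stand-in for the idea, not Isconna's actual scoring machinery; all names are illustrative:

```python
from collections import defaultdict

class StreakCounter:
    """Track consecutive presence/absence of edges across timesteps.

    For each edge we keep the length of the current run of timesteps in
    which it was (or was not) observed; bursts and lulls show up as
    long streaks, which a stream scorer can flag.
    """
    def __init__(self):
        self.present_streak = defaultdict(int)
        self.absent_streak = defaultdict(int)

    def tick(self, observed_edges, all_edges):
        for e in all_edges:
            if e in observed_edges:
                self.present_streak[e] += 1
                self.absent_streak[e] = 0
            else:
                self.absent_streak[e] += 1
                self.present_streak[e] = 0

sc = StreakCounter()
edges = [("a", "b"), ("b", "c")]
for t in range(3):
    sc.tick({("a", "b")}, edges)   # ("a","b") present 3 ticks in a row
```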
1 code implementation • NeurIPS 2021 • Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng
In this paper, we investigate whether applying contrastive learning to fine-tuning would bring further benefits, and analytically find that optimizing the contrastive loss benefits both discriminative representation learning and model optimization during fine-tuning.
no code implementations • 1 Jan 2021 • Shen Li, Jianqing Xu, Xiaqing Xu, Pengcheng Shen, Shaoxin Li, Bryan Hooi
To address these issues, in this paper, we propose a novel framework for face uncertainty learning in hyperspherical space.
1 code implementation • 30 Dec 2020 • Shimiao Li, Amritanshu Pandey, Bryan Hooi, Christos Faloutsos, Larry Pileggi
Given sensor readings over time from a power grid, how can we accurately detect when an anomaly occurs?
1 code implementation • 13 Dec 2020 • Juncheng Liu, Yiwei Wang, Bryan Hooi, Renchi Yang, Xiaokui Xiao
We argue that the representation power of unlabelled nodes can be exploited to further improve active learning for node classification.
1 code implementation • 3 Dec 2020 • Nicholas Lim, Bryan Hooi, See-Kiong Ng, Xueou Wang, Yong Liang Goh, Renrong Weng, Rui Tan
Next destination recommendation is an important task in the transportation domain of taxi and ride-hailing services, where users are recommended with personalized destinations given their current origin location.
1 code implementation • 26 Nov 2020 • Minji Yoon, Bryan Hooi, Kijung Shin, Christos Faloutsos
This allows us to detect sudden changes in the importance of any node.
1 code implementation • 26 Nov 2020 • Minji Yoon, Théophile Gervet, Bryan Hooi, Christos Faloutsos
We first define a unified framework UNIFIEDGM that integrates various message-passing based graph algorithms, ranging from conventional algorithms like PageRank to graph neural networks.
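The idea of a unified message-passing framework can be sketched by factoring the loop (aggregate neighbor values, then combine with the current state) out of any particular algorithm; plugging in a damped combine rule recovers PageRank. The parameterization below is illustrative of this design space, not UNIFIEDGM's actual API:

```python
import numpy as np

def message_pass(adj, x, steps, combine):
    """Generic message-passing loop.

    combine(agg, x0) selects the algorithm: it maps the aggregated
    neighbor values and the initial state to the next state.
    """
    deg = adj.sum(axis=0, keepdims=True)
    trans = adj / np.maximum(deg, 1)    # column-stochastic for PageRank
    h = x.copy()
    for _ in range(steps):
        h = combine(trans @ h, x)
    return h

# PageRank as one point in the design space: damped aggregation
n = 3
adj = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # star centered at node 0
x = np.full((n, 1), 1.0 / n)
pr = message_pass(adj, x, steps=100,
                  combine=lambda agg, x0: 0.85 * agg + 0.15 * x0)
```

With the damping rule, the result is a probability vector, and the hub of the star receives the highest rank.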
no code implementations • 6 Oct 2020 • Nicholas Lim, Bryan Hooi, See-Kiong Ng, Xueou Wang, Yong Liang Goh, Renrong Weng, Jagannadan Varadarajan
Next Point-of-Interest (POI) recommendation is a longstanding problem across the domains of Location-Based Social Networks (LBSN) and transportation.
no code implementations • 28 Sep 2020 • Wesley Joon-Wie Tann, Ee-Chien Chang, Bryan Hooi
We introduce the problem of explaining graph generation, formulated as controlling the generative process to produce desired graphs with explainable structures.
no code implementations • 22 Sep 2020 • Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Bryan Hooi
We present a new method to regularize graph neural networks (GNNs) for better generalization in graph classification.
3 code implementations • 17 Sep 2020 • Siddharth Bhatia, Rui Liu, Bryan Hooi, Minji Yoon, Kijung Shin, Christos Faloutsos
Given a stream of graph edges from a dynamic graph, how can we assign anomaly scores to edges in an online manner, for the purpose of detecting unusual behavior, using constant time and memory?
1 code implementation • 17 Sep 2020 • Siddharth Bhatia, Arjit Jain, Bryan Hooi
Hence, in this work, we propose ExGAN, a GAN-based approach to generate realistic and extreme samples.
1 code implementation • 17 Sep 2020 • Siddharth Bhatia, Arjit Jain, Pan Li, Ritesh Kumar, Bryan Hooi
Given a stream of entries in a multi-aspect data setting, i.e., entries having multiple dimensions, how can we detect anomalous activities in an unsupervised manner?
Ranked #1 on Intrusion Detection on CIC-DDoS
2 code implementations • 12 Jun 2020 • Kuangqi Zhou, Yanfei Dong, Kaixin Wang, Wee Sun Lee, Bryan Hooi, Huan Xu, Jiashi Feng
In this work, we study performance degradation of GCNs by experimentally examining how stacking only TRANs or PROPs works.
no code implementations • 12 Jun 2020 • Manh Tuan Do, Se-eun Yoon, Bryan Hooi, Kijung Shin
Graphs have been utilized as a powerful tool to model pairwise relationships between people or objects.
Social and Information Networks; Physics and Society
no code implementations • 6 Jun 2020 • Wesley Joon-Wie Tann, Ee-Chien Chang, Bryan Hooi
Given an observed graph and some user-specified Markov model parameters, ShadowCast controls the conditions to generate desired graphs.
9 code implementations • 11 Nov 2019 • Siddharth Bhatia, Bryan Hooi, Minji Yoon, Kijung Shin, Christos Faloutsos
Given a stream of graph edges from a dynamic graph, how can we assign anomaly scores to edges in an online manner, for the purpose of detecting unusual behavior, using constant time and memory?
Ranked #1 on Anomaly Detection in Edge Streams on Darpa
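The constant-time scoring idea behind MIDAS can be sketched with two counters per edge, one for the current timestep (a) and one for the whole stream (s), combined into the chi-squared statistic (a - s/t)^2 * t^2 / (s * (t - 1)). The sketch below uses plain dictionaries for clarity; the real method hashes edges into count-min sketches so that total memory is constant, and the class name here is illustrative:

```python
from collections import defaultdict

class MidasLikeScorer:
    """Chi-squared anomaly score for a stream of graph edges."""

    def __init__(self):
        self.cur = defaultdict(int)     # counts in the current timestep
        self.total = defaultdict(int)   # counts over all timesteps
        self.t = 1

    def advance_time(self):
        self.t += 1
        self.cur.clear()

    def score(self, u, v):
        """Count the arriving edge and return its anomaly score."""
        e = (u, v)
        self.cur[e] += 1
        self.total[e] += 1
        a, s, t = self.cur[e], self.total[e], self.t
        if t == 1 or s == 0:
            return 0.0
        return (a - s / t) ** 2 * t ** 2 / (s * (t - 1))

m = MidasLikeScorer()
for _ in range(5):
    m.score("u", "v")               # steady traffic during t = 1
m.advance_time()
low = m.score("u", "v")             # near the expected rate: low score
burst = [m.score("x", "y") for _ in range(20)][-1]   # sudden burst at t = 2
```

An edge arriving at roughly its historical rate scores low, while a burst concentrated in one timestep scores much higher.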
2 code implementations • ICLR 2020 • Shen Li, Bryan Hooi, Gim Hee Lee
Yet, most deep generative models do not address the question of identifiability, and thus fail to deliver on the promise of the recovery of the true latent sources that generate the observations.
1 code implementation • 4 Feb 2018 • Kijung Shin, Bryan Hooi, Jisu Kim, Christos Faloutsos
Can we detect it when data are too large to fit in memory or even on a disk?
Databases; Distributed, Parallel, and Cluster Computing; Social and Information Networks; H.2.8
1 code implementation • 6 May 2017 • Shenghua Liu, Bryan Hooi, Christos Faloutsos
Hence, we propose HoloScope, which uses information from graph topology and temporal spikes to more accurately detect groups of fraudulent users.
Social and Information Networks
no code implementations • 30 Mar 2017 • Srijan Kumar, Bryan Hooi, Disha Makhija, Mohit Kumar, Christos Faloutsos, V. S. Subrahmanian
We propose three metrics: (i) the fairness of a user that quantifies how trustworthy the user is in rating the products, (ii) the reliability of a rating that measures how reliable the rating is, and (iii) the goodness of a product that measures the quality of the product.
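The three metrics are mutually recursive, so they can be computed by fixed-point iteration: a product's goodness is a reliability-weighted mean of its ratings, a rating's reliability depends on its author's fairness and its agreement with the product's goodness, and a user's fairness is the mean reliability of their ratings. The update rules and constants below are a simplified illustration in this spirit, not the paper's exact equations:

```python
def fairness_goodness(ratings, iters=50):
    """Fixed-point iteration over fairness, reliability, and goodness.

    ratings: list of (user, product, score) with score in [-1, 1].
    """
    users = {u for u, _, _ in ratings}
    products = {p for _, p, _ in ratings}
    fairness = {u: 1.0 for u in users}
    goodness = {p: 0.0 for p in products}
    reliability = {}
    for _ in range(iters):
        for p in products:
            rs = [(u, s) for u, q, s in ratings if q == p]
            num = sum(reliability.get((u, p), 1.0) * s for u, s in rs)
            den = sum(reliability.get((u, p), 1.0) for u, _ in rs)
            goodness[p] = num / den if den else 0.0
        for u, p, s in ratings:
            # agree with consensus (small |s - goodness|) -> more reliable
            reliability[(u, p)] = 0.5 * fairness[u] + 0.5 * (1 - abs(s - goodness[p]) / 2)
        for u in users:
            rels = [reliability[(v, p)] for v, p, _ in ratings if v == u]
            fairness[u] = sum(rels) / len(rels)
    return fairness, goodness

# Two users agree the product is good; one user disagrees with consensus
ratings = [("a", "p", 1.0), ("b", "p", 1.0), ("c", "p", -1.0)]
fairness, goodness = fairness_goodness(ratings)
```

The dissenting user ends up with lower fairness, and their rating pulls the product's goodness down less than an equally weighted average would.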
no code implementations • 19 Nov 2015 • Bryan Hooi, Neil Shah, Alex Beutel, Stephan Günnemann, Leman Akoglu, Mohit Kumar, Disha Makhija, Christos Faloutsos
To combine these 2 approaches, we formulate our Bayesian Inference for Rating Data (BIRD) model, a flexible Bayesian model of user rating behavior.