Search Results for author: Hongyu Ren

Found 30 papers, 23 papers with code

TimeGraphs: Graph-based Temporal Reasoning

no code implementations · 6 Jan 2024 · Paridhi Maheshwari, Hongyu Ren, Yanan Wang, Rok Sosic, Jure Leskovec

The results demonstrate both the robustness and the efficiency of TimeGraphs on a range of temporal reasoning tasks.

Zero-shot Generalization

MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration

1 code implementation · 14 Nov 2023 · Lin Xu, Zhiyuan Hu, Daquan Zhou, Hongyu Ren, Zhen Dong, Kurt Keutzer, See Kiong Ng, Jiashi Feng

Large Language Models (LLMs) have marked a significant advancement in the field of natural language processing, demonstrating exceptional capabilities in reasoning, tool usage, and memory.

Benchmarking · Language Modelling · +1

Approximate Answering of Graph Queries

no code implementations · 12 Aug 2023 · Michael Cochez, Dimitrios Alivanistos, Erik Arakelyan, Max Berrendorf, Daniel Daza, Mikhail Galkin, Pasquale Minervini, Mathias Niepert, Hongyu Ren

We will first provide an overview of the different query types that these methods can support and of the datasets typically used for evaluation, as well as insight into their limitations.

Knowledge Graphs · World Knowledge

Enabling tabular deep learning when $d \gg n$ with an auxiliary knowledge graph

no code implementations · 7 Jun 2023 · Camilo Ruiz, Hongyu Ren, Kexin Huang, Jure Leskovec

However, for tabular datasets with extremely high $d$-dimensional features but limited $n$ samples (i.e., $d \gg n$), machine learning models struggle to achieve strong performance due to the risk of overfitting.

Inductive Bias

PRODIGY: Enabling In-context Learning Over Graphs

no code implementations · NeurIPS 2023 · Qian Huang, Hongyu Ren, Peng Chen, Gregor Kržmanc, Daniel Zeng, Percy Liang, Jure Leskovec

In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters.

In-Context Learning · Knowledge Graphs

Neural Graph Reasoning: Complex Logical Query Answering Meets Graph Databases

1 code implementation · 26 Mar 2023 · Hongyu Ren, Mikhail Galkin, Michael Cochez, Zhaocheng Zhu, Jure Leskovec

Extending the idea of graph databases (graph DBs), a Neural Graph Database (NGDB) consists of a Neural Graph Storage and a Neural Graph Engine.

Link Prediction · Logical Reasoning · +1

Inductive Logical Query Answering in Knowledge Graphs

1 code implementation · 13 Oct 2022 · Mikhail Galkin, Zhaocheng Zhu, Hongyu Ren, Jian Tang

Exploring the efficiency-effectiveness trade-off, we find that the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method is able to answer complex queries in the inference-only regime without any training on queries and scales to graphs of millions of nodes.

Complex Query Answering · Entity Embeddings · +2

TripleE: Easy Domain Generalization via Episodic Replay

1 code implementation · 4 Oct 2022 · Xiaomeng Li, Hongyu Ren, Huifeng Yao, Ziwei Liu

In this paper, we propose TripleE, whose main idea is to encourage the network to focus on training on subsets (learning with replay) and to enlarge the data space when learning on these subsets.

Domain Generalization

VQA-GNN: Reasoning with Multimodal Knowledge via Graph Neural Networks for Visual Question Answering

no code implementations · ICCV 2023 · Yanan Wang, Michihiro Yasunaga, Hongyu Ren, Shinya Wada, Jure Leskovec

Visual question answering (VQA) requires systems to perform concept-level reasoning by unifying unstructured (e.g., the context in question and answer; "QA context") and structured (e.g., knowledge graph for the QA context and scene; "concept graph") multimodal knowledge.

Knowledge Graphs · Question Answering · +1

GreaseLM: Graph REASoning Enhanced Language Models for Question Answering

1 code implementation · 21 Jan 2022 · Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, Jure Leskovec

Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it.

Knowledge Graphs · Negation · +2

SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive Knowledge Graphs

1 code implementation · 28 Oct 2021 · Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Denny Zhou, Jure Leskovec, Dale Schuurmans

There are two important reasoning tasks on KGs: (1) single-hop knowledge graph completion, which involves predicting individual links in the KG; and (2) multi-hop reasoning, where the goal is to predict which KG entities satisfy a given logical query.
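
As a toy illustration of the distinction between these two tasks (not SMORE's actual training or inference procedure), the sketch below answers a single-hop and a two-hop query over a hand-written triple store; all entities and relations are made up.

```python
# Toy illustration (not SMORE itself) of the two KG reasoning tasks
# described above, on a hand-written triple store.
from collections import defaultdict

triples = [
    ("hinton", "works_at", "toronto"),
    ("toronto", "located_in", "canada"),
    ("bengio", "works_at", "mila"),
    ("mila", "located_in", "canada"),
]

# Index: (head, relation) -> set of tails
out_edges = defaultdict(set)
for h, r, t in triples:
    out_edges[(h, r)].add(t)

# (1) Single-hop completion: predict the missing tail of (hinton, works_at, ?).
#     A learned model would score candidate tails; here we just look it up.
print(out_edges[("hinton", "works_at")])           # {'toronto'}

# (2) Multi-hop reasoning: answer the logical query
#     ?country . exists ?org : works_at(bengio, ?org) AND located_in(?org, ?country)
orgs = out_edges[("bengio", "works_at")]
countries = set().union(*(out_edges[(o, "located_in")] for o in orgs))
print(countries)                                   # {'canada'}
```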

Scheduling

Modeling Heterogeneous Hierarchies with Relation-specific Hyperbolic Cones

1 code implementation · NeurIPS 2021 · Yushi Bai, Rex Ying, Hongyu Ren, Jure Leskovec

Here we present ConE (Cone Embedding), a KG embedding model that is able to simultaneously model multiple hierarchical as well as non-hierarchical relations in a knowledge graph.

Ancestor-descendant prediction · Knowledge Graph Completion · +2

GreaseLM: Graph REASoning Enhanced Language Models

no code implementations · ICLR 2022 · Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning, Jure Leskovec

Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it.

Knowledge Graphs · Negation · +2

On the Opportunities and Risks of Foundation Models

2 code implementations · 16 Aug 2021 · Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, aditi raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

Combiner: Full Attention Transformer with Sparse Computation Cost

2 code implementations · NeurIPS 2021 · Hongyu Ren, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, Bo Dai

However, the key limitation of transformers is their quadratic memory and time complexity $\mathcal{O}(L^2)$ with respect to the sequence length $L$ in attention layers, which restricts their application to extremely long sequences.
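
For context, the sketch below is plain dense self-attention in NumPy, showing the $L \times L$ score matrix that is the source of the quadratic cost; it is not Combiner's sparse factorization, and all sizes are arbitrary.

```python
# Minimal dense self-attention sketch (NumPy) showing where the O(L^2)
# cost comes from; this is vanilla attention, not Combiner's method.
import numpy as np

L, d = 1024, 64                      # sequence length, head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))

scores = Q @ K.T / np.sqrt(d)        # L x L matrix -> quadratic memory/time
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
out = weights @ V                    # (L, d) attention output
print(scores.shape, out.shape)       # (1024, 1024) (1024, 64)
```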

Image Generation · Language Modelling

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

4 code implementations · NAACL 2021 · Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.

Graph Representation Learning · Knowledge Graphs · +5

OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs

6 code implementations · 17 Mar 2021 · Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, Jure Leskovec

Enabling effective and efficient machine learning (ML) over large-scale graph data (e.g., graphs with billions of edges) can have a great impact on both industrial and scientific applications.

BIG-bench Machine Learning · Graph Learning · +4

Graph Information Bottleneck

1 code implementation · NeurIPS 2020 · Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec

We design two sampling algorithms for structural regularization, instantiate the GIB principle with two new models, GIB-Cat and GIB-Bern, and demonstrate their benefits by evaluating resilience to adversarial attacks.

Representation Learning

OCEAN: Online Task Inference for Compositional Tasks with Context Adaptation

1 code implementation · 17 Aug 2020 · Hongyu Ren, Yuke Zhu, Jure Leskovec, Anima Anandkumar, Animesh Garg

We propose OCEAN, a variational inference framework that performs online task inference for compositional tasks.

Variational Inference

Open Graph Benchmark: Datasets for Machine Learning on Graphs

20 code implementations · NeurIPS 2020 · Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec

We present the Open Graph Benchmark (OGB), a diverse set of challenging and realistic benchmark datasets to facilitate scalable, robust, and reproducible graph machine learning (ML) research.
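
A minimal usage sketch, assuming the `ogb` Python package is installed (pip install ogb) and following its documented library-agnostic node-property-prediction interface; the dataset name `ogbn-arxiv` is one example from the OGB documentation.

```python
# Sketch of loading an OGB node-property-prediction dataset with its
# framework-agnostic loader; assumes the `ogb` package is available.
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv")
split_idx = dataset.get_idx_split()      # standardized train/valid/test node indices
graph, labels = dataset[0]               # graph dict and per-node labels
print(graph["num_nodes"], labels.shape)  # e.g. 169343 (169343, 1)
```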

Knowledge Graphs · Node Property Prediction

Relational Message Passing for Knowledge Graph Completion

4 code implementations · 17 Feb 2020 · Hongwei Wang, Hongyu Ren, Jure Leskovec

Specifically, two kinds of neighborhood topology are modeled for a given entity pair under the relational message passing framework: (1) Relational context, which captures the relation types of edges adjacent to the given entity pair; (2) Relational paths, which characterize the relative position between the given two entities in the knowledge graph.
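
The toy sketch below illustrates these two structures for a single entity pair on a made-up triple store (relation types adjacent to the endpoints, and relation sequences connecting them); it is not the paper's message-passing model.

```python
# Toy illustration of the two neighborhood structures described above:
# "relational context" (relation types adjacent to an entity) and
# "relational paths" (relation sequences between the two entities).
from collections import defaultdict

triples = [
    ("alice", "works_at", "acme"),
    ("acme", "based_in", "paris"),
    ("alice", "lives_in", "paris"),
]

adj = defaultdict(list)              # node -> list of (relation, neighbor)
for h, r, t in triples:
    adj[h].append((r, t))
    adj[t].append((r, h))            # treat edges as undirected for context

def relational_context(node):
    return {r for r, _ in adj[node]}

def relational_paths(src, dst, max_len=2):
    paths, frontier = [], [(src, [])]
    for _ in range(max_len):         # breadth-first expansion up to max_len hops
        nxt = []
        for node, rels in frontier:
            for r, nb in adj[node]:
                if nb == dst:
                    paths.append(rels + [r])
                nxt.append((nb, rels + [r]))
        frontier = nxt
    return paths

print(relational_context("alice"), relational_context("paris"))
print(relational_paths("alice", "paris"))   # [['lives_in'], ['works_at', 'based_in']]
```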

Knowledge Graph Completion · Relation

Query2box: Reasoning over Knowledge Graphs in Vector Space using Box Embeddings

6 code implementations · ICLR 2020 · Hongyu Ren, Weihua Hu, Jure Leskovec

Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query.
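
A minimal sketch of the box-containment idea, with made-up two-dimensional embeddings: a query box is a (center, offset) pair, and an entity point counts as an answer when it lies inside the box along every dimension.

```python
# Toy sketch of box containment as described above; the embeddings
# and the entity names are made up for illustration.
import numpy as np

center = np.array([0.0, 1.0])        # hypothetical query box center
offset = np.array([0.5, 0.5])        # per-dimension half-width (>= 0)

entities = {
    "a": np.array([0.2, 0.9]),       # inside the box
    "b": np.array([1.0, 1.0]),       # outside along dimension 0
}

def inside(point, center, offset):
    # A point is an answer if it is within the box in every dimension.
    return bool(np.all(np.abs(point - center) <= offset))

answers = {name for name, p in entities.items() if inside(p, center, offset)}
print(answers)                       # {'a'}
```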

Complex Query Answering

Multi-Agent Generative Adversarial Imitation Learning

1 code implementation · NeurIPS 2018 · Jiaming Song, Hongyu Ren, Dorsa Sadigh, Stefano Ermon

Imitation learning algorithms can be used to learn a policy from expert demonstrations without access to a reward signal.

Imitation Learning · reinforcement-learning · +1

Adversarial Constraint Learning for Structured Prediction

1 code implementation · 27 May 2018 · Hongyu Ren, Russell Stewart, Jiaming Song, Volodymyr Kuleshov, Stefano Ermon

Constraint-based learning reduces the burden of collecting labels by having users specify general properties of structured outputs, such as constraints imposed by physical laws.

Pose Estimation · Structured Prediction · +3

RAN4IQA: Restorative Adversarial Nets for No-Reference Image Quality Assessment

no code implementations · 14 Dec 2017 · Hongyu Ren, Diqi Chen, Yizhou Wang

The evaluator predicts a perceptual score by extracting feature representations from the distorted and restored patches to measure GoR.

No-Reference Image Quality Assessment · NR-IQA · +1
