1 code implementation • 21 Jun 2024 • Nemin Wu, Qian Cao, Zhangyu Wang, Zeping Liu, Yanlin Qi, Jielu Zhang, Joshua Ni, Xiaobai Yao, Hongxu Ma, Lan Mu, Stefano Ermon, Tanuja Ganu, Akshay Nambi, Ni Lao, Gengchen Mai
To fill this gap, we propose TorchSpatial, a learning framework and benchmark for location (point) encoding, which is one of the most fundamental data types of spatial representation learning.
1 code implementation • 28 May 2024 • Zhangyu Wang, Gengchen Mai, Krzysztof Janowicz, Ni Lao
A wide range of (multivariate) temporal (1D) and spatial (2D) data analysis tasks, such as grouping vehicle sensor trajectories, can be formulated as clustering with given metric constraints.
1 code implementation • 28 Mar 2024 • Zhongliang Zhou, Jielu Zhang, Zihan Guan, Mengxuan Hu, Ni Lao, Lan Mu, Sheng Li, Gengchen Mai
Geolocating precise locations from images presents a challenging problem in computer vision and information retrieval. Traditional methods typically employ either classification, which divides the Earth's surface into grid cells and classifies images accordingly, or retrieval, which identifies locations by matching images against a database of image-location pairs.
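A minimal sketch of the grid-cell classification formulation mentioned above, assuming a naive fixed-size latitude/longitude grid (production systems typically use adaptive partitions; the 10-degree cell size here is an arbitrary choice for illustration):

# Hedged sketch, not the paper's code: split the Earth's surface into
# fixed-size lat/lon cells and assign each geo-tagged image the integer ID
# of the cell it falls in, which a classifier then learns to predict.
import math

def latlon_to_cell(lat: float, lon: float, cell_deg: float = 10.0) -> int:
    """Map a (lat, lon) pair to an integer grid-cell label."""
    n_cols = math.ceil(360.0 / cell_deg)
    row = int((lat + 90.0) // cell_deg)
    col = int((lon + 180.0) // cell_deg)
    return row * n_cols + col

# Example: an image taken near Athens, GA gets one coarse cell ID as its label.
print(latlon_to_cell(33.95, -83.38))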
no code implementations • 30 Sep 2023 • Gengchen Mai, Ni Lao, Weiwei Sun, Yuchi Ma, Jiaming Song, Chenlin Meng, Hongxu Ma, Jinmeng Rao, Ziyuan Li, Stefano Ermon
Existing digital sensors capture images at fixed spatial and spectral resolutions (e.g., RGB, multispectral, and hyperspectral images), and each combination requires bespoke machine learning models.
no code implementations • 30 Jun 2023 • Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Jiaming Song, Stefano Ermon, Krzysztof Janowicz, Ni Lao
So when applied to large-scale real-world GPS coordinate datasets, which require distance metric learning on the spherical surface, both types of models can fail due to the map projection distortion problem (2D) and the spherical-to-Euclidean distance approximation error (3D).
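A small illustration of the spherical-to-Euclidean approximation error referred to above, using the standard haversine and chord formulas rather than the paper's code: the straight-line 3D chord distance between two coordinates systematically underestimates the true great-circle distance.

# Hedged sketch: compare the 3D chord distance with the great-circle
# (haversine) distance for two far-apart cities; the gap is the
# spherical-to-Euclidean approximation error.
import math

R = 6371.0  # mean Earth radius in km

def to_xyz(lat, lon):
    lat, lon = math.radians(lat), math.radians(lon)
    return (R * math.cos(lat) * math.cos(lon),
            R * math.cos(lat) * math.sin(lon),
            R * math.sin(lat))

def chord_distance(p, q):
    return math.dist(to_xyz(*p), to_xyz(*q))

def great_circle_distance(p, q):
    (lat1, lon1), (lat2, lon2) = map(lambda x: tuple(map(math.radians, x)), (p, q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))

sf, sydney = (37.77, -122.42), (-33.87, 151.21)
print(chord_distance(sf, sydney), great_circle_distance(sf, sydney))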
no code implementations • 1 May 2023 • Gengchen Mai, Ni Lao, Yutong He, Jiaming Song, Stefano Ermon
To directly leverage the abundant geospatial information associated with images in pre-training, fine-tuning, and inference stages, we present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
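A rough sketch of the kind of contrastive objective such a framework can use; the tensor shapes and the standard InfoNCE form below are assumptions, not the released CSP code.

# Hedged sketch: pull an image embedding toward the embedding of its own
# geo-location and push it away from the other locations in the batch,
# using a symmetric InfoNCE-style loss.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, loc_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)   # (B, D) image encoder outputs
    loc_emb = F.normalize(loc_emb, dim=-1)   # (B, D) location encoder outputs
    logits = img_emb @ loc_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))        # matched pairs on the diagonal
    # symmetric over image-to-location and location-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))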
no code implementations • 13 Apr 2023 • Gengchen Mai, Weiming Huang, Jin Sun, Suhang Song, Deepak Mishra, Ninghao Liu, Song Gao, Tianming Liu, Gao Cong, Yingjie Hu, Chris Cundy, Ziyuan Li, Rui Zhu, Ni Lao
In this work, we explore the promises and challenges of developing multimodal foundation models for GeoAI.
2 code implementations • 17 Oct 2022 • Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, Kelvin Guu
Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog.
1 code implementation • 29 Sep 2022 • Gengchen Mai, Chiyu Jiang, Weiwei Sun, Rui Zhu, Yao Xuan, Ling Cai, Krzysztof Janowicz, Stefano Ermon, Ni Lao
For the spatial domain approach, we propose ResNet1D, a 1D CNN-based polygon encoder, which uses circular padding to achieve loop origin invariance on simple polygons.
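An illustrative sketch of the circular-padding idea (a toy block, not the released ResNet1D implementation): with circular padding plus pooling over vertices, the polygon embedding does not change when the loop origin is rotated.

# Hedged sketch: treat a simple polygon's vertex sequence as a 1D signal;
# circular padding wraps the convolution around the loop, and mean pooling
# makes the embedding invariant to which vertex is chosen as the origin.
import torch
import torch.nn as nn

class PolygonConvBlock(nn.Module):
    def __init__(self, in_ch=2, hidden=64, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, hidden, kernel_size,
                              padding=kernel_size // 2, padding_mode="circular")
        self.act = nn.ReLU()

    def forward(self, vertices):           # (batch, 2, num_vertices) x/y coords
        h = self.act(self.conv(vertices))  # (batch, hidden, num_vertices)
        return h.mean(dim=-1)              # pool over vertices -> polygon embedding

poly = torch.rand(1, 2, 32)                   # one polygon with 32 vertices
rolled = torch.roll(poly, shifts=5, dims=-1)  # same loop, different origin
enc = PolygonConvBlock()
print(torch.allclose(enc(poly), enc(rolled), atol=1e-6))  # expected: True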
no code implementations • 25 Jan 2022 • Gengchen Mai, Yao Xuan, Wenyun Zuo, Krzysztof Janowicz, Ni Lao
However, a map projection distortion problem arises when applying location encoding models to large-scale real-world GPS coordinate datasets (e.g., species images taken all over the world): all current location encoding models are designed for encoding points in a 2D (Euclidean) space rather than on a spherical surface, e.g., the Earth's surface.
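A tiny illustration of why raw 2D coordinates are problematic (the helper function is an assumption, not code from the paper): two points that straddle the antimeridian appear far apart in (lat, lon) space yet are nearly identical once mapped onto the unit sphere.

# Hedged sketch: nearby points across the +/-180 degree meridian look
# maximally distant in 2D coordinates but close on the sphere.
import math

def unit_sphere(lat, lon):
    lat, lon = math.radians(lat), math.radians(lon)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

p, q = (0.0, 179.9), (0.0, -179.9)                  # ~22 km apart on the equator
print(abs(p[1] - q[1]))                             # 359.8 degrees apart in raw 2D
print(math.dist(unit_sphere(*p), unit_sphere(*q)))  # tiny chord distance on the sphere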
1 code implementation • 2 Dec 2021 • Gengchen Mai, Weiming Huang, Ling Cai, Rui Zhu, Ni Lao
With the help of this tool, data retrieved from KGs are directly materialized in a GIS format that is ready for spatial analysis and mapping.
no code implementations • 7 Nov 2021 • Gengchen Mai, Krzysztof Janowicz, Yingjie Hu, Song Gao, Bo Yan, Rui Zhu, Ling Cai, Ni Lao
A common need for artificial intelligence models in the broader geoscience is to represent and encode various types of spatial data, such as points (e.g., points of interest), polylines (e.g., trajectories), polygons (e.g., administrative regions), graphs (e.g., transportation networks), or rasters (e.g., remote sensing images), in a hidden embedding space so that they can be readily incorporated into deep learning models.
no code implementations • 29 Sep 2021 • Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Stefano Ermon, Jiaming Song, Krzysztof Janowicz, Ni Lao
Location encoding is valuable for a multitude of tasks where both the absolute positions and local contexts (image, text, and other types of metadata) of spatial objects are needed for accurate predictions.
no code implementations • 19 May 2021 • Gengchen Mai, Krzysztof Janowicz, Rui Zhu, Ling Cai, Ni Lao
As an important part of Artificial Intelligence (AI), Question Answering (QA) aims at generating answers to questions phrased in natural language.
no code implementations • NAACL (MIA) 2022 • Ivan Montero, Shayne Longpre, Ni Lao, Andrew J. Frank, Christopher DuBois
Existing methods for open-retrieval question answering in lower resource languages (LRLs) lag significantly behind English.
1 code implementation • 25 Apr 2020 • Gengchen Mai, Krzysztof Janowicz, Ling Cai, Rui Zhu, Blake Regalia, Bo Yan, Meilin Shi, Ni Lao
We also construct a geographic knowledge graph as well as a set of geographic query-answer pairs called DBGeo to evaluate the performance of SE-KGE in comparison to multiple baselines.
1 code implementation • 14 Mar 2020 • Gengchen Mai, Krzysztof Janowicz, Sathya Prasad, Meilin Shi, Ling Cai, Rui Zhu, Blake Regalia, Ni Lao
In the geospatial aspect, we propose to enrich a query by using both place partonomy and distance decay.
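A toy sketch of the distance decay component; the exponential form and the half-life constant are illustrative assumptions, not the paper's exact formulation.

# Hedged sketch: candidates that match the query textually are down-weighted
# the farther they are from the place mentioned in the query.
import math

def distance_decay_score(text_score: float, distance_km: float,
                         half_life_km: float = 10.0) -> float:
    # exponential decay: relevance halves every `half_life_km` kilometres
    return text_score * 0.5 ** (distance_km / half_life_km)

print(distance_decay_score(0.9, 2.0))    # nearby candidate keeps most of its score
print(distance_decay_score(0.9, 50.0))   # distant candidate is heavily discounted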
2 code implementations • ICLR 2020 • Gengchen Mai, Krzysztof Janowicz, Bo Yan, Rui Zhu, Ling Cai, Ni Lao
The key idea is to use neural networks to convert words in texts to vector space representations based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks.
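For reference, a sketch of the standard sinusoidal position encoding alluded to here (the textbook Transformer formulation, not code from this paper, whose contribution is to carry the idea over to geographic coordinates):

# Hedged sketch: expand a scalar position into sine/cosine features at
# multiple frequencies, the usual starting point for position encoders.
import numpy as np

def sinusoidal_encoding(position: int, dim: int = 16) -> np.ndarray:
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000 ** (2 * i / dim))
    angles = position * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

print(sinusoidal_encoding(position=3).shape)  # (16,)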
2 code implementations • 30 Sep 2019 • Gengchen Mai, Krzysztof Janowicz, Bo Yan, Rui Zhu, Ling Cai, Ni Lao
Recently, several studies have explored methods for using KG embedding to answer logical queries.
no code implementations • 28 Sep 2019 • Felix Wu, Boyi Li, Lequn Wang, Ni Lao, John Blitzer, Kilian Q. Weinberger
This paper introduces Integrated Triaging, a framework that prunes almost all context in early layers of a network, leaving the remaining (deep) layers to scan only a tiny fraction of the full corpus.
no code implementations • ICLR Workshop drlStructPred 2019 • Jacob Biloki, Chen Liang, Ni Lao
We consider the problem of weakly supervised structured prediction (SP) with reinforcement learning (RL) – for example, given a database table and a question, perform a sequence of computation actions on the table, which generates a response and receives a binary success-failure reward.
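A toy sketch of the learning signal in this setting, with all component names hypothetical: a policy samples a sequence of actions, execution yields a binary reward, and REINFORCE turns that single bit into a gradient on the sampled action log-probabilities.

# Hedged sketch, not the paper's code: REINFORCE loss for a sampled action
# sequence that received a binary success/failure reward.
import torch

def reinforce_loss(action_logits, sampled_actions, reward):
    # action_logits: (T, num_actions); sampled_actions: (T,); reward: 0.0 or 1.0
    log_probs = torch.log_softmax(action_logits, dim=-1)
    chosen = log_probs.gather(1, sampled_actions.unsqueeze(1)).squeeze(1)
    return -(reward * chosen.sum())  # raise the likelihood of successful sequences

logits = torch.randn(4, 8, requires_grad=True)   # 4 steps, 8 possible actions
actions = torch.randint(0, 8, (4,))
loss = reinforce_loss(logits, actions, reward=1.0)
loss.backward()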
2 code implementations • 28 Feb 2019 • Felix Wu, Boyi Li, Lequn Wang, Ni Lao, John Blitzer, Kilian Q. Weinberger
In this technical report, we introduce FastFusionNet, an efficient variant of FusionNet [12].
no code implementations • 5 Oct 2018 • Gengchen Mai, Krzysztof Janowicz, Cheng He, Sumang Liu, Ni Lao
To test a system's ability to understand the text, we adopt an information retrieval evaluation by ranking all the review sentences for a question based on the likelihood that they answer this question.
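A small sketch of this ranking-style evaluation, with placeholder scores and labels: sentences are ordered by the model's likelihood score and a metric such as reciprocal rank records where the true answering sentence lands.

# Hedged sketch: reciprocal rank of the first review sentence that actually
# answers the question, given model scores for every sentence.
def reciprocal_rank(scores, is_answer):
    ranked = sorted(zip(scores, is_answer), key=lambda x: -x[0])
    for rank, (_, correct) in enumerate(ranked, start=1):
        if correct:
            return 1.0 / rank
    return 0.0

scores = [0.2, 0.9, 0.4]          # model's likelihood that each sentence answers
is_answer = [False, False, True]  # gold labels for the question
print(reciprocal_rank(scores, is_answer))  # 0.5: true answer ranked second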
4 code implementations • NeurIPS 2018 • Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, Ni Lao
We present Memory Augmented Policy Optimization (MAPO), a simple and novel way to leverage a memory buffer of promising trajectories to reduce the variance of policy gradient estimates.
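A simplified sketch of the estimator's structure; the variable names and the clipping detail are assumptions rather than the released implementation. The gradient mixes an exact expectation over enumerated buffer trajectories, weighted by the buffer's total probability mass, with a Monte Carlo estimate over trajectories sampled outside the buffer.

# Hedged sketch of the two-part surrogate loss: an enumerated, exactly
# weighted term for trajectories in the memory buffer plus a sampled term
# for trajectories outside it.
import torch

def mapo_style_loss(log_probs_buffer, rewards_buffer,
                    log_probs_sampled, rewards_sampled):
    # log_probs_*: summed log pi(trajectory) per trajectory; rewards_*: per-trajectory scalars
    probs_buffer = log_probs_buffer.exp()
    pi_buffer = probs_buffer.sum().detach().clamp(max=1.0)  # buffer probability mass
    # exact expectation inside the buffer (weights treated as constants)
    inside = (probs_buffer / pi_buffer).detach() * rewards_buffer * log_probs_buffer
    # Monte Carlo estimate outside the buffer
    outside = rewards_sampled * log_probs_sampled
    return -(pi_buffer * inside.sum() + (1 - pi_buffer) * outside.mean())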
no code implementations • ICLR 2018 • Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao
Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in responding to a question is linear in the text size.
no code implementations • 17 Nov 2017 • Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao
Though deep neural networks have great success in natural language processing, they are limited at more knowledge intensive AI tasks, such as open-domain Question Answering (QA).
2 code implementations • ICLR 2018 • Felix Wu, Ni Lao, John Blitzer, Guandao Yang, Kilian Weinberger
State-of-the-art deep reading comprehension models are dominated by recurrent neural nets.
no code implementations • 4 Dec 2016 • Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao
In this work, we propose the Manager-Programmer-Computer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface.
2 code implementations • ACL 2017 • Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base.
no code implementations • 28 Jun 2014 • Ni Lao, Jun Zhu
We prove that the gradient of candidate features can be represented solely as a function of signals and errors, and that CFI is an efficient approximation of gradient-based evaluation methods.
no code implementations • 12 Apr 2014 • William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom Mitchell, William W. Cohen
We show that the problem of constructing proofs for this logic is related to the computation of personalized PageRank (PPR) on a linearized version of the proof space, and, building on this connection, we develop a provably correct approximate grounding scheme based on the PageRank-Nibble algorithm.
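For context, a compact power-iteration sketch of personalized PageRank (the generic textbook version, not the PageRank-Nibble approximation the paper analyzes):

# Hedged sketch: repeatedly push mass along out-edges and teleport back to
# the seed node with probability alpha.
import numpy as np

def personalized_pagerank(adj: np.ndarray, seed: int,
                          alpha: float = 0.15, iters: int = 100) -> np.ndarray:
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(out_deg, 1.0)   # row-stochastic transition matrix
    s = np.zeros(n)
    s[seed] = 1.0                        # restart distribution concentrated on the seed
    pr = s.copy()
    for _ in range(iters):
        pr = alpha * s + (1 - alpha) * pr @ P
    return pr

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(personalized_pagerank(adj, seed=0).round(3))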
no code implementations • NeurIPS 2010 • Ni Lao, Jun Zhu, Liu Liu, Yandong Liu, William W. Cohen
Markov networks (MNs) can incorporate arbitrarily complex features in modeling relational data.