Search Results for author: Anthony G. Cohn

Found 13 papers, 3 papers with code

Exploring the GLIDE model for Human Action Effect Prediction

no code implementations • PVLAM (LREC) 2022 • Fangjun Li, David C. Hogg, Anthony G. Cohn

GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text.
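The masked-inpainting idea can be sketched with plain numpy. The text-conditioned model call itself is out of scope here, so `generated` below is a hypothetical stand-in for the model's output; the blend of generated and original pixels is the core operation:

```python
import numpy as np

def inpaint_blend(image, mask, generated):
    """Keep original pixels where mask == 0, take model output where mask == 1.

    image, generated: (H, W, 3) float arrays; mask: (H, W) array of 0/1.
    In GLIDE the generated content comes from a text-conditioned diffusion
    model; here it is just an input (hypothetical stand-in).
    """
    m = mask[..., None].astype(float)  # broadcast mask over colour channels
    return m * generated + (1.0 - m) * image

# Toy example: a 4x4 grey image whose top-left 2x2 region is regenerated as white.
image = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4))
mask[:2, :2] = 1
generated = np.ones((4, 4, 3))
out = inpaint_blend(image, mask, generated)
```

Only the masked region changes; the rest of the image is passed through untouched.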

Advancing Spatial Reasoning in Large Language Models: An In-Depth Evaluation and Enhancement Using the StepGame Benchmark

1 code implementation • 8 Jan 2024 • Fangjun Li, David C. Hogg, Anthony G. Cohn

We analyze GPT's spatial reasoning performance on the rectified benchmark, identifying proficiency in mapping natural language text to spatial relations but limitations in multi-hop reasoning.

Relation Mapping • Text Generation
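The multi-hop difficulty can be illustrated by composing StepGame-style relations as grid offsets; the small relation vocabulary below is a simplifying assumption, not the benchmark's full set:

```python
# Treat qualitative spatial relations ("A is left of B", ...) as unit grid
# offsets; multi-hop reasoning then reduces to vector addition along a chain.
OFFSETS = {
    "left": (-1, 0), "right": (1, 0),
    "above": (0, 1), "below": (0, -1),
}

def resolve_chain(relations):
    """Given each object's relation to the next one in the chain, return
    the net offset of the first object relative to the last."""
    x, y = 0, 0
    for rel in relations:
        dx, dy = OFFSETS[rel]
        x, y = x + dx, y + dy
    return x, y

# "A is left of B, B is above C" -> A is left of and above C.
resolve_chain(["left", "above"])  # → (-1, 1)
```

Each extra hop adds one more offset to track, which is where models that map single sentences to relations correctly can still fail on the composed chain.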

Language Models as a Service: Overview of a New Paradigm and its Challenges

no code implementations • 28 Sep 2023 • Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony G. Cohn, Nigel Shadbolt, Michael Wooldridge

This paper has two goals: on the one hand, we delineate how the aforementioned challenges act as impediments to the accessibility, replicability, reliability, and trustworthiness of LMaaS.

Benchmarking

Object-agnostic Affordance Categorization via Unsupervised Learning of Graph Embeddings

no code implementations • 30 Mar 2023 • Alexia Toumpa, Anthony G. Cohn

Acquiring knowledge about object interactions and affordances can facilitate scene understanding and human-robot collaboration tasks.

Object • Scene Understanding

Exploring the GLIDE model for Human Action-effect Prediction

no code implementations • 1 Aug 2022 • Fangjun Li, David C. Hogg, Anthony G. Cohn

GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text.

Defect segmentation: Mapping tunnel lining internal defects with ground penetrating radar data using a convolutional neural network

no code implementations • 29 Mar 2020 • Senlin Yang, Zhengfang Wang, Jing Wang, Anthony G. Cohn, Jia-Qi Zhang, Peng Jiang, Qingmei Sui

This research proposes a Ground Penetrating Radar (GPR) data processing method for non-destructive detection of tunnel lining internal defects, called defect segmentation.

GPR
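As a toy illustration of the segmentation idea (no relation to the paper's actual network), a single hand-set kernel convolved over a synthetic B-scan, with a threshold producing a binary "defect" mask; a CNN learns such kernels instead of having them fixed:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2D valid cross-correlation with numpy (loop version for clarity)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Synthetic "radargram": uniform background with one bright anomaly.
scan = np.zeros((8, 8))
scan[3:5, 3:5] = 1.0

# A 3x3 averaging kernel smears the anomaly; thresholding its response
# yields a binary defect mask over the scan.
kernel = np.full((3, 3), 1.0 / 9.0)
response = conv2d_valid(scan, kernel)
mask = response > 0.2
```

The threshold value and kernel are illustrative assumptions; the point is only that segmentation maps a 2D scan to a per-pixel defect/no-defect decision.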

Human-like Planning for Reaching in Cluttered Environments

1 code implementation • 28 Feb 2020 • Mohamed Hasan, Matthew Warburton, Wisdom C. Agboh, Mehmet R. Dogar, Matteo Leonetti, He Wang, Faisal Mushtaq, Mark Mon-Williams, Anthony G. Cohn

From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles.

Decision Making
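One way to picture a qualitative abstraction of a cluttered task space (a hypothetical simplification, not the authors' exact representation): bin each obstacle's position relative to the direct reach line into a few symbolic regions, so the planner's decisions depend on regions rather than raw coordinates or obstacle counts:

```python
def qualitative_region(obstacle_x, obstacle_y, target_x, clearance=0.1):
    """Classify an obstacle relative to a straight reach along the x-axis
    from the origin to (target_x, 0). Purely illustrative bins."""
    if obstacle_x < 0 or obstacle_x > target_x:
        return "outside"        # not between hand and target
    if abs(obstacle_y) > clearance:
        return "left" if obstacle_y > 0 else "right"
    return "blocking"           # sits on the reach line

qualitative_region(0.5, 0.0, 1.0)   # → 'blocking'
qualitative_region(0.5, 0.3, 1.0)   # → 'left'
qualitative_region(1.5, 0.0, 1.0)   # → 'outside'
```

However many obstacles are present, each maps to one of four symbols, which is what makes the abstraction independent of obstacle count.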

GPRInvNet: Deep Learning-Based Ground Penetrating Radar Data Inversion for Tunnel Lining

no code implementations • 12 Dec 2019 • Bin Liu, Yuxiao Ren, Hanchi Liu, Hui Xu, Zhengfang Wang, Anthony G. Cohn, Peng Jiang

The results have demonstrated that the GPRInvNet is capable of effectively reconstructing complex tunnel lining defects with clear boundaries.

GPR • Time Series Analysis

ViTac: Feature Sharing between Vision and Tactile Sensing for Cloth Texture Recognition

1 code implementation • 21 Feb 2018 • Shan Luo, Wenzhen Yuan, Edward Adelson, Anthony G. Cohn, Raul Fuentes

In this paper, addressing for the first time (to the best of our knowledge) texture recognition from tactile images and vision, we propose a new fusion method named Deep Maximum Covariance Analysis (DMCA) to learn a joint latent space for sharing features through vision and tactile sensing.
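Classical (linear) maximum covariance analysis, the ancestor of the deep variant named here, finds paired projections via an SVD of the cross-covariance between two views. Sketching that linear core is an assumption about the idea, not the paper's method:

```python
import numpy as np

def max_covariance_analysis(X, Y, k=2):
    """Linear MCA: project two centred views onto direction pairs that
    maximise cross-covariance. X: (n, dx), Y: (n, dy)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)        # cross-covariance matrix, (dx, dy)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Wx, Wy = U[:, :k], Vt[:k].T         # paired projection bases
    return Xc @ Wx, Yc @ Wy             # shared latent representations

# Two noisy "views" (standing in for vision and touch) of a shared 2D signal.
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 2))
X = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(100, 5))
Y = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(100, 4))
Zx, Zy = max_covariance_analysis(X, Y)
```

The deep version replaces the fixed linear projections with learned networks, but the goal is the same: latent coordinates under which the two modalities covary strongly.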

CLAD: A Complex and Long Activities Dataset with Rich Crowdsourced Annotations

no code implementations • 11 Sep 2017 • Jawad Tayyub, Majd Hawasly, David C. Hogg, Anthony G. Cohn

This paper introduces a novel activity dataset which exhibits real-life and diverse scenarios of complex, temporally-extended human activities and actions.

Activity Recognition • object-detection • +1

Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands

no code implementations • WS 2017 • Muhannad Alomari, Paul Duckworth, Majd Hawasly, David C. Hogg, Anthony G. Cohn

This is achieved by first learning a set of visual 'concepts' that abstract the visual feature spaces into concepts that have human-level meaning.
