Search Results for author: Ini Oguntola

Found 4 papers, 1 paper with code

Benchmarking and Enhancing Disentanglement in Concept-Residual Models

no code implementations · 30 Nov 2023 · Renos Zabounidis, Ini Oguntola, Konghao Zhao, Joseph Campbell, Simon Stepputtis, Katia Sycara

Concept bottleneck models (CBMs) are interpretable models that first predict a set of semantically meaningful features, i.e., concepts, from observations; these concepts are then used to condition a downstream task.

Benchmarking · Disentanglement
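
For readers unfamiliar with the concept bottleneck structure described in the abstract above, here is a minimal PyTorch-style sketch. It illustrates only the plain CBM pipeline (observations to concepts to task head), not the concept-residual variant the paper benchmarks; the class, layer, and dimension names are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ConceptBottleneck(nn.Module):
        """Observations -> predicted concepts -> downstream task prediction."""
        def __init__(self, input_dim, num_concepts, num_classes, hidden_dim=128):
            super().__init__()
            # Encoder maps raw observations to concept logits.
            self.concept_encoder = nn.Sequential(
                nn.Linear(input_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_concepts),
            )
            # Task head is conditioned only on the predicted concepts.
            self.task_head = nn.Linear(num_concepts, num_classes)

        def forward(self, x):
            concepts = torch.sigmoid(self.concept_encoder(x))  # concepts in [0, 1]
            return concepts, self.task_head(concepts)

    # Training typically supervises both stages: a concept loss (e.g. BCE against
    # annotated concepts) plus a task loss (e.g. cross-entropy on the labels).
    model = ConceptBottleneck(input_dim=64, num_concepts=10, num_classes=5)
    concepts, logits = model(torch.randn(8, 64))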

Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning

no code implementations · 3 Jul 2023 · Ini Oguntola, Joseph Campbell, Simon Stepputtis, Katia Sycara

The ability to model the mental states of others is crucial to human social intelligence, and can offer similar benefits to artificial agents with respect to the social dynamics induced in multi-agent settings.

Multi-agent Reinforcement Learning · Reinforcement Learning

Deep Interpretable Models of Theory of Mind

no code implementations · 7 Apr 2021 · Ini Oguntola, Dana Hughes, Katia Sycara

When developing AI systems that interact with humans, it is essential to design both a system that can understand humans and a system that humans can understand.

SlimNets: An Exploration of Deep Model Compression and Acceleration

1 code implementation · 1 Aug 2018 · Ini Oguntola, Subby Olubeko, Christopher Sweeney

We show that by combining pruning and knowledge distillation methods we can create a compressed network 85 times smaller than the original, all while retaining 96% of the original model's accuracy.

Knowledge Distillation · Model Compression
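
The SlimNets abstract above names two techniques, pruning and knowledge distillation. The sketch below shows one common way to combine them in PyTorch: L1 magnitude pruning on a small student network plus a softened-logit distillation loss against a larger teacher. It is illustrative only, not the paper's actual pipeline, and the teacher/student architectures, temperature, and mixing weight are placeholder assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune

    # Placeholder teacher (large) and student (small) networks.
    teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
    student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

    # 1) Pruning: zero out the smallest-magnitude weights in the student's layers.
    for module in student:
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)

    # 2) Distillation: match the student's softened outputs to the teacher's,
    #    while also fitting the ground-truth labels.
    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, y)
    loss.backward()

The temperature T softens both distributions so the student learns from the teacher's relative class probabilities rather than only its hard predictions; the pruning masks keep the zeroed weights out of the forward pass while the remaining weights continue to train.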
