no code implementations • 1 Mar 2017 • Hadi Hosseini, Kate Larson, Robin Cohen
One-sided matching mechanisms are fundamental for assigning a set of indivisible objects to a set of self-interested agents when monetary transfers are not allowed.
no code implementations • 4 Mar 2015 • Hadi Hosseini, Kate Larson, Robin Cohen
For assignment problems where agents, specifying ordinal preferences, are allocated indivisible objects, two widely studied randomized mechanisms are the Random Serial Dictatorship (RSD) and Probabilistic Serial Rule (PS).
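The Random Serial Dictatorship mechanism named above has a standard, well-known form: draw a uniformly random ordering of the agents, then let each agent in turn claim their most-preferred object still available. A minimal illustrative sketch (not the paper's implementation; function and variable names are hypothetical):

```python
import random

def random_serial_dictatorship(prefs, objects):
    """RSD: pick a uniformly random agent order; each agent in turn
    takes their highest-ranked object that remains unassigned.

    prefs: dict mapping agent -> list of objects in preference order.
    objects: iterable of all indivisible objects to allocate.
    """
    remaining = set(objects)
    order = list(prefs)
    random.shuffle(order)  # the "serial dictator" order, drawn uniformly
    assignment = {}
    for agent in order:
        for obj in prefs[agent]:
            if obj in remaining:
                assignment[agent] = obj
                remaining.remove(obj)
                break
    return assignment
```

Averaging the outcome over all orderings yields the random assignment that RSD induces; the Probabilistic Serial rule instead has agents "eat" their favorite available object simultaneously at unit speed, producing fractional allocations directly.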
no code implementations • 7 Jul 2014 • Hadi Hosseini, Jesse Hoey, Robin Cohen
This paper considers a novel approach to scalable multiagent resource allocation in dynamic settings.
no code implementations • WS 2019 • Mohammed Alliheedi, Robert E. Mercer, Robin Cohen
In particular, we conduct a detailed study with human annotators to confirm that our selection of semantic roles is effective in determining the underlying rhetorical structure of existing biomedical articles in an extensive dataset.
no code implementations • 9 Jun 2020 • Omar Abdel Wahab, Jamal Bentahar, Robin Cohen, Hadi Otrok, Azzam Mourad
In this paper, we propose a mechanism to deal with dishonest opinions in recommendation-based trust models, at both the collection and processing levels.
no code implementations • 10 Oct 2020 • Henry Chen, Robin Cohen, Kerstin Dautenhahn, Edith Law, Krzysztof Czarnecki
Based on the results, we distill twelve practical design recommendations for AV visual signals, with focus on signal pattern design and placement.
no code implementations • 3 May 2021 • Gaurav Sahu, Robin Cohen, Olga Vechtomova
This paper envisions a multi-agent system for detecting the presence of hate speech in online social media platforms such as Twitter and Facebook.
no code implementations • 11 Nov 2021 • Alexandre Parmentier, Robin Cohen, Xueguang Ma, Gaurav Sahu, Queenie Chen
In this paper, we present an approach for predicting trust links between peers in social media, one that is grounded in the artificial intelligence area of multiagent trust modeling.
1 code implementation • 10 Jan 2023 • Liam Hebert, Lukasz Golab, Robin Cohen
We propose a system to predict harmful discussions on social media platforms.
no code implementations • 25 Jan 2023 • Liam Hebert, Hong Yi Chen, Robin Cohen, Lukasz Golab
Included is an exploration of the kinds of posts that permeate social media today, including the use of hateful images.
no code implementations • 7 Mar 2023 • Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab, Rabeb Mizouni, Alyssa Song, Robin Cohen, Hadi Otrok, Azzam Mourad
The black-box nature of artificial intelligence (AI) models has been the source of many concerns in their use for critical applications.
Explainable Artificial Intelligence (XAI)
1 code implementation • 11 Apr 2024 • Dahlia Shehata, Robin Cohen, Charles Clarke
To this end, we employ two prompting-based LLM variants (GPT-3.5-turbo and GPT-4) to extend the two RumourEval subtasks: (1) veracity prediction, and (2) stance classification.
1 code implementation • 18 Jul 2023 • Liam Hebert, Gaurav Sahu, Yuxuan Guo, Nanda Kishore Sreenivas, Lukasz Golab, Robin Cohen
We present the Multi-Modal Discussion Transformer (mDT), a novel method for detecting hate speech in online social networks such as Reddit discussions.
1 code implementation • WS 2019 • Dhruv Kumar, Robin Cohen, Lukasz Golab
We propose an attention-based neural network approach to detect abusive speech in online social networks.
1 code implementation • 17 Sep 2019 • Braden Hurl, Robin Cohen, Krzysztof Czarnecki, Steven Waslander
Inter-vehicle communication for autonomous vehicles (AVs) stands to provide significant benefits in terms of perception robustness.
1 code implementation • 27 May 2022 • Liam Hebert, Lukasz Golab, Pascal Poupart, Robin Cohen
We evaluate our methods on the Meta-World environment and find that our approach yields significant improvements over FedAvg and non-federated Soft Actor-Critic single-agent methods.
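The FedAvg baseline referenced above is the standard federated averaging rule: each round, clients train locally and the server replaces the global parameters with a dataset-size-weighted average of the client parameters. A minimal sketch of that aggregation step (an illustration of the baseline, not the paper's federated RL method; names are hypothetical):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of client parameter vectors,
    weighted by each client's local dataset size.

    client_weights: list of parameter vectors (lists of floats), one per client.
    client_sizes: list of local dataset sizes, aligned with client_weights.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    avg = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        share = size / total  # this client's contribution weight
        for i, p in enumerate(weights):
            avg[i] += share * p
    return avg
```

For example, two equally sized clients holding parameters `[1.0, 2.0]` and `[3.0, 4.0]` average to `[2.0, 3.0]`.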