Search Results for author: Pim Haselager

Found 4 papers, 0 papers with code

Minding rights: Mapping ethical and legal foundations of 'neurorights'

no code implementations • 13 Feb 2023 • Sjors Ligthart, Marcello Ienca, Gerben Meynen, Fruzsina Molnar-Gabor, Roberto Andorno, Christoph Bublitz, Paul Catley, Lisa Claydon, Thomas Douglas, Nita Farahany, Joseph J. Fins, Sara Goering, Pim Haselager, Fabrice Jotterand, Andrea Lavazza, Allan McCay, Abel Wajnerman Paz, Stephen Rainey, Jesper Ryberg, Philipp Kellmeyer

The rise of neurotechnologies, especially in combination with AI-based methods for brain data analytics, has raised concerns around the protection of mental privacy, mental integrity and cognitive liberty - often framed as 'neurorights' in ethical, legal and policy discussions.

The 3TConv: An Intrinsic Approach to Explainable 3D CNNs

no code implementations • 1 Jan 2021 • Gabrielle Ras, Luca Ambrogioni, Pim Haselager, Marcel van Gerven, Umut Güçlü

In a 3TConv, the 3D convolutional filter is obtained by learning a 2D filter and a set of temporal transformation parameters, resulting in a sparse filter that requires fewer parameters (a minimal sketch of this construction follows this entry).

Action Recognition
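The abstract excerpt above only describes the 3TConv construction in words; below is a minimal, illustrative PyTorch sketch of the idea, assuming that each temporal slice of the 3D filter is produced by applying a learned per-slice affine transform to one shared 2D filter. The class name TTTConv3d, the affine parameterisation and all hyperparameters are assumptions made for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TTTConv3d(nn.Module):  # hypothetical name, not from the paper
        def __init__(self, in_ch, out_ch, kernel_size=3, temporal_depth=3):
            super().__init__()
            self.temporal_depth = temporal_depth
            # One shared 2D filter bank instead of a full 3D filter bank.
            self.filter2d = nn.Parameter(
                torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01
            )
            # One 2x3 affine transform per temporal slice (initialised to identity).
            self.theta = nn.Parameter(torch.eye(2, 3).repeat(temporal_depth, 1, 1))

        def build_3d_filter(self):
            out_ch, in_ch, kh, kw = self.filter2d.shape
            slices = []
            for t in range(self.temporal_depth):
                # Warp the shared 2D filter with this slice's affine transform.
                grid = F.affine_grid(
                    self.theta[t].unsqueeze(0).expand(out_ch, -1, -1),
                    size=(out_ch, in_ch, kh, kw),
                    align_corners=False,
                )
                slices.append(F.grid_sample(self.filter2d, grid, align_corners=False))
            # Stack along the temporal axis: (out_ch, in_ch, T, kH, kW).
            return torch.stack(slices, dim=2)

        def forward(self, x):  # x: (N, C, T, H, W)
            return F.conv3d(x, self.build_3d_filter(), padding=(0, 1, 1))

    if __name__ == "__main__":
        layer = TTTConv3d(in_ch=3, out_ch=8)
        video = torch.randn(2, 3, 16, 32, 32)
        print(layer(video).shape)  # torch.Size([2, 8, 14, 32, 32])

The parameter saving is visible in the construction: the layer stores one 2D filter bank plus 6 affine parameters per temporal slice, instead of a separate 2D filter for every slice of a full 3D kernel.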

Explainable 3D Convolutional Neural Networks by Learning Temporal Transformations

no code implementations • 29 Jun 2020 • Gabriëlle Ras, Luca Ambrogioni, Pim Haselager, Marcel A. J. van Gerven, Umut Güçlü

Finally, we implicitly demonstrate that, in popular ConvNets, the 2DConv can be replaced with a 3TConv and that the weights can be transferred to yield pretrained 3TConvs (a sketch of this transfer follows this entry).

Image Classification
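As a companion to the weight-transfer claim in the excerpt above, here is a hedged sketch of how pretrained 2DConv weights could seed a 3TConv-style layer: copy the spatial filters into the shared 2D filter and start the temporal transforms at identity, so each temporal slice initially equals the pretrained 2D filter. The helper name transfer_2d_weights and the tensor layout are assumptions for illustration, not the paper's exact procedure.

    import torch
    import torch.nn as nn

    def transfer_2d_weights(conv2d: nn.Conv2d, temporal_depth: int = 3):
        # Reuse a pretrained 2D filter bank as the shared spatial filter of a
        # 3TConv-style layer, with identity temporal transforms per slice.
        filter2d = conv2d.weight.detach().clone()             # (out, in, kH, kW)
        theta = torch.eye(2, 3).repeat(temporal_depth, 1, 1)  # (T, 2, 3) identities
        return filter2d, theta

    pretrained = nn.Conv2d(3, 8, kernel_size=3)  # stand-in for a pretrained 2D layer
    filter2d, theta = transfer_2d_weights(pretrained)
    print(filter2d.shape, theta.shape)  # torch.Size([8, 3, 3, 3]) torch.Size([3, 2, 3])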

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges

no code implementations • 20 Mar 2018 • Gabrielle Ras, Marcel van Gerven, Pim Haselager

Different kinds of users are identified and their concerns revealed; relevant statements from the General Data Protection Regulation are analyzed in the context of Deep Neural Networks (DNNs); a taxonomy for the classification of existing explanation methods is introduced; and finally, the various classes of explanation methods are analyzed to verify whether user concerns are justified.
