Search Results for author: Philipp Hacker

Found 7 papers, 0 papers with code

Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity

no code implementations • 14 Jan 2024 • Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi

The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape.

AI Regulation in Europe: From the AI Act to Future Regulatory Challenges

no code implementations • 6 Oct 2023 • Philipp Hacker

This chapter provides a comprehensive discussion on AI regulation in the European Union, contrasting it with the more sectoral and self-regulatory approach in the UK.

Management

Regulating ChatGPT and other Large Generative AI Models

no code implementations • 5 Feb 2023 • Philipp Hacker, Andreas Engel, Marco Mauer

We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large.

Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness under the DMA, the GDPR, and beyond

no code implementations • 9 Dec 2022 • Philipp Hacker, Johann Cordes, Janina Rochon

Artificial intelligence is not only increasingly used in business and administration contexts, but a race for its regulation is also underway, with the EU spearheading the efforts.

Fairness • Jurisprudence

The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future

no code implementations • 25 Nov 2022 • Philipp Hacker

This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability.

Explainable Artificial Intelligence (XAI) • Fairness

Matching Code and Law: Achieving Algorithmic Fairness with Optimal Transport

no code implementations • 21 Dec 2017 • Meike Zehlike, Philipp Hacker, Emil Wiedemann

As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature.
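The snippet below is a minimal, hypothetical sketch of this general idea, not the paper's actual implementation: group-conditional scores are repaired via 1-D optimal transport toward a common barycenter distribution, with an interpolation parameter theta assumed to slide between the WYSIWYG worldview (theta = 0, raw scores kept) and the WAE worldview (theta = 1, all groups mapped onto the barycenter). Function and variable names are illustrative assumptions.

```python
import numpy as np

def repair_scores(scores_by_group, theta=0.5, grid=1001):
    """Interpolate each group's scores toward the 1-D Wasserstein barycenter.

    theta = 0 returns the raw scores (WYSIWYG); theta = 1 maps every group
    onto the common barycenter distribution (WAE). Intermediate values give
    the partial "worldviews" described above. Illustrative sketch only.
    """
    qs = np.linspace(0.0, 1.0, grid)
    # Quantile function of each group's empirical score distribution.
    group_quantiles = {g: np.quantile(s, qs) for g, s in scores_by_group.items()}
    # In 1-D, the equal-weight Wasserstein barycenter is the average quantile function.
    barycenter = np.mean(list(group_quantiles.values()), axis=0)

    repaired = {}
    for g, s in scores_by_group.items():
        s = np.asarray(s, dtype=float)
        # Rank of each score within its own group (empirical CDF value) ...
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        # ... mapped to the corresponding score under the barycenter distribution.
        target = np.interp(ranks, qs, barycenter)
        # Partial repair: slide between the raw score and the barycenter score.
        repaired[g] = (1.0 - theta) * s + theta * target
    return repaired

# Example: two groups with shifted score distributions, half-way repaired.
rng = np.random.default_rng(0)
scores = {"A": rng.normal(0.6, 0.1, 500), "B": rng.normal(0.4, 0.1, 500)}
half_repaired = repair_scores(scores, theta=0.5)
```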

Fairness
