A Look Into The Future

Drawn graphic showing different people exchanging ideas symbolised by blurbs filled with symbols like light bulbs and scales.
© Ezequiel Hyon

Fri, 30.09.2022 9:30 AM - 10:30 AM

Online



Schedule


09:30 Linda Bonyo (Lawyers Hub): Europe’s Artificial Intelligence Act and its Possible Effect on Africa (Non-EU countries)
In April 2021, the European Commission submitted its proposal for a European Union regulatory framework on artificial intelligence (AI), intended to improve the functioning of the internal market by laying down a uniform legal framework for developing, marketing and using AI in conformity with Union values.
The proposed Artificial Intelligence Act represents the first attempt globally at horizontal regulation of AI, and its extraterritorial application means it will have a range of implications for the development of AI regulation across the globe.

Upon taking effect, which experts indicate could occur at the beginning of 2023, the Act will have a broad impact on the use of AI and machine learning by citizens and companies around the world.[1] Its extraterritorial provisions will affect the development and deployment of many AI systems worldwide and will further inspire similar legislative efforts.[2]
Broadly, the Act assigns AI applications to three risk categories: applications and systems that create an unacceptable risk, which are banned; high-risk applications, which are regulated and subject to specific requirements; and all remaining applications, neither explicitly banned nor listed as high-risk, which are left largely unregulated.
Specifically, it seeks to fulfill four objectives: ensuring that AI systems placed and used on the Union market are safe and respect existing law on fundamental rights and Union values; ensuring legal certainty so as to facilitate investment and innovation in AI; enhancing governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and facilitating the development of a single market for lawful, safe and trustworthy AI applications while preventing market fragmentation.[3]
Like the GDPR before it, the Act could become a global standard by which varied jurisdictions determine the extent of AI’s positive or negative effect on them. It therefore forces policymakers and stakeholders to consider, like in data privacy, what the international repercussions of the Act will be (the de facto Brussels effect); and the extent to which it unilaterally impacts international rulemaking (the de jure Brussels effect).[4]
The proposed Act has been lauded for its probable benefits, but it also has drawbacks that could affect not only EU Member States but also non-EU jurisdictions, including the African continent.
The Act affects efforts to build international cooperation on AI,[5] including avoiding unjustified restrictions on the flow of goods and data, tapping the potential of AI to address global challenges, and affirming the principles of openness and fundamental human rights, such as the protection of democracy and freedom of expression. Three categories of AI requirements are most likely to have global implications and warrant separate discussion: high-risk AI in products, high-risk AI in human services, and AI transparency requirements.[6]

At the same time, the proposed Act has been flagged for various shortcomings. Questions remain about the extent to which it protects fundamental rights and whether stronger measures are needed to reduce all associated risks. Its enforcement and implementation also rest on self-assessment, which undermines both the enforcement of compliance under the law and the establishment of an effective framework for enforcing legal rights and duties. Concerns have further been raised that the proposal focuses narrowly on individual harms while overlooking protections against AI's societal harms.

This talk provides a closer look at the Act in light of the pros and cons outlined above, and at the effect it may have on non-EU countries, especially the African continent. It does so by interrogating the definition of AI proposed in the Act, the risk-based approach adopted for regulating AI (including the regulation of applications and systems categorized as high-risk), and the enforcement and implementation of the Act as they relate to compliance and assessment procedures.

[1] Woodie, A., 2022. Europe’s AI Act Would Regulate Tech Globally. [online]

[2] Engler, A., 2022. The EU AI Act will have global impact, but a limited Brussels Effect. [online] Brookings.

[3] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.

[4] Engler, A., 2022. The EU AI Act will have global impact, but a limited Brussels Effect. [online] Brookings. 

[5] Tielemans, J., 2022. The European Union AI Act: Next steps and issues for building international cooperation in AI. [online] Brookings. 

[6] Engler, A., 2022. The EU AI Act will have global impact, but a limited Brussels Effect. [online] Brookings.


09:50 Tarek R. Besold (Dekra): Ethical AI business models: Why it's hard and why it's worth it
The development of so-called ethical or trustworthy AI, and of corresponding products and services, is gaining traction in both academia and industry labs. We will look at the challenges that arise during the development of "ethical" or "trustworthy" consumer offerings (ranging from practical issues with the most common data-acquisition approaches to foundational questions regarding most of the currently popular digital business models) and discuss what it means for a company and its management to make "ethics" or "trustworthiness" a core value of its AI work.


10:10 Q&A