PrivacySolved: AI and Machine Learning

Briefing

Artificial Intelligence (AI) and machine learning services are expanding rapidly around the world. These technologies affect all sectors and are predicted to be key drivers of economic growth, innovation, automation, security and knowledge. Such rapid expansion creates unique opportunities but also systemic risks. The European Union (EU) seeks a first-mover advantage in creating a trusted environment for the growth and development of responsible AI. This aim has economic and geopolitical motives, but also promises benefits for economic participation, innovation, social equity, human rights, environmental improvement, and research and development. In April 2021, the European Commission published a package of legal and policy measures to promote responsible AI. The centrepiece is a draft law, the AI Regulation. The package also includes a Machinery Regulation to increase product safety and an update to the EU's Coordinated Plan on AI. The EU institutions will finalise and adopt these measures in the coming months and years.

Artificial Intelligence Systems  

The AI Regulation applies to Artificial Intelligence systems (AI systems). These are broadly defined as software developed with one or more specified techniques and approaches which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The specified techniques and approaches include machine learning (supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning). Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems, are also covered. The definition also extends to statistical approaches, Bayesian estimation, and search and optimisation methods.
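To make the breadth of this definition concrete, even a few lines of conventional supervised machine learning would fall within it. The following is a minimal illustrative sketch in Python using the scikit-learn library; the dataset, the lending scenario and the model choice are our own assumptions for illustration, not examples taken from the Regulation:

# Minimal sketch of a supervised machine learning "AI system" under the
# draft definition. The data and scenario below are invented; any
# comparable pipeline would be covered in the same way.
from sklearn.linear_model import LogisticRegression

# Human-defined objective: predict whether a loan applicant will repay (1)
# or default (0) from two numeric features (e.g. income, existing debt).
X_train = [[45_000, 5_000], [22_000, 18_000], [60_000, 2_000], [30_000, 25_000]]
y_train = [1, 0, 1, 0]  # human-provided labels (supervised learning)

model = LogisticRegression()
model.fit(X_train, y_train)  # the "technique": a statistical / ML approach

# Output: a prediction that can influence a real-world decision, which is
# what brings software inside the AI system definition.
print(model.predict([[40_000, 10_000]]))

The same software, applied to recruitment or credit decisions, would also land in the High Risk category discussed below, which is why the breadth of the definition matters commercially.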

The Scope of the EU AI Regulation 

The AI Regulation will apply to private and public sector organisations both inside and outside the EU if an AI system is placed on the EU market or if its use affects people located in the EU. Providers, developers and manufacturers of AI systems, as well as users (buyers) of high-risk AI systems, fall within the scope of the AI Regulation. Private, non-professional users of AI systems are excluded.

Unacceptable Risk (Prohibited), High Risk, Limited Risk and Minimal Risk AI

The AI Regulation takes a risk-based approach with four levels:

Unacceptable risk (Prohibited): This relates to a small number of particularly harmful uses of AI that contravene EU values because they violate EU fundamental rights. The bans include social scoring by governments, exploitation of the vulnerabilities of children and the use of subliminal techniques. 'Real-time' remote biometric identification systems in publicly accessible spaces used for law enforcement purposes will also be banned, subject to narrow exceptions.

High Risk: This relates to a limited number of AI systems that create an adverse impact on the safety or fundamental rights (protected by the EU Charter of Fundamental Rights) of individuals. The high-risk categories are:

  • Biometric identification and categorisation of natural persons;
  • Management and operation of critical infrastructure;
  • Education and vocational training;
  • Employment, workers management and access to self-employment;
  • Access to and enjoyment of essential private services and public services and benefits;
  • Law enforcement;
  • Migration, asylum and border control management;
  • Administration of justice and democratic processes.

These categories can be reviewed and expanded over time to futureproof the Regulation. They also cover safety components of products regulated by sectoral EU laws; such systems will always be high-risk when they are subject to third-party conformity assessment under those sectoral laws.

To ensure trust, consistency, effective protection and compliance with EU fundamental rights, mandatory requirements are proposed for all high-risk AI systems. These cover the quality of the data sets used, technical documentation and record keeping, transparency and the provision of information to users, human oversight, and the robustness, accuracy and cybersecurity of the AI system. Where data breaches or cybersecurity incidents occur, national AI Regulation authorities will have access to the information needed to investigate whether the use of the AI system complied with the law.
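In practice, the record-keeping and traceability requirements imply that a high-risk system should log enough information for a regulator to reconstruct individual outputs after an incident. The sketch below is our own illustration of the kind of per-decision audit record a provider might capture; the Regulation does not prescribe this schema, and the field names and file name are assumptions:

# Illustrative only: one possible per-decision audit record for a
# high-risk AI system. The Regulation requires record keeping and
# traceability but does not mandate this format.
import json
import datetime

def log_decision(model_version, input_features, output, overridden_by_human):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,         # links the output to technical documentation
        "input_features": input_features,       # supports investigation of data set quality
        "output": output,                       # the prediction or decision produced
        "human_override": overridden_by_human,  # evidence of human oversight
    }
    # Append-only log that national authorities could inspect after an incident.
    with open("ai_decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-model-1.3.0", {"income": 40000, "debt": 10000},
             "approve", overridden_by_human=False)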

Limited Risk: Specific transparency requirements are imposed on certain AI systems where there is a clear risk of manipulation, for example chatbots. Users must be made aware that they are interacting with a machine or automated system.

Minimal Risk: All other AI systems can be developed, sold and used subject to existing laws, without additional EU legal obligations. Most AI systems used in the EU will fall into this category. Providers of Minimal Risk systems can choose to apply trustworthy AI requirements and adhere to voluntary codes of conduct.
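The four tiers can be read as a decision cascade: a system is screened first against the prohibited uses, then against the high-risk categories, then against the transparency triggers, and everything else is minimal risk. The sketch below models that cascade; the tier names come from the Regulation, but the screening lists are simplified illustrations of the legal categories, not legal tests:

# Simplified illustration of the Regulation's risk cascade. This is a
# rough sketch for orientation, not a substitute for legal analysis.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Unacceptable risk (banned)"
    HIGH = "High risk (mandatory requirements and conformity assessment)"
    LIMITED = "Limited risk (transparency obligations)"
    MINIMAL = "Minimal risk (voluntary codes of conduct)"

PROHIBITED_USES = {"social scoring by government", "subliminal manipulation"}
HIGH_RISK_AREAS = {"biometric identification", "critical infrastructure",
                   "education", "employment", "essential services",
                   "law enforcement", "migration and border control",
                   "administration of justice"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deep fake", "emotion recognition"}

def classify(use_case: str) -> RiskTier:
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment"))   # RiskTier.HIGH
print(classify("spam filter"))  # RiskTier.MINIMAL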

Enforcement and Penalties

Each EU Member State will apply and enforce the AI Regulation by designating national authorities to implement, apply, supervise and enforce it, and to carry out market surveillance activities. Each national AI authority will also represent its country on the EU-level European Artificial Intelligence Board (EAIB).

The AI Regulation requires EU Member States to put in place effective and proportionate penalties, including administrative fines, for infringements, and to inform the European Commission. When AI systems that do not meet the requirements of the AI Regulation enter the market or are in use, EU Member States must take enforcement action. The AI Regulation sets out the following penalty thresholds (a worked example follows the list):

(i) Up to €30m or 6% of the total worldwide annual turnover of the previous financial year (whichever is higher) for infringements of the prohibited practices or non-compliance with the data requirements;

(ii) Up to €20m or 4% of the total worldwide annual turnover of the previous financial year (whichever is higher) for non-compliance with any of the other requirements or obligations of the AI Regulation;

(iii) Up to €10m or 2% of the total worldwide annual turnover of the previous financial year (whichever is higher) for the supply of incorrect, incomplete or misleading information to notified bodies and national authorities in reply to a request.
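Because each threshold is the higher of a fixed amount and a percentage of worldwide turnover, the applicable maximum depends on the size of the infringing organisation. The short worked example below mirrors the three tiers above; the company turnover figures are invented for illustration:

# Worked example of the penalty ceilings. The tiers mirror the draft
# Regulation; the turnover figures are invented for illustration.
TIERS = {
    "prohibited_or_data": (30_000_000, 0.06),  # (i)   €30m or 6%
    "other_obligations":  (20_000_000, 0.04),  # (ii)  €20m or 4%
    "misleading_info":    (10_000_000, 0.02),  # (iii) €10m or 2%
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    fixed, pct = TIERS[tier]
    return max(fixed, pct * worldwide_annual_turnover)  # whichever is higher

# A company with €2bn worldwide turnover: 6% (= €120m) exceeds the €30m floor.
print(f"€{max_fine('prohibited_or_data', 2_000_000_000):,.0f}")  # €120,000,000

# A small provider with €50m turnover: the €30m fixed ceiling is higher.
print(f"€{max_fine('prohibited_or_data', 50_000_000):,.0f}")     # €30,000,000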

Strategic and Operational Impacts

AI system providers and users (buyers) have been put on notice of a significant regulatory framework that will have cascading impacts on AI systems around the world. The full impacts will emerge over time, but the following are significant:

  • The definition of Artificial Intelligence systems is deliberately broad and subject to future expansion and regulatory reinforcement. This does not necessarily make the AI Regulation unfocussed or difficult to enforce. Rather, the expansive definition is a deliberate policy position by the EU: all AI systems that interface with the EU will be subject to some form of regulation, even if only self-regulation and voluntary codes of conduct for Minimal Risk AI systems.
  • The AI Regulation treats the GDPR as the accepted baseline protection for personal data and special categories of data, and is best seen as a detailed law that overlays and supplements the GDPR for AI systems. As a result, poor or ineffective GDPR compliance will undermine a business or organisation's ability to operationalise the AI Regulation.
  • The AI Regulation is a bold geopolitical, economic, social equity and commercial effort to shape the future of international AI regulation. Its application outside the EU is intentional, and this effect should not be underestimated.
  • The AI Regulation puts forward a radical hybrid of infrastructure, hardware, software, data and cybersecurity protections (the complete IT stack and supply chain) by incorporating elements of product safety, product liability, consumer protection, product conformity and product certification. 
  • AI system providers, purchasers and users will need to review their digital transformation and new technology lifecycles to ensure that AI systems are purchased and adopted efficiently within the implementation timeline of the AI Regulation. Older, non-compliant AI systems will need to be significantly upgraded or decommissioned over time. Businesses and organisations that are AI-only, or significantly reliant on AI systems, will need to establish bespoke AI Regulation projects and allocate significant and flexible budgets. Relationships between AI system providers and business and organisational users will become more interdependent and mature over time.

For help with Artificial Intelligence systems, Data Ethics compliance, New Technology and Digital Transformation projects, Board awareness and Staff training, contact PrivacySolved:

London +44 207 175 9771

Dublin +353 1 960 9370

Email: contact@privacysolved.com

PS042021