AI Act: Spanish presidency sets out options on key topics of negotiation

04 July 2023 06:03

The topics of AI definition, high-risk classification, list of high-risk use cases and the fundamental rights impact assessment will be on the table of the Council this week as the Spanish presidency prepares to dive headfirst into negotiations.

Spain took over the rotating presidency of the EU Council of Ministers on July 1. On top of its digital priorities, Madrid seeks to reach a political agreement on the AI Act, landmark legislation to regulate Artificial Intelligence based on its potential to cause harm, EURACTIV reports.

The Spanish presidency circulated a document, dated June 29 and seen by EURACTIV, to inform an exchange of views on four critical points of the AI rulebook on July 5 at the Telecom Working Party, a technical body of the Council.

The discussion will inform the position of the presidency in the next negotiation session between the EU Council, Parliament and Commission, a so-called trilogue, on July 18.

AI definition

The European Parliament’s definition of Artificial Intelligence aligns with that of the Organisation for Economic Co-operation and Development (OECD), seeking to anticipate future adjustments being discussed within the international organisation.

“Artificial intelligence system (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments,” reads the Parliament’s text.

By contrast, whilst the Council also adopted some elements from the OECD’s definition, it further narrowed it down to machine learning approaches and logic- and knowledge-based approaches, to prevent traditional software from being caught in the definition.

“This [the OECD’s] definition seems to cover software that should not be classified as AI,” reads the presidency’s note, setting out three possible options: sticking with the Council’s text, moving toward the Parliament or waiting for the September trialogue to assess the direction the OECD is taking.

High-risk classification

The AI Act requires developers of systems at high risk of causing harm to people’s safety and fundamental rights to comply with a stricter regime concerning risk management, data governance and technical documentation.

How systems fall into this category was the subject of hefty amendments. Initially, the draft law automatically classified as high-risk any AI application that fell into a list of use cases in Annex III. Both co-legislators removed this automatism and introduced an “extra layer”.

For the Council, this layer concerns the significance of the output of the AI system in the decision-making process, with purely accessory outputs kept out of the scope.

MEPs introduced a system whereby AI developers would self-assess whether an application covered by Annex III is high-risk, based on guidance provided by the EU Commission. If a company considers its system not to be high-risk, it would have to inform the relevant authority, which would have three months to object if it considers the system misclassified.

Again, the options involve maintaining the Council’s general approach or moving toward the Parliament, but several midway solutions are also envisaged in this case.

One option is to adopt the MEPs’ version but without the notification of the competent authorities. Alternatively, this version could be further refined by introducing clear criteria for AI providers to self-assess, as binding rules rather than “soft” guidance.

The final proposal is the Parliament’s system without notification and with binding criteria, plus exploring “further options to provide additional guidance for providers, for example, using a repository of examples of AI systems covered by Annex III that should not be considered high-risk.”

List of high-risk use cases

Both co-legislators heavily amended the list in Annex III. EU countries deleted deep fake detection by law enforcement authorities, crime analytics, and the verification of the authenticity of travel documents while adding critical digital infrastructure and life and health insurance.

MEPs expanded the list significantly, introducing biometrics, critical infrastructure, the recommender systems of the largest social media platforms, systems that might influence electoral outcomes, AI used in dispute resolution, and border management.

“Delegations are asked to provide their views on the additions and modifications described above,” the note continues.

Fundamental rights impact assessment

Left-to-centre lawmakers want to oblige users of high-risk AI systems to conduct a fundamental rights impact assessment before the tool is put into service, which should consider the intended use, the temporal scope and the categories of people or groups likely to be affected.

In addition, a six-week consultation with the relevant stakeholders should be launched to inform the impact assessment.

“The Council’s text does not include such an obligation, and it is important to recall that the GDPR [General Data Protection Regulation] already requires both businesses and public organisations to consider whether high risks are likely to occur to rights and freedoms during their processing of personal data,” adds the document.

The Spanish presidency did not even offer the option of accepting the Parliament’s text without curtailing the measure to public-sector uses only. Additional options include removing the six-week consultation period or the requirement to inform authorities about the assessment.

Additional questions

The presidency also asked two additional questions. First, the EU Parliament’s mandate touches upon concepts such as democracy, the rule of law and sustainability; hence, EU countries are asked whether they think the AI Act is the right place to address these issues.

Second, member states are asked for their views on whether the term “deployer” should be introduced to avoid confusion.

Caliber.Az