📘 User Guide

AI Act Self-Assessment Tool - EU Regulation 2024/1689


Table of Contents

  1. Introduction and Purpose
  2. How the Tool Works
  3. Step-by-Step Guide to the 8 Questions
  4. AI Act Criteria and Regulation
  5. Interpreting Results
  6. Practical Examples
  7. Frequently Asked Questions (FAQ)
  8. Limitations and Disclaimer

1. Introduction and Purpose

Welcome to the comprehensive guide for using the AI Act Self-Assessment Tool, designed to help organizations understand whether and how EU Regulation 2024/1689 (AI Act) applies to their artificial intelligence systems.

Tool Objective: Provide a preliminary assessment of your AI system's classification under the AI Act, identify applicable regulatory obligations, and outline a roadmap for compliance.

What This Tool Is NOT

⚠️ Important: This tool DOES NOT replace professional legal advice. It is an educational and informational tool that still requires verification by qualified experts for official compliance.

Target Audience

This guide is intended for organizations that develop, deploy, import, or distribute AI systems, and for the legal, compliance, and technical teams that support them.

2. How the Tool Works

Assessment Methodology

The tool uses a structured approach with 8 key questions covering all fundamental aspects of the AI Act:

1 System Identification

Determines if the system falls within the definition of "AI system" according to Art. 3(1) of the AI Act.

2 Operator Role

Identifies your role in the value chain (provider, deployer, importer, etc.).

3 Prohibited Practices

Verifies if the system falls into one of the 8 banned categories (Art. 5).

4 GPAI Models

Determines if it's a General Purpose AI model (Art. 51-56).

5-6 High Risk

Verifies if the system falls under Annex I or III (high-risk systems).

7 Transparency

Identifies specific transparency obligations (Art. 50).

8 Geographic Scope

Determines territorial applicability of the AI Act (Art. 2).

Scoring System

The tool uses a weighting system to calculate overall risk level:

| Category | Weight | Classification |
|---|---|---|
| Prohibited Practices | 100 | PROHIBITED |
| GPAI Systemic | 70 | GPAI Systemic |
| High Risk (Annex I/III) | 60 | High Risk |
| GPAI Standard | 40 | GPAI |
| Transparency | 20 | Limited Risk |
| Other | <20 | Minimal Risk |
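
The classification is driven by the highest-weight category your answers trigger. Here is a minimal sketch of that logic in Python; the names and structure are illustrative, not the tool's actual implementation:

```python
# Illustrative sketch of the weighting logic above; not the tool's real code.
CATEGORY_WEIGHTS = {
    "prohibited": 100,     # PROHIBITED
    "gpai_systemic": 70,   # GPAI Systemic
    "high_risk": 60,       # High Risk (Annex I/III)
    "gpai_standard": 40,   # GPAI
    "transparency": 20,    # Limited Risk
}

LABELS = {
    "prohibited": "PROHIBITED",
    "gpai_systemic": "GPAI Systemic",
    "high_risk": "High Risk",
    "gpai_standard": "GPAI",
    "transparency": "Limited Risk",
}

def classify(triggered: set[str]) -> str:
    """Return the label of the highest-weight triggered category."""
    if not triggered:
        return "Minimal Risk"
    top = max(triggered, key=CATEGORY_WEIGHTS.get)
    return LABELS[top]

# A system that is both high-risk and subject to transparency duties
# is classified by the heavier category:
print(classify({"high_risk", "transparency"}))  # High Risk
```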

3. Step-by-Step Guide to the 8 Questions

Question 1: System Type

Is your system classifiable as Artificial Intelligence?

Regulatory basis: Art. 3(1) AI Act - Official Text

AI System Definition: A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Key characteristics of an AI system:

  • Machine-based operation
  • Varying levels of autonomy
  • Possible adaptiveness after deployment
  • Inference from input to generate outputs (predictions, content, recommendations, decisions)
  • Outputs capable of influencing physical or virtual environments

Example - AI System:
A chatbot that uses NLP (Natural Language Processing) to answer customer questions, learns from interactions, and improves its responses over time → IS an AI system
Example - NOT AI:
Accounting software that calculates taxes automatically using fixed, predefined rules → NOT an AI system (traditional software based on deterministic rules)

Reference: Full definition in Art. 3(1) AI Act
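
As a way to structure this first question, here is a hedged checklist sketch in Python; the class and field names are invented for illustration, and a real determination still requires legal judgment:

```python
from dataclasses import dataclass

# Hypothetical checklist for the Art. 3(1) elements; illustrative only.
@dataclass
class SystemProfile:
    machine_based: bool
    operates_with_autonomy: bool
    infers_outputs_from_input: bool      # predictions, content, recommendations, decisions
    outputs_influence_environment: bool  # physical or virtual

def meets_ai_definition(p: SystemProfile) -> bool:
    """All core elements must hold; adaptiveness after deployment is optional ('may exhibit')."""
    return all([
        p.machine_based,
        p.operates_with_autonomy,
        p.infers_outputs_from_input,
        p.outputs_influence_environment,
    ])

chatbot = SystemProfile(True, True, True, True)
rule_based_tax_software = SystemProfile(True, False, False, True)
print(meets_ai_definition(chatbot))                  # True
print(meets_ai_definition(rule_based_tax_software))  # False
```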

Question 2: Operator Role

What is your role regarding the AI system?

Regulatory basis: Art. 3 (definitions) and Art. 22-27 (specific obligations) - Official Text

Provider (Art. 3(3))

Natural or legal person that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark.

Deployer (Art. 3(4))

Natural or legal person using an AI system under its authority, except for personal non-professional activities.

⚠️ FRIA Warning: If you're a deployer of high-risk systems and operate as a public authority or in critical sectors, you must perform a Fundamental Rights Impact Assessment before deployment (Art. 27). Art. 27 - Official Text

Importer (Art. 3(6))

Natural or legal person established in the EU that places on the market an AI system bearing the name or trademark of a person established outside the EU.

Distributor (Art. 2(v))

Natural or legal person in the supply chain that makes an AI system available on the market without being the provider or importer.

Authorised Representative (Art. 3(5), 22)

Legal person established in the EU appointed in writing by a non-EU provider to act on its behalf.

Reference: Full definitions in Art. 3 and Art. 22-27 AI Act

Question 3: Prohibited Practices

Does the system fall under a PROHIBITED practice?

Regulatory basis: Art. 5 AI Act - Official Text

🚫 ABSOLUTE PROHIBITION: Prohibited practices are banned from February 2, 2025. Penalties up to €35,000,000 or 7% of global annual turnover (Art. 99(3)).

The 8 prohibited categories:

1. Subliminal Techniques (Art. 5(1)(a))

Techniques operating beyond a person's consciousness to distort behavior in a manner causing harm.

Example:
Subliminal messages in videos or audio influencing purchase decisions without user awareness.

2. Exploitation of Vulnerabilities (Art. 5(1)(b))

Exploits specific vulnerabilities due to age, disability, or socio-economic situation.

3. Social Scoring (Art. 5(1)(c))

Evaluation or classification of natural persons based on social behavior or personal characteristics.

4. Predictive Policing - Profiling (Art. 5(1)(d))

Assessment of risk of a natural person committing offenses based solely on profiling or personality traits assessment.

5. Untargeted Facial Scraping (Art. 5(1)(e))

Creating or expanding facial recognition databases through untargeted scraping of images from internet or CCTV.

6. Emotion Recognition - Workplace/School (Art. 5(1)(f))

Inferring emotions of natural persons in the workplace or educational institutions.

Exceptions: Medical or safety reasons.

7. Biometric Categorisation - Sensitive Attributes (Art. 5(1)(g))

Biometric categorisation to infer race, political opinions, trade union membership, sexual orientation, religion.

8. Real-time Remote Biometric Identification (Art. 5(1)(h))

Real-time remote biometric identification in publicly accessible spaces for law enforcement.

Narrow exceptions (Art. 5(2-7)): Search for missing children, preventing terrorist threats, etc.

Complete details: Art. 5 AI Act - Official Text

Question 4: GPAI Models

Is it a General Purpose AI (GPAI) model?

Regulatory basis: Art. 3(63), 51-56 AI Act

GPAI Definition: AI model trained with large amounts of data using self-supervision at scale, displaying significant general capabilities to perform widely applicable tasks, and can be integrated into various downstream systems or applications.

Standard GPAI (Art. 53)

Foundation models with general capabilities but without systemic risks.

Obligations:

  • Technical documentation of the model (Art. 53(1)(a))
  • Information and documentation for downstream providers integrating the model (Art. 53(1)(b))
  • Policy to comply with EU copyright law (Art. 53(1)(c))
  • Publicly available summary of the content used for training (Art. 53(1)(d))

GPAI with Systemic Risk (Art. 55)

GPAI models with high-impact capabilities. Systemic risk is presumed when the cumulative compute used for training exceeds 10²⁵ FLOPs (Art. 51(2)).

Additional obligations:

  • Model evaluation, including adversarial testing (Art. 55(1)(a))
  • Assessment and mitigation of possible systemic risks at Union level (Art. 55(1)(b))
  • Tracking, documenting, and reporting serious incidents to the AI Office (Art. 55(1)(c))
  • Adequate level of cybersecurity protection (Art. 55(1)(d))

Full requirements: Art. 51-56 AI Act - Official Text
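
The systemic-risk presumption is a simple numeric threshold; here is a minimal sketch (constant and function names are ours, not the regulation's):

```python
# Art. 51(2): systemic risk is presumed when cumulative training compute
# exceeds 10^25 floating-point operations (FLOPs). Illustrative sketch.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """True if the GPAI model is presumed to have high-impact capabilities."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with ~5 x 10^25 FLOPs triggers the presumption:
print(presumed_systemic_risk(5e25))  # True
```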

Question 5: Annex I - Regulated Products

Is it a safety component of regulated products?

Regulatory basis: Annex I + Art. 6(1) AI Act

Annex I lists products already subject to EU harmonisation legislation. If AI is a safety component of these products, it automatically becomes high-risk.

Annex I categories include, among others:

  • Machinery, toys, lifts, and personal protective equipment
  • Medical devices and in vitro diagnostic medical devices
  • Radio equipment, pressure equipment, and gas appliances
  • Civil aviation, motor vehicles, rail systems, and marine equipment

Complete list: Annex I AI Act - Official Text

Question 6: Annex III - 8 High-Risk Areas

Does the system operate in one of the 8 high-risk areas?

Regulatory basis: Annex III + Art. 6(2) AI Act

Annex III identifies 8 areas where AI systems are considered high-risk for fundamental rights and safety.

The 8 high-risk areas:

  1. Biometric identification and categorisation
  2. Critical infrastructure management
  3. Education and vocational training
  4. Employment and worker management
  5. Essential public/private services
  6. Law enforcement
  7. Migration, asylum, border control
  8. Administration of justice and democratic processes

Detailed requirements: Annex III AI Act - Official Text

Question 7: Transparency

Does it require specific transparency obligations?

Regulatory basis: Art. 50 AI Act

Some AI systems must inform users of their artificial nature, even if not high-risk.

Chatbots and Conversational Assistants (Art. 50(1))

Providers must design AI systems intended to interact directly with natural persons so that those persons are informed they are interacting with AI, unless this is obvious from the context.

Deepfakes and Synthetic Content (Art. 50(2) and 50(4))

Providers of AI systems generating synthetic audio, image, video, or text must ensure outputs are marked as artificially generated in a machine-readable format (Art. 50(2)). Deployers of deep fakes must disclose that the content has been artificially generated or manipulated (Art. 50(4)).

Full obligations: Art. 50 AI Act - Official Text
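
As an illustration of how these disclosures might be wired into an application, here is a hedged sketch; the function and field names are invented for the example:

```python
# Illustrative only: attaching Art. 50 disclosures to outputs.
def label_chatbot_reply(text: str) -> dict:
    """Art. 50(1): inform users they are interacting with an AI system."""
    return {"text": text,
            "disclosure": "You are chatting with an AI assistant."}

def label_synthetic_media(file_path: str) -> dict:
    """Art. 50(2)/(4): mark AI-generated or manipulated content as such."""
    return {"file": file_path,
            "metadata": {"ai_generated": True,
                         "label": "Artificially generated content"}}
```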

Question 8: Geographic Scope

Where will the system be used or marketed?

Regulatory basis: Art. 2 (Scope) AI Act

The AI Act applies to:

  • Providers placing AI systems on the EU market or putting them into service in the EU, wherever they are established
  • Deployers of AI systems established or located in the EU
  • Providers and deployers in third countries, where the output produced by the system is used in the EU
  • Importers and distributors of AI systems
  • Authorised representatives of non-EU providers
  • Affected persons located in the EU

Territorial scope: Art. 2 AI Act - Official Text
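
Reduced to its main triggers, the scope test can be sketched as a set of boolean checks (simplified; the real analysis in Art. 2 has further cases and exclusions):

```python
# Simplified sketch of the main Art. 2(1) territorial triggers.
def within_ai_act_scope(placed_on_eu_market: bool,
                        deployer_in_eu: bool,
                        output_used_in_eu: bool) -> bool:
    """Any single trigger is enough to bring the system within scope."""
    return placed_on_eu_market or deployer_in_eu or output_used_in_eu

# A system built outside the EU whose output is used in the EU is in scope:
print(within_ai_act_scope(False, False, True))  # True
```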

4. AI Act Criteria and Regulation

Fundamental AI Act Principles

The AI Act is based on a risk-based approach, classifying AI systems into categories with obligations proportionate to risk.

| Category | Risk | Obligations | Applicable from |
|---|---|---|---|
| PROHIBITED | Unacceptable | Absolute ban | Feb 2, 2025 |
| High Risk | High | CE conformity, documentation, registration | Aug 2, 2026 (Annex I: Aug 2, 2027) |
| GPAI Systemic | Systemic | Art. 53 + Art. 55 | Aug 2, 2025 |
| GPAI Standard | Moderate | Art. 53 | Aug 2, 2025 |
| Limited Risk | Low | Transparency (Art. 50) | Aug 2, 2026 |
| Minimal Risk | Minimal | No specific obligations | - |
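
Since obligations phase in over time, a date check like the following sketch (dates taken from the table above; helper name is ours) shows which categories already apply on a given day:

```python
from datetime import date

# Application dates from the table above; illustrative helper.
APPLICATION_DATES = {
    "PROHIBITED": date(2025, 2, 2),
    "GPAI Systemic": date(2025, 8, 2),
    "GPAI Standard": date(2025, 8, 2),
    "High Risk (Annex III)": date(2026, 8, 2),
    "Limited Risk": date(2026, 8, 2),
    "High Risk (Annex I)": date(2027, 8, 2),
}

def applicable_on(day: date) -> list[str]:
    """Categories whose obligations already apply on the given date."""
    return [cat for cat, d in APPLICATION_DATES.items() if day >= d]

print(applicable_on(date(2025, 9, 1)))
# ['PROHIBITED', 'GPAI Systemic', 'GPAI Standard']
```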

High-Risk Systems Obligations (Detail)

Art. 9 - Risk Management System

Continuous, iterative process throughout the lifecycle:

  • Identification and analysis of known and reasonably foreseeable risks
  • Estimation and evaluation of risks arising from intended use and reasonably foreseeable misuse
  • Adoption of appropriate and targeted risk-management measures
  • Testing to identify the most suitable measures

Reference: Art. 9 AI Act

Art. 10 - Data Governance

Training, validation, and testing datasets must be:

  • Relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose
  • Subject to appropriate data-governance practices (design choices, collection, preparation, labelling)
  • Examined for possible biases likely to affect health, safety, or fundamental rights

Reference: Art. 10 AI Act

Art. 11 - Technical Documentation

Complete documentation (kept for 10 years) must include:

  • A general description of the AI system and its intended purpose
  • A detailed description of the development process, design choices, and system architecture
  • Information on training, validation, and testing data
  • A description of the risk-management system and post-market monitoring plan

Reference: Art. 11 AI Act

Art. 12 and 19 - Record-keeping (Logging)

High-risk systems must technically allow automatic recording of events (logs):

  • Over the system's entire lifetime
  • With a level of traceability appropriate to the intended purpose
  • For certain biometric systems: the period of each use, the reference database checked, the input data, and the persons verifying the results

Reference: Art. 12 and 19 AI Act

Sanctions (Art. 99 and 101)

| Violation | Maximum Penalty |
|---|---|
| Prohibited practices (Art. 5) | €35,000,000 or 7% of global annual turnover |
| GPAI provider violations (Art. 101) | €15,000,000 or 3% of global annual turnover |
| Other operator obligations (e.g. Art. 16, 26, 50) | €15,000,000 or 3% of global annual turnover (Art. 99(4)) |
| Incorrect, incomplete, or misleading information to authorities | €7,500,000 or 1% of global annual turnover (Art. 99(5)) |

In each case, whichever amount is higher; for SMEs and start-ups, whichever is lower (Art. 99(6)).

Penalty details: Art. 99 AI Act - Official Text
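
The cap arithmetic itself is straightforward; a minimal sketch (function name is ours):

```python
# Sketch of the Art. 99 fine cap: the higher of the fixed amount and the
# turnover percentage; for SMEs and start-ups, the lower (Art. 99(6)).
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float,
             sme: bool = False) -> float:
    candidates = (fixed_eur, pct * global_turnover_eur)
    return min(candidates) if sme else max(candidates)

# Art. 5 violation with €2bn global turnover:
# max(35_000_000, 0.07 * 2_000_000_000) = €140,000,000
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```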

5. Interpreting Results

Possible Classifications

PROHIBITED System

Required action: Immediately stop development/deployment. Consult legal experts. Evaluate compliant alternatives.

Legal reference: Art. 5 AI Act

High-Risk System

Required action: Start full Art. 9-15 compliance. EU database registration. CE marking. Timeline: by August 2, 2026.

Legal reference: Art. 6, 9-15, 43-49 AI Act

GPAI Standard

Required action: Model documentation, copyright policy, training data summary. Timeline: by August 2, 2025.

Legal reference: Art. 53 AI Act

GPAI with Systemic Risk

Required action: Standard GPAI obligations + adversarial testing, risk mitigation, advanced cybersecurity, AI Office notification. Timeline: by August 2, 2025.

Legal reference: Art. 53 + 55 AI Act

Limited Risk (Transparency)

Required action: Implement transparency obligations (Art. 50). User notifications, synthetic content marking. Timeline: by August 2, 2026.

Legal reference: Art. 50 AI Act

Minimal Risk

Required action: No specific AI Act obligations. Consider voluntary codes of conduct (Art. 95).

Legal reference: Art. 95 AI Act

6. Practical Examples

Example 1: Customer Service Chatbot

Scenario: A retail company develops an AI chatbot for customer support on their e-commerce site.

Result: Limited Risk - Transparency Obligations (Art. 50)

Legal reference: Art. 50 AI Act

Example 2: HR Recruitment System

Scenario: A US multinational uses an AI platform for CV screening and candidate selection in its European subsidiaries.

Result: High Risk (Annex III - Area 4: Employment)

Legal reference: Annex III + Art. 6(2), 9-15, 26-27 AI Act

Example 3: GPT-4 Integrated in Medical App

Scenario: An Italian startup develops a medical diagnostic app that uses GPT-4 to analyze symptoms and suggest diagnoses.

Result: GPAI Systemic + High Risk (Cumulative Classification)

Legal reference: Art. 51-56 (GPAI) + Annex I (Medical Devices) + Art. 9-15 AI Act

7. Frequently Asked Questions (FAQ)

General

Q: Does the AI Act apply to all AI systems?

A: No. The AI Act applies only to systems placed on the market, put into service, or used in the EU, or whose output is used in the EU. Use by natural persons in the course of purely personal, non-professional activities is excluded (Art. 2(10)).

Reference: Art. 2 AI Act

Q: When does the AI Act enter into force?

A: Progressively:

  • August 1, 2024: entry into force
  • February 2, 2025: prohibitions (Art. 5) and AI literacy obligations
  • August 2, 2025: GPAI rules, governance, and penalties
  • August 2, 2026: general application, including Annex III high-risk systems and transparency (Art. 50)
  • August 2, 2027: high-risk systems under Annex I

Q: What if I develop the system outside the EU but sell it in the EU?

A: The AI Act applies. Non-EU providers of high-risk AI systems must appoint an Authorised Representative in the EU (Art. 22) and comply with all applicable obligations.

Reference: Art. 22 AI Act

High Risk

Q: If my system is high-risk, how long do I have to comply?

A:

  • Annex III systems: by August 2, 2026
  • Annex I systems (safety components of regulated products): by August 2, 2027
  • Systems already placed on the market before those dates fall under the transitional provisions of Art. 111

Reference: Art. 111 and 113 AI Act (transitional provisions and application dates)

Q: How much does compliance cost for a high-risk system?

A: It depends on system complexity. Typical cost items include:

  • Gap analysis and legal assessment
  • Risk-management system and technical documentation (Art. 9-11)
  • Quality management system (Art. 17)
  • Conformity assessment and, where required, notified-body fees
  • Ongoing post-market monitoring and compliance maintenance

GPAI

Q: If I use GPT-4 in my app, must I comply with GPAI obligations?

A: It depends on your role:

  • If you merely integrate GPT-4 via API, the GPAI model obligations (Art. 53, 55) rest with the model provider (OpenAI)
  • If you fine-tune or substantially modify the model and place it on the market under your own name, you may yourself become a provider with GPAI obligations
  • In any case, the AI system you build on top of the model must be assessed under its own risk classification (e.g. high-risk, transparency)

Reference: Art. 51-56 AI Act

8. Limitations and Disclaimer

⚠️ Important Notices

This tool is provided exclusively for informational and educational purposes.

The generated assessment DOES NOT constitute professional legal advice and cannot replace the evaluation of qualified experts in AI Act compliance (EU Regulation 2024/1689).

Tool Limitations:

  • Provides preliminary analysis based on declarative user responses
  • Each specific case requires evaluation by legal professionals
  • EU Regulation 2024/1689 requires contextual interpretation of rules
  • Official guidelines and case law may evolve over time

Studio Legale Fabiano and Avv. Nicola Fabiano disclaim all liability for:

  • Decisions made based on this tool's results
  • Direct or indirect damages arising from tool use
  • Incompleteness or inaccuracy of provided information
  • Regulatory changes subsequent to tool release date

For official AI Act compliance verification, it is strongly recommended to consult qualified professionals specialized in technology law and AI compliance.

Professional Consultation Contacts

For professional AI Act compliance assistance:

Version and Release Date

Guide version: 1.0.0
Date: October 2025
Based on: EU Regulation 2024/1689 (AI Act) - Official text published in the Official Journal of the EU

License and Copyright

© 2025 Studio Legale Fabiano - Avv. Nicola Fabiano
All rights reserved.
License: CC BY-NC-ND 4.0
