AI Act Self-Assessment Tool - EU Regulation 2024/1689
Access the official EU Regulation 2024/1689 (AI Act):
🇬🇧 English Version (PDF), published in the Official Journal of the European Union
Welcome to the comprehensive guide for using the AI Act Self-Assessment Tool, designed to help organizations understand whether and how EU Regulation 2024/1689 (AI Act) applies to their artificial intelligence systems.
The tool uses a structured approach, walking you through key questions that cover the fundamental aspects of the AI Act:

1. **AI system definition**: determines whether the system falls within the definition of "AI system" under Art. 3(1) of the AI Act.
2. **Your role**: identifies your role in the value chain (provider, deployer, importer, etc.).
3. **Prohibited practices**: verifies whether the system falls into one of the 8 banned categories (Art. 5).
4. **GPAI**: determines whether it is a General Purpose AI model (Art. 51-56).
5. **High risk**: verifies whether the system falls under Annex I or Annex III (high-risk systems).
6. **Transparency**: identifies specific transparency obligations (Art. 50).
7. **Territorial scope**: determines the territorial applicability of the AI Act (Art. 2).
The tool uses a weighting system to calculate overall risk level:
| Category | Weight | Classification |
|---|---|---|
| Prohibited Practices | 100 | PROHIBITED |
| GPAI Systemic | 70 | GPAI Systemic |
| High Risk (Annex I/III) | 60 | High Risk |
| GPAI Standard | 40 | GPAI |
| Transparency | 20 | Limited Risk |
| Other | <20 | Minimal Risk |
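As a rough illustration, a weighted classification like this can be expressed in a few lines of Python. This is a minimal sketch of the logic described by the table, not the tool's actual implementation; the function and category names are illustrative assumptions.

```python
# Minimal sketch of the weight-based classification above.
# Weights and labels mirror the table; names and structure are assumptions.

CATEGORY_WEIGHTS = {
    "prohibited": (100, "PROHIBITED"),
    "gpai_systemic": (70, "GPAI Systemic"),
    "high_risk": (60, "High Risk"),
    "gpai_standard": (40, "GPAI"),
    "transparency": (20, "Limited Risk"),
}

def classify(triggered: set[str]) -> str:
    """Return the label of the highest-weight category that was triggered."""
    label, best = "Minimal Risk", 0  # default when nothing triggers (<20)
    for category in triggered:
        weight, cat_label = CATEGORY_WEIGHTS[category]
        if weight > best:
            best, label = weight, cat_label
    return label

# Example: a CV-screening system that also interacts with candidates
print(classify({"high_risk", "transparency"}))  # -> "High Risk"
```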
Regulatory basis: Art. 3(1) AI Act - Official Text
Key characteristics of an AI system:
- it is a machine-based system;
- it is designed to operate with varying levels of autonomy;
- it may exhibit adaptiveness after deployment;
- for explicit or implicit objectives, it infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions;
- its outputs can influence physical or virtual environments.
Reference: Full definition in Art. 3(1) AI Act
Regulatory basis: Art. 3 (definitions) and Art. 22-27 (specific obligations) - Official Text
**Provider**: natural or legal person that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark.
**Deployer**: natural or legal person using an AI system under its authority, except for personal non-professional activities.
**Importer**: natural or legal person established in the EU that places on the market an AI system bearing the name or trademark of a person established outside the EU.
**Distributor**: natural or legal person in the supply chain that makes an AI system available on the market without being the provider or importer.
**Authorised Representative**: natural or legal person established in the EU appointed in writing by a non-EU provider to act on its behalf.
Reference: Full definitions in Art. 3 and Art. 22-27 AI Act
Regulatory basis: Art. 5 AI Act - Official Text
The 8 prohibited categories:

1. **Subliminal techniques**: techniques operating beyond a person's consciousness to distort behavior in a manner causing harm.
2. **Exploitation of vulnerabilities**: exploiting specific vulnerabilities due to age, disability, or socio-economic situation.
3. **Social scoring**: evaluation or classification of natural persons based on social behavior or personal characteristics.
4. **Predictive policing**: assessing the risk of a natural person committing offenses based solely on profiling or personality-trait assessment.
5. **Untargeted facial image scraping**: creating or expanding facial recognition databases through untargeted scraping of images from the internet or CCTV.
6. **Emotion recognition**: inferring emotions of natural persons in the workplace or educational institutions. Exceptions: medical or safety reasons.
7. **Biometric categorisation**: using biometric data to infer race, political opinions, trade union membership, sexual orientation, or religion.
8. **Real-time remote biometric identification**: real-time remote biometric identification in publicly accessible spaces for law enforcement. Narrow exceptions (Art. 5(2)-(7)): searching for missing children, preventing terrorist threats, etc.
Complete details: Art. 5 AI Act - Official Text
Regulatory basis: Art. 3(63), 51-56 AI Act
**GPAI Standard** (Art. 53): foundation models with general capabilities but without systemic risks.
Obligations:
- draw up and keep up to date technical documentation of the model;
- provide information and documentation to downstream providers integrating the model;
- put in place a policy to comply with EU copyright law;
- publish a sufficiently detailed summary of the content used for training.

**GPAI with systemic risk** (Art. 51, 55): GPAI models with high-impact capabilities. Systemic risk is presumed when the cumulative compute used for training exceeds 10²⁵ FLOPs (Art. 51(2)).
Additional obligations:
- perform model evaluations, including adversarial testing;
- assess and mitigate possible systemic risks;
- track, document and report serious incidents;
- ensure an adequate level of cybersecurity protection.
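To make the 10²⁵ FLOPs presumption concrete, here is a small Python sketch. The threshold comes from Art. 51(2); the 6 × parameters × tokens estimate is a common rule of thumb for dense transformer training, not something the AI Act prescribes.

```python
# Art. 51(2) presumption: cumulative training compute above 10^25 FLOPs.
# The 6 * N * D estimate is a community rule of thumb, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer (~6 * N * D)."""
    return 6 * n_parameters * n_tokens

def presumed_systemic(n_parameters: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens (~6.3e24 FLOPs)
print(presumed_systemic(70e9, 15e12))  # -> False, below the presumption threshold
```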
Full requirements: Art. 51-56 AI Act - Official Text
Regulatory basis: Annex I + Art. 6(1) AI Act
Annex I lists products already subject to EU harmonisation legislation. If an AI system is a safety component of such a product (or is itself such a product) and the product must undergo third-party conformity assessment, the system is classified as high-risk (Art. 6(1)).
Annex I categories include, among others: machinery, toys, lifts, medical devices, in vitro diagnostic devices, civil aviation, motor vehicles, and marine equipment.
Complete list: Annex I AI Act - Official Text
Regulatory basis: Annex III + Art. 6(2) AI Act
Annex III identifies 8 areas where AI systems are considered high-risk for fundamental rights and safety.
The 8 high-risk areas:

1. Biometrics
2. Critical infrastructure
3. Education and vocational training
4. Employment, workers management and access to self-employment
5. Access to essential private and public services and benefits
6. Law enforcement
7. Migration, asylum and border control management
8. Administration of justice and democratic processes
Detailed requirements: Annex III AI Act - Official Text
Regulatory basis: Art. 50 AI Act
Some AI systems must inform users of their artificial nature, even if not high-risk.
Providers of AI systems intended to interact directly with natural persons must ensure those persons are informed that they are interacting with AI (Art. 50(1)).
Providers of AI systems generating synthetic audio, image, video or text must mark the output as artificially generated or manipulated (Art. 50(2)); deployers of systems generating or manipulating deep fakes must disclose that the content is artificial (Art. 50(4)).
Full obligations: Art. 50 AI Act - Official Text
Regulatory basis: Art. 2 (Scope) AI Act
The AI Act applies to:
- providers placing AI systems on the market or putting them into service in the EU, regardless of where they are established;
- deployers of AI systems that are established or located in the EU;
- providers and deployers established in third countries, where the output produced by the AI system is used in the EU;
- importers and distributors of AI systems.
Territorial scope: Art. 2 AI Act - Official Text
Access the complete AI Act regulation:
📄 Full AI Act Text (English PDF)

The AI Act is based on a risk-based approach, classifying AI systems into categories with obligations proportionate to risk.
| Category | Risk | Obligations | Applicable from |
|---|---|---|---|
| PROHIBITED | Unacceptable | Absolute ban | Feb 2, 2025 |
| High Risk | High | CE conformity, documentation, registration | Aug 2, 2026 |
| GPAI Systemic | Systemic | Art. 53 + Art. 55 | Aug 2, 2025 |
| GPAI Standard | Moderate | Art. 53 | Aug 2, 2025 |
| Limited Risk | Low | Transparency (Art. 50) | Aug 2, 2026 |
| Minimal Risk | Minimal | No specific obligations | - |
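A self-assessment tool can also flag whether a category's obligations already apply on a given date. The following Python sketch simply mirrors the table above; the mapping and function names are illustrative assumptions, not the tool's implementation.

```python
# Sketch: map the risk categories above to their application dates and
# check whether obligations already apply on a given day.
from datetime import date

APPLICABLE_FROM = {
    "PROHIBITED": date(2025, 2, 2),
    "High Risk": date(2026, 8, 2),
    "GPAI Systemic": date(2025, 8, 2),
    "GPAI Standard": date(2025, 8, 2),
    "Limited Risk": date(2026, 8, 2),
}

def obligations_apply(category: str, on: date) -> bool:
    """True if the category's obligations are applicable on the given date."""
    if category == "Minimal Risk":
        return False  # no specific obligations at any time
    return on >= APPLICABLE_FROM[category]

print(obligations_apply("PROHIBITED", date(2025, 6, 1)))  # -> True
print(obligations_apply("High Risk", date(2025, 6, 1)))   # -> False
```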
A continuous, iterative process throughout the system's lifecycle, including:
- identification and analysis of known and reasonably foreseeable risks;
- estimation and evaluation of risks that may emerge during use, including reasonably foreseeable misuse;
- evaluation of other risks based on post-market monitoring data;
- adoption of appropriate and targeted risk management measures.
Reference: Art. 9 AI Act
Training, validation and testing datasets must be:
- relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose;
- subject to appropriate data governance practices (design choices, data collection and preparation, examination for possible biases and their mitigation).
Reference: Art. 10 AI Act
Complete technical documentation (kept for 10 years) must include, per Annex IV:
- a general description of the AI system and its intended purpose;
- a detailed description of the system's elements and of its development process;
- information on the monitoring, functioning and control of the system;
- a description of the risk management system and of any changes made over the lifecycle.
Reference: Art. 11 AI Act
Automatic event logging (logs) must ensure traceability of the system's functioning throughout its lifecycle. For remote biometric identification systems, logs must record at a minimum:
- the period of each use (start and end date and time);
- the reference database against which input data has been checked;
- the input data for which the search has led to a match;
- the identification of the natural persons involved in verifying the results.
Reference: Art. 12-19 AI Act
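As an illustration of what Art. 12-style logging can look like in practice, here is a minimal Python sketch using only the standard library. The field names and log file path are assumptions; the Act defines what must be recorded, not how.

```python
# Illustrative append-only event log for an AI system, in the spirit of
# Art. 12. Field names and the log file path are assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_events.log", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_event(system_id: str, input_ref: str, output_ref: str) -> None:
    """Record one use of the AI system as a timestamped JSON line."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # reference to the input data, not the data itself
        "output_ref": output_ref,  # reference to the produced output
    }))

log_event("cv-screener-v2", "input:batch-42", "output:ranking-42")
```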
| Violation | Maximum Penalty (whichever is higher) |
|---|---|
| Prohibited practices (Art. 5) | €35,000,000 or 7% of global annual turnover |
| GPAI violations (Art. 53, 55) | €15,000,000 or 3% of global annual turnover |
| Other violations (Art. 9-15, 26, 50, 72) | €7,500,000 or 1.5% of global annual turnover |
Penalty details: Art. 99 AI Act - Official Text
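The "whichever is higher" rule means the effective cap scales with company size. A short Python sketch, assuming the figures in the table above:

```python
# Art. 99-style penalty cap: the greater of the fixed amount and the
# percentage of worldwide annual turnover. Figures mirror the table above.

PENALTIES = {  # violation type -> (fixed cap in EUR, share of global turnover)
    "prohibited": (35_000_000, 0.07),
    "gpai": (15_000_000, 0.03),
    "other": (7_500_000, 0.015),
}

def max_penalty(violation: str, global_turnover_eur: float) -> float:
    fixed, share = PENALTIES[violation]
    return max(fixed, share * global_turnover_eur)

# Example: prohibited practice, €2 billion worldwide turnover
print(f"€{max_penalty('prohibited', 2_000_000_000):,.0f}")  # -> €140,000,000
```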
**Example 1: Customer support chatbot**
Scenario: A retail company develops an AI chatbot for customer support on its e-commerce site.
Result: Limited Risk, with transparency obligations (Art. 50)
Legal reference: Art. 50 AI Act

**Example 2: CV screening platform**
Scenario: A US multinational uses an AI platform for CV screening and candidate selection in its European subsidiaries.
Result: High Risk (Annex III, Area 4: Employment)
Legal reference: Annex III + Art. 6(2), 9-15, 26-27 AI Act

**Example 3: Medical diagnostic app**
Scenario: An Italian startup develops a medical diagnostic app that uses GPT-4 to analyze symptoms and suggest diagnoses.
Result: GPAI Systemic + High Risk (cumulative classification)
Legal reference: Art. 51-56 (GPAI) + Annex I (Medical Devices) + Art. 9-15 AI Act
Q: Does the AI Act apply to AI systems used outside the EU or for purely personal purposes?
A: No. The AI Act only applies to systems used in the EU or whose output is used in the EU, and systems for personal, non-professional use are excluded (Art. 2(6)).
Reference: Art. 2 AI Act
Q: When do the AI Act's obligations start to apply?
A: Progressively:
- 1 August 2024: entry into force;
- 2 February 2025: prohibitions (Art. 5);
- 2 August 2025: GPAI obligations;
- 2 August 2026: general application, including most high-risk systems (Annex III);
- 2 August 2027: high-risk systems under Annex I.
Q: My company is established outside the EU but offers AI systems on the EU market. Does the AI Act apply?
A: Yes. You must appoint an Authorised Representative in the EU (Art. 22-23) and comply with all applicable obligations.
Reference: Art. 22-23 AI Act
Q: What about AI systems already placed on the market before the relevant application dates?
A: Transitional rules apply: such systems generally fall under the Act only if they undergo significant design changes after the relevant date, and GPAI models already on the market before 2 August 2025 must be brought into compliance by 2 August 2027.
Reference: Art. 113 AI Act (Transitional Provisions)
Q: How much does AI Act compliance cost?
A: It depends on system complexity and on your role; the main cost drivers are the obligations described above, such as technical documentation, conformity assessment, registration and ongoing monitoring.
Q: I integrate a GPAI model (e.g., GPT-4) into my product. What are my obligations?
A: It depends on your role: Art. 53 (and Art. 55 for systemic-risk models) binds the model's provider, while integrating a third-party GPAI model into your own AI system can make you the provider of that system, with the corresponding obligations.
Reference: Art. 51-56 AI Act
This tool is provided exclusively for informational and educational purposes.
The generated assessment DOES NOT constitute professional legal advice and cannot replace the evaluation of qualified experts in AI Act compliance (EU Regulation 2024/1689).
For official AI Act compliance verification, it is strongly recommended to consult qualified professionals specialized in technology law and AI compliance.
Always refer to the official AI Act text for authoritative guidance:
📄 Official AI Act - English (PDF) 📄 Official AI Act - Italian (PDF)

For professional AI Act compliance assistance, contact Studio Legale Fabiano.
Guide version: 1.0.0
Date: October 2025
Based on: EU Regulation 2024/1689 (AI Act) - Official text published in the Official Journal of the EU
© 2025 Studio Legale Fabiano - Avv. Nicola Fabiano
All rights reserved.
License: CC BY-NC-ND 4.0