⚖️ Important Legal Notice

Please read carefully before proceeding

This tool is provided exclusively for informational and educational purposes.

The generated assessment DOES NOT constitute professional legal advice and cannot replace evaluation by qualified experts in regulatory compliance matters relating to the AI Act (EU Regulation 2024/1689).

⚠️ Warning: The information provided is of a general nature and may not be applicable to your specific case.

Tool Limitations:

  • Provides a preliminary analysis based on the user's self-declared responses
  • Each specific case requires evaluation by legal professionals
  • EU Regulation 2024/1689 requires contextual interpretation of the rules
  • Official guidelines and case law may evolve over time

Studio Legale Fabiano and Avv. Nicola Fabiano disclaim all liability for:

  • Decisions made based on the results of this tool
  • Direct or indirect damages arising from the use of the tool
  • Incompleteness or inaccuracy of the information provided
  • Regulatory changes subsequent to the tool's release date

For official verification of AI Act compliance, it is strongly recommended to consult qualified professionals specialized in technology law and AI compliance.

🍪 Matomo Disclaimer

This site uses only technical Matomo cookies to produce anonymous, aggregate navigation statistics. "Do Not Track" support is enabled and tracking is opt-out by default, leaving the user free to opt in. Preferences can be managed in the box at the bottom of the page.


By proceeding, the user declares that they have read, understood, and accepted these legal notices and limitations of liability.

🇪🇺 AI Act Self-Assessment Tool

Analysis aligned with EU Regulation 2024/1689

System Identification

Can your system be classified as Artificial Intelligence?

According to Art. 3(1), an AI system is a machine-based system that operates with varying levels of autonomy, may exhibit adaptiveness after deployment, and infers from the input it receives how to generate outputs (predictions, content, recommendations, or decisions) that can influence physical or virtual environments. A minimal checklist of these criteria is sketched after the options below.

🤖
Yes, it is an AI system
Uses machine learning, deep learning, neural networks, NLP, computer vision, or other AI techniques to generate outputs with adaptive capabilities.
💻
No, it is traditional software
System based on deterministic rules, traditional algorithms, or simple automation without inference or learning capabilities.
🔄
Hybrid system or AI component
Contains AI components integrated in broader software, or uses AI for specific functionalities.
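To make the definition concrete, here is a minimal TypeScript sketch of the Art. 3(1) criteria as a checklist. The names (Art31Criteria, isAISystem) are assumptions of this illustration, not part of the tool; only a qualified reading of the Regulation is authoritative.

```typescript
// Minimal sketch of the Art. 3(1) checklist; names are illustrative,
// not part of this tool or the Regulation.
interface Art31Criteria {
  machineBased: boolean;            // a machine-based system
  variableAutonomy: boolean;        // operates with varying levels of autonomy
  mayAdaptPostDeployment: boolean;  // may exhibit adaptiveness after deployment
  infersOutputs: boolean;           // infers from input how to generate outputs
  influencesEnvironments: boolean;  // outputs can influence physical or virtual environments
}

// Adaptiveness is optional in the definition ("may exhibit"), so it is
// recorded but not required for a positive classification.
function isAISystem(c: Art31Criteria): boolean {
  return (
    c.machineBased &&
    c.variableAutonomy &&
    c.infersOutputs &&
    c.influencesEnvironments
  );
}
```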
Value Chain Role

What is your role with respect to the AI system?

Your role determines your specific obligations under the AI Act. Providers and deployers carry the greatest responsibilities. Non-EU providers must appoint an authorised representative (Art. 22-23).

⚠️ DEPLOYER - FRIA Note: If you are a deployer of a high-risk system, you may need to perform a Fundamental Rights Impact Assessment (Art. 27) if you are a public body, a provider of essential public services, or active in banking/insurance. A simplified trigger check is sketched after the options below.

🏭
Provider
I develop or have developed the AI system and place it on the market under my name or trademark.
🏢
Deployer (User)
I use an AI system developed by others in my professional activity. Art. 26-27: Specific obligations including FRIA.
📦
Importer
I import AI systems from non-EU countries to place them on the EU market.
🚚
Distributor
I make AI systems available on the market without being a provider or importer.
🌍
Authorised Representative
Representative established in the EU acting for a non-EU provider under a written compliance mandate (Art. 22-23).
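As a hypothetical illustration of the FRIA note above, the helper below encodes the deployer conditions as this questionnaire paraphrases them. The type and function names are assumptions of this sketch, and the actual conditions in Art. 27 are more nuanced than a three-way category check.

```typescript
// Hypothetical helper for the Art. 27 FRIA trigger described above;
// a simplification of the Regulation's actual conditions.
type Role =
  | "provider"
  | "deployer"
  | "importer"
  | "distributor"
  | "authorised_representative";

type DeployerKind =
  | "public_body"
  | "essential_public_services"
  | "banking_insurance"
  | "other";

function mayNeedFRIA(role: Role, highRisk: boolean, kind: DeployerKind): boolean {
  // FRIA is a deployer obligation and only arises for high-risk systems.
  if (role !== "deployer" || !highRisk) return false;
  return kind !== "other";
}
```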
Prohibited Practices Verification (Art. 5)

Does the system fall under a PROHIBITED practice?

Art. 5 prohibits eight categories of AI practices posing unacceptable risk. The ban applies from February 2, 2025. Penalties reach up to EUR 35 million or 7% of worldwide annual turnover.

⚠️ MULTIPLE SELECTION ENABLED: A system may fall under multiple prohibited categories. Select ALL that apply; a single match is enough to prohibit the system (see the sketch after the list).
No prohibited practice
The system does not fall under any of the 8 categories prohibited by Art. 5.
🧠
1. Subliminal/manipulative techniques
Art. 5(1)(a): Subliminal techniques beyond consciousness to distort behavior causing harm.
⚠️
2. Exploitation of vulnerabilities
Art. 5(1)(b): Exploits vulnerabilities related to age, disability, or socio-economic situation.
📊
3. Social scoring
Art. 5(1)(c): Evaluation/classification of persons based on social behavior or personal traits.
🔮
4. Predictive policing (profiling)
Art. 5(1)(d): Assesses risk of committing crimes based on profiling or personality traits.
📸
5. Untargeted facial scraping
Art. 5(1)(e): Creates/expands facial recognition databases from internet or CCTV.
😊
6. Emotion recognition (workplace/school)
Art. 5(1)(f): Detects emotional state in workplace or educational contexts (medical/safety exceptions).
👥
7. Biometric categorization (sensitive attributes)
Art. 5(1)(g): Infers race, religion, sexual orientation from biometric data.
👁️
8. Real-time remote biometric identification
Art. 5(1)(h): Real-time remote biometric identification (law enforcement exceptions Art. 5(2-7)).
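The sketch below shows how the multiple-selection rule could map to a verdict: one matching Art. 5 category suffices, regardless of the other answers. The enum member names paraphrase Art. 5(1)(a)-(h) and are illustrative, not the tool's actual data model.

```typescript
// Any single Art. 5 match makes the system prohibited.
enum ProhibitedPractice {
  SubliminalManipulation,           // Art. 5(1)(a)
  VulnerabilityExploitation,        // Art. 5(1)(b)
  SocialScoring,                    // Art. 5(1)(c)
  PredictivePolicing,               // Art. 5(1)(d)
  UntargetedFacialScraping,         // Art. 5(1)(e)
  EmotionRecognitionWorkOrSchool,   // Art. 5(1)(f)
  SensitiveBiometricCategorisation, // Art. 5(1)(g)
  RealTimeRemoteBiometricId,        // Art. 5(1)(h)
}

function isUnacceptableRisk(selected: ProhibitedPractice[]): boolean {
  return selected.length > 0; // "no prohibited practice" = empty selection
}
```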
Foundation Models

Is it a General Purpose AI model (GPAI)?

GPAI models (Art. 51-56) have general capabilities and can be used for multiple purposes. Obligations from August 2, 2025. ATTENTION: A GPAI can ALSO be a component of regulated or high-risk systems.

🎯
No, specific system
AI system designed for a specific purpose, not a general-purpose multi-use model.
🌐
Yes, standard GPAI model
Foundation model with general capabilities, large-scale training, multi-purpose (e.g., GPT, BERT). Continue to verify if also component of regulated systems.
GPAI with systemic risk
GPAI model trained with more than 10²⁵ FLOPs of cumulative compute: presumed high-impact capabilities and potential systemic risk (Art. 51(2), Art. 55). Continue to verify other requirements; a threshold sketch follows below.
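The systemic-risk presumption in Art. 51(2) reduces to a single numeric comparison, sketched below. The constant and function names are assumptions of this illustration; the Commission can also designate models as systemic-risk on other grounds, which this one-liner does not capture.

```typescript
// Art. 51(2) presumption: cumulative training compute greater than
// 10^25 floating-point operations presumes high-impact capabilities.
const SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25;

function presumedSystemicRisk(cumulativeTrainingFlop: number): boolean {
  return cumulativeTrainingFlop > SYSTEMIC_RISK_FLOP_THRESHOLD;
}

// Example: a model trained with ~5 x 10^25 FLOPs triggers the presumption.
console.log(presumedSystemicRisk(5e25)); // true
```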
Annex I - Regulated Products

Is it a safety component of regulated products?

Annex I lists products subject to EU harmonisation legislation in which AI can be a critical safety component. The list below mirrors Annex I; a lookup sketch follows it.

No, not a safety component
System is not a safety component of products in Annex I.
⚙️
Machinery/Robotics
Safety component of industrial machinery or robots (Reg. 2023/1230).
🏥
Medical devices
Medical devices, in-vitro diagnostics, or medical software (MDR/IVDR Reg. 2017/745, 2017/746).
🚗
Automotive/Aviation/Railways
Vehicles, aircraft, trains, driver assistance or flight control systems (Reg. 2018/858, 2018/1139).
🧸
Toys
Safety component of toys (Dir. 2009/48/EC).
🛗
Lifts
Safety component of lifts (Dir. 2014/33/EU).
💥
ATEX (Explosive atmospheres)
Equipment for potentially explosive atmospheres (Dir. 2014/34/EU).
📡
Radio Equipment
Radio equipment (Dir. 2014/53/EU).
🔧
Pressure Equipment
Pressure equipment (Dir. 2014/68/EU).
Recreational Craft
Recreational craft (Dir. 2013/53/EU).
🚡
Cableways
Cableway installations (Reg. 2016/424).
🦺
Personal Protective Equipment
Personal protective equipment (Reg. 2016/425).
🔥
Gas Appliances
Gas appliances (Reg. 2016/426).
🚢
Marine Equipment
Marine equipment (Dir. 2014/90/EU).
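One way to represent the list above is a simple lookup from product category to the relevant EU act, as sketched below. The keys and the Record shape are assumptions of this sketch, not the tool's data model; being a safety component of a listed product makes the system a high-risk candidate under Art. 6(1).

```typescript
// Illustrative lookup of the Annex I acts listed above.
const ANNEX_I_ACTS: Record<string, string> = {
  machinery: "Reg. (EU) 2023/1230",
  medical_devices: "Reg. (EU) 2017/745, 2017/746",
  vehicles_aviation_rail: "Reg. (EU) 2018/858, 2018/1139",
  toys: "Dir. 2009/48/EC",
  lifts: "Dir. 2014/33/EU",
  atex: "Dir. 2014/34/EU",
  radio_equipment: "Dir. 2014/53/EU",
  pressure_equipment: "Dir. 2014/68/EU",
  recreational_craft: "Dir. 2013/53/EU",
  cableways: "Reg. (EU) 2016/424",
  ppe: "Reg. (EU) 2016/425",
  gas_appliances: "Reg. (EU) 2016/426",
  marine_equipment: "Dir. 2014/90/EU",
};

// A safety component of any listed product is a high-risk candidate.
function isAnnexICandidate(category: string | null): boolean {
  return category !== null && category in ANNEX_I_ACTS;
}
```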
Annex III - 8 High-Risk Areas

Does the system operate in one of the 8 high-risk areas?

Annex III identifies eight areas in which AI systems pose a high risk to fundamental rights and safety. The overall classification logic these questions feed into is sketched after the list.

No high-risk area
System does not operate in any of the 8 areas identified in Annex III.
👤
1. Biometric identification/categorization
Remote biometric identification, categorization of persons from biometric data.
🏗️
2. Critical infrastructure
Management/operation of critical infrastructure: water, gas, electricity, transport, digital networks.
🎓
3. Education and vocational training
Access to educational institutions, student assessment, exam monitoring, plagiarism detection.
💼
4. Employment and worker management
Recruiting, selection, performance evaluation, promotions, contract decisions, worker monitoring.
🏛️
5. Essential public/private services
Social assistance benefits, credit assessment, insurance scoring, emergency services (112).
⚖️
6. Law enforcement
Crime/recidivism risk assessment, polygraphs, evidence reliability assessment, criminal investigation profiling.
🛂
7. Migration, asylum, border control
Migrant risk assessment, asylum application examination, person detection, travel document verification.
⚖️
8. Administration of justice
Legal research/interpretation assistance, judicial decisions, alternative dispute resolution (ADR).
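Taken together, the questionnaire implies an ordering of verdicts: prohibited practices dominate, then high risk (Annex I or Annex III), then transparency-only, then minimal risk. The sketch below encodes that ordering; all names are illustrative. Note one simplification: in the Regulation, Art. 50 transparency duties also apply on top of a high-risk classification rather than forming an alternative tier.

```typescript
// End-to-end sketch of the ordering this questionnaire implies.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

interface Answers {
  prohibitedSelections: number;   // count of Art. 5 categories selected
  annexISafetyComponent: boolean; // safety component of an Annex I product
  annexIIIArea: boolean;          // operates in one of the 8 Annex III areas
  art50Triggered: boolean;        // chatbot, deepfake, emotion recognition...
}

function classify(a: Answers): RiskLevel {
  if (a.prohibitedSelections > 0) return "unacceptable";
  if (a.annexISafetyComponent || a.annexIIIArea) return "high";
  if (a.art50Triggered) return "limited";
  return "minimal";
}
```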
Transparency Obligations (Art. 50)

Does it require specific transparency obligations?

Some systems must inform persons of their artificial nature or operation (Art. 50). A disclosure map is sketched after the options below.

🔒
No specific obligation
System does not interact with persons or generate content requiring disclosure.
💬
Chatbot/Conversational assistant
System that interacts with persons via chat, voice, or other conversational means.
🎭
Deepfake/AI audio-video-image content
Generates or manipulates images, audio, video that could appear authentic.
😊
Emotion recognition (lawful contexts)
Recognizes emotions in lawful contexts (not workplace/school, otherwise prohibited Art. 5).
🔍
Biometric categorization (lawful contexts)
Non-prohibited biometric categorization (Art. 50(3)). Requires transparency.
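The mapping below pairs each Art. 50 case above with the disclosure it requires. The keys are assumptions of this sketch and the values paraphrase the Regulation rather than quoting it.

```typescript
// Illustrative map from each Art. 50 case to its required disclosure.
const ART_50_DISCLOSURES: Record<string, string> = {
  chatbot:
    "Inform persons that they are interacting with an AI system (Art. 50(1))",
  deepfake:
    "Disclose that content has been artificially generated or manipulated (Art. 50(4))",
  emotion_recognition:
    "Inform exposed persons of the operation of the system (Art. 50(3))",
  biometric_categorisation:
    "Inform exposed persons of the operation of the system (Art. 50(3))",
};
```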
Geographic Scope (Art. 2)

Where will the system be used or marketed?

The AI Act applies to systems placed on the market or used in the EU, and to systems whose output is used in the EU, regardless of where they are developed. A sketch of this territorial test follows the options below.

🇪🇺
EU Market
System will be placed on the market or used in the European Union.
🌍
Output used in EU
System is outside EU but output/results are used in the EU.
🌐
Global distribution
System will be available globally, including in the EU.
🌏
Only non-EU markets
System used exclusively outside EU without connections to European market.
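The territorial test reduces to a disjunction over the three EU connections described above, sketched below with illustrative parameter names; Art. 2 contains further carve-outs (e.g., military and research uses) that this sketch ignores.

```typescript
// Sketch of the Art. 2 territorial test implied by the options above.
function aiActApplies(opts: {
  placedOnEUMarket: boolean;
  usedInEU: boolean;
  outputUsedInEU: boolean;
}): boolean {
  return opts.placedOnEUMarket || opts.usedInEU || opts.outputUsedInEU;
}

// "Global distribution" includes the EU, so it satisfies the first prong.
console.log(
  aiActApplies({ placedOnEUMarket: true, usedInEU: false, outputUsedInEU: false }),
); // true
```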
📘 User Guide