The current output type options in Questionnaire 1 work well for clear-cut cases. However, certain real-world AI systems produce outputs that fit multiple categories, or none clearly. In practice, this ambiguity creates a risk of systematic under-classification, where systems that should fall within the scope of the AI Act are incorrectly excluded, leading to insufficient application of risk controls.
I identified three specific ambiguities:
- Anomaly signals vs. decisions vs. classifications
Fraud detection and intrusion detection systems produce a flag or alert when behaviour deviates from expected patterns. This output could be mapped to a classification, score, or probability, but the taxonomy does not make that mapping explicit or intuitive to a good-faith user. Moreover, these outputs frequently act as trigger signals that initiate downstream decisions or actions. A user answering in good faith may therefore select none of the available options and incorrectly place the system outside the scope of the AI Act.
Proposed clarification:
Add a distinct option such as:
“A flag or alert indicating deviation from expected behaviour (e.g. fraud detection or unusual activity)”
Alternatively, include this explicitly under “A decision” as a trigger signal initiating downstream decision-making.
- Behavioural influence vs. recommendations
Content ranking and sequencing systems (e.g. social media feeds, search ranking) produce output that is technically a “recommendation”. However, their primary function is to structure the decision environment and influence what a person sees or does next. Such a system does not merely recommend; it shapes the context in which decisions are made. This distinction is directly relevant for Article 5(1)(a) assessments, where the use of manipulative or deceptive techniques to materially distort behaviour is prohibited. Without it, systems whose primary function is behavioural steering may be described as “recommendation systems”, while the features most relevant for Article 5 analysis remain unexamined.
Proposed clarification:
Add a distinct option such as:
“A ranking or sequencing of content designed to influence what a person sees or does next”
Alternatively, include a note under “A recommendation” explicitly flagging relevance to Article 5.
- Physical control output
Systems that directly actuate physical environments, such as autonomous vehicles or industrial robots, produce outputs that are neither informational nor decisions directed at a human. Recital 12 explicitly references systems whose outputs can influence physical or virtual environments, but the current taxonomy does not include a corresponding output type.
Proposed clarification:
Add a distinct option such as:
“A physical action or movement, such as steering, braking, or operating machinery”
Audit implication
Incorrect output classification can cascade into incorrect risk classification, leading to insufficient safeguards and false compliance assumptions. The second point is particularly urgent given its direct relevance to Article 5.
Happy to elaborate or provide references to the legislative text if helpful.
King Che Magnusson
OWASP HCI Cognitive Layer project
https://github.com/kingchemagnussonhr-sudo