v1.2.2 · Last Updated: Apr 22, 2026

EU AI Act Positioning

In a nutshell

What this document is:
An overview of how Clawscan is designed with consideration for the European Union Artificial Intelligence Act (EU AI Act).

Why this matters:
Organizations evaluating AI-enabled compliance tools must understand how such systems operate and how they align with responsible AI governance principles.

Who should read this:
Legal teams, compliance officers, data protection officers (DPOs), AI governance specialists, and procurement reviewers.

When to use this:
AI governance reviews, procurement assessments, regulatory evaluations, and internal compliance documentation.


Overview

Clawscan uses artificial intelligence to assist organizations in identifying communications that may contain potential legal or regulatory risks.

The system is designed to support early risk detection in communications, particularly in areas such as competition law, anti-corruption, and other compliance domains.

Clawscan provides risk detection signals, not automated decisions.

Organizations remain responsible for interpreting these signals and determining how they are handled within internal governance processes.



Content-focused analysis

Clawscan is designed to analyze communication content rather than evaluate individuals.

The purpose of the system is to detect potential legal or compliance risks within messages or conversations.

The system does not generate:

  • employee performance indicators
  • behavioral monitoring metrics
  • individual risk scoring
  • employee evaluation dashboards

This design helps ensure that the system supports compliance monitoring without functioning as a tool for evaluating workers.

Clawscan is designed as an assistive compliance system whose outputs require human interpretation and review.

The platform is not intended to replace human decision-making or to perform automated assessments of individuals.


Role of AI within the system

Within Clawscan, AI is used to analyze communication content and identify patterns that may indicate compliance risks.

The system may generate outputs such as:

  • risk classifications
  • numerical risk scores
  • reasoning summaries explaining the detection

These outputs are intended to assist compliance teams in identifying communications that may require review.

AI outputs are indicative signals, not legal conclusions.
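As an illustration only, the three output elements above could be modeled together as one detection signal. The `RiskSignal` type and its field names below are hypothetical and are not Clawscan's actual schema; the sketch simply shows how a classification, a numerical score, and a reasoning summary travel together as an input to human review:

```python
from dataclasses import dataclass

@dataclass
class RiskSignal:
    """Hypothetical shape of a risk-detection output (illustrative only)."""
    classification: str           # e.g. a compliance domain such as "competition-law"
    score: float                  # numerical risk score, here assumed to be in [0, 1]
    rationale: str                # reasoning summary explaining the detection
    requires_review: bool = True  # signals are inputs to human review, never decisions

signal = RiskSignal(
    classification="competition-law",
    score=0.82,
    rationale="Message appears to discuss pricing coordination with a competitor.",
)
assert signal.requires_review  # every signal awaits human interpretation
```

The `requires_review` default reflects the point made throughout this document: outputs are indicative signals that a human must interpret, not conclusions.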


Human oversight

Human oversight is a core element of responsible deployment.

Clawscan does not make autonomous decisions affecting individuals.

Organizations remain responsible for:

  • reviewing alerts generated by the system
  • interpreting compliance signals
  • determining appropriate internal actions

The platform is designed to support human decision-making rather than replace it.



Absence of profiling

Clawscan is not designed to perform profiling of individuals.

The system does not create behavioral profiles, risk rankings, or analytical models describing employees or other individuals.

Instead, analysis focuses on communication content and potential legal risk indicators.

Organizations deploying Clawscan are responsible for ensuring that the system is used in accordance with applicable governance frameworks and internal policies.


Transparency and explainability

Clawscan provides structured explanations accompanying AI-generated classifications.

These explanations help reviewers understand why a communication may have triggered a compliance signal.

Explainability mechanisms support:

  • internal compliance reviews
  • governance documentation
  • responsible human oversight

Clawscan does not disclose internal model parameters or proprietary detection methodologies.


Risk-detection design philosophy

Clawscan prioritizes early detection of potential compliance risks.

To reduce the likelihood that relevant risks remain undetected, the system is designed to favor sensitivity over strict precision.

This means that some communications flagged by the system may ultimately not represent compliance issues; flagged items therefore require human verification.

This design approach reflects the objective of compliance monitoring: identifying potential issues before they escalate.


Responsible deployment

Organizations deploying Clawscan remain responsible for implementing appropriate governance mechanisms.

Typical safeguards may include:

  • internal policies defining monitoring objectives
  • transparency toward employees
  • structured review procedures for alerts
  • internal compliance governance processes

Clawscan supports these processes but does not determine how they are implemented.



Data protection considerations

Clawscan's architecture supports responsible AI deployment through several safeguards:

  • tenant-resident processing of communication content
  • minimal transmission of derived results
  • clear responsibility boundaries between vendor and organization

These design choices help organizations maintain control over communication data.
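As a minimal sketch of the second safeguard above, the function below builds the payload that would cross the tenant boundary from derived fields only, never the raw communication content. The function and field names are hypothetical, not Clawscan's actual data model:

```python
# Illustrative only: tenant-resident content stays behind; only derived
# results (an opaque reference, a classification, a score) are transmitted.
def derived_result(message: dict, classification: str, score: float) -> dict:
    """Build the minimal payload transmitted outside the tenant boundary."""
    return {
        "message_id": message["id"],  # opaque reference, not content
        "classification": classification,
        "score": score,
    }

msg = {"id": "m-1", "content": "Raw communication text stays in the tenant."}
payload = derived_result(msg, "competition-law", 0.82)
assert "content" not in payload  # no raw content crosses the boundary
```

Keeping the raw content out of the transmitted payload is what lets the organization, rather than the vendor, retain control over communication data.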
