
How Sylure Uses AI — And How We Keep It Safe

We use AI to help privacy teams work faster. Here's exactly what it does, what it doesn't do, and how we protect the data it touches.

AI capabilities

What Sylure AI Does

DSAR Discovery & Risk Reports

What it does

AI generates a discovery report and risk assessment from DSAR search results — summarising what was found, where, and what categories are involved.

Human role

All AI-generated text is clearly labelled as draft output for human review. Analysts must review and approve before any report is exported or shared.

Analytics Explanations

What it does

AI explains dashboard metrics, asset risk concentrations, and PII composition in plain language — helping teams communicate findings to stakeholders.

Human role

Explanations are generated on demand and scoped by the analyst. AI summaries complement, not replace, the underlying data.

What Sylure AI Does Not Do

No automated decision-making about individuals

AI helps categorise data — it does not make decisions about data subjects.

No raw PII in AI payloads

AI analytics use aggregate-only data. Raw personal data values are not sent to AI models for processing.

No AI training on your data

Customer uploads are not used to train or fine-tune AI models.

No unsupervised outputs

Every AI-generated output is a draft for human review. Nothing is exported or actioned without analyst confirmation.
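The draft-for-review gate described above can be sketched in code. This is a hypothetical illustration, not Sylure's actual API — the class and function names are ours. The point is structural: export fails unless an analyst has explicitly signed off on the draft.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftReport:
    """An AI-generated draft; nothing leaves the system until approved."""
    body: str
    approved_by: Optional[str] = None  # analyst who signed off, if any

def approve(report: DraftReport, analyst: str) -> None:
    # Human review step: an analyst explicitly signs off on the draft.
    report.approved_by = analyst

def export(report: DraftReport) -> str:
    # Refuse to export unreviewed AI output.
    if report.approved_by is None:
        raise PermissionError("draft requires analyst approval before export")
    return report.body

draft = DraftReport(body="[AI DRAFT] 3 assets contain email addresses.")
approve(draft, "analyst@example.com")
exported = export(draft)
```

Making the approval a hard precondition of export, rather than a UI convention, is what turns "human review" from a policy into a guarantee.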

Our principles

AI Principles

Human-in-the-loop

AI augments analyst workflows. It doesn't replace judgement or automate compliance decisions.

Transparency

AI-generated content is always labelled. Users know what was written by AI and what came from the underlying data.

Data minimisation

AI payloads use aggregate and metadata-level information. We minimise the personal data that AI components can access.
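A sketch of what an aggregate-only payload looks like in practice. The field names and structure here are hypothetical, not Sylure's actual schema — the idea is simply that raw values are dropped before anything is passed to a model, leaving only counts and labels.

```python
from collections import Counter

def build_ai_payload(findings):
    """Collapse discovery findings to aggregate counts before any model call.

    Illustrative only: "category", "asset", and "value" are assumed field
    names. Raw "value" strings never appear in the returned payload.
    """
    return {
        "total_findings": len(findings),
        "by_category": dict(Counter(f["category"] for f in findings)),
        "by_asset": dict(Counter(f["asset"] for f in findings)),
    }

findings = [
    {"category": "email", "asset": "crm", "value": "jane@example.com"},
    {"category": "email", "asset": "crm", "value": "bob@example.com"},
    {"category": "phone", "asset": "billing", "value": "+44 7700 900123"},
]
payload = build_ai_payload(findings)
```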

Accountability

AI-assisted actions are captured in audit logs alongside human actions. There's no hidden AI processing that bypasses the audit trail.
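One way to picture a unified trail is a single append-only log where AI-assisted and human actions share the same entry shape, distinguished only by a flag. This is a minimal sketch under assumed field names, not Sylure's actual log format.

```python
from datetime import datetime, timezone

def record(audit_log, actor, action, ai_assisted):
    """Append one entry; AI-assisted and human actions share one trail.

    Hypothetical structure -- the field names are illustrative.
    """
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "ai_assisted": ai_assisted,
    })

trail = []
record(trail, "sylure-ai", "draft_dsar_report", ai_assisted=True)
record(trail, "analyst@example.com", "approve_dsar_report", ai_assisted=False)
```

Because both kinds of action land in the same log with the same schema, there is no separate path an AI step could take around the audit trail.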

AI and UK GDPR

The ICO has published guidance on AI and data protection, including expectations around transparency, fairness, and accountability. Sylure's approach — human-in-the-loop review, aggregate-only AI payloads, and auditable outputs — is designed to align with these expectations. For details on our broader security posture, see the Trust Centre.

Frequently Asked Questions

Which AI models does Sylure use?

Sylure uses a combination of pattern-matching classifiers and language models for category detection and report drafting. We do not disclose specific model providers for security reasons, but we're happy to discuss our approach during a demo.

Can analysts opt out of AI features?

AI-assisted report drafting and analytics explanations can be skipped — analysts can write their own summaries. PII category detection uses pattern-matching (not AI), and all results are presented for human review.
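Pattern-matching category detection of the kind mentioned above can be illustrated with a small rule table. The patterns and category names below are assumptions for the sake of example — a real classifier set is far more extensive than two regexes.

```python
import re

# Illustrative patterns only; these are not Sylure's actual rules.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\+44\s?\d{4}\s?\d{6}\b"),
}

def detect_categories(text):
    """Return the set of PII category names whose pattern matches the text."""
    return {name for name, rx in PII_PATTERNS.items() if rx.search(text)}
```

Deterministic rules like these are auditable and repeatable — the same input always yields the same categories — which is why detection itself needs no model call.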

How is Sylure preparing for AI regulation?

Sylure's AI usage falls within the operational tools category. We monitor evolving AI regulation including the EU AI Act and will update our practices as requirements are clarified for software tools that use AI as a component rather than as a product.

See how AI helps your privacy team

Walk through AI-assisted discovery, draft reporting, and the human review process that keeps everything accountable.