Tuteliq is a child-safety and online-harm detection layer for AI assistants and platforms. It exposes 50 tools across five surfaces (text, voice, image, video, and PDF documents), including dedicated detectors for grooming, sextortion, self-harm ideation, coercive control, romance scams, social engineering, money-mule recruitment, gambling harm, radicalisation, and AI-generated synthetic content (deepfakes and synthetic CSAM).
Every detection returns structured output: per-message risk scores, evidence-tagged categories with confidence, age-calibrated thresholds, cross-endpoint amplification, and country-aware crisis-helpline routing. Built-in GDPR tools (record/withdraw consent, export/delete account data, audit logs, breach reporting) help teams stay compliant with KOSA, the EU DSA, and local data-protection law.
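As a rough illustration of how a consumer might work with structured output like this, the sketch below parses a hypothetical detection payload and filters messages against an age-calibrated threshold. The field names (`messages`, `risk`, `categories`, `helpline`) and the threshold value are assumptions for the example, not the real Tuteliq schema.

```python
import json

# Hypothetical payload; field names are illustrative, not the real Tuteliq schema.
payload = json.loads("""
{
  "messages": [
    {"id": "m1", "risk": 0.12, "categories": []},
    {"id": "m2", "risk": 0.87,
     "categories": [{"label": "grooming", "confidence": 0.91,
                     "evidence": ["requests secrecy", "age probing"]}]}
  ],
  "helpline": {"country": "GB", "number": "<local helpline>"}
}
""")

AGE_THRESHOLD = 0.8  # assumed age-calibrated cut-off, for illustration only

# Keep only messages whose risk score meets or exceeds the threshold
flagged = [m for m in payload["messages"] if m["risk"] >= AGE_THRESHOLD]

for m in flagged:
    labels = [c["label"] for c in m["categories"]]
    print(m["id"], labels, payload["helpline"]["number"])
```

A real integration would take the threshold from the platform's age configuration and route the helpline details from the `country`-aware response rather than hard-coding them.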
Interactive UI widgets render results inline in Claude, Cursor, and any MCP-compatible client. Authentication uses OAuth 2.1 with PKCE and dynamic client registration, so there is no manual token handling. The service supports 27 languages, sub-second single-endpoint latency, and a free tier.
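The PKCE part of the OAuth flow is fully specified by RFC 7636, so it can be sketched concretely: the client generates a random `code_verifier` and sends its SHA-256 `code_challenge` with the authorization request, then proves possession of the verifier when exchanging the code. This is a generic sketch of that step, not Tuteliq-specific code.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier (within the 43-128 char range)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge = BASE64URL(SHA256(verifier)), padding stripped per the RFC
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

MCP clients that implement OAuth 2.1 with dynamic client registration perform this exchange automatically, which is why no manual token handling is needed.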
child-safety
content-moderation
trust-safety
grooming-detection
fraud-detection
deepfake-detection
synthetic-content
identity-verification
gdpr
multilingual