Detect grooming, bullying, fraud, and 16+ online threats across text, voice, image, and video.
Tuteliq is a child-safety and online-harm detection layer for AI assistants and platforms. It exposes 50 tools across five surfaces (text, voice, image, video, and PDF documents), including dedicated detectors for grooming, sextortion, self-harm ideation, coercive control, romance scams, social engineering, money-mule recruitment, gambling harm, radicalisation, and AI-generated synthetic content (deepfakes and synthetic CSAM).

Every detection returns structured output: per-message risk scores, evidence-tagged categories with confidence, age-calibrated thresholds, cross-endpoint amplification, and country-aware crisis-helpline routing. Built-in GDPR tools (consent recording and withdrawal, account-data export and deletion, audit logs, breach reporting) help teams stay compliant with KOSA, the EU DSA, and local data-protection law. Interactive UI widgets render results inline in Claude, Cursor, and any MCP-compatible client. Authentication is OAuth 2.1 with PKCE and dynamic client registration, so there is no manual token handling. Tuteliq supports 27 languages, sub-second single-endpoint latency, and a free tier.
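The snippet below is a sketch of how a client might consume the kind of structured detection result described above. The field names (`risk_score`, `categories`, `age_band`, `helpline`) are illustrative assumptions, not Tuteliq's documented schema.

```python
# Hypothetical example of consuming a structured detection result.
# All field names here are assumptions, not Tuteliq's real response schema.

SAMPLE_RESULT = {
    "risk_score": 0.87,                  # per-message risk score (0..1)
    "categories": [                      # evidence-tagged categories
        {"label": "grooming", "confidence": 0.91},
        {"label": "coercive_control", "confidence": 0.34},
    ],
    "age_band": "13-15",                 # age-calibrated threshold applied
    "helpline": {"country": "GB", "number": "0800 1111"},  # crisis routing
}

def top_category(result: dict) -> str:
    """Return the label of the highest-confidence detected category."""
    return max(result["categories"], key=lambda c: c["confidence"])["label"]

def needs_escalation(result: dict, threshold: float = 0.8) -> bool:
    """Flag a message whose overall risk score crosses the threshold."""
    return result["risk_score"] >= threshold

print(top_category(SAMPLE_RESULT))     # grooming
print(needs_escalation(SAMPLE_RESULT)) # True
```

In a real integration, the MCP client receives this structure from a tool call; the point is that risk scoring and helpline routing arrive as machine-readable fields rather than free text.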
File location: .cursor/mcp.json (project) or ~/.cursor/mcp.json (global)
```json
{
  "mcpServers": {
    "tuteliq": {
      "url": "https://api.tuteliq.ai/mc"
    }
  }
}
```

If you maintain Tuteliq, add this badge to your README to show it's verified on CuratedMCP:
[](https://curatedmcp.com/marketplace/tuteliq)