A Light in the Digital Dark
See what they see. Understand what they encounter. Walk through it together.
The Story
It started with animal mask tutorials on YouTube.
A child explores creative videos about making animal masks. The algorithm introduces therian culture. What began as an art project becomes an identity exploration the parent doesn't recognize. By the time the parent understands what "therian" means, the reaction is panic — not understanding. The child feels surveilled. The parent feels blindsided. Nobody had the context they needed, when they needed it.
This story repeats every day, with different content, different families.
The Problem
Parents don't know what their children encounter online until it becomes a crisis. Parental controls block content — they don't explain it.
When a parent discovers something unfamiliar (furries, therians, specific influencers, niche communities), they have no framework to assess: is this harmless exploration or genuine concern?
Without early awareness and context, parents oscillate between ignorance and overreaction. Neither serves the child. Neither preserves trust.
What Luciernaga Does
An AI engine for parental awareness — not surveillance
The LLM processes content metadata (titles, descriptions, channels, communities) to identify patterns and emerging interests. Semantic understanding, not surveillance.
When a new trend is detected, the LLM generates a parent-friendly briefing: what it is, age-appropriateness assessment, expert consensus, nuance and gray areas.
AI generates age-appropriate talking points, conversation openers, and de-escalation strategies — personalized to the specific content and the child's age.
AI classifies signals by urgency — informational, worth watching, needs attention, urgent — using content risk scoring, not keyword matching.
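The four-tier triage described above can be sketched as a threshold mapping over a continuous risk score. Everything here is illustrative: the `Signal` shape, the band thresholds, and the assumption that an upstream LLM risk assessment emits a 0–1 score are stand-ins, not Luciernaga's actual model.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    topic: str
    risk_score: float  # 0.0-1.0, assumed output of the upstream LLM risk assessment

# Illustrative thresholds for the four urgency tiers, highest first.
URGENCY_BANDS = [
    (0.85, "urgent"),
    (0.60, "needs attention"),
    (0.30, "worth watching"),
    (0.00, "informational"),
]

def classify(signal: Signal) -> str:
    """Map a continuous risk score onto the four urgency tiers."""
    for threshold, label in URGENCY_BANDS:
        if signal.risk_score >= threshold:
            return label
    return "informational"
```

The point of scoring rather than keyword matching: the same topic can land in different tiers depending on context, so the tier is a function of the assessed score, not of the words involved.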
The AI Engine
Five layers powering contextual parental intelligence
1. Data ingestion: platform API integrations (YouTube Data API, TikTok Research API, browser activity metadata). Content metadata only, never message content.
2. Signal processing: the LLM processes content signals (channel subscriptions, watch patterns, community memberships, search trends) to identify thematic clusters and emerging interests.
3. Knowledge base: a maintained knowledge base of youth digital culture (communities, trends, influencers, risk profiles), updated continuously by AI plus human curation.
4. Briefing generation: for each detected signal, the LLM generates a plain-language briefing, risk assessment, conversation guide, and recommended resources. All outputs are reviewed against safety guidelines.
5. Delivery: digest scheduling, urgency routing, parent dashboard, mobile push notifications. Parents control frequency and threshold.
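To make "content metadata only, never message content" concrete, here is how a `videos.list` response from the YouTube Data API v3 (`part=snippet`) might be reduced to the fields such a pipeline would keep. The sample payload and the exact field selection are assumptions for illustration, not the product's actual ingestion code.

```python
import json

# Trimmed sample shaped like a YouTube Data API v3 videos.list response
# (part=snippet). Illustrative payload, not real API output.
sample_response = json.loads("""
{
  "items": [
    {
      "id": "abc123",
      "snippet": {
        "title": "How to make a fox mask",
        "description": "Craft tutorial for beginners",
        "channelTitle": "CraftKids",
        "tags": ["crafts", "masks"]
      }
    }
  ]
}
""")

def extract_metadata(response: dict) -> list:
    """Keep only content metadata: titles, descriptions, channels, tags.
    No comments, no messages, no personal data ever enters the pipeline."""
    kept = []
    for item in response.get("items", []):
        snippet = item.get("snippet", {})
        kept.append({
            "video_id": item.get("id"),
            "title": snippet.get("title"),
            "description": snippet.get("description"),
            "channel": snippet.get("channelTitle"),
            "tags": snippet.get("tags", []),
        })
    return kept
```

The privacy boundary lives in this function: anything not explicitly whitelisted here simply never reaches the LLM.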
How It Works
1. Link your child's accounts (with age-appropriate transparency): YouTube, TikTok, Discord, browser activity.
2. Luciernaga's AI engine processes content metadata, identifies thematic patterns, and cross-references its youth-culture knowledge base.
3. The LLM generates a parent-friendly briefing for each detected trend: what it is, who's involved, expert perspective, risk assessment.
4. AI classifies each signal by urgency. Parents receive digests at their chosen frequency; urgent items surface immediately.
5. For each notification, AI generates conversation starters, age-appropriate talking points, and recommended next steps.
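One way the final step's outputs could be produced is with a templated LLM prompt. The template below is a hypothetical sketch assembled from the requirements stated above; real prompts would go through the safety review described in the engine section.

```python
def build_briefing_prompt(trend: str, child_age: int) -> str:
    """Assemble an LLM prompt for a parent-facing briefing plus
    conversation starters. Placeholder template -- actual production
    prompts would be reviewed against safety guidelines."""
    return (
        f"You are briefing a parent about '{trend}'.\n"
        f"Their child is {child_age} years old.\n"
        "Cover: what it is, age-appropriateness, expert consensus, "
        "nuance and gray areas.\n"
        "Then suggest three calm, age-appropriate conversation openers "
        "and recommended next steps.\n"
        "Tone: informative and non-alarmist. Never suggest surveillance."
    )
```

Personalization happens at this layer: the same trend yields different talking points for an 8-year-old than for a 14-year-old, because the age is part of the prompt rather than a post-hoc filter.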
Feasibility Assessment
Technically achievable. Ethically complex. Commercially viable.
LLM APIs (Claude, GPT) for content analysis and generation are production-ready. YouTube Data API v3 and TikTok Research API provide metadata access. RAG architecture for maintained knowledge base. Content classification and risk scoring are solved problems. Main challenge: cross-platform data access and privacy compliance.
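A minimal sketch of the RAG retrieval step mentioned above: embed a detected signal and rank knowledge-base entries by cosine similarity. The three-dimensional hand-made vectors and the tiny in-memory knowledge base stand in for real embedding-model output and a real vector store.

```python
import math

# Toy youth-culture knowledge base: entry name -> embedding vector.
# Hand-made 3-d vectors; a real system would use model embeddings.
KB = {
    "therian community":   [0.9, 0.1, 0.0],
    "speedrunning":        [0.0, 0.8, 0.2],
    "study-with-me vlogs": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(signal_vec, top_k=1):
    """Return the top_k knowledge-base entries most similar to the signal."""
    ranked = sorted(KB.items(), key=lambda kv: cosine(signal_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

Retrieved entries would then be passed into the briefing prompt as grounding context, which is what keeps the generated explanations tied to the curated knowledge base rather than to the LLM's own guesses.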
Privacy vs. protection balance is THE challenge. Age-appropriate consent models needed. Must avoid becoming surveillance. Transparency with the child is non-negotiable.
$4.2B parental control market (2024). Built in Panama, serving Latin America first. Spanish-first approach with bilingual support. Subscription SaaS model. No contextual AI awareness competitor in LatAm. Education ministry and school partnerships viable.
Phased Roadmap
From prototype to scale in 12 months
Phase 1: Core AI pipeline. YouTube integration only. Manual knowledge-base seeding. 50 alpha families. Basic parent dashboard.
Phase 2: Add TikTok and browser activity. RAG knowledge base with auto-updating. Conversation guide generation. Mobile app beta. 200 beta families.
Phase 3: Discord and social platform integrations. Smart triage system. School/institution partnerships. Launch in Panama and Costa Rica. 1,000 families.
Phase 4: LatAm expansion (Colombia, Mexico, Chile). API for school platforms. Analytics dashboard for institutions. 5,000+ families.
What's Needed to Execute
Technical capability: in place. LLM/NLP engineering, RAG architecture, full-stack development, API integrations. Building the AI pipeline from day one.
Child safety expertise: content risk frameworks, age-appropriate communication models, ethical guidelines.
Platform access: YouTube Data API (available), TikTok Research API (application required), browser extension for activity metadata.
Advisors: privacy advocates, child safety experts, parents. From day one.
Funding: 12-month runway covering AI infrastructure ($15K/mo), an engineering team (3 devs), a child psychology consultant, legal/compliance, and a 200-family beta program.
Regulatory: COPPA compliance (US), data protection (Panama Law 81), parental consent architecture.
Revenue Model
Subscription tiers for families and institutions
Target: 1,000 paying families by month 9 = $15–19K MRR. School pilot: 3 schools × 200 students = $3K MRR.
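The month-9 arithmetic is internally consistent under assumed price points of roughly $15–19 per family per month and $5 per student per month for schools; neither price is stated above, so both are back-of-envelope assumptions.

```python
# Sanity check of the month-9 revenue targets.
# Assumed pricing: $15-19/family/month, $5/student/month (not stated
# in the plan -- inferred from the stated MRR targets).
families = 1_000
family_mrr_low = families * 15   # -> 15,000
family_mrr_high = families * 19  # -> 19,000

schools, students_per_school, per_student = 3, 200, 5
school_mrr = schools * students_per_school * per_student  # -> 3,000

print(family_mrr_low, family_mrr_high, school_mrr)  # 15000 19000 3000
```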
Your child explores. You understand. Together, you navigate.
Not just what is happening — but how to walk through it together.
An Ormus Solutions concept. Built to empower, never to extract.
ormus.solutions