How Lumii keeps pupils — and your school — safe.
Lumii sits as another layer alongside the safeguarding and wellbeing work your school already does. This page sets out the architecture behind it — the alert model, the AI rules, the data boundaries, and the regulatory alignment that make that layer safe to deploy at scale.
AI surfaces. People decide. Always.
Lumii's role is to give school staff visibility they didn't have before — patterns across daily pupil voice, trends across year groups, signals from pupils who would not raise their hand. That visibility is the platform's value.
What happens with that visibility is a human decision, made by trained staff inside the school, governed by the school's existing safeguarding policy. Lumii does not act on alerts. Lumii does not contact pupils, parents, or external agencies. Lumii does not predict harm or make safeguarding recommendations.
This separation is deliberate and structural. The platform is designed so that AI cannot bypass human judgement, and so that the people making decisions about a child's wellbeing are the same people who have always made those decisions — DSLs, pastoral leads, counsellors, Heads — now with better information.
If a regulator, parent, or inspector asks "who is in control of safeguarding decisions in your school?" — the answer is unchanged by introducing Lumii. The trained staff in your school are. Lumii makes their job easier; it does not replace it.
Four levels. Every flag reviewed by a person.
Lumii's alert model is designed to be proportionate — most interactions warrant no action; some warrant attention; a small number warrant immediate response. This distribution holds across thousands of pupils and millions of conversations, and is consistent with what schools experience using other forms of pupil voice.
- Green: no concern — routine emotional expression, daily check-in. Action: none required.
- Amber: low–moderate emotional concern requiring monitoring. Action: routine review.
- Red: high emotional distress requiring urgent attention. Action: immediate human review.
- Purple: safeguarding concern requiring action. Action: immediate human-led process.

All Amber, Red, and Purple alerts are reviewed by authorised safeguarding staff. Review of Red and Purple alerts is a mandatory safeguarding requirement, not a recommendation. The school dashboard records every alert, every review, and every action, providing a full audit trail suitable for inspection.
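As an illustration only, the triage rule the four-level model describes can be sketched in a few lines of Python. The enum values mirror the four levels; the function names and encoding are hypothetical, not Lumii's implementation:

```python
from enum import IntEnum

class AlertLevel(IntEnum):
    """The four alert levels, ordered by severity (illustrative encoding)."""
    GREEN = 0   # no concern: routine emotional expression, daily check-in
    AMBER = 1   # low-moderate concern: routine review
    RED = 2     # high emotional distress: immediate human review
    PURPLE = 3  # safeguarding concern: immediate human-led process

def requires_human_review(level: AlertLevel) -> bool:
    """Every alert above Green is reviewed by authorised safeguarding staff."""
    return level > AlertLevel.GREEN

def requires_immediate_response(level: AlertLevel) -> bool:
    """Review of Red and Purple alerts is mandatory and immediate."""
    return level >= AlertLevel.RED
```

The key property is structural: there is no branch from any non-Green level that bypasses human review.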
From signal to action — the full path.
1. Pupil interacts: a pupil engages with Lumii through chat, check-in, story, or activity.
2. Lumii analyses: the interaction is assessed against the four-level alert model.
3. Flag raised: if above Green, an alert appears on the dashboard for authorised staff.
4. Human review: trained staff review the alert and decide on the right response.
5. Action recorded: the school's response (escalate, support, monitor, or close) is logged with a full audit trail.
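A minimal sketch of that path, assuming hypothetical function and field names (this is not Lumii's API): every stage appends to an audit log, and the AI's involvement ends at the flag.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AuditEntry:
    """One line of the audit trail: which step happened, with what detail."""
    step: str
    detail: str

def signal_to_action(pupil_code: str, level: str) -> Tuple[str, List[AuditEntry]]:
    """Illustrative path from pupil interaction to recorded school action."""
    audit = [AuditEntry("analysed", f"{pupil_code}: assessed as {level}")]
    if level == "green":
        return "no action", audit
    # Above Green: the platform raises a flag and stops. The decision that
    # follows belongs to trained staff, never to the AI.
    audit.append(AuditEntry("flagged", f"{pupil_code}: alert on dashboard"))
    audit.append(AuditEntry("review", f"{pupil_code}: awaiting human review"))
    return "awaiting human review", audit
```

Note that the function returns "awaiting human review" rather than any action of its own; the escalate/support/monitor/close decision is recorded later, by a person.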
Two voices, one alert structure
Lumii captures pupil voice and staff observations through the same alert structure. Authorised staff can add notes to a pupil's record or to a specific Lumii conversation, and flag them Amber, Red, or Purple. The school sees both inputs in one place, with one triage flow and one audit trail.
What Lumii's AI must never do — and what it must always do.
These rules are non-negotiable. They are the boundary between what AI is permitted to do in a school setting and what only a human is permitted to do. Every model behind every Lumii interaction operates inside them.
Must never
- Diagnose any condition
- Offer clinical or medical advice
- Promise confidentiality to a pupil
- Replace or replicate the role of a counsellor or safeguarding lead
- Escalate or take action without human review
- Predict harm
- Provide completed academic answers
- Use ideological, religious, or politically loaded language
- Reinforce a pupil's distress through repetition or amplification
Must always
- Encourage pupils to talk to a trusted adult
- Use a calm, steady, non-clinical tone
- Reinforce emotional safety
- Respect cultural and family values
- Avoid medical, ideological, or inappropriate language
- Stay within age-appropriate content boundaries for the pupil's tier
- Allow the pupil to disengage at any moment
What Lumii holds — and what Lumii will never hold.
Privacy is not a marketing claim at Lumii. It is the architecture. The platform is designed so that even Lumii cannot identify individual pupils — only the school can.
Lumii does not collect
- Pupil names
- Pupil photos
- Audio recordings
- Camera input
- Personal identifiers of any kind
- Location data
- Device-level identifiers tied to individuals
Lumii does not do
- AI training on pupil data
- External data sharing
- Cross-border data transfer
- Sale or commercial use of any pupil interaction
- Sharing of staff wellbeing data with school leadership at the individual level
The mapping between a unique code and a named pupil is held by the school, not by Lumii. Lumii receives a code; the school knows who that code belongs to. This is structural, not a policy choice — it is built into the architecture.
Individual unique codes. Nothing else.
Each pupil receives a unique code from their school. There are no accounts to create, no passwords to remember, no QR codes, no personal identifiers required at any point. The school holds the link between a code and a named pupil. Lumii does not.
This matters because it means there is no pupil database for Lumii to lose, no account credentials to compromise, and no path from a Lumii alert back to a named child without going through the school. The school remains the holder of pupil identity at all times.
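A minimal sketch of that split, using hypothetical class and method names: the mapping from code to child exists only on the school's side, while the platform's side holds nothing but the opaque code.

```python
class SchoolRegistry:
    """Held by the school. The only place a code resolves to a named pupil."""
    def __init__(self):
        self._code_to_pupil = {}

    def enrol(self, code: str, pupil_name: str) -> None:
        self._code_to_pupil[code] = pupil_name

    def resolve(self, code: str) -> str:
        return self._code_to_pupil[code]

class LumiiAlert:
    """What the platform side holds: a code and a level, never an identity."""
    def __init__(self, code: str, level: str):
        self.code = code    # opaque without the school's registry
        self.level = level

# The school enrols a pupil (code and name are placeholders for illustration):
school = SchoolRegistry()
school.enrol("7KQ2", "Example Pupil")

# An alert carries only the code; resolving it requires the school's registry.
alert = LumiiAlert(code="7KQ2", level="red")
```

There is nothing in `LumiiAlert` that names a child; the path back to a pupil runs through the school's registry, by construction.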
Built to fit the frameworks UAE schools work within.
Lumii is designed for the UAE regulatory environment from the architecture up — not retrofitted afterwards. The frameworks below are not aspirational; they shape every product decision.
Education frameworks
- ADEK SPIRE Wellbeing Mark — 15 indicators across 5 dimensions; mandatory for Abu Dhabi schools since September 2025
- ADEK Irtiqa'a — structured wellbeing data and audit trails for inspection
- KHDA / DSIB (Dubai)
- SPEA (Sharjah)
- APEA (Ajman, alongside MoE)
- RAKDOK (Ras Al Khaimah, alongside MoE)
- UAE MoE Generative AI Classroom Framework
Federal & policy frameworks
- Federal Decree-Law No. 26 of 2025 — Child Digital Safety Law; enforcement Jan 2027; Lumii's architecture aligned with CDS Law expectations
- Federal Decree-Law No. 45 of 2021 — Personal Data Protection Law (PDPL)
- Wadeema's Law — Federal Law No. 3 of 2016; child protection duties
- ECA Early Intervention Framework 2026–2028 — maps to five of the eight pillars
Designed for cultural adaptability, not cultural assumption.
Lumii operates in schools across diverse cultural contexts. The platform's tone, language, and content boundaries are designed for adaptability — not for a Western default that gets translated.
- Arabic and English support, with 30+ languages available
- UAE Islamic version available — content aligned with Islamic cultural values and local educational context
- Neutral, dignified tone throughout — no slang, no humour at pupils' expense, no culturally inappropriate topics
- Avoidance of ideological content of any kind
- Respects family structures, faith traditions, and community values
- Designed for dignity and discretion in conservative cultural contexts
- Content boundaries reviewed regularly by clinical and cultural advisors
Real clinicians, governing real product decisions.
Lumii's clinical architecture is overseen by Dr Yebeltal Gezahegn, Consultant Neuropsychiatrist. Dr Gezahegn authors clinical content, reviews the safeguarding architecture, and ensures the AI rules remain consistent with current best practice in adolescent and child mental health.
This is not advisory in name only. Clinical review is a structural part of how the platform changes — alert thresholds, content boundaries, language patterns, and the behaviour of every model on the developmental ladder are all clinically governed.
What clinical oversight means in practice
- Every AI rule reviewed clinically before going live
- Alert thresholds calibrated against clinical safeguarding standards
- Content boundaries reviewed for age-appropriateness across each tier
- Re-entry guides and pastoral content authored or reviewed by Dr Gezahegn
- Neurodiversity traits model authored and clinically governed by Dr Gezahegn
- Regular review of platform behaviour against current best practice
Neurodiversity traits — clinical governance
The neurodiversity traits model is authored and clinically governed by Dr Yebeltal Gezahegn, Consultant Neuropsychiatrist. It is built on the same non-diagnostic principle that governs every other part of the platform — patterns and signals only, never diagnosis or labels. For a full description of what the model identifies, see the Platform page.
Safeguarding architecture, not safeguarding theatre.
A two-week trial. Setup in 24 hours. The same architecture you've just read about.