A second layer, when you want it

Admin oversight gives you visibility — you can read every conversation a child is in. But seeing isn’t the same as preventing, and some families want a second layer: rules that catch certain things before a child sends or reads them, without an adult having to be online at the right moment.

Shoal’s moderation is that second layer. It’s per child, off by default, and turned on only by an admin who wants it.

What you can switch on

  • Block links. Stops the child from sending messages containing a URL, and replaces incoming links with a warning badge so the URL doesn’t render. Live today.
  • Block specific words. Refuses to send a message that contains any word on a list you maintain, with a clear reason shown to the child. Coming soon.
  • Flag specific words. Lets the message through, but writes an entry to the moderation log so an admin can review it later. Coming soon.

Each rule is configured per child, from that member’s moderation panel — so a six-year-old and a thirteen-year-old in the same family can have very different settings, or none at all.
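
If you think of it as data, the configuration is small. Here's a rough sketch of that per-member shape; the field names (blockLinks, blockedWords, flaggedWords) are illustrative, not Shoal's actual schema.

```typescript
// Illustrative shape only; not Shoal's actual schema.
// Moderation settings live per member, so siblings can have very different rules.
interface ModerationSettings {
  blockLinks: boolean;      // live today
  blockedWords: string[];   // coming soon: refuse to send, show the child a reason
  flaggedWords: string[];   // coming soon: send, but write a moderation-log entry
}

// One entry per child; a missing entry simply means no moderation at all.
const familyModeration: Record<string, ModerationSettings> = {
  "member-6yo":  { blockLinks: true,  blockedWords: [], flaggedWords: [] },
  "member-13yo": { blockLinks: false, blockedWords: [], flaggedWords: [] },
};
```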

How it actually works

Moderation runs on the child’s own device, before a message is sent and as messages arrive. The server never reads message content; it only stores the rules and a deliberately content-free audit trail (which family, which member, which conversation, which rule, when — never the message itself). The encryption story is unchanged: oversight is structural, moderation is local, and neither requires us to decrypt anything.
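
In code, the flow is roughly the following. This is a minimal sketch of the outgoing check only, building on the ModerationSettings shape above, and the names (checkOutgoing, CheckResult, AuditEntry) are hypothetical, not Shoal's real API. The point to notice is that the check can see the draft only because it runs on the device, and the audit entry carries identifiers and a rule name, never the text.

```typescript
// Hypothetical names, not Shoal's real API. Everything here runs on the child's device.
type CheckResult =
  | { kind: "allow" }
  | { kind: "block"; reason: string }   // the message never leaves the device
  | { kind: "flag"; rule: string };     // the message is sent, but logged

// The only thing the server ever stores about an incident: identifiers, a rule, a time.
interface AuditEntry {
  familyId: string;
  memberId: string;
  conversationId: string;
  rule: string;   // e.g. "flag-word"
  at: string;     // ISO timestamp
}

const URL_PATTERN = /https?:\/\/\S+/i;

function checkOutgoing(text: string, settings: ModerationSettings): CheckResult {
  if (settings.blockLinks && URL_PATTERN.test(text)) {
    return { kind: "block", reason: "Links aren’t allowed in your messages. Edit your message to remove the link." };
  }
  const lower = text.toLowerCase();
  if (settings.blockedWords.some((w) => lower.includes(w.toLowerCase()))) {
    // Illustrative wording; the actual reason text shown to the child may differ.
    return { kind: "block", reason: "Your message contains a word that isn’t allowed." };
  }
  const flagged = settings.flaggedWords.find((w) => lower.includes(w.toLowerCase()));
  return flagged ? { kind: "flag", rule: "flag-word" } : { kind: "allow" };
}
```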

When a send is blocked, the child sees a plain-language reason — “Links aren’t allowed in your messages. Edit your message to remove the link.” — rather than a silent failure. When a flagged word is sent, the child is told the message will be reported to an admin. There are no hidden penalties.
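
Put together, here's a hedged sketch of what the send path might look like: the check result decides whether the child sees a reason, a notice, or nothing, and only the flag case writes to the moderation log. The helpers (showNotice, sendMessage, writeAuditEntry) are placeholders for whatever the app actually does, declared only so the sketch type-checks.

```typescript
// Placeholders for the surrounding app; hypothetical, not Shoal's real functions.
declare function showNotice(message: string): void;
declare function sendMessage(conversationId: string, text: string): Promise<void>;
declare function writeAuditEntry(entry: AuditEntry): Promise<void>;

// Hypothetical send path, using checkOutgoing from the sketch above.
async function trySend(
  text: string,
  ctx: { familyId: string; memberId: string; conversationId: string },
  settings: ModerationSettings,
): Promise<void> {
  const result = checkOutgoing(text, settings);

  if (result.kind === "block") {
    showNotice(result.reason);   // a plain-language reason, not a silent failure
    return;                      // nothing is sent
  }

  if (result.kind === "flag") {
    showNotice("This message will be reported to an admin.");   // no hidden penalties
    await writeAuditEntry({ ...ctx, rule: result.rule, at: new Date().toISOString() });
  }

  await sendMessage(ctx.conversationId, text);
}
```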

Why it isn’t on by default

Most families don’t need it. The closed contact list, admin oversight, and time limits already cover the common cases — and a child who can talk to their cousin without a keyword list inspecting every word is generally a happier child. We don’t think a chat app should default to filtering language.

But for younger children, for families where a particular kind of link or word has caused a real problem, or for the period after something has gone wrong and you want a stricter setting for a while — moderation is there. It’s a soft control, not an adversary-grade firewall: a determined teenager can paraphrase around a word list. The point is to catch the easy cases and to make admin oversight the backstop for the rest.

Three dials, not one

Moderation pairs with cross-family connections and time limits, with admin oversight as the backstop behind all three. Together they're the dials a family can turn: who a child can talk to, when the app is available, and what can pass through it, all without surrendering the rest of the device, the rest of the day, or the rest of your child's privacy.