Summary
A SaaS team built AI agents to generate legal content like policy drafts and contracts. The generation worked fine; the human review didn’t. Content approvals were ad hoc, slow, and invisible. By shifting to event-based coordination, the team brought legal review into the loop without changing tools or adding a workflow engine. Review time dropped from days to hours, and compliance became trackable.
The problem: did anyone approve that?
The team had built AI agents that could draft legal text: privacy policies, contract terms, compliance statements. But everything still needed human approval before it went live.
Their process was simple on paper. The AI would post the output to Slack, and legal would review it “when they had time.” In practice, this meant:
- Approvals delayed by days
- No way to track what needed review
- Missed messages in busy Slack threads
- No prioritisation or visibility
- No audit trail for compliance
Engineering added reminder bots and spreadsheets. None of it stuck. Product teams were frustrated. Legal had no structure. Nobody trusted the process.
Why their fixes didn’t stick
The team tried several common approaches:
- Dedicated Slack channels (still relied on memory)
- Email notifications (ignored or buried)
- Reminder bots (added noise, not clarity)
- Status trackers (outdated within a week)
- Ticketing systems (legal didn’t want to use Linear)
The issue wasn’t tooling. It was coordination. The review process had no real system behind it.
What changed: human review became a first-class step
The team introduced Sailhouse to coordinate the flow between AI and human reviewers.
They kept Slack. They didn’t add a new UI. They just connected the steps:
- When the AI generated content, it emitted a review_required event
- That event was pushed to Slack with an approval interface
- Legal’s response triggered a follow-up event: approve or request changes
- The full exchange was logged with timestamps and review state
- If no one responded, it could escalate automatically
The state machine lived in the event graph, not in backend code.
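To make the shape of that flow concrete, here is a minimal TypeScript sketch under stated assumptions: the topic names (review_required, review_approved, changes_requested, review_escalated), the payload fields, and the publishEvent and notifySlack helpers are hypothetical stand-ins, not the team’s actual code or any specific SDK.

```typescript
// A sketch of the event flow described above. Topic names, payload shapes,
// and the two helpers below are illustrative assumptions, not the team's
// actual code or any specific SDK.

type ReviewRequired = {
  contentId: string;
  contentType: string; // e.g. "privacy_policy", "contract_terms"
  draftUrl: string;
  requestedAt: string; // ISO timestamp, kept for the audit trail
};

type ReviewDecision = {
  contentId: string;
  reviewer: string;
  decision: "approved" | "changes_requested";
  decidedAt: string;
};

// Thin wrapper over a generic publish endpoint (placeholder URL).
async function publishEvent(topic: string, payload: unknown): Promise<void> {
  await fetch(`https://events.example.com/topics/${topic}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Placeholder Slack notification via an incoming webhook.
async function notifySlack(text: string): Promise<void> {
  await fetch("https://hooks.slack.com/services/PLACEHOLDER", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

// 1. The AI agent emits review_required instead of posting straight to Slack.
export async function requestReview(draft: ReviewRequired): Promise<void> {
  await publishEvent("review_required", draft);
}

// 2. A subscriber on review_required pushes the draft to the legal channel,
//    where the approval actions live.
export async function onReviewRequired(draft: ReviewRequired): Promise<void> {
  await notifySlack(
    `Review needed: ${draft.contentType} (${draft.draftUrl}), id ${draft.contentId}`,
  );
}

// 3. Legal's response comes back as a follow-up event. This record, with its
//    timestamp and review state, is what forms the audit trail.
export async function onReviewDecision(decision: ReviewDecision): Promise<void> {
  const topic =
    decision.decision === "approved" ? "review_approved" : "changes_requested";
  await publishEvent(topic, decision);
}

// 4. If no decision arrives within the SLA, escalate automatically instead of
//    letting the request sink in a busy thread.
export async function onReviewTimeout(draft: ReviewRequired): Promise<void> {
  await publishEvent("review_escalated", {
    contentId: draft.contentId,
    escalatedAt: new Date().toISOString(),
  });
}
```

The point of structuring it this way is that every step, including the human decision, is just another event: the notification, the approval or change request, and the escalation all land in the same stream, which is where the timestamps and review state come from.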
What made it work
The key difference was that the team didn’t treat human input as an afterthought. By modelling approvals as part of the system, not outside it, they gained:
- Visibility into who had responded and when
- Audit logs for compliance
- The ability to prioritise urgent reviews
- A feedback loop that worked across teams
- A process legal actually used
This wasn’t a workflow engine. It was coordination via events, in tools the team already used.
The results
- Approval time dropped from 2–3 days to a few hours
- Legal kept using Slack; no new system needed
- Audit logs were available for compliance reviews
- Engineers didn’t need to chase approvals manually
- Product teams had clarity on what was pending and what was live
Why it matters
AI systems often treat human feedback as an exception. In reality, it is often the most important part of the process.
This team didn’t solve the problem with a dashboard or another tool. They solved it by respecting the human-in-the-loop step as part of the system. Slack stayed; the process changed.
By shifting to events, they got a workflow that was visible, reliable, and auditable, without introducing friction for either team.