A PA for your AI. The guardrail layer OpenClaw forgot to build. It learns how you work and makes your coding assistant respect it.
For developers using OpenClaw, Claude Code, Cursor, Windsurf, or any AI coding assistant.
You wrote the rules. Your AI read them. Then it did whatever it wanted anyway.
"You say 'let's discuss this' and the AI starts building."
Intent misalignment. The AI treats every sentence as an instruction to act.
"You've corrected the same mistake 47 times and it still happens."
No persistent enforcement. Corrections vanish with the conversation window.
"Your instruction file says 'never force push'. The AI force pushes."
Static text has no authority. The AI weighs it against 200K tokens of context.
Instruction files are memos. CRE is a person in the room who never forgets and can't be talked past.
OpenClaw ships with a powerful execution engine and leaves safety as an afterthought. The results speak for themselves.
"Don't action until I tell you to."
Meta's AI Alignment Director told OpenClaw not to delete emails. It ignored her, wiped her inbox, and continued deleting through two stop commands. She had to physically kill the process.
"Agent deleted OAuth credentials trying to 'fix' an auth issue."
GitHub Issue #6823. A user's agent autonomously destroyed authentication tokens. No confirmation, no rollback.
"Dumped the contents of the home directory into a group chat."
Kaspersky found nearly 1,000 publicly accessible OpenClaw instances running without authentication. One bot leaked an entire home directory.
"Security is an option, but it is not built in."
Cisco Security Research, January 2026
"The lethal trifecta: tool access, sensitive data, autonomous execution."
Simon Willison, AI security researcher
CRE solves this. Every tool call passes through a deterministic gate before execution. Not a system prompt. Not a polite request. A mechanical layer the AI cannot bypass, talk past, or compress away.
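The gate idea can be sketched in a few lines. This is an illustrative sketch only, not CRE's actual API: the function name, rule list, and patterns below are assumptions made for the example.

```python
import re

# Hypothetical L1-style deny list; patterns are illustrative,
# not CRE's shipped rule set.
BLOCK_PATTERNS = [
    r"rm\s+-rf\s+/",          # recursive delete from the filesystem root
    r"git\s+push\s+--force",  # force push
]

def gate(tool_call: str) -> bool:
    """Return True if the tool call may execute, False if it is blocked.

    The check is deterministic: no prompt, no model, nothing the AI
    can argue with or compress out of context.
    """
    return not any(re.search(p, tool_call) for p in BLOCK_PATTERNS)

print(gate("git push --force origin main"))  # False: blocked
print(gate("git push origin main"))          # True: allowed
```

Because the gate sits between the assistant and tool execution rather than inside the prompt, the assistant's context window never dilutes it.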
Works with OpenClaw, Claude Code, Cursor, Windsurf, or any AI coding tool that supports hooks.
L1 checks every tool call instantly. L2 reviews conversation context and sends tips back to Claude Code, guiding it to make better decisions.
CRE proposes promoting L2 observations into L1 rules on its own; once you approve them, the system gets faster and cheaper over time.
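The two-layer dispatch described above can be sketched as follows. Names and data shapes are assumptions for illustration, and the L2 call is stubbed out where CRE would invoke an LLM.

```python
import re

# Hypothetical L1 rule table: name -> regex. Real rules live in rules.json.
L1_RULES = {"no-force-push": r"git\s+push\s+.*--force"}

def l2_review(tool_call: str, context: str) -> str:
    # Stand-in for the LLM review CRE would run (2-5 s, costs API tokens).
    return "advise" if "discuss" in context else "allow"

def check(tool_call: str, context: str) -> str:
    # L1: fast deterministic pass over every tool call.
    for name, pattern in L1_RULES.items():
        if re.search(pattern, tool_call):
            return f"block:{name}"        # resolves instantly, zero API cost
    # L2: slower contextual review of intent.
    return l2_review(tool_call, context)
```

Anything L1 catches never reaches L2, which is why promoting rules downward makes the system cheaper over time.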
The LLM catches an intent mismatch and sends advice to Claude Code. This costs 2-5 seconds and API tokens.
CRE extracts the command pattern that triggered the advisory and proposes a new L1 regex rule.
The proposed rule is shown to the human for review. No rule activates without explicit approval.
The rule joins the fast gate. Next time, it resolves in under 10ms with zero API cost.
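The promotion lifecycle above can be sketched like this. The function names and rule shape are hypothetical, not CRE's internal representation; the point is that a proposed rule does nothing until a human approves it.

```python
import re

proposed_rules = []  # candidates awaiting human review
active_rules = []    # the fast L1 gate

def propose_from_advisory(command: str) -> dict:
    """Turn the command that triggered an L2 advisory into a candidate rule."""
    rule = {"pattern": re.escape(command), "approved": False}
    proposed_rules.append(rule)
    return rule

def approve(rule: dict) -> None:
    """No rule activates without explicit approval."""
    rule["approved"] = True
    active_rules.append(rule["pattern"])

rule = propose_from_advisory("rm -rf ~/projects")
approve(rule)  # the human reviewed the proposal and accepted it
```

After approval, the pattern is matched by the L1 gate directly, so the same command is blocked without another LLM round trip.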
Run cre dashboard to launch locally on port 8766.
$ git clone https://github.com/tech-and-ai/claude-rule-enforcer.git
$ cd claude-rule-enforcer
$ pip install -e .
$ cp rules.example.json rules.json   # customise your rules
$ export CRE_LLM_API_KEY="your-key"  # for L2 reviews
$ cre status
Stop your agent from deleting credentials, wiping inboxes, or running destructive commands. CRE hooks directly into OpenClaw's tool execution pipeline.
$ pip install claude-rule-enforcer
$ cre init --adapter openclaw
$ cre enable
# That's it. Every tool call now passes through CRE.
# L1 blocks rm -rf, force push, fork bombs in <10ms.
# L2 checks intent: "Did the user actually ask for this?"
Also works with Claude Code, Cursor, Windsurf, Cline, and any tool that supports execution hooks.