## The Problem
AI models hallucinate. They:
- Invent functions that don’t exist
- Reference files that aren’t there
- Confidently write broken code
- Mix up table names and columns
This wastes hours of debugging time.
## The Solution: Validation Checkpoints
Every AI-generated output passes through validation before execution:
```
Human Intent
    ↓
Claude breaks down task
    ↓  validates against AGENTS.md patterns
Linear ticket created
    ↓  labels determine routing
Cursor receives specific task
    ↓  validates against SCHEMA.md
Code generated
    ↓  validates against actual codebase
Human reviews & approves
    ↓
Merged to main
```
## Validation Documents

| Document | Purpose | Location |
|---|---|---|
| AGENTS.md | Agent capabilities, delegation rules | Repo root |
| SCHEMA.md | Database tables, relationships, RLS | Repo root |
| SKILL.md | Reusable patterns for specific tasks | /mnt/skills/ |
**Rule:** AI must check these documents **before** generating code. No exceptions.
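One lightweight way to enforce the rule is a gate that refuses to run when the documents are missing. A minimal sketch (the script itself is hypothetical; the file names come from the table above):

```ts
// Hypothetical pre-generation gate: refuse to proceed without the validation docs.
import { existsSync } from 'node:fs'
import { join } from 'node:path'

const requiredDocs = ['AGENTS.md', 'SCHEMA.md'] // repo-root docs from the table above

for (const doc of requiredDocs) {
  if (!existsSync(join(process.cwd(), doc))) {
    throw new Error(`${doc} not found; refusing to generate code without it`)
  }
}
```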
## SCHEMA.md Validation

Before any database operation, agents must verify the following (a typed-client sketch follows the list):

1. **Table exists**
   - ❌ Bad: assume the table exists
   - ✅ Good: check SCHEMA.md for the table definition
2. **Column names match**
   - ❌ Bad: guess column names
   - ✅ Good: copy exact column names from SCHEMA.md
3. **Foreign keys are valid**
   - ❌ Bad: reference arbitrary IDs
   - ✅ Good: verify FK relationships in SCHEMA.md
4. **Types are correct**
   - ❌ Bad: use a plain string where the schema says `uuid`
   - ✅ Good: match types from the schema (`uuid`, `timestamptz`, etc.)
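Generated types make most of these checks mechanical rather than manual. A minimal sketch, assuming types have already been generated from the schema (the import path, environment variables, and the `title` and `created_at` columns are assumptions for illustration):

```ts
// With a typed client, hallucinated tables, columns, and value types fail at compile time.
import { createClient } from '@supabase/supabase-js'
import { randomUUID } from 'node:crypto'
import type { Database } from '@trendingsociety/db/types' // generated types (path assumed)

const supabase = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
)

const publisherId = randomUUID() // stand-in uuid value for the example

// Compiles only if 'publisher_posts' and the selected columns exist in the schema.
const { data, error } = await supabase
  .from('publisher_posts')
  .select('id, title, created_at') // 'title' and 'created_at' are illustrative
  .eq('publisher_id', publisherId) // must be a uuid string per the schema
```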
## AGENTS.md Validation

Before delegating work, verify the following (a routing sketch follows the list):

1. **The agent can do the task**
   - ❌ Bad: ask Cursor to browse the web
   - ✅ Good: check agent capabilities, route to ChatGPT
2. **The task is scoped correctly**
   - ❌ Bad: "Build the app"
   - ✅ Good: specific file paths, clear requirements
3. **Dependencies are clear**
   - ❌ Bad: no dependency info
   - ✅ Good: explicit `depends_on` and `parallel_safe` flags
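These checks can be encoded rather than remembered. A minimal sketch of capability-based routing, assuming a capability map like the one AGENTS.md describes (the agent names, capability strings, and task shape are illustrative):

```ts
// Hypothetical capability-based router: never hand an agent a task it cannot do.
type Agent = 'claude' | 'cursor' | 'chatgpt'

interface Task {
  id: string
  requires: string[]      // capabilities this task needs
  depends_on: string[]    // ticket IDs that must land first
  parallel_safe: boolean  // safe to run alongside other tickets
}

const capabilities: Record<Agent, string[]> = {
  claude: ['task-breakdown', 'code-review'],
  cursor: ['code-generation', 'refactoring'],
  chatgpt: ['web-browsing', 'research'],
}

function route(task: Task): Agent {
  const agent = (Object.keys(capabilities) as Agent[]).find((name) =>
    task.requires.every((cap) => capabilities[name].includes(cap))
  )
  if (!agent) throw new Error(`No agent can handle task ${task.id}`)
  return agent
}

// A web-research task routes to ChatGPT, never to Cursor.
route({ id: 'TS-42', requires: ['web-browsing'], depends_on: [], parallel_safe: true })
```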
## Codebase Validation

Before executing, verify the following (an import-check sketch follows the list):

1. **Imports exist**
   - ❌ Bad: import from an imaginary package
   - ✅ Good: verify the package exists in package.json
2. **Functions match signatures**
   - ❌ Bad: call with the wrong arguments
   - ✅ Good: check the function definition before calling
3. **Paths are correct**
   - ❌ Bad: guess file locations
   - ✅ Good: verify paths exist in the repo structure
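The first check is easy to script. A minimal sketch (hypothetical helper) that confirms a package is declared before generated code imports from it:

```ts
// Hypothetical check: an import target must be declared in package.json.
import { readFileSync } from 'node:fs'

function packageIsDeclared(name: string, pkgPath = 'package.json'): boolean {
  const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'))
  return Boolean(pkg.dependencies?.[name] ?? pkg.devDependencies?.[name])
}

if (!packageIsDeclared('@trendingsociety/db')) {
  throw new Error("'@trendingsociety/db' is not in package.json; install it first")
}
```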
## Validation Checklist

Before every code generation:

- [ ] Verified table names against SCHEMA.md
- [ ] Verified column names and types
- [ ] Verified foreign key relationships
- [ ] Checked existing patterns in the codebase
- [ ] Verified import paths exist
- [ ] Confirmed the agent can perform this task
## Common Hallucination Patterns

### 1. Inventing Supabase Methods

```ts
// ❌ Hallucinated
await supabase.from('users').findOne({ id })

// ✅ Actual API
await supabase.from('users').select().eq('id', id).single()
```
### 2. Wrong Table Names

```ts
// ❌ Hallucinated
await supabase.from('posts')

// ✅ Actual table name
await supabase.from('publisher_posts')
```
### 3. Missing Columns

```sql
-- ❌ Hallucinated column
SELECT name, description, tags FROM publisher_verticals

-- ✅ Actual columns
SELECT id, name, slug, description FROM publisher_verticals
```
### 4. Wrong Package Imports

```ts
// ❌ Hallucinated
import { db } from '@/lib/database'

// ✅ Actual import
import { supabase } from '@trendingsociety/db'
```
## Recovery When Things Go Wrong

| Issue | Recovery |
|---|---|
| Wrong table name | Check SCHEMA.md, fix the query |
| Missing column | Verify the schema; add a migration if needed |
| Import error | Check package.json; install if missing |
| Type mismatch | Regenerate types with `pnpm generate:types` |
## Automation

### Pre-Commit Validation

In `package.json`:

```json
{
  "scripts": {
    "precommit": "pnpm typecheck && pnpm lint"
  }
}
```
### Type Generation

```bash
# Regenerate types from Supabase
npx supabase gen types typescript --project-id ymdccxqzmhxgbjbppywf > packages/db/types.ts
```
### Schema Sync

```bash
# Verify local schema matches production
npx supabase db diff
```
## The Golden Rule

**Never trust AI output. Always validate against source documents.**
AI is a force multiplier, not a replacement for verification. The validation checkpoints ensure speed without sacrificing reliability.