The fastest way for an AI-augmented agency to lose clients is to ship visibly bad output. Generative tools have raised the floor on production speed, but they have also created a new failure mode that did not exist before: deliverables that look professional at first glance but contain hallucinated facts, brand voice drift, or structural inconsistencies that human producers would not have introduced. Quality control on AI deliverables is now its own operational discipline. This guide covers how serious agencies are running QA on AI-driven work, the rubrics they use, and the systems that catch quality drift before clients do.
Key Takeaways:
- AI quality control needs both human editorial review and automated evaluation harnesses.
- The most common failure modes are hallucination, brand voice drift, and structural inconsistency.
- Build rubrics with five to eight clear dimensions, scored 1 to 5, with examples of each level.
- Sample 5 to 10 percent of weekly output for evaluation; track scores over time by producer, client, and prompt.
- The biggest risk is silent quality drift after model updates; build alerts for it.
This guide covers the QA framework, rubrics, workflows, and tooling that keep AI-driven agency work at a defensible standard.
What Quality Control Means in an AI Workflow
Quality control on AI deliverables is the systematic practice of catching defects before they reach a client. The most common defects:
- Factual hallucinations (made-up statistics, fake quotes, invented sources).
- Brand voice drift (a tone that does not match the client's guidelines).
- Structural inconsistency (sections missing, formatting wrong, headings inconsistent).
- Generic phrasing (output that reads like default model voice, not a specific client).
- Compliance violations (claims that violate regulated-industry rules).
- Accessibility failures (alt text missing, structure that fails screen readers).
Each of these is catchable with the right combination of human review and automated evaluation. None of them is catchable with vibes.
The Two-Layer QA Framework
Mature AI agencies typically run QA in two layers:
Layer 1: Editorial review on every deliverable
A human editor reviews every deliverable against a rubric before client delivery. The editor is not just polishing copy; they are checking for the failure modes above.
Layer 2: Sampled automated evaluation
A separate model scores a sample of weekly outputs against the rubric and flags drift. Human reviewers spot-check the automated scores.
This two-layer pattern catches both individual defects (Layer 1) and systemic drift (Layer 2). The Harvard Business Review has documented similar two-layer quality patterns in other generative AI rollouts (Harvard Business Review on managing generative AI).
Building a Rubric That Works
A useful QA rubric has five to eight clear dimensions. Each dimension should be scored 1 to 5 with explicit examples of what each score looks like. A representative content rubric:
| Dimension | What to Check | Score Anchors |
| --- | --- | --- |
| Factual accuracy | Claims, stats, quotes, sources verified | 5 = all verified, 1 = multiple unverifiable claims |
| Brand voice | Tone, phrasing, approved language | 5 = on-voice throughout, 1 = generic or off-voice |
| Structure | Sections, headings, length | 5 = matches brief exactly, 1 = major structural defects |
| Clarity | Reading level, sentence length, ambiguity | 5 = clear and direct, 1 = confusing or convoluted |
| Originality | Not derivative, says something useful | 5 = distinctive insight, 1 = generic restatement |
| Formatting | Markdown, lists, links, emphasis | 5 = consistent and clean, 1 = visibly broken |
| Accessibility | Alt text, structure, link text | 5 = passes audit, 1 = multiple violations |
| Compliance | Claims, disclosures, regulated language | 5 = no issues, 1 = explicit violations |
Score every reviewed deliverable against this rubric. A deliverable that scores 4 or 5 across all dimensions ships. Anything lower goes back to the producer with specific notes.
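For teams that want to encode the rubric rather than keep it only in a document, here is a minimal sketch in Python. The dimension names and the 4-or-higher shipping threshold come from this guide; the class and function names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Dimension names follow the table above; score anchors stay in the editorial guide.
RUBRIC_DIMENSIONS = [
    "factual_accuracy", "brand_voice", "structure", "clarity",
    "originality", "formatting", "accessibility", "compliance",
]
SHIP_THRESHOLD = 4  # every dimension must score 4 or 5 to ship


@dataclass
class RubricReview:
    deliverable_id: str
    scores: dict = field(default_factory=dict)   # dimension name -> 1-5 score
    notes: dict = field(default_factory=dict)    # dimension name -> editor note

    def failing_dimensions(self) -> list:
        """Dimensions that are unscored or scored below the shipping threshold."""
        return [d for d in RUBRIC_DIMENSIONS
                if self.scores.get(d, 0) < SHIP_THRESHOLD]

    def ships(self) -> bool:
        return not self.failing_dimensions()
```

A review with a 3 on brand voice returns `ships() == False`, and `failing_dimensions()` becomes the producer's revision notes.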
Editorial Review Workflow
A repeatable editorial workflow:
- Producer submits the deliverable with the brief, source materials, and prompt version used.
- Editor reviews against the rubric and assigns scores.
- Editor returns the deliverable with notes if any dimension scored below 4.
- Producer revises and resubmits.
- Editor approves when all dimensions score 4 or higher.
- Approved deliverable is logged with rubric scores and routed to client delivery.
Track turnaround time and revision counts per producer over time. Work that consistently needs heavy revision is a signal that the prompts, the training, or the workflow needs adjustment.
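One way to make that tracking concrete is to log a record per approved review. A minimal sketch, assuming a flat CSV log (the field names are illustrative; a spreadsheet or project management tool works just as well):

```python
from __future__ import annotations
import csv
import os
from dataclasses import dataclass, asdict


@dataclass
class ReviewLogEntry:
    deliverable_id: str
    producer: str
    client: str
    prompt_version: str
    revision_rounds: int        # times the piece went back to the producer
    turnaround_days: float      # submission to approval, QA loops included
    final_scores: dict          # rubric dimension -> score at approval


def append_to_log(entry: ReviewLogEntry, path: str = "qa_log.csv") -> None:
    """Append one approved review to a flat CSV log."""
    row = asdict(entry)
    row["final_scores"] = ";".join(f"{k}={v}" for k, v in row["final_scores"].items())
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(row)
```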
Automated Evaluation Harness
An automated evaluation harness samples a percentage of weekly output and scores it against the rubric using a separate model. A reliable setup:
- Sample 5 to 10 percent of weekly production output per service line.
- Run the rubric prompt through a separate model from the drafting model.
- Log scores per dimension, per producer, per client, per prompt version.
- Spot-check 10 to 20 percent of automated scores with human reviewers.
- Alert when average scores drop below an agreed threshold for a service line.
This pattern catches systemic drift after model updates, prompt changes, or new producer onboarding.
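A minimal harness sketch, assuming last week's deliverables are already collected and that `score_with_model()` is wired to whatever model provider you use; the function below is a placeholder, not a real API, and the 4.0 alert threshold is illustrative:

```python
from __future__ import annotations
import json
import random
import statistics

RUBRIC_DIMENSIONS = [
    "factual_accuracy", "brand_voice", "structure", "clarity",
    "originality", "formatting", "accessibility", "compliance",
]
SAMPLE_RATE = 0.10       # sample 10 percent of weekly output
ALERT_THRESHOLD = 4.0    # illustrative; use the threshold agreed per service line


def score_with_model(text: str, dimension: str) -> int:
    """Placeholder: send the rubric prompt for one dimension to your evaluation
    model (a different model than the drafting model) and parse a 1-5 score."""
    raise NotImplementedError


def run_weekly_eval(deliverables: list[dict]) -> dict[str, float]:
    """deliverables: dicts with id, service_line, producer, client, prompt_version, text."""
    sample_size = max(1, int(len(deliverables) * SAMPLE_RATE))
    sample = random.sample(deliverables, sample_size)

    rows = []
    for d in sample:
        scores = {dim: score_with_model(d["text"], dim) for dim in RUBRIC_DIMENSIONS}
        meta = {k: d[k] for k in ("id", "service_line", "producer", "client", "prompt_version")}
        rows.append({**meta, **scores})

    # Log per-dimension scores so they can be sliced by producer, client, and prompt version.
    with open("eval_scores.jsonl", "a") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

    # Average per service line and flag anything below the agreed threshold.
    by_line: dict[str, list[float]] = {}
    for row in rows:
        avg = statistics.mean(row[dim] for dim in RUBRIC_DIMENSIONS)
        by_line.setdefault(row["service_line"], []).append(avg)

    averages = {line: statistics.mean(vals) for line, vals in by_line.items()}
    for line, avg in averages.items():
        if avg < ALERT_THRESHOLD:
            print(f"ALERT: {line} weekly average {avg:.2f} is below {ALERT_THRESHOLD}")
    return averages
```

The spot-check step stays human: pull 10 to 20 percent of the logged rows each week and have a reviewer confirm the automated scores are sane.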
Fact-Checking Workflows
Hallucination is the highest-stakes failure mode because it can damage client reputation in ways other failures cannot. A reliable fact-check workflow:
- Extract every factual claim from the deliverable (claims, stats, quotes, names, dates).
- Verify each claim against a primary source.
- Cite the source in an internal log even if not in the deliverable.
- Flag unverifiable claims for removal or rewording.
- If you are scaling this, use one model to extract claims and a separate model to verify them.
For high-stakes work (legal, medical, financial, regulated industries), a human fact-checker should review every deliverable. For lower-stakes work, a sampled automated check with periodic human spot-checks is acceptable.
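For teams automating the claim-extraction step, here is a rough sketch of the two-model pattern described above. Both model calls are placeholders for whatever provider and retrieval setup you use; the data structure is illustrative.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Claim:
    text: str                  # the claim as it appears in the deliverable
    source: str | None = None  # primary source citation, filled in when verified
    verified: bool = False


def extract_claims(deliverable_text: str) -> list[Claim]:
    """Placeholder: prompt an extraction model to list every factual claim
    (stats, quotes, names, dates) in the deliverable."""
    raise NotImplementedError


def verify_claim(claim: Claim) -> Claim:
    """Placeholder: verify the claim against a primary source, via a separate
    model plus retrieval or a human fact-checker, and attach the source."""
    raise NotImplementedError


def fact_check(deliverable_text: str) -> tuple[list[Claim], list[Claim]]:
    """Returns (claims verified with a logged source, claims flagged for removal or rewording)."""
    claims = [verify_claim(c) for c in extract_claims(deliverable_text)]
    verified = [c for c in claims if c.verified and c.source]
    flagged = [c for c in claims if not (c.verified and c.source)]
    return verified, flagged
```

Verified claims go into the internal source log; flagged claims go back to the producer for removal or rewording.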
Brand Voice Verification
Brand voice drift is the most common defect that clients notice. A workflow for catching it:
- Maintain a per-client voice guide with description, examples, words to avoid.
- Score brand voice as a rubric dimension on every editorial review.
- Run periodic side-by-side comparisons of recent output against reference passages.
- Update the voice guide when client feedback indicates drift.
- Train producers on the voice guide quarterly.
The agency client communication guide covers how to capture and document brand voice during onboarding.
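One lightweight way to run the side-by-side comparison is to embed recent output and the client's reference passages and look at similarity. This is a screening technique, not something the guide above prescribes: the `embed()` call is a placeholder for whatever embedding model you use, and the 0.80 threshold is illustrative, not a benchmark.

```python
from __future__ import annotations
import math


def embed(text: str) -> list[float]:
    """Placeholder: return an embedding vector from your embedding model."""
    raise NotImplementedError


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def voice_drift_check(recent_outputs: list[str], reference_passages: list[str],
                      threshold: float = 0.80) -> list[int]:
    """Flag the indexes of recent outputs whose best similarity to any reference
    passage falls below the threshold. Calibrate the threshold per client against
    known-good deliverables."""
    refs = [embed(p) for p in reference_passages]
    flagged = []
    for i, out in enumerate(recent_outputs):
        vec = embed(out)
        best = max(cosine_similarity(vec, r) for r in refs)
        if best < threshold:
            flagged.append(i)
    return flagged
```

Flagged pieces still go to a human editor; the point is to decide which pieces to pull into the periodic side-by-side review, not to score voice automatically.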
Tooling Stack
Five categories of tools support AI QA at scale:
- Editorial workflow: Notion, Coda, Linear, or your project management tool with rubric templates.
- Evaluation harness: Custom scripts using a model API, or platforms like Braintrust, Langfuse, Helicone.
- Fact-checking: Manual workflows plus model-assisted claim extraction.
- Plagiarism and originality: Originality.ai, Copyscape, GPTZero (with healthy skepticism).
- Accessibility: axe, WAVE, Lighthouse for web deliverables.
Mature agencies often build a lightweight internal dashboard pulling rubric scores, cycle times, and revision counts into a single view. The agency dashboard software guide covers options.
Handling Mistakes That Reach the Client
Sometimes a defective deliverable will reach a client. When it does:
- Acknowledge quickly without defensiveness.
- Investigate the root cause (prompt, model update, workflow gap, producer error).
- Fix the systemic issue so it does not recur.
- Communicate the fix to the client with specifics.
- Document the incident in your QA log.
Agencies that handle quality incidents well usually retain the client. Agencies that hide or minimize them lose the client.
Compliance and Regulated Industries
For clients in regulated industries (healthcare, financial services, legal, food, alcohol), QA needs additional layers:
- A documented review policy that satisfies the client's compliance team.
- A reviewer with domain expertise for every deliverable.
- A claims log with sources for every regulated claim.
- A retention policy for deliverables and review records.
- Periodic external audits if the client requires it.
For example, healthcare marketing has specific HIPAA and FDA constraints. The post on healthcare marketing compliance goes deeper. For financial services, see fintech marketing regulations. For broader privacy posture, see the agency data privacy compliance guide.
Measuring QA Effectiveness
Track these metrics monthly:
- Average rubric score per service line, per producer, per client.
- Revision rate (percent of deliverables requiring rework).
- Client-reported defects per month per service line.
- Cycle time including QA loops.
- Sampled automated score versus human spot-check correlation.
- Incidents that reached client, with root cause categorization.
Tracking these over time tells you whether QA is keeping pace with production volume, prompt updates, and model changes. The most common pattern in agencies that scale AI poorly is rising production volume with falling rubric scores. Catching this early prevents reputation damage. McKinsey has documented similar patterns across other generative AI rollouts (McKinsey on the state of AI).
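If rubric scores and review records land in a flat log like the ones sketched earlier, the monthly rollup is a few lines of aggregation. Column names below are illustrative and should match whatever your log actually records.

```python
from __future__ import annotations
import statistics
from collections import defaultdict


def monthly_rollup(rows: list[dict]) -> dict:
    """rows: one dict per reviewed deliverable for the month, e.g.
    {"service_line": "...", "avg_rubric_score": 4.4,
     "revision_rounds": 1, "client_reported_defect": False}."""
    by_line: dict[str, list[dict]] = defaultdict(list)
    for r in rows:
        by_line[r["service_line"]].append(r)

    report = {}
    for line, items in by_line.items():
        report[line] = {
            "deliverables_reviewed": len(items),
            "avg_rubric_score": statistics.mean(r["avg_rubric_score"] for r in items),
            "revision_rate": sum(r["revision_rounds"] > 0 for r in items) / len(items),
            "client_reported_defects": sum(r["client_reported_defect"] for r in items),
        }
    return report
```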
Common Mistakes That Cause Quality Failures
A short list of patterns to avoid:
- Skipping editorial review to compress cycle time.
- No rubric or inconsistent rubrics across editors.
- No sampled automated evaluation so drift goes undetected.
- No alerts for score drops after model or prompt updates.
- No domain reviewer for regulated content.
- Ignoring client feedback when it indicates voice drift.
Frequently Asked Questions
How much QA is enough?
Every deliverable should pass editorial review, and 5 to 10 percent of weekly output should go through sampled automated evaluation. High-stakes or regulated content should pass an additional domain review. Adjust thresholds based on client requirements and the consequences of a defect reaching the client.
How do we catch model updates that degrade quality?
Run an automated evaluation harness weekly and alert when average rubric scores drop below an agreed threshold. Compare scores against a rolling baseline, not a fixed historical number. When scores drop, freeze production temporarily and investigate the prompt, the model, or the workflow.
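A minimal sketch of the rolling-baseline comparison; the window size and drop tolerance are illustrative and should be tuned to your volume:

```python
import statistics


def drift_alert(weekly_averages: list, window: int = 8,
                drop_tolerance: float = 0.3) -> bool:
    """weekly_averages: average rubric score per week, oldest first, current week last.
    Alerts when the current week falls more than drop_tolerance below the rolling
    baseline of the preceding weeks."""
    if len(weekly_averages) < window + 1:
        return False  # not enough history to form a baseline yet
    baseline = statistics.mean(weekly_averages[-(window + 1):-1])
    current = weekly_averages[-1]
    return current < baseline - drop_tolerance
```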
Who should own QA in an agency?
A senior editorial lead or a head of quality. They own the rubric, the evaluation harness, the prompt library quality, and the periodic training of producers. In smaller agencies, the founder or COO often owns this until volume justifies a dedicated role.
How do we handle hallucinations?
Extract every factual claim, verify each against a primary source, log the source, and flag unverifiable claims for removal or rewording. For high-stakes work, require a human fact-checker on every deliverable. For lower-stakes work, sample with periodic spot-checks. Train producers to recognize the failure mode.
Should we tell clients what part of their deliverables was AI-generated?
Default to yes when asked, and yes proactively if your contract or industry expectations require disclosure. Most clients in 2026 know AI is part of agency workflows. Transparency about which parts are automated and which are human-reviewed builds more trust than vague answers.
Need to operate AI-driven production at scale without losing quality control or visibility into utilization? AgencyPro centralizes project management, capacity planning, and client portals into one operational layer designed for modern agencies. Book a demo to see how editorial and QA workflows fit together with billing and reporting.
