Agency Operations

Quality Assurance for Agencies: A Scalable QA Process

Agency QA at every lifecycle stage — brief, draft, internal review, client review, delivery — with per-discipline checklists, defect tracking, and benchmarks.

Asad Ali
16 min read
Tags: quality assurance, agency QA, quality control, agency process

A 19-person brand agency in Toronto ships a homepage redesign Friday at 5pm. By Monday morning the client has emailed eleven errors: two typos in the hero, a broken pricing-page link, a logo at 60% opacity on iPad, an SEO title that still says "Untitled Page," and six misalignments at the 768px breakpoint. The agency spends a chunk of Monday and Tuesday on emergency fixes, the account lead absorbs a 90-minute call apologizing, and the client renegotiates the next project quote citing "trust issues." This is what poor QA actually costs at an agency — not a single bug, but the compounding tax on margin, renewals, and reputation. This guide breaks down QA stage by stage across an agency's delivery lifecycle, lays out per-discipline checklists for design, copy, dev, and video, and describes a defect-tracking system that turns every catch into a process improvement.

The bottom line:

  • Agency QA is not one step; it is a 5-stage pipeline from brief through delivery, with progressively tighter checks
  • Per-discipline checklists matter more than universal ones — a copy QA list and a dev QA list share almost no items
  • The cost curve is real: a defect caught at brief stage costs roughly 1 unit; caught after client launch costs 50 to 100 units
  • Most agency defects trace to four root causes: rushed handoffs, unclear briefs, missing checklists, and utilization above 88%
  • A working defect-tracking system is the single highest-leverage investment in agency quality

The agencies that ship clean work consistently are not the ones with more talented people — they are the ones with a process that does not depend on individual heroics.

The Real Economics of Agency Quality Defects

The IBM Systems Sciences Institute data on the cost of fixing defects late is widely cited but not directly tuned for agencies. Here is the agency-specific math.

| Stage where defect is caught | Cost multiplier | Typical example |
| --- | --- | --- |
| Brief / scope review | 1x | Strategist catches contradiction in client deliverable list |
| Self-review (creator) | 2x | Copywriter catches typo before sending to editor |
| Peer review (internal) | 5x | Designer catches misalignment in dev's QA |
| Final QA (account lead) | 10x | AM catches missing CTA before client review |
| Client review | 25x | Client catches off-brand color, requires re-export + re-review |
| Post-launch / live | 50 to 100x | Wrong product name on live site, social embarrassment, emergency fix |

The math is brutal: a single typo caught at brief stage might cost 30 seconds; caught after launch, that same typo can require 2 to 4 hours of emergency response, client apology, fix, redeploy, and post-mortem.

A 15-person agency with even 4 to 6 client-caught defects a month is hemorrhaging 40 to 80 hours of capacity into emergency fixes, plus untracked relationship damage.
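
If you log where each defect was caught, the multiplier table doubles as a quick capacity estimate. Here is a minimal sketch in Python; the 30-minute base fix time and the 75x midpoint for post-launch defects are illustrative assumptions, not benchmarks:

```python
# Estimate monthly rework hours from defect counts per stage, using the
# multiplier table above. base_fix_hours is an assumed average, not a benchmark.

COST_MULTIPLIER = {
    "brief": 1, "self": 2, "peer": 5, "final": 10, "client": 25, "post_launch": 75,
}

def monthly_rework_hours(defects_by_stage: dict[str, int], base_fix_hours: float = 0.5) -> float:
    """Rework hours = defects caught at each stage x stage multiplier x base fix time."""
    return sum(
        count * COST_MULTIPLIER[stage] * base_fix_hours
        for stage, count in defects_by_stage.items()
    )

# A month with 5 client-caught defects and 1 post-launch defect:
print(monthly_rework_hours({"client": 5, "post_launch": 1}))  # 100.0 hours
```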

The 5-Stage Agency QA Pipeline

Effective agency QA is not a step before delivery — it is a pipeline with checks at five stages, each catching different defect classes.

| Stage | Who reviews | Focus | Time budget |
| --- | --- | --- | --- |
| 1. Brief QA | Strategist + AM | Logic, scope, completeness, success criteria | 30 to 60 minutes |
| 2. Draft / self-review | Creator | Spec match, obvious errors, brand basics | 15 to 30 minutes per asset |
| 3. Internal peer review | Same-discipline peer | Craft, consistency, technical correctness | 30 to 60 minutes per asset |
| 4. Account / final QA | AM or senior reviewer | Client perspective, scope match, polish | 60 to 90 minutes per deliverable |
| 5. Pre-delivery / launch QA | QA lead or senior | End-to-end, real environment, edge cases | 1 to 4 hours per launch |

Stage 1: Brief QA — The Defect That Costs Nothing To Fix

Most quality problems originate at the brief, not at execution. A vague brief produces work that misses the mark, which produces revisions, which produces rushed re-work, which produces defects.

Brief QA checklist (every brief, before kickoff):

  • Is the deliverable list specific (count, format, dimensions)?
  • Is success defined in measurable terms?
  • Is the target audience explicit?
  • Are competitive and reference materials included?
  • Is the approval chain explicit?
  • Are out-of-scope items explicitly listed?
  • Is the timeline realistic, with buffer for revisions?
  • Is the responsible owner on each step named?

A 15-minute brief review by the strategist or AM before the team starts production prevents an average of 1 to 2 mid-project pivots, each of which can cost 6 to 12 hours.

Stage 2: Self-Review (The Creator)

The creator runs a discipline-specific self-review before passing work on. This is the cheapest catch in the pipeline and the most frequently skipped under deadline pressure.

Stage 3: Internal Peer Review

After self-review, work goes to a same-discipline peer (designer reviews designer, dev reviews dev). The peer is not imposing taste — they are running a checklist.

Stage 4: Account / Final QA

Before anything goes to the client, an AM or senior reviewer runs a final pass from the client's perspective. They are not checking the same things the peer reviewer did — they are checking scope match, narrative flow, and whether the package feels considered.

Stage 5: Pre-Delivery / Launch QA

For deliverables going live (websites, campaigns, video airings), a final environment-aware QA is non-negotiable. This is where you catch the broken redirect that worked on staging but not production.

Per-Discipline QA Checklists

The single biggest mistake agencies make is using a universal checklist. The checklists below are discipline-specific and represent the minimum viable list per discipline.

Design QA Checklist

Brand and consistency:

  • Colors match approved palette to hex (no eyeballing)
  • Typography hierarchy follows style guide
  • Logo clear-space and minimum-size rules respected
  • Iconography and illustration style consistent
  • Photography matches approved brand direction

Layout and craft:

  • Alignment to 4 or 8 px grid throughout
  • Spacing consistent (no random margins)
  • Typography optical adjustments applied
  • No widows or orphans in headlines
  • Text contrast meets WCAG AA at minimum

Responsive:

  • All specified breakpoints reviewed (typically 1440, 1024, 768, 375)
  • Touch targets minimum 44 x 44 pt per Apple HIG
  • No content cut off, hidden, or unreadable at any breakpoint
  • Critical CTAs visible above the fold at every breakpoint

File hygiene:

  • Layers named and organized
  • Components used (not detached copies)
  • Final assets exported at correct DPI and format
  • Naming convention followed in deliverable files

Copy QA Checklist

Accuracy:

  • All facts, stats, claims verified with source links
  • Names, titles, dates, prices verified against source
  • Legal disclaimers and required disclosures included
  • Trademark and registered marks placed correctly

Voice and tone:

  • Match the brand voice guide (cross-checked, not assumed)
  • Industry terminology used correctly
  • Audience reading level appropriate (Hemingway score or similar)
  • No internal jargon that confuses the audience

Mechanics:

  • Spelling and grammar passed through two checks (one human, one automated)
  • Punctuation consistent (Oxford comma decision applied consistently)
  • Capitalization rules followed (title case vs. sentence case decided once)
  • No double spaces, smart quotes consistent

Format-specific:

  • Headlines under defined character limit
  • Meta descriptions 140 to 160 characters
  • CTAs action-verb-led, not "Click here"
  • Body copy paragraphs scannable, not wall-of-text
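
Most of the mechanics and format rules above are automatable. A minimal sketch of a pre-flight copy check; the 60-character headline cap is an assumed spec value you would replace with the brief's own limit:

```python
# Mechanical copy checks mirroring the format-specific list above.
# HEADLINE_LIMIT is an assumed spec value; replace it with the brief's limit.
HEADLINE_LIMIT = 60
META_MIN, META_MAX = 140, 160

def check_copy(headline: str, meta_description: str, cta: str, body: str) -> list[str]:
    issues = []
    if len(headline) > HEADLINE_LIMIT:
        issues.append(f"Headline is {len(headline)} chars (limit {HEADLINE_LIMIT})")
    if not META_MIN <= len(meta_description) <= META_MAX:
        issues.append(f"Meta description is {len(meta_description)} chars (want {META_MIN}-{META_MAX})")
    if cta.strip().lower() == "click here":
        issues.append('CTA is "Click here"; lead with an action verb instead')
    if "  " in body:
        issues.append("Body contains double spaces")
    return issues

print(check_copy("Ship cleaner work", "Too short", "Click here", "Body  text"))
```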

Web Dev QA Checklist

Functional:

  • All nav links work and go to intended destination
  • All forms submit successfully and trigger expected actions
  • All interactive elements behave per spec
  • 404 page renders correctly for bad URLs
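
The link checks above are the easiest to script. A minimal sketch that fetches a page, resolves every anchor, and reports anything returning an error status; it assumes the third-party requests and beautifulsoup4 packages, and the staging URL is a placeholder:

```python
# Fetch a page, follow every <a href>, and report links returning 4xx/5xx.
# Requires: pip install requests beautifulsoup4. START_URL is a placeholder.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://staging.example.com/"  # placeholder

def broken_links(start_url: str) -> list[tuple[str, int]]:
    html = requests.get(start_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hrefs = {urljoin(start_url, a["href"]) for a in soup.find_all("a", href=True)}
    broken = []
    for url in sorted(hrefs):
        if urlparse(url).scheme not in ("http", "https"):
            continue  # skip mailto:, tel:, javascript: links
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        if status == 405:  # some servers reject HEAD; retry with GET
            status = requests.get(url, timeout=10).status_code
        if status >= 400:
            broken.append((url, status))
    return broken

for url, status in broken_links(START_URL):
    print(f"{status}  {url}")
```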

Cross-browser and device:

  • Latest Chrome, Safari, Firefox, Edge on desktop
  • iOS Safari and Android Chrome on mobile
  • One older browser version per stated browser support matrix

Performance:

  • Largest Contentful Paint under 2.5 seconds
  • Cumulative Layout Shift under 0.1
  • Interaction to Next Paint (INP) under 200ms (the metric that replaced First Input Delay's 100ms target)
  • Lighthouse score above 85 for performance, accessibility, SEO
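
These budgets can be asserted on every deploy. A minimal sketch against Google's public PageSpeed Insights v5 API, which runs Lighthouse remotely; note that INP is a field metric a lab run does not capture, so the sketch covers LCP, CLS, and the performance score. The staging URL is a placeholder:

```python
# Assert lab Core Web Vitals budgets via the public PageSpeed Insights v5 API.
# Thresholds mirror the checklist above. Requires: pip install requests
import requests

PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def audit(url: str) -> list[str]:
    resp = requests.get(PSI, params={"url": url, "strategy": "mobile"}, timeout=120)
    lh = resp.json()["lighthouseResult"]
    lcp_ms = lh["audits"]["largest-contentful-paint"]["numericValue"]
    cls = lh["audits"]["cumulative-layout-shift"]["numericValue"]
    perf = lh["categories"]["performance"]["score"] * 100
    failures = []
    if lcp_ms > 2500:
        failures.append(f"LCP {lcp_ms:.0f} ms (budget 2500 ms)")
    if cls > 0.1:
        failures.append(f"CLS {cls:.3f} (budget 0.1)")
    if perf < 85:
        failures.append(f"Performance score {perf:.0f} (budget 85)")
    return failures

print(audit("https://staging.example.com/") or "within budget")  # placeholder URL
```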

Accessibility:

  • Automated scan via axe or WAVE passes with no critical issues
  • Keyboard nav reaches every interactive element
  • Screen reader test on critical flows
  • Alt text on every image that conveys meaning

Security and integrity:

  • SSL valid and HSTS configured
  • No secrets in client-side code
  • Form inputs sanitized
  • Dependencies scanned for known vulnerabilities

Pre-launch only:

  • Redirects mapped and tested
  • Analytics firing on key events
  • Production vs. staging environment confirmed
  • Backup and rollback plan documented
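
The redirect map deserves its own script. A minimal sketch that reads source/expected pairs from a CSV and confirms each source actually lands on its destination; the file name and two-column format are assumptions:

```python
# Verify a redirect map: each source URL should resolve to its expected target.
# Assumed CSV format: source_url,expected_url per line, no header row.
import csv

import requests

def failed_redirects(path: str = "redirects.csv") -> list[str]:
    failures = []
    with open(path, newline="") as fh:
        for source, expected in csv.reader(fh):
            final = requests.get(source, allow_redirects=True, timeout=10).url
            if final.rstrip("/") != expected.rstrip("/"):
                failures.append(f"{source} -> {final} (expected {expected})")
    return failures

for line in failed_redirects():
    print(line)
```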

Video and Motion QA Checklist

  • Audio levels normalized (dialog around -12 LUFS, peaks not clipping; see the measurement sketch after this list)
  • Caption file (SRT or VTT) accurate and synced
  • Color graded against brand reference
  • Frame rate and resolution match delivery spec
  • Lower-thirds and supers spelled correctly
  • Logo placements per brand book
  • Final export tested in actual delivery platform (YouTube, broadcast, social)
  • Versioned files named per convention with date and version
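
The loudness item at the top of this list is measurable rather than a judgment call: ffmpeg's loudnorm filter prints integrated loudness and true peak as JSON during its analysis pass. A minimal sketch, assuming ffmpeg is on the PATH; the file name is a placeholder:

```python
# Measure integrated loudness (LUFS) and true peak with ffmpeg's loudnorm filter.
# Assumes ffmpeg is installed; the filter prints its JSON report to stderr.
import json
import subprocess

def measure_loudness(path: str) -> dict:
    result = subprocess.run(
        ["ffmpeg", "-i", path, "-af", "loudnorm=print_format=json", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # The report is the last brace-delimited block on stderr (loudnorm's JSON is flat)
    return json.loads(result.stderr[result.stderr.rindex("{"):])

stats = measure_loudness("final_cut.mp4")  # placeholder file name
print(f"Integrated: {stats['input_i']} LUFS, true peak: {stats['input_tp']} dBTP")
```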

Defect Tracking: The System That Turns Catches Into Improvements

A defect log is not bureaucracy. It is how you turn every QA catch into a process improvement, and how you spot patterns across projects.

Minimum defect log fields:

  • Date caught
  • Project / client
  • Stage caught (brief, self, peer, final, client, post-launch)
  • Discipline (design, copy, dev, video)
  • Defect category (typo, alignment, broken link, scope mismatch, etc.)
  • Severity (critical, major, minor, cosmetic)
  • Time to fix (estimated)
  • Root cause (rushed, brief unclear, missing check, etc.)
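
If the log lives in a spreadsheet, these fields are the columns. As a data structure, a minimal sketch in Python with the allowed values taken straight from the field list above:

```python
# One defect log entry, mirroring the minimum fields above.
from dataclasses import dataclass
from datetime import date

STAGES = ("brief", "self", "peer", "final", "client", "post_launch")
SEVERITIES = ("critical", "major", "minor", "cosmetic")

@dataclass
class Defect:
    caught_on: date
    project: str        # project / client
    stage: str          # one of STAGES
    discipline: str     # design, copy, dev, video
    category: str       # typo, alignment, broken link, scope mismatch, ...
    severity: str       # one of SEVERITIES
    fix_hours: float    # estimated time to fix
    root_cause: str     # rushed, brief unclear, missing check, ...
```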

Run a monthly defect review where you look for patterns:

  • Which categories appear most often?
  • Which stages are leaking the most defects to client review?
  • Which projects had outlier defect counts (and why)?
  • Which root causes recur?
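
Each of these questions is a one-line aggregation over the log. A minimal sketch, continuing from the Defect records above:

```python
# Monthly defect review: answer the four pattern questions with simple counts.
from collections import Counter

def monthly_review(defects: list[Defect]) -> None:
    print("Top categories:", Counter(d.category for d in defects).most_common(5))
    print("Leaked to client or later:",
          sum(d.stage in ("client", "post_launch") for d in defects))
    print("By project:", Counter(d.project for d in defects).most_common(5))
    print("Top root causes:", Counter(d.root_cause for d in defects).most_common(5))
```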

A 20-person agency that runs this for 90 days typically finds that 3 to 4 specific defect categories account for 60% of all client-caught issues — and once named, they can be eliminated almost entirely with targeted checklist updates.

The Four Root Causes of Agency Defects

Across most defect logs we have seen, four root causes dominate.

1. Rushed handoffs. Designer to developer, copy to design, draft to QA. Most handoff defects are caused by missing context, not missing talent. Solution: a mandatory handoff checklist that must be completed before the task can change status.

2. Unclear or incomplete briefs. Briefs that miss success criteria, scope edges, or approval chains. Solution: Stage 1 brief QA.

3. Missing or unused checklists. Either the checklist does not exist, or it exists and people skip it under deadline pressure. Solution: bake checklists into the project management tool as required subtasks.

4. Utilization above 88%. This is the structural one. Per SPI Research, defect rates climb non-linearly above 85% utilization. When the team is overcommitted, QA is the first thing cut. The fix is staffing, not process.

For utilization specifically, see the agency capacity planning guide.

QA Time Budget by Project Type

QA must be scheduled into the project, not absorbed by the team's evenings.

| Project type | QA as % of production hours | Typical hours |
| --- | --- | --- |
| Single-page landing | 12% | 4 to 8 hours |
| 10-page marketing site | 15% | 30 to 50 hours |
| Brand identity package | 12% | 12 to 18 hours |
| Multi-channel campaign | 18% | 40 to 80 hours |
| Web application / platform | 20 to 25% | Variable, often 100+ |
| Video production | 15% | Per minute of final cut |

When QA gets cut to fit a deadline, the defect rate climbs predictably and the rework debt comes due later — usually with a client apology attached.

A Scenario: Cutting Defect Rate by 70%

A 24-person digital agency in Chicago was averaging 9 to 12 client-caught defects per major delivery. The cost was real — they estimated 60 to 90 hours a month in emergency fixes, plus two strained client relationships.

They implemented the 5-stage pipeline over 8 weeks:

  • Week 1 to 2: Built discipline-specific checklists, attached as required subtasks in their PM tool
  • Week 3 to 4: Trained team on stages, started defect log
  • Week 5 to 6: First monthly defect review identified that 64% of defects were in two categories — broken links and brand color mismatches
  • Week 7 to 8: Targeted checklist updates plus a 5-minute pre-handoff "color/link scan" specific to those two categories

By month 4, client-caught defects had dropped to 2 to 3 per delivery, and emergency-fix hours fell by roughly 65 hours a month. The annualized recovery: about $58K in capacity plus material trust-building with clients.

Scaling QA by Agency Size

Under 10 people. QA is embedded in the producer's role, with a senior eye on the final step. Checklists matter most. A "buddy QA" pair model works.

10 to 30 people. A part-time or rotating QA lead role emerges. QA time is budgeted explicitly. Defect log is monthly.

30 to 50 people. Dedicated QA function or specialists per discipline. Automated tooling for dev (Lighthouse, axe, link checkers in CI). Quarterly QA retros.

50+ people. Full QA team, often with discipline-specific leads. QA metrics in performance dashboards. Cross-project quality standards governed by an ops or production director.

Common QA Failure Modes

Checklist theater. Boxes checked without actually checking. Solution: random spot-audits where a senior re-runs a checklist and compares findings.

QA as last-step bottleneck. If everything piles up at QA the week of delivery, you have a planning problem, not a QA problem.

QA without authority. A QA reviewer who cannot block delivery is decorative. Make the sign-off binding.

No defect log. Without a log, every catch is a one-off. With a log, every catch is data.

Blaming individuals. A defect-rich week is a process signal, not a person signal. The Deming "plan-do-check-act" mindset matters more than blame.

Building a Quality Culture That Scales

Process alone does not produce quality. Culture has to value catches over output.

  • Celebrate the catch, not just the ship. The designer who caught the wrong logo at peer review prevented a $4,000 reprint.
  • Run blameless post-mortems on every client-caught defect. The question is "what in our process let this through," not "who did this."
  • Make checklists living documents. Every new defect category adds a line. Stale checklists get pruned.
  • Tie quality metrics to the project P&L. Show the team how rework hours destroy margin.

Quality is a leading indicator of retention. Per the SPI Research benchmark, agencies in the top quartile of delivery quality have 18 to 24% higher client renewal rates than the bottom quartile.

Frequently Asked Questions

How much time should agencies budget for quality assurance?

Plan 12 to 20 percent of project hours for QA, scaled to risk: landing pages near the low end, multi-channel campaigns and web apps near the high end. Below 8% means defects reach clients regularly; above 25% usually signals process inefficiency rather than craft. Bake QA in as line items, not as the buffer that gets squeezed.

Who should do QA on agency deliverables?

Use separate reviewers from the creator at every stage past self-review: a same-discipline peer for craft, then an AM or senior for the client perspective. Self-review catches roughly 30% of defects; second-eye review catches 70 to 85%; third-eye final QA catches another 5 to 10%. Skip any layer and the misses go to the client.

What is the most common cause of agency quality issues?

Utilization above 88% combined with deadline-driven QA cuts. When the team is structurally over-committed, the checklist gets skipped, the handoff is rushed, and the brief is glossed. The fix is capacity planning, not better checklists — checklists do not help when there is no time to run them.

Should QA be a separate role or part of the producer's job?

Under 20 people, embed QA in producer roles with a senior reviewer on final QA. Past 20, a part-time dedicated QA lead pays off. Past 40 to 50, expect at least one full-time QA role per discipline cluster. The agency hiring guide covers the sequencing for production roles.

How do you build a QA culture without slowing the team down?

Celebrate catches, run blameless defect reviews, and make checklists part of the workflow rather than something separate. Pair this with realistic timelines that include QA from the start. The culture follows the system: when leadership protects QA time even under deadline pressure, the team learns it is non-negotiable.


Ready to wire QA checklists directly into your agency's workflow so they cannot be skipped? Try AgencyPro free to attach discipline-specific QA subtasks to every project — and stop letting defect rate drift with utilization.

About the Author

Asad Ali

Co-Founder & CTO at AgencyPro. Full-stack engineer building tools for modern agencies.
