30+ Web Developer Interview Questions for Agency Hiring

A practical bank of questions, what to listen for, and red flags to help you hire an agency web developer who can ship quality work across multiple stacks, partner with design and PM, and hand projects off so they keep running long after launch.

Why these questions?

Agency web developers are not in-house product engineers. They ship across two to four projects in a quarter, navigate stacks picked by clients or pre-sales, defend launch quality against scope and timeline pressure, and increasingly partner with AI tooling without merging unreviewed code. The questions below are built around those realities rather than generic dev trivia. Use the entire bank as a library and pick eight to twelve that fit the seniority and stack mix you're hiring for.

General & background (5)

Walk me through how you ended up doing web development at agencies.

What to listen for

A coherent arc showing deliberate technical choices: how they picked a stack, why they chose agency over product, and which side of the stack they lean into (front-end, back-end, full-stack, CMS, infra).

Red flags

  • Cannot articulate why they chose agency over product work
  • Cannot describe their stack journey
  • Treats web dev as a fallback after another career

Which part of the stack do you do your best work in, and which side do you lean on others for?

What to listen for

Self-awareness about strengths (front-end craft, integrations, performance, infra, CMS) and honesty about gaps. Knows when to pair or escalate.

Red flags

  • Claims to be expert on every layer of the stack
  • Cannot articulate any gap
  • Refuses to acknowledge weaker areas

How has web development changed in the last 18 months in ways that affect agency clients?

What to listen for

Articulate view on AI-assisted dev, the maturation of headless CMS, the rise of edge runtimes, Core Web Vitals as table stakes, and the impact on estimates and team shape.

Red flags

  • No view on AI in their workflow
  • Still defaults to LAMP for everything
  • Repeats vendor talking points with no synthesis

Why are you leaving your current agency?

What to listen for

Honest, growth-oriented reasons. Even when frustrated about technical debt, scoping, or PM workflows, they should speak fairly about previous shops.

Red flags

  • Trash-talks every previous shop
  • Blames designers, PMs, or clients for all problems
  • Has hopped agencies every year with no clear pattern

What kind of agency engineering culture do you do your best work in?

What to listen for

Self-awareness about preferred team size, code review style, deployment cadence, and pace. Tied to evidence from past roles.

Red flags

  • Says they thrive anywhere
  • Describes only freedom and no accountability
  • Preferences clearly mismatch your shop

Role-specific skills (10)

Walk me through how you scope a new website build for a client.

What to listen for

Discovery questions about content, integrations, traffic, growth, and team. Distinguishes must-have from nice-to-have. Builds a phased estimate with assumptions, exclusions, and risks documented.

Red flags

  • Skips discovery and quotes from a template
  • No assumptions or exclusions
  • Cannot defend any line item in their estimate

How do you choose between Next.js, Astro, WordPress, Webflow, Sanity, and other stacks for a given client brief?

What to listen for

Tied to client team capability, content workflow, performance needs, integrations, budget, and long-term maintenance. Has shipped real projects on multiple options.

Red flags

  • Defaults to one stack regardless of brief
  • Has only ever shipped one tool
  • Picks based on what is trending, not fit

Walk me through how you set up CI/CD and deployment for an agency client project.

What to listen for

Branch strategy, preview deploys, automated tests, environment promotion, secrets management, monitoring, and a rollback plan. Tied to handover-friendly tooling.

Red flags

  • Deploys directly from main
  • No preview environments
  • No rollback plan
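
The setup described above can be sketched as a minimal CI workflow. This is a hypothetical GitHub Actions example under assumed conventions: the `deploy:preview` and `deploy:prod` npm scripts are stand-ins for whatever host CLI the project uses (Vercel, Netlify, Cloudflare), not real commands.

```yaml
# Hypothetical workflow: tests gate everything, every PR gets a
# preview deploy, production deploys only from main.
name: ci
on:
  pull_request:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm test

  preview:
    if: github.event_name == 'pull_request'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # "deploy:preview" wraps the host's CLI; secrets live in the CI
      # secret store, never in the repo.
      - run: npm run deploy:preview
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}

  deploy:
    if: github.ref == 'refs/heads/main'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Rollback plan: most hosts let you re-promote the previous
      # immutable build, which is why every deploy should be one.
      - run: npm run deploy:prod
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```

A candidate who can sketch something like this from memory, and explain why previews and rollback matter for client sign-off, is demonstrating the habits the question probes.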

How do you approach Core Web Vitals and performance on a content-heavy site?

What to listen for

Measures with real-user data, prioritises LCP, INP, CLS work by impact, uses image and font optimisation, lazy loading, edge caching, and budget enforcement in CI.

Red flags

  • Optimises only what Lighthouse complains about once
  • No real-user monitoring
  • No CI-enforced performance budget
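
One common way to enforce the CI budget mentioned above is Lighthouse CI. This is a minimal sketch of a `lighthouserc.json`; the URL and thresholds are placeholder assumptions to be tuned per project, not recommendations.

```json
{
  "ci": {
    "collect": { "url": ["https://example.com/"] },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```

Lab budgets like this catch regressions before merge; they complement, rather than replace, the real-user monitoring the answer should also cover.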

How do you architect a CMS so that the client's content team can actually use it?

What to listen for

Component-driven schemas, sensible field grouping, helpful field descriptions, validation, preview, role-based permissions, and a written editor handbook. Has trained content teams in the past.

Red flags

  • Builds CMS schemas only for developer convenience
  • Never trains the content team
  • No preview or validation
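
As a concrete illustration of editor-friendly schema design, here is a minimal sketch in the shape of a Sanity document schema. The document type, field names, and limits are invented for the example.

```javascript
// Sketch of an editor-friendly CMS schema in the shape of a Sanity
// document type. Field names, descriptions, and limits are illustrative.
const caseStudy = {
  name: "caseStudy",
  title: "Case study",
  type: "document",
  fields: [
    {
      name: "title",
      title: "Title",
      type: "string",
      description: "Shown in the page header and in search results.",
      validation: (Rule) => Rule.required().max(70),
    },
    {
      name: "summary",
      title: "Summary",
      type: "text",
      description: "One or two sentences for listing pages.",
      validation: (Rule) => Rule.required().max(200),
    },
    {
      name: "heroImage",
      title: "Hero image",
      type: "image",
      description: "Landscape, at least 1600px wide.",
      options: { hotspot: true },
    },
  ],
};
```

Every field carries a description and validation, which double as inline documentation and guard rails for the content team the question is about.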

How do you handle accessibility (WCAG 2.2 AA) in your day-to-day work?

What to listen for

Treats accessibility as a build requirement, not an afterthought. Semantic HTML, focus management, keyboard testing, contrast, automated checks in CI, manual testing with screen readers.

Red flags

  • Treats accessibility as the designer's job
  • Only relies on automated audits
  • Cannot name basic WCAG criteria

Walk me through how you collaborate with designers working in Figma.

What to listen for

Reads design tokens, components, and states. Pushes back constructively on inconsistencies. Keeps a tight feedback loop, not a one-shot handoff. Translates motion and interaction faithfully.

Red flags

  • Treats Figma as pixel-perfect lockdown
  • Never pushes back on inconsistencies
  • Cuts motion or interaction silently

How do you approach security on a client website (forms, auth, dependencies, headers)?

What to listen for

Sensible defaults: dependency scanning, security headers, input validation, CSRF, rate limiting, secrets handling, and clear ownership for patches after handover.

Red flags

  • No view on security headers
  • Stores secrets in the repo
  • No post-handover patching plan
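
The "security headers" part of a strong answer can be made concrete as plain data. This is an illustrative baseline, not a definitive set; the values, and the CSP in particular, are assumptions that must be tuned per project.

```javascript
// Sketch of a baseline security-header set for a client site.
// Values are illustrative; the CSP must be tightened per project.
function baselineSecurityHeaders() {
  return {
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Content-Security-Policy": "default-src 'self'",
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
  };
}

// Usage idea (framework-agnostic): apply once in middleware, e.g.
// res.set(baselineSecurityHeaders()) in an Express-style handler.
```

Candidates who keep a default set like this, and can explain what each header does, rarely ship a site with none of them.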

How do you use AI coding tools (Copilot, Cursor, Claude) in your day-to-day work?

What to listen for

Specific use cases (scaffolding, refactors, tests, docs, code review prep) with editorial judgment. Skeptical of fully delegating to the model. Reviews every diff.

Red flags

  • No use of AI at all
  • Merges AI diffs without review
  • Cannot articulate where AI helps and where it hurts

Walk me through how you hand over a project to the client team or to internal support.

What to listen for

Documented runbooks, README, architecture overview, env var inventory, training session, recorded Loom walkthrough, and a known-issues log. Plans for ongoing support, not just "throw it over the wall".

Red flags

  • No documentation at all
  • No training session
  • No known-issues or maintenance plan

Agency-specific scenarios (6)

You have three active client builds, two in QA and one in scoping. How do you allocate your week?

What to listen for

Triages by deadline, blockers, and dependency on others. Time-blocks deep work for new code vs review and meetings. Communicates proactively with PMs about trade-offs.

Red flags

  • Reactive all week
  • No time-blocking
  • No proactive communication about trade-offs

Mid-sprint, the client asks for a "small change" that touches your component architecture. How do you respond?

What to listen for

Captures the request, frames the impact for PM and account, distinguishes a true small change from a redesign, ties real change to a change order. Does not silently absorb the work.

Red flags

  • Just does it without flagging
  • Refuses without offering a path
  • No coordination with PM or account

A site goes down on a Friday afternoon. Walk me through the next 60 minutes.

What to listen for

Calm triage: check monitoring, isolate the cause, communicate to PM and client with a status, roll back if safe, fix root cause, post-incident review.

Red flags

  • Panics or disappears
  • No communication during the incident
  • No post-incident review

You inherit a legacy WordPress or custom PHP project from another agency. How do you approach the first month?

What to listen for

Audit, version control if missing, environments, dependency upgrades, security pass, observability, then a phased modernisation plan with the client's priorities. Avoids rewrite-from-scratch reflex.

Red flags

  • Immediately proposes a full rewrite
  • Refuses to work on it at all
  • No audit before changes

A retainer client asks for "a quick fix" that you know will create technical debt. How do you respond?

What to listen for

Frames the long-term cost, offers two options (quick fix with documented debt, or proper fix with timeline), lets the client choose with full information, captures the debt in a backlog.

Red flags

  • Just does the quick fix silently
  • Refuses without offering options
  • No documentation of the debt

How do you coordinate with designers and PMs on a fixed-fee build that is going over budget?

What to listen for

Surfaces the overrun early, partners with PM on what to cut or change-order, protects launch quality, refuses to silently absorb hours, post-mortem with scoping team.

Red flags

  • Silently absorbs the overrun
  • Surprises everyone at launch
  • No post-mortem with scoping

Behavioral / STAR (5)

Tell me about a build you are most proud of. Situation, your role, outcome.

What to listen for

STAR with specifics: brief, technical decisions, their personal contribution, measurable outcome (performance, conversion, client satisfaction, longevity).

Red flags

  • Cannot separate their contribution from the team's
  • No outcome data
  • Project is purely about tech with no business framing

Describe a project that did not go well technically.

What to listen for

Honest about the failure, names what went wrong (architecture, scope, communication, dependency), what they learned, how they applied that learning since.

Red flags

  • Cannot name a failure
  • Blames designers, PMs, or clients entirely
  • No structured learning afterward

Tell me about a time you delivered bad technical news to a client (delay, scope change, security issue).

What to listen for

Prepared, direct, offered options, took responsibility where warranted, followed up in writing.

Red flags

  • Delegated the conversation
  • Buried the news
  • No written recap

Describe a disagreement with a designer or PM about implementation.

What to listen for

Treats other functions as partners, brings constraints clearly, finds a path that works for both, follows up after build to see if the trade-off held.

Red flags

  • Steamrolled the other function
  • Gave up and built it badly
  • Escalated without aligning first

Tell me about a time you mentored a junior developer.

What to listen for

Specific person, how they paired, what they delegated, how they gave feedback, evidence of growth. Treats mentorship as part of the role.

Red flags

  • Cannot name a mentee
  • Treats juniors as ticket fodder
  • No structured pairing or feedback

Technical & portfolio review (4)

Walk me through a recent project from your portfolio. Architecture, key decisions, what you would do differently.

What to listen for

Articulate explanation of stack and architecture, defends key decisions with reasoning, honest about what they would change. Comfortable diagramming.

Red flags

  • Cannot explain architecture clearly
  • No view on what they would change
  • Cannot separate their work from the team's

Show me a recent pull request you opened. Walk me through your decisions.

What to listen for

Clean diff, clear commit messages, tests where appropriate, well-written PR description, evidence of self-review, responsive to comments.

Red flags

  • Massive PRs with no description
  • No tests on logic that needed them
  • Defensive about review feedback

How do you write tests in an agency context where budgets are tight?

What to listen for

Pragmatic: critical path E2E, unit tests for business logic and edge cases, smoke tests in CI. Knows when to invest and when not to. Refuses to ship critical features untested.

Red flags

  • Writes no tests at all
  • Writes tests for everything regardless of value
  • Cannot articulate test strategy

Which tools, including AI, are you fluent in and how do they fit your workflow?

What to listen for

Specific, opinionated answer that goes beyond a framework. Has a point of view on AI in coding, code review, and infra. Curates tools rather than collecting them.

Red flags

  • No view on AI at all
  • Adopts every tool with no curation
  • Refuses to learn anything new

Culture fit (3)

What kind of technical work do you refuse to do, and why?

What to listen for

Has a clear ethical and craft floor: dark patterns, tracking without consent, deliberate accessibility neglect, security shortcuts. Has acted on this in the past.

Red flags

  • No floor at all
  • Floor is purely about taste
  • Has never had to act on it

When you disagree with a tech lead or PM on architecture or estimates, what do you do?

What to listen for

Direct, private disagreement first, brings evidence, commits publicly when overruled, revisits with results.

Red flags

  • Goes silent and grumbles
  • Never disagrees
  • Lets the disagreement become passive aggression

What would your first 90 days look like in this role?

What to listen for

Listen-and-learn plan, ramp on existing codebases, identify two or three quick wins, build relationships with designers, PMs, and account.

Red flags

  • Arrives with a prescriptive overhaul before listening
  • Plans to rewrite everything in their preferred stack
  • No relationship-building plan

Work-sample evaluation

Strong developer candidates can walk you through artifacts they have produced. Ask for:

  • A recent live build with the architecture and key decisions.
  • A pull request from a real codebase (open source if available).
  • A README or runbook from a project they handed over.
  • An incident write-up or post-mortem they authored.
  • A short narrative of one engagement from scope to launch and beyond.

Refusal to share anything at all is a signal. So is sharing only screenshots with no architecture or code conversation behind them.

Frequently asked questions

How long should a web developer interview process be?

Most agencies run three to four stages over two to three weeks: a recruiter screen, a technical conversation with a senior dev or CTO, a paid take-home or pair-programming session, and a panel with PM and design leads. Anything longer than four weeks tends to lose strong candidates.

Should web developers complete a take-home test?

A short paid task (three to five hours) on a realistic problem is reasonable and predictive. Always pay for it. Pair-programming on a real problem is a strong alternative. Avoid week-long projects that ask for free production work.

What is the biggest predictor of success for an agency developer?

Pragmatism paired with craft. Candidates who can ship on time, document their work, partner with designers and PMs, and pick the right tool for the brief consistently outperform those hired purely on framework expertise.

Should we hire a generalist or a stack specialist?

Most agencies need a strong generalist who can shoulder the bulk of the work, paired with depth in one or two specialisms (performance, infra, headless CMS). Hire for the gap on the team rather than for a "perfect" full-stack engineer who can do everything.

How important is AI tooling fluency when hiring a developer in 2026?

Increasingly central. Candidates should have a clear point of view on where AI helps and where it creates risk in agency work. Lack of curiosity here is a stronger red flag than not knowing one specific framework.

Run your agency like it's 2026

AgencyPro gives engineering teams the project visibility, time tracking, and client reporting they need to ship at quality across multiple stacks without burning out.

Book a demo