The 80% job: how design leads are using AI — and it’s not about mockups

Design leads spend 80% of their time communicating, aligning, and justifying. That’s exactly where AI helps most.

A drawing of a man looking straight at the camera surrounded by a crowded desk
Image generated with Midjourney

Part 1: The Reality of Design Leadership

Here’s what my job description says I do: set design vision, mentor designers, elevate craft quality, drive innovation.

Here’s what I actually do: run 1:1s, mediate conflicts, write justifications for decisions, align with stakeholders, update Jira, answer Slack, prepare for meetings, sit in meetings, follow up on meetings.

I lead a team of 7 designers at a European bank. I can’t remember the last time I opened Figma for actual design work.

This isn’t a complaint. This is the job. And if you’re a design lead, you probably recognize it.

(Whether your title is Design Lead, Chapter Lead, Head of Design, or Design Manager — if you’re responsible for other designers and their output, this is for you.)

The Myth vs The Reality

When I was an individual contributor, I assumed design leads spent their days reviewing designs, setting creative direction, and occasionally stepping in to solve tough UX problems.

Then I became one.

Megan Schofield, Experience Design Manager at Google, put it perfectly in an interview with Abstract:

“I would be inclined to propose that we, as designers, actually spend a significant amount of time, not designing, but in fact communicating, reviewing, justifying and defending.”

Here’s what the typical breakdown actually looks like:

What job descriptions say:

– 30% Design strategy & vision
– 25% Mentoring & craft elevation
– 20% Stakeholder collaboration
– 15% Process improvement
– 10% Hands-on design work

What the calendar says:

– 30% Meetings (syncs, reviews, alignments, 1:1s)
– 25% Communication (Slack, email, status updates)
– 20% People management (coaching, conflict resolution, career conversations)
– 20% Justification (proposals, business cases, defending decisions)
– 5% Operations (Jira, planning, HR, tools, budgets)

That’s 100% before you even get to design.

Notice what’s missing? Exactly.

This isn’t because we’re bad at our jobs. It’s because the job isn’t what we thought it was.

Why This Happens

This isn’t a failure of time management. It’s structural.

Design leads sit at an intersection that no one else occupies. You’re translating between:

– Designers who think in users and flows
– Developers who think in systems and constraints
– Business stakeholders who think in metrics and deadlines

You’re the only one in the room who speaks all three languages. So you become the translator, the mediator, the justifier.

Add to that: most design teams exist inside non-design hierarchies. You report to Product, or Engineering, or in my case, a Tribe Lead responsible for multiple disciplines.

This means every design decision carries a “justification tax” — the time spent explaining, documenting, and defending choices that other disciplines make in a quick conversation.

When your engineering counterpart says “we’ll use microservices,” nobody asks for a slide deck. When you say “we need to simplify this flow,” suddenly you need user data, business impact, and a prototype to prove it.

That’s the job. And once you accept it, you can start optimizing for it.

The 80% Opportunity

Here’s where it gets interesting.

If 80% of the job is communication, documentation, and justification — that’s exactly where AI tools excel.

And yes, AI is already changing design work itself. Tools like Cursor, Lovable, and Figma AI mean we’re not starting from scratch anymore — not on code, not on prototypes, not on layouts.

But here’s what most “AI for designers” content misses: design leads don’t spend their days in Figma. We spend them in Slack, Docs, and meetings.

So while AI-generated mockups get the headlines, the bigger opportunity for leads is quieter: drafting that stakeholder update. Summarizing research into exec-friendly bullets. Turning meeting notes into action items. Writing the first version of a business case.

The stuff that eats your calendar but doesn’t require your craft skills.

The next parts of this series will break down exactly how I use AI for each of these:

– Synthesizing research without drowning in transcripts
– Writing business cases that actually get read
– Communicating with stakeholders in their language
– Enabling my team without becoming a bottleneck

This isn’t about working faster. It’s about reclaiming time for the 20% that actually needs a design lead’s judgment.

A drawing of an old desk with a big red clock above the desk
Image generated with Midjourney

Part 2: AI for Admin — Reclaim Your Time

Before we get to the exciting stuff (building prototypes, validating ideas), let’s address the elephant in the room: the 80% of your week that isn’t design work.

The State of AI in Design Report 2025 by Foundation Capital and Designer Fund found that 89% of designers say AI has improved their workflow, with 84% using it in the research and ideation phase but only 39% during delivery.*

*Note: This report comes from VC and investment firms with stakes in the AI design space — it’s useful directional data, not independent academic research.

Design leads? We’re not even in those stats. We’re too busy in meetings.

So let’s fix that.

Start with what hurts most

For me, it was meetings. Six a day, sometimes more. By Friday, I couldn’t remember what we decided on Monday. Sound familiar?

I started using Tactiq — it sits quietly in my Google Meet or Zoom calls, transcribes everything, and at the end gives me a summary with decisions and action items. What used to take me 20 minutes of post-meeting note cleanup now takes 2. Fireflies.ai does something similar and integrates with Slack if that’s where your team lives.

The trick isn’t the tool. It’s breaking the habit of thinking you need to manually capture everything. You don’t. Let the robot do it.

Research doesn’t have to be a black hole

Here’s a scenario: you’ve got 12 user interviews, survey results from 200 people, and a competitor analysis your team put together last quarter. Your stakeholder meeting is Thursday. They want “the key insights.”

This used to mean two days of reading, highlighting, and synthesizing. Now I dump everything into a tool like Dovetail and ask it to find patterns. “What are the top 5 pain points mentioned by at least 3 users? Give me direct quotes as evidence.”

Is it perfect? No. You still need to sanity-check the output. But it gets you 70% of the way there in 30 minutes instead of 2 days.

Notion AI works well for this too if you’re already living in Notion. It can summarize messy interview notes, pull themes from a database, and clean up transcripts that look like someone typed with their elbows.

A note on confidentiality

If you work in a regulated industry — banking, healthcare, finance — you need to think carefully about what data you feed into AI tools. I work at a European bank with a security department that takes data classification seriously, and rightfully so.

User interview transcripts, internal research, and anything containing customer data should never go into a public AI tool without clearance. Check with your security and compliance teams first. Many organizations now have approved AI tool lists or enterprise versions with data processing agreements in place.

This isn’t a small thing. In financial services, confidentiality isn’t optional — it’s regulated. Plan your AI workflow around what’s approved, not what’s convenient. The last thing you want is to save 2 hours on research synthesis and spend 2 months explaining a data breach to regulators.

Speaking stakeholder

This one’s personal. I spend a lot of time translating design decisions into language that non-designers understand. “We need to simplify this flow” doesn’t land. “This flow has a 40% drop-off at step 3, and here’s why” does.

I use Claude for this. When I need to justify a timeline change or explain why we’re recommending a different approach, I’ll draft my argument and ask it to pressure-test it. “What would a skeptical product manager push back on here?” Then I refine before the actual conversation.

You can do the same with ChatGPT, though I find it tends to be more… diplomatic. Sometimes too diplomatic. When I ask for honest feedback, I want honest feedback, not a sandwich with compliments on both sides.

For research with sources, I use Perplexity. The key is always asking AI to provide references and data — it keeps hallucinations in check.

Glen Coates, former VP of Product at Shopify (now at OpenAI), put it well on The Product Podcast:

“I’ve now started taking screenshots and prompting AI to create a prototype from that. It speeds up the feedback cycle. It also helps me realize how dumb my ideas are before my team has to waste time on them.”

That’s the mindset. Use AI to stress-test your thinking before you take it to the room.

Documentation (the thing nobody wants to do)

Engineers need specs. You have Figma files and a head full of context that hasn’t made it onto paper.

Here’s a prompt I use regularly:

“I’m handing off a new savings goal widget to engineering. It shows progress toward a target amount, lets users adjust their goal, and sends notifications at milestones. Write three user stories in standard format with acceptance criteria.”

What comes back isn’t production-ready, but it’s a solid draft I can refine in 10 minutes. Compare that to the hour I used to spend staring at a blank Confluence page.

If your organization runs on Microsoft, Copilot does this inside Word and can pull context from your emails and previous docs. Useful if you’re in a corporate environment where everything lives in SharePoint.

Budget for exploration

One thing I’d recommend: set aside a small monthly budget — even €20–30 — specifically for trying new AI tools. Treat it as a learning investment, the same way you’d invest in a course or a design book. Subscribe, experiment for a week, and cancel if it doesn’t add value. Many tools offer free tiers or trials, but the paid features are usually where the real time savings live.

Just remember to cancel the ones that don’t stick. I’ve learned this one the expensive way.

One tool. One week. That’s it.

Don’t try to automate your entire workflow overnight. Pick the one task that consistently eats your time:

– Meetings? Try Tactiq or Fireflies
– Research synthesis? Try Dovetail or Notion AI
– Stakeholder communication? Try Claude or ChatGPT
– Documentation? Use AI prompts for user stories and specs

Use it for one week. See if it actually saves time. Then add another.

The goal isn’t to become an AI power user. The goal is to reclaim 5 hours a week so you can do the work that actually matters.

A drawing of a crowded desk with multiple pictures and sketches on the wall
Image generated with Midjourney

Part 3: AI for Prototyping — How to Create Again

Remember why you got into design?

Not the stakeholder emails. Not the capacity planning spreadsheets. The actual making of things.

Here’s what happened: as you climbed the ladder, you traded Figma time for meeting time. Makes sense — your value shifted from pixels to decisions. But somewhere along the way, most of us stopped creating anything at all.

I’m not here to argue that design leads should be doing hands-on design work. We shouldn’t — not for production anyway. That’s our team’s domain, and I’m fortunate to have an amazing team that handles craft with skill and care. What I don’t want to do is bypass them or create a dynamic where they feel undermined by a lead who keeps jumping into production work.

But there’s a different kind of creating that matters: the quick prototype that proves a point. The functional demo that ends the debate. The working thing that gets stakeholders to finally *see* what you’ve been trying to explain.

That’s what this part is about. Not replacing your team’s work — de-risking ideas before committing their time.

The Real Problem with Figma Prototypes

Traditional Figma prototypes are great for showing flows. Click here, go there, animate this.

But they hit a wall when you need to prove something works, not just looks right.

Try explaining a complex interaction to a VP who keeps asking “but what happens when…” and you’ll know what I mean. Or trying to get engineering buy-in on an approach they think is technically impossible.

Bryce York, a startup product leader, nails this:

“If you’re trying to get buy-in on a big concept, this can massively help your stakeholders really understand what you’re pitching. PMs have to have good imaginations, but most stakeholders’ roles don’t!”

Static prototypes leave too much to imagination. Working prototypes end debates.

Enter “Vibe Coding”

Andrej Karpathy (co-founder of OpenAI) coined the term in early 2025. The idea: describe what you want in plain language, let AI write the code, iterate by running and refining.

You don’t need to become an engineer. You need to be able to say “I want a dashboard that shows account activity with filters for date range and transaction type” and get something functional back in minutes.

For design leads, this isn’t about learning to code. It’s about removing the bottleneck between your vision and something people can actually use.

Carnegie Mellon’s Integrated Innovation program is already teaching this approach: “We now require vibe-coded prototypes rather than basic wireframes or sketches — reflecting what the workplace will demand.”

The Tools (And When to Use Each)

There’s no single right tool. Here’s how I think about them:

Figma Make — Best for: staying in your existing workflow

If your designs already live in Figma, this is the lowest friction option. Paste a frame, describe what you want interactive, and it generates working code. The killer feature: it understands your layers, components, and structure — not just the image.

David Kossnick, Head of AI Product at Figma, describes it:

“We’re not just bringing the image when you copy a Figma frame. We’re giving AI the rich, structured data — layers, metadata, and styling details.”

Good for: Quick interactive versions of existing designs, staying in familiar territory, sharing with team

Lovable — Best for: standalone prototypes without touching code

Lovable is probably the most designer-friendly of the bunch. You describe what you want, it generates a complete working app. No code editor, no terminal, no git.

Christine Vallaure at UX Collective shares a practical tip:

“If the first result feels messy or off, don’t try to rescue it. Start fresh with a cleaner prompt. It saves time and sanity.”

The LogRocket team tested it for UX workflows and found it generated a working, mobile-responsive UI with logic in about 20 seconds. Not production code, but enough to validate an idea.

Good for: Proving concepts to stakeholders, testing flows before committing design resources, personal projects

Cursor — Best for: maximum control and flexibility

This one has a steeper learning curve — it’s a code editor, not a design tool. But it’s also the most powerful.

Joel Unger, Design Director at Atlassian, uses it to prototype complex Trello interactions:

“AI is helping designers focus on higher-level thinking, communicate better with developers, and push creative boundaries.”

Hardik Pandya, also at Atlassian, spent 60+ hours vibe coding an elaborate product experience with data visualization, multi-device flows, and motion. His advice:

“Your prototype gains the realism that makes stakeholders take it seriously.”

Good for: Complex interactions, data-driven prototypes, designs that need to work with real APIs

When It Goes Wrong

It’s worth being honest about the limitations. Vibe coding can go spectacularly wrong when you trust the output without understanding it.

Tina Singh wrote about this in Bootcamp: AI-generated prototypes often lose context between iterations, behavior exposes gaps that looked fine in a static design, and evolving a prototype through prompts can start to feel more like chance than craft. Her rule is practical: if you need twenty-plus prompt iterations to get something right, stop. At that point, it’s consuming more time than it saves.

I’ve hit this myself. I once spent an hour trying to get an AI-generated dashboard to handle edge cases in filter combinations — something my team could have resolved in a fraction of that time with a proper Figma prototype and a dev conversation. The lesson: vibe coding is excellent for proving *concepts*, not for polishing *details*. Know when to hand off.

Even Karpathy himself — the person who coined the term — has acknowledged the limitations. His later projects were hand-coded because AI agents weren’t reliable enough for what he needed.

The takeaway isn’t “don’t use these tools.” It’s “use them for what they’re good at: quick, disposable prototypes that prove a point and then get thrown away.”

The Design Lead Angle

Here’s what most “vibe coding tutorials” miss: they’re written for individual contributors who want to build side projects.

As a design lead, you’re not trying to ship code. You’re trying to:

1. Win arguments faster — Instead of three rounds of stakeholder review on a concept, show them something working
2. De-risk before committing resources — Test if an idea is even viable before putting your team on it
3. Align engineering early — Nothing gets devs engaged like seeing a working prototype and being asked “is this technically feasible?”

Patrick Neeman tested several tools and concluded:

“These tools are on a path to be game-changing for concept validation and stakeholder buy-in. While it’s not pixel perfect, it gives everyone a good idea how something would work, in some cases eliminating weeks to months of engineering time for realistic testing.”

Start Small

You don’t need to build a complete product. You need to build enough to prove your point.

Pick one upcoming decision that’s stuck in debate. Maybe it’s:

– A new onboarding flow that stakeholders can’t visualize
– A dashboard concept engineering thinks is too complex
– A mobile feature that keeps getting deprioritized because “we’re not sure it’ll work”

Spend an hour with one of these tools. Not to ship anything — just to have something to show.

The goal isn’t to replace your team’s work. It’s to create the artifact that gets everyone aligned before your team does the real work.

What Stays Human

One important caveat: these tools get you 70–80% of the way there. The last mile still needs judgment.

Arpi Chugh, a UX designer who tested Lovable extensively, captures it well:

“I wasn’t using Lovable to come up with the design. I used it to test the design I already had in my head — faster than I ever could with traditional tools.”

The thinking is still yours. The strategy is still yours. The AI just removes the friction between having an idea and being able to show it.

Drawing of an agenda with a big green checkmark and multiple annotations
Image generated with Midjourney

Part 4: Validate Fast — From Prototype to Proof

You built the prototype. It looks good. Stakeholders can click around.

Now what?

Here’s where most design leads make a mistake: they treat the AI-generated prototype as the deliverable. It’s not. The prototype is a hypothesis. What you need is evidence.

This part is about turning “I think this works” into “Here’s proof it works” — fast enough that you don’t lose momentum.

The Justification Tax

In Part 1, I introduced the “justification tax” — the disproportionate amount of time design leads spend defending decisions that other disciplines make with a quick conversation. That tax doesn’t disappear once you have a prototype. If anything, it gets heavier: now stakeholders have something tangible to have opinions about.

Tom Greever, author of Articulating Design Decisions, puts it bluntly:

“The most articulate person often wins.”

That’s the problem. You shouldn’t have to be the most articulate person. You should have data.

AI prototyping gets you the artifact faster. Quick validation gets you the evidence to defend it.

Why AI Prototypes Still Need Testing

There’s a reason Nielsen Norman Group keeps warning about AI-generated designs: they often have visual issues like poor hierarchy, inconsistent spacing, and low contrast. They look plausible but don’t actually work.

Maze’s research on testing AI prototypes found three common problems:

1. Looks right, works wrong — AI creates what’s statistically likely, not what’s strategically correct. Navigation that seems logical might confuse real users.
2. Lacks real context — AI doesn’t understand your business goals, audience history, or product constraints. It generates generic solutions.
3. Breaks design systems — Enterprise teams find AI-generated designs ignore component rules and brand guidelines.

The fix isn’t to avoid AI prototyping. It’s to validate quickly before anyone falls in love with the wrong solution.

The 30-Minute Validation Loop

Here’s the workflow that changed how I approach stakeholder debates:

Step 1: Build competing prototypes (30–60 min)

When your team argues about approaches, don’t debate. Build both versions using Lovable, Figma Make, or Cursor.

Two functional prototypes beat two hours of whiteboard arguments.

Step 2: Set up a quick test (15–30 min)

Maze integrates directly with Figma prototypes and now supports AI-generated prototypes from Lovable and Bolt. Define one clear task: “Find and start a savings goal” or “Complete the first step of onboarding.”

Keep it simple. One task. One success metric. Five to ten participants.

Step 3: Let data settle the debate (results in 1–2 hours*)

Maze shows completion rates, time-on-task, and heatmaps in near real-time. You’re not looking for statistical significance — you’re looking for obvious signals.

If Version A has 80% completion and Version B has 45%, the debate is over.

*Note: The 1–2 hour turnaround depends on using Maze’s paid participant panel. If you’re recruiting your own users, expect longer. The completion rate example above is hypothetical to illustrate the point — in practice, your numbers will vary based on task complexity and audience.

Depersonalizing Design Debates

Here’s something I’ve learned managing 7 designers: ego kills collaboration.

When designers argue about approaches, they’re often defending their taste, not the user. “I think users would prefer X” vs “No, Y is more intuitive” is an unwinnable argument because it’s subjective.

Quick validation shifts the conversation. The prototype becomes neutral ground. It’s not “your idea vs. my idea” — it’s “let’s see what users actually do.”

TEG (The Economist Group) adopted this approach and found they could “set up tests and get data in minutes, reducing the need for lengthy discussions and debates based on assumptions.”

As a lead, you’re not always the tiebreaker. You’re the one who creates conditions for evidence-based decisions.

Tools for Quick Validation

Maze — The fastest path from prototype to data

Best for: Quantitative validation (completion rates, heatmaps, paths). Integrates with Figma, supports AI-generated prototypes from Lovable, Bolt, and Figma Make. Results arrive in 1–2 hours with their panel, or share with your own users.

Pricing: Free tier available, paid starts at $99/month.

Attention Insight — Predict attention before testing

Uses AI to generate eye-tracking heatmaps based on visual saliency. Good for quick gut-checks on hierarchy and CTA visibility before you run real tests.

Hotjar — For live products

If your prototype is deployed (Lovable and Bolt can do this), Hotjar gives you session recordings and heatmaps of real behavior.

Lightweight alternatives:

– 5-second tests (first impressions, what users notice)
– First-click tests (where users expect to tap)
– Quick surveys right after tasks (“How confident did you feel?”)

The Prompt Formula for Clear Results

Speaking of prompts — they matter for testing too. Borrowed from the ADPList AI Design ebook, here’s a formula worth bookmarking:

[Context] + [Task] + [Format] + [Tone] + [Constraints]

Applied to test task writing:

Bad: “Explore the app and tell us what you think.”

Better: “You want to start saving €50/month for a vacation. Find where to set up a savings goal and complete the first step.”

The second version has context (user goal), clear task (specific action), and implicit success criteria (completing the step).

When NOT to Validate

Quick validation isn’t always the answer. Skip it when:

– The decision is reversible — If you can ship and iterate, ship and iterate
– Stakeholders aren’t actually blocking — Don’t create process where you don’t need it
– You’re optimizing prematurely — Early concepts need room to breathe

Use validation strategically: when there’s real disagreement, real risk, or real investment at stake.

From Evidence to Alignment

The goal isn’t just to win arguments. It’s to create shared understanding.

When you bring test results to a stakeholder meeting, you’re not saying “I was right.” You’re saying “Here’s what we learned together.”

Data-driven design research shows that “data provides a common language for teams and stakeholders to discuss design decisions.” The evidence becomes the foundation, not your opinion.

Drawing of three papers on a desk with multiple stamps and signatures
Image generated with Midjourney

Part 5: The Pitch — Prototype + Data = No Debate

You’ve built the prototype. You’ve tested it. You have data.

Now comes the moment that separates design leads who get things built from those who get stuck in endless review cycles: the pitch.

This isn’t about presentation skills or PowerPoint templates. It’s about bringing the right artifacts to the right people and making decisions inevitable.

Why Most Design Pitches Fail

Here’s the pattern I see constantly:

Designer presents beautiful mockups. Stakeholder says “I like it, but…” followed by personal preferences. Engineering says “That looks hard.” Meeting ends with “Let’s iterate and reconvene.”

Three weeks later, same meeting.

The problem isn’t the design. It’s the format.

Static mockups invite opinion. Working prototypes invite reaction. Data invites alignment.

Todd Zaki Warfel, former design executive at Twitter, Cisco, and Workday, puts it simply:

“When you learn to present your work with the right balance of proof and persuasion, you can win over your stakeholders.”

The balance matters. Proof without persuasion is a data dump. Persuasion without proof is just opinion vs. opinion.

The Three-Part Pitch Stack

After years of pitching designs (and watching many fail), here’s the structure that works:

1. The Working Prototype

Let them click. Let them experience. Don’t explain what it does — show them.

AI-generated prototypes from Lovable, Figma Make, or Cursor change the dynamic immediately. Instead of imagining what something might feel like, stakeholders interact with it directly.

PixelFreeStudio’s research on prototype presentations found:

“When stakeholders are actively involved in the presentation, they’re more likely to feel a sense of ownership over the project, which can lead to stronger support and enthusiasm for your ideas.”

The prototype creates shared experience. Everyone saw the same thing.

2. The Validation Data

This is where Part 4’s work pays off.

Don’t just say “users preferred this approach.” Show them:

– Completion rates (80% vs. 45%)
– Time-on-task differences
– Heatmaps showing where attention goes
– Direct user quotes from testing

Data doesn’t eliminate disagreement. But it changes the nature of the conversation from “I feel” to “users showed us.”

3. The Engineering Alignment

Here’s where most design leads drop the ball: they pitch to stakeholders before talking to engineering.

Then engineering raises feasibility concerns in the meeting. Stakeholder confidence drops. The pitch stalls.

Flip the order. Before your stakeholder meeting:

– Share the prototype with engineering leads
– Ask: “Is this technically feasible? What would be hard?”
– Identify constraints and incorporate them into your pitch

When you present, you’re not saying “here’s what we want.” You’re saying “here’s what we’ve validated with users and confirmed with engineering.”

That’s a different conversation entirely.

Tailoring the Pitch to Your Audience

Different stakeholders care about different things. Obvious, but most design leads present the same deck to everyone.

For Executives (VP+):

– Lead with business impact, not design rationale
– Keep it short — they’re comparing your pitch to ten other priorities
– Show you’ve de-risked it: “Users validated this, engineering confirmed feasibility”
– Have a clear ask: approval, resources, timeline

For Product Managers:

– Connect to roadmap and OKRs
– Show user evidence that supports their metrics
– Be clear about dependencies and trade-offs

For Engineering Leads:

– Show the prototype early (before the big meeting)
– Ask genuine questions about feasibility
– Demonstrate you’ve thought about edge cases
– The AI prototype’s code isn’t production-ready — acknowledge that

As Greg Becker notes:

“They don’t speak your language — you must speak theirs.”

The “No Surprises” Rule

The biggest pitching mistake I see design leads make: surprising people in meetings.

Stakeholders hate surprises. If your VP learns about a major design direction for the first time in a review meeting, you’ve already lost — even if the design is perfect.

Here’s my pre-meeting checklist:

– Engineering has seen the prototype and raised concerns (which I’ve addressed)
– Product knows how this connects to their goals
– Key stakeholders got a heads-up about the direction
– I know what objections are coming and have responses ready

The meeting becomes confirmation, not revelation.

Handling the “But What About…” Questions

Even with great prep, you’ll get questions. Here’s how to handle common ones:

”Can we also add [feature X]?”

Don’t say no. Say: “We can test that. Let me add it to the next validation round and show you what users think.”

You’ve shifted from defending to learning.

“Engineering says this is too complex.”

If you’ve done pre-alignment, this shouldn’t happen. If it does: “Let’s schedule a technical review. I want to understand the constraints so we can find a solution that works.”

”I just don’t like [specific element].”

This is the hardest one — pure preference. Your response: “I hear you. We tested this with [X] users and [specific result]. Happy to test an alternative if you’d like to see the comparison.”

You’re not dismissing their opinion. You’re offering to put it to the same test everything else went through.

The AI Advantage in Pitching

Here’s what changes with AI prototyping in your pitch toolkit:

Speed of alternatives: Stakeholder suggests a different approach? You can often build and test it before the next meeting. That used to take weeks.

Tangibility over imagination: Working prototypes bridge the imagination gap. Most stakeholders’ roles don’t require them to envision complex interactions from mockups alone — so don’t ask them to.

Lower stakes for testing: When prototypes take days, testing every idea feels expensive. When they take hours, you can test stakeholder suggestions without derailing your timeline.

Code as communication: If your prototype was built in Cursor with React components, engineering can see exactly what you’re proposing. It’s not “can you build something that looks like this?” — it’s “here’s a working version, how close is this to production-ready?”

The Follow-Through

The pitch doesn’t end when the meeting ends.

Within 24 hours:

– Send a summary of decisions made and next steps
– Share the prototype link (so stakeholders can revisit)
– Note any open questions and when you’ll address them
– Thank engineering for their input (publicly, if appropriate)

This isn’t just courtesy. It creates a paper trail that prevents “I don’t remember agreeing to that” in future meetings.

From Pitch to Build

When your pitch succeeds, the transition to development should be smooth because you’ve done the work:

– Prototype demonstrates the interaction model
– Test data shows user acceptance
– Engineering has already reviewed feasibility
– Stakeholders are aligned on the direction

Figma’s design handoff guide emphasizes:

“By working together with a shared language, designers and developers can avoid misalignment, reduce re-work, and confidently build a well designed and technically sound product together.”

The prototype isn’t the deliverable. The prototype was the tool that got everyone to “yes.”

The same character from the beginning of the article looking at a selfie camera and smiling
Image generated with Midjourney

Part 6: The Future of Design Leads — What Stays Human

Let’s end where we started: with the reality of the job.

Design leads don’t spend their days designing. They spend them communicating, aligning, justifying, and managing. The 80% job.

AI is getting very good at the 20% — the mockups, the prototypes, the visual production. What it can’t do is the 80%.

That’s not a bug. That’s your future.

The Skill Shift Is Already Happening

Figma’s 2025 AI Report surveyed thousands of designers and found something interesting: 52% of AI builders say design is more important for AI-powered products than traditional ones, not less.

The reason? AI outputs need curation, judgment, and quality control. Someone has to decide what’s good. Someone has to connect it to strategy. Someone has to make sure it actually solves the user’s problem.

That someone is still human.

Nielsen Norman Group noted in 2025 that while generative tools can speed up tasks like asset creation and copy generation, “they still can’t replicate the insight of human designers.”

The tools are changing. The need for design thinking isn’t.

What AI Cannot Do

Let me be specific about what stays human — not in theory, but from experience.

Judgment and taste.

AI generates options. Humans choose. Andrea Grigsby in UX Collective put it simply: “AI may automate technical skills in design, but it can’t replicate human taste.” Taste isn’t preference — it’s the accumulated wisdom of knowing what works, what feels right, what aligns with brand and user and context simultaneously. It’s the thing that makes you say “this doesn’t feel right” before you can articulate why. AI can produce a thousand variations. You’re the one who knows which one is right.

Stakeholder navigation.

No AI is sitting in your meeting reading the room. Understanding that the VP is skeptical because of a failed project last quarter. Knowing that engineering is stretched and needs a simpler solution. Sensing that the PM is worried about timeline but won’t say it directly. I learned this lesson deeply at a previous job, when I had to present a product vision directly to the CEO of the multinational. Before the meeting, I did extensive research — his decision-making style, his priorities, what resonated with him. I talked to people who’d presented to him before. Then I spent weeks quietly getting his most trusted advisors on my side, making sure they understood and supported the vision before I ever walked into that room. The presentation was a success — it defined a product we worked on for two years, led to hiring two new development teams from Portugal, and more than doubled the UX team size. No AI could have navigated that. The research, the relationship-building, the reading of organizational dynamics — those are fundamentally human skills that determine whether your work gets built or gets stuck.

Team development.

You manage people. Each has different strengths, motivations, and career goals. AI can’t coach someone through a confidence crisis. It can’t mediate a conflict between two designers who have different working styles. It can’t help a junior designer see the gap between where they are and where they want to be. Research from the Top Employers Institute shows that 68% of employees believe non-work-related training that supports their overall well-being is vital. Leadership is about the whole person, not just their output.

Ethical decision-making.

Should we use this dark pattern that increases conversions but frustrates users? Should we collect this data even though users probably don’t understand what they’re agreeing to? Should we ship this feature knowing it’s not accessible? AI doesn’t have values. You do. McKinsey’s research found that only 1% of companies feel “mature” in AI integration, and the biggest barriers are not technical but leadership and organizational factors.

Sense-making in ambiguity.

When the strategy is unclear, when stakeholders disagree, when the data points in multiple directions — that’s when human judgment matters most. AI excels with clear inputs and defined parameters. Real design leadership happens when neither exists.

The Design Lead of 2028

Based on everything we’ve covered, here’s my bet on what the design lead role looks like in a few years.

AI transformation of creative work is accelerating — the tools we’ve discussed in this series were barely functional two years ago, and they’re now reshaping workflows daily. I don’t know exactly how fast or how far this goes, but the direction is clear: the tactical parts of design work will increasingly be handled by AI, and the strategic, human parts will become more valuable, not less.

Design leads will spend less time on:

– Creating production-ready mockups
– Writing documentation from scratch
– Synthesizing research manually
– Building presentation decks

They’ll spend more time on:

– Evaluating and curating AI outputs
– Coaching team members through AI-augmented workflows
– Stakeholder alignment and organizational navigation
– Defining what “good” looks like for AI tools
– Ethical oversight and quality judgment

The title might stay the same. The job changes significantly.

Maya Brennan’s 2026 predictions in Bootcamp capture it:

“It’s an undoubtedly exciting time to be a Product Designer: the technical foundations have been built — and it is now up to us to re-imagine what the visual and interactive frameworks for these AI experiences will look like.”

The people defining those frameworks won’t be AI. They’ll be design leaders with taste, judgment, and organizational skill.

Your Competitive Advantage

If you’ve made it through this series, you have something most design leads don’t: a practical framework for using AI as a lead, not just as an individual contributor.

Let’s recap what we covered:

Part 1: The reality of design leadership — the 80% job is communication, not creation.

Part 2: AI for admin work — reclaim time through meeting notes, research synthesis, stakeholder communication, and documentation.

Part 3: AI for prototyping — vibe coding tools that let you create again, not to ship production work, but to prove ideas faster.

Part 4: Quick validation — testing tools to turn prototypes into evidence that ends debates.

Part 5: The pitch — prototype + data + engineering alignment = decisions that stick.

Part 6: What stays human — judgment, taste, leadership, ethics, and the ability to navigate ambiguity.

This isn’t about AI replacing you. It’s about AI handling the friction so you can focus on the work that actually requires a design lead.

What To Do Next

If you take one thing from this series:

Start using AI for one task this week.

Not everything. Not a complete workflow transformation. One task.

Maybe it’s meeting notes. Maybe it’s drafting a stakeholder email. Maybe it’s spinning up a quick prototype for an idea that’s been stuck in your head.

Build the muscle. See what works. Iterate.

That’s how design leads have always improved — by doing, learning, and adjusting.

The tools are different now. The approach is the same.

I’m a Chapter Lead managing designers at a European bank. I write about the messy intersection of AI, design leadership, and organizational reality — the stuff that doesn’t fit into neat tutorials. If this resonated with you (or you disagree with any of it), I’d genuinely like to hear from you. The best conversations I’ve had about this topic started with someone telling me I was wrong about something.

Find me on LinkedIn or Medium

The 80% job: how design leads are using AI — and it’s not about mockups was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
