Which parts of your work do you actually want to keep?


By Hannah Baker


This one's a few days late; life got in the way. Back to our regularly scheduled broadcast next week.

For a long time, I was using Claude the same way most people do. As a chat function. A thinking partner. Something to help me get things done.

But I kept running into the same problem. Every new conversation, I'd have to re-explain everything: my tone, my formatting, what I needed the output to look like. So I'd stay in the same chat for weeks just to hold onto the context.

And then the chat would get so long it would start losing it anyway. I tried projects, which helped, but the formatting problem remained. And what if I needed to pull from two projects at once?

I was frustrated. And then someone told me about skills.

I want to be upfront about something before I go further. Some of you are already using Claude, or something like it, in much more complex ways: coding, building things in your terminal, working with agents.

If that's you, this might be old news, and that's fine. But for those of you who aren't there yet, I think this will open some interesting doors.

So, this isn't a hype piece, and it's not a tutorial. It's something I've been working out in my own practice, and I think there's a frame in it worth sharing.

What a Claude Skill Actually Is

A few months ago I started building what Claude calls “skills”: saved sets of instructions that tell it how to do a specific task your way, every time, without you re-explaining yourself.

The analogy I keep coming back to: it’s the difference between briefing a new person from scratch every single time, versus having a team member who already knows your standards.

What surprised me wasn’t how useful they turned out to be. It was what I had to figure out about my own work to build them. Because writing the instructions meant answering questions I’d never really sat with:

  • What does good actually look like here?
  • What’s the structure?
  • What do I always include, and what do I always leave out?

Most of us have strong instincts about quality. We know good when we see it. But we rarely have to articulate it from scratch, for something with no context and no assumptions.

That process, of making your own standards explicit, turned out to be the more interesting thing.

Not Everything Is a Good Candidate

This is the part most AI content skips. Not every task is worth systematising, and building a skill for the wrong thing will produce confidently formatted garbage.

The tasks that work well tend to share a few characteristics.

  1. They’re repeatable: same shape, different content, over and over.
  2. You can describe what good looks like in terms of structure, length, tone, what to include, and what to leave out.
  3. They’re more synthesis than invention, meaning Claude is organising and formatting thinking you’ve already done, not generating ideas from scratch.
  4. They have a logical sequence to them that you can actually write down.

If you have past examples of what great looks like, even better: you can give those to Claude, and it will learn from them rather than working purely from description.

What This Looks Like In Practice

As of right now, the most common uses are:

  • Coding and development — enforcing team standards, code review, and commit formats so Claude follows your process rather than guessing
  • Document creation — producing reports, decks, and briefs in a specific format every time, without rebuilding from scratch
  • Frontend design — encoding your visual standards so outputs don’t default to the same generic AI aesthetic
  • Research synthesis — turning data and interviews into structured documents in your format
  • Meeting and session notes — taking a transcript and producing summaries, decisions, and follow-ups the way you’d write them

The coding side has the most mature ecosystem by far. But the document and workflow skills are where it’s growing fast, and where I think it’s most interesting for people like you.

Here’s what it looks like from my own work. I facilitate a program, and after every session, I need to write a Slack message and enter a summary into a database.

Same structure every time, different content. The skill takes my notes and transcript and produces both outputs in my voice in about thirty seconds. I still do the session. I still do the thinking. The skill handles the part that was just execution.
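If you’re curious what the file behind a skill like that actually contains, here’s a simplified sketch. This is not my real skill, and the names and fields are illustrative, but skills are typically defined in a SKILL.md file: a short metadata header, then plain-language instructions written the way you’d brief a colleague:

```markdown
---
name: session-summary
description: Turn facilitation session notes and a transcript into a
  Slack message and a database summary, in my voice.
---

# Session Summary

When given session notes and a transcript, produce two outputs.

## 1. Slack message
- 3–5 sentences, warm and direct, no corporate filler
- Open with the session topic, close with the next step
- Prefix decisions with "Decision:" so they’re scannable

## 2. Database summary
- Fields: date, topic, attendees, key decisions, follow-ups
- Decisions and follow-ups as bullets, one line each
- Neutral tone, past tense

Keep participants’ own wording for decisions. If something in the
notes is ambiguous, flag it rather than guessing.
```

The point isn’t the exact format. It’s that everything in there is a standard I already held implicitly; writing the skill just forced me to say it out loud.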

For those of you building interfaces or prototypes without writing code: if you’ve been using Claude for this, you’ve probably noticed it defaults to the same visual signature every time. Same fonts, same layout, the same aesthetic people now immediately clock as AI-generated.

A skill is how you give it your design standards before it touches anything. The code is still Claude’s. The design judgment is still yours.

A note: if you’ve heard the term “agentic design system” floating around, that’s something different. It’s company-level infrastructure where AI behaves consistently across an entire organisation.
More complex, more expensive, and a different conversation entirely. Skills are personal. They live in your Claude, and they’re about how you work, not how a whole organisation works.

You Don’t Build It From Scratch

You don’t write the instructions yourself. You describe what you’re trying to do in ordinary, imperfect language, the way you’d explain it to someone over coffee. Claude asks you questions, learns what good looks like for your task, and drafts the skill file for you, using, of course, the Skills Creator Skill already in your system.

What you have to bring is clarity about the output. Claude handles the rest.

The Part I’m Still Thinking About

There are more and more skills out there now, including ones that go well beyond coding. I found one that pulls insights and findings from research. And it works.

But I don’t use it.

Because finding the insights is where I get genuinely immersed in the data, where I develop the context and judgment to know what actually matters and what doesn’t.

If I hand that to Claude, I might get the findings, but I won’t have the foundation to know if they’re right.

The output would be there.

The understanding wouldn’t.

That’s made me much more deliberate about what I build and what I don’t. Some things are worth systematising because the execution is getting in the way of the thinking.

Other things, the ones where the doing is the thinking, those stay with me.

The question worth sitting with isn’t “what can a skill do?”

There’s increasingly very little it can’t.

The question is: which parts of your work do you actually want to keep?


If you’re curious about this and want to explore it further, I’m considering running a small workshop on building and using Claude skills, specifically for designers, researchers, and facilitators.

Nothing confirmed yet. If that’s something you’d want to join, you can add your name to a list here, and I’ll be in touch if it comes together.


COURSE: Facilitating Workshops
Learn to turn meetings into momentum and clear decisions.
Next cohort: Autumn 2026
Join Waitlist


COURSE: Defining UX Strategy
Learn to design a winning strategy that aligns design with business.
Buy a Seat


Until next time!

Hannah Baker
Facilitator & Co-Founder
The Fountain Institute


The Fountain Institute is an independent online school that teaches advanced UX & product skills.
