Workflow Commands
By the sixth feature, Friday has a rich set of infrastructure: worktrees, scheduled tasks, a project switcher, a persistent ACP session. Starting a new feature still means typing out the same long instructions every time.
- "Create a new branch called feature/007-whatever, switch to the worktree, make a feature doc."
- "Plan phased work. Tests in the same phase as the code they target."
- "Do each individual phase. After each phase's work is done, run pre-commit hooks, update the feature doc, and commit."
These prompts aren't complicated. They're just long and easy to get wrong. Miss a step and you end up with a feature branch that has no worktree, or a plan where tests land in a separate phase from the code they cover, or a commit that skips the feature doc update. The cost isn't a crash – it's drift.
The planning prefix sometimes includes the testing instruction, sometimes doesn't. The implementation prompt sometimes mentions feature doc updates, sometimes forgets. There is no canonical version – the source of truth is whatever I remember to type.
For a personal assistant that's supposed to reduce cognitive overhead, that's a failure of the tool.
The fix is five skills that encode the full feature lifecycle: start a feature, plan it, implement it, open a PR, merge it. Each one is a markdown file with a fixed prompt. Each one fires that prompt into the ACP session with a single POST. The skills themselves do almost nothing – and that's the point.
Five Skills, Five POSTs
The pattern is the same for all five commands:
- Compose a prompt (sometimes with user input, sometimes hardcoded).
- POST it to localhost:3100/prompt.
- Confirm to the user that it's queued.
That's it. The skill file is the prompt template. Claude reads it, follows the instructions, and uses the tools listed in allowed-tools.
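To make the shape concrete, here's what one of these skills would amount to if it were written as code instead of markdown. This is purely illustrative – nothing like it exists in the repo, and the `{ prompt }` body shape is an assumption about the bridge's API; only the endpoint and the planning boilerplate come from the skill files themselves.

```typescript
// Illustrative only: the real skills are markdown prompts, not code.
// The { prompt } request body is an assumed shape for the bridge's API.
async function queuePlan(brief: string): Promise<string> {
  // Compose: wrap the user's brief in the fixed planning boilerplate.
  const prompt =
    `Plan phased work for this feature. ` +
    `Tests go in the same phase as the code they target. ${brief}`;

  // POST: inject it into the ACP session via the local bridge.
  await fetch('http://localhost:3100/prompt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });

  // Confirm: tell the user it's queued and get out of the way.
  return 'Queued: plan phased work.';
}
```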
Take custom-merge – its entire implementation is a markdown file that says "compose this fixed prompt, POST it to localhost:3100/prompt, confirm to the user." No runtime. No build step. The skill file is both the source of truth and the executable.
In Claude Code's CLI, these would be slash commands – /custom-new-feature, /custom-plan, and so on. Through Telegram, they're just messages. You type "new feature" or "plan" and Friday recognises the intent, loads the matching skill file, and follows it:
- New feature – asks what the feature is for, sanitises the answer into a branch-safe slug, queues "create a branch + worktree, make a feature doc, then switch to the worktree."
- Plan – asks for a brief, queues "plan phased work, tests in the same phase as the code they target, {brief}."
- Work – queues "do each phase, run pre-commit hooks, update feature doc, commit after each."
- PR – queues "push and submit a PR assigned to me, watch it until ready to merge."
- Merge – queues "merge the PR, switch to main, pull, clean up."
Each one fires a single POST, produces a confirmation, and gets out of the way. The prompts are always the same – no drift, no forgotten steps, no variations.
Notice the ordering in "new feature." The feature doc has to be created before the switch, because switching the working directory resets the Claude session. We covered this in the previous chapter – moving executionCwd clears the session ID and starts fresh.
Anything Claude knew about the current turn – the slug it just sanitised, the branch it just created – is gone. If the switch came first, the follow-up prompt would land in a blank session with no idea what feature doc to create or where.
So the skill does all the work it can in the current session, then switches last. The new session starts clean, with the uncommitted feature doc sitting in the worktree path the user now finds themselves in.
Habits Encoded
The value of these workflows isn't automation for its own sake. It's the habits they encode. Before Friday, I built these habits using Claude Code's CLI.
- Planning in a separate step, committed to the repo as a feature doc – not just so I could review it before any code got written, but so there's a paper trail. Anyone picking up the work can see what was planned and why. The feature doc is an artefact, not a throwaway prompt.
- Phased implementation, where each phase has its own tests and its own commit. Review between phases, so a bad direction gets caught early instead of compounding across the whole feature.
These aren't novel ideas. They're just good practice that's easy to skip when you're moving fast. The workflow commands make them the default. Planning is always thorough because the planning prompt says so. Work is always phased because the implementation prompt says so. Tests always land in the same phase as the code they cover because the prompt enforces it.
The habits came first. The skills just mean I don't have to type them out every time.
Phased implementation works especially well with an AI agent. Each phase is a small, reviewable chunk – a few files, a focused test suite, a clear commit. If something goes wrong, you know which phase introduced it.
POST to /prompt vs. Direct Invocation
You may be wondering why the skills don't just pass the message directly to the ACP session. Injecting the prompt via POST /prompt looks like the more complicated route, but there's a good reason for it.
When you type "add retry logic to the scheduler," that's all you want to think about. You don't want to remember to say "plan phased work" or "tests in the same phase as the code they target." The skill takes your brief and wraps it inside a much more specific prompt – all the boilerplate instructions you'd otherwise have to type from memory every time. The composed prompt is what gets injected into the next turn, not your original words.
Under the hood, POST /prompt during a live streaming turn doesn't interrupt anything – it queues the message. Once the current turn finishes, the bridge picks up the queued prompt and fires it as the next turn. One at a time, no race condition.
It's the same principle behind the message queue from chapter four. Don't try to do everything in one shot. Queue the next step, finish the current one, let the system pick it up. The architecture was already built for this – the skills just use it.
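If you want to picture the bridge side of that, a minimal queue-on-busy handler might look like the sketch below. It's written under assumptions – an Express-style server, a `{ prompt }` JSON body, and a placeholder `driveAcpTurn` function standing in for the real ACP call – and is not Friday's actual implementation.

```typescript
// Hypothetical sketch of the queue-on-busy behaviour, not Friday's bridge code.
import express from 'express';

const app = express();
app.use(express.json());

let turnInFlight = false;
const pending: string[] = [];

// POST /prompt: queue if a turn is streaming, otherwise start a turn now.
app.post('/prompt', (req, res) => {
  const { prompt } = req.body as { prompt: string };
  if (turnInFlight) {
    pending.push(prompt);           // don't interrupt the live turn
    res.json({ queued: true });
    return;
  }
  void runTurn(prompt);
  res.json({ queued: false });
});

async function runTurn(prompt: string): Promise<void> {
  turnInFlight = true;
  try {
    await driveAcpTurn(prompt);     // stream the turn to completion
  } finally {
    turnInFlight = false;
    const next = pending.shift();   // one at a time, no race condition
    if (next !== undefined) void runTurn(next);
  }
}

// Placeholder for the real ACP call – assumed, not Friday's API.
async function driveAcpTurn(prompt: string): Promise<void> {
  /* send the prompt into the persistent ACP session and await the turn */
}

app.listen(3100);
```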
Markdown as Executable
If you haven't used Claude Code skills before, here's the idea. A skill is a markdown file with YAML frontmatter at the top – a name, a description, and an allowed-tools list that controls what Claude can do while running the skill. Everything below the frontmatter is the prompt. Claude reads it and follows the instructions.
Most people put skills in .claude/skills/ inside a project. I keep mine in a git repo at ~/.claude/skills/ that syncs across machines.
These workflow commands needed to be available in every Claude session – not just when I'm in the Friday project. So they live in ~/.claude/commands/ as symlinks. That's not where skills normally go, but it means they're global. Any project, any session, you can say "new feature" and the skill is there.
Symlinks come with a maintenance cost. Because Friday uses worktrees, the symlinks point at a specific worktree path. When a branch merges and its worktree gets cleaned up, the symlinks break. Friday's own CLAUDE.md has an "After a Merge" section that reminds it to check that all ~/.claude/commands/ symlinks still resolve and retarget any that point at a deleted worktree.
It's a small chore, but forgetting it means the next "new feature" silently fails.
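The check itself is easy to script. Here's a sketch of what "verify the symlinks still resolve" could look like – not part of Friday, just an illustration using standard Node APIs:

```typescript
// Sketch: find symlinks in ~/.claude/commands/ whose targets no longer exist,
// e.g. because the worktree they pointed at was cleaned up after a merge.
import { readdirSync, lstatSync, readlinkSync, existsSync } from 'node:fs';
import { join, resolve, dirname } from 'node:path';
import { homedir } from 'node:os';

const dir = join(homedir(), '.claude', 'commands');

for (const name of readdirSync(dir)) {
  const path = join(dir, name);
  if (!lstatSync(path).isSymbolicLink()) continue;
  const target = resolve(dirname(path), readlinkSync(path));
  if (!existsSync(target)) {
    console.log(`broken: ${name} -> ${target}`);
  }
}
```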
Here's custom-merge, the simplest of the five:
---
name: custom-merge
description: Merge the open PR, switch to main, pull, and clean up.
allowed-tools:
- Bash
---
Compose this exact prompt and POST it to http://localhost:3100/prompt:
"Merge the open PR using gh CLI, switch to main, pull, and clean up
the worktree. Confirm each step."
Confirm to the user: "Queued: merge and clean up."
This is from .claude/commands/custom-merge.md (symlinked globally).
That's the entire implementation. Claude reads it, follows it, and the feature works.
custom-new-feature is more involved. It asks what the feature is for, sanitises the answer into a slug (lowercase, spaces to hyphens, strip non-alphanumeric, collapse consecutive hyphens, trim), and composes a prompt that includes the slug. All of that logic lives in the markdown instructions. Claude reads the rules and applies them.
This sounds fragile until you remember that the documentation is the implementation. If the skill file says "lowercase and replace spaces with hyphens," that's what Claude does. The specification and the executable are the same file.
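For reference, here's the same sanitisation expressed as TypeScript. It's an illustration of the rules, not code that exists anywhere – in the skill itself, the prose above is the implementation.

```typescript
// Illustrative equivalent of the prose sanitisation rules in custom-new-feature.
function toSlug(answer: string): string {
  return answer
    .toLowerCase()                  // lowercase
    .replace(/\s+/g, '-')           // spaces to hyphens
    .replace(/[^a-z0-9-]/g, '')     // strip non-alphanumeric (keeping hyphens)
    .replace(/-+/g, '-')            // collapse consecutive hyphens
    .replace(/^-|-$/g, '');         // trim leading/trailing hyphens
}

// toSlug("Retry logic for the scheduler!") -> "retry-logic-for-the-scheduler"
```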
Interactive vs. Fire-and-Forget
Two of the five skills need user input. Three don't.
custom-new-feature asks "What is this feature for?" and sanitises the answer into a branch-safe slug. custom-plan asks for a brief. The other three – work, PR, merge – have fixed prompts with no variables. They fire and forget.
This split is visible in the allowed-tools frontmatter. The two interactive skills list AskUserQuestion and Bash. The three fire-and-forget skills list only Bash. Claude Code enforces this – a skill that doesn't list AskUserQuestion can't ask questions. A merge command that suddenly prompts "Are you sure?" would be annoying. The frontmatter makes that impossible.
There's a deeper pattern here. Decisions happen at the beginning of the lifecycle – what to build, what to plan. Everything after that is mechanical. Once the plan exists, execution follows a fixed sequence: implement, PR, merge. No human input needed. The skills mirror that reality.
This is the unattended workflow – kick it off, walk away, come back to a PR. It works well for features where the brief is clear and the implementation is straightforward.
But not every feature is like that. Sometimes I want to guide Claude through implementation step by step – reviewing each phase's output before moving on, or steering it away from a bad approach mid-flight. For those, I skip the implementation skill and prompt manually.
The rest of the lifecycle still uses the skills. It's not all-or-nothing. You can swap any step from automated to attended and back.
Tests That Verify Documentation
You'd expect tests to run code and check behaviour. These don't run anything – they read each skill file, parse the YAML frontmatter, and assert on the content:
import { readFileSync } from 'node:fs';
import { join } from 'node:path';
import matter from 'gray-matter';
// describe/it/expect come from the test runner's globals (Vitest or Jest – assumed).

describe('custom-new-feature skill', () => {
  const content = readFileSync(
    join(__dirname, '../../.claude/commands/custom-new-feature.md'),
    'utf-8'
  );
  const { data: frontmatter } = matter(content);

  it('includes AskUserQuestion in allowed tools', () => {
    expect(frontmatter['allowed-tools']).toContain('AskUserQuestion');
  });

  it('includes Bash in allowed tools', () => {
    expect(frontmatter['allowed-tools']).toContain('Bash');
  });

  it('posts to the prompt endpoint', () => {
    expect(content).toContain('localhost:3100/prompt');
  });

  it('includes slug sanitisation rules', () => {
    expect(content.toLowerCase()).toContain('lowercase');
    expect(content).toContain('hyphens');
  });
});
This is from tests/skills/custom-new-feature.test.ts.
These are structural assertions. They can't verify that Claude will follow the instructions correctly – that's a property of the model, not the file. What they can catch is drift. If someone edits a skill and accidentally removes the POST URL, or strips AskUserQuestion from a skill that needs it, the test fails.
Conclusion
If you go looking for these skills in the codebase today, you won't find the names used in this chapter. A later feature added a custom-workflow- prefix to all of them, and custom-work became custom-workflow-implement – a better name for what it actually does. The skills themselves didn't change. Just the namespace.
The custom- prefix is deliberate. In Claude Code's CLI, typing /custom lists every skill I've added – project-level and global. It's a quick way to see what's available without scrolling past the built-in commands. The prefix is a namespace hack for discoverability.