Worktree Management

Friday runs on a single machine. A dozen repos live under ~/Sites/. Each one has an nginx config, a .test domain, and an SSL cert. The browser points at a hardcoded path. The dev server serves whatever's checked out.

That's fine until you want to test a feature branch. You run git checkout feature/xyz, and the directory changes underneath everything. If it's a Laravel app, nginx is now serving the feature branch. If dependencies differ, things break. If Friday is running a background task against that repo at the same time, it's reading files that just shifted mid-checkout.

Git worktrees solve the multiple-checkout problem. You can have main and feature/xyz checked out simultaneously in different directories. But worktrees alone don't solve the plumbing. Which checkout is the browser pointing at? How does nginx know? What happens when you switch? How do you create a worktree without forgetting to copy .env or update .gitignore?

The feature isn't worktrees themselves. It's the convention and tooling that makes worktrees invisible. A project descriptor that tells Friday what nginx needs. A symlink that nginx resolves through. A way to run privileged commands without blanket sudo. And a set of skills that hide all of it behind "create a worktree" and "switch to this branch."

LIVE.md: A Project Descriptor

Out of the box, Friday has no standard way to know what a project needs to be browser-visible. So each project gets a LIVE.md in its root:

domain: floaty.dev.test
type: laravel

The domain tells Friday what .test URL this project serves. The type tells it how to write the nginx config. For Node projects, there's a third line:

domain: friday.assertchris.dev.test
type: node
port: env:PORT

This is from LIVE.md in the Friday repo

port: env:PORT means "read PORT from the active worktree's .env when generating the nginx config." Each worktree can serve on its own port without changing LIVE.md.

The type field supports laravel (PHP-FPM behind nginx), node (reverse proxy), static (plain file serving), and none (no browser presence). A CDK infrastructure repo gets type: none so the convention is documented even when there's nothing to serve.

We could have built a central registry instead – a config file in the Friday repo listing all projects. But LIVE.md travels with the project. A new project just needs a three-line file and Friday knows what to do with it.
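Parsing the format is a few lines. This is an illustrative sketch, not Friday's actual implementation – the field names (domain, type, port) come from the examples above, but parseLiveMd and its error handling are made up:

```typescript
// Hypothetical parser for the LIVE.md convention. Field names match the
// examples; the function itself is a sketch, not Friday's actual code.
export interface LiveDescriptor {
    domain?: string;                               // e.g. floaty.dev.test
    type: 'laravel' | 'node' | 'static' | 'none';
    port?: string;                                 // e.g. "env:PORT"
}

export function parseLiveMd(contents: string): LiveDescriptor {
    const fields: Record<string, string> = {};
    for (const line of contents.split('\n')) {
        const match = line.match(/^(\w+):\s*(.+)$/);
        if (match) fields[match[1]] = match[2].trim();
    }
    if (!fields.type) throw new Error('LIVE.md is missing a type field');
    return {
        domain: fields.domain,
        type: fields.type as LiveDescriptor['type'],
        port: fields.port,
    };
}
```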

The Sudo Problem

Nginx configs live in /etc/nginx/. Reloading nginx requires systemctl reload. Both need root. The lazy option is NOPASSWD: ALL in sudoers, but that's too much trust for a process that takes instructions from a language model.

Instead, we grant passwordless sudo to a single shell script at /usr/local/bin/friday-privileged. Before it executes anything, it checks the database for an approved token:

#!/bin/bash
set -euo pipefail

PATH="/home/linuxbrew/.linuxbrew/bin:/usr/local/bin:/usr/bin:/bin:$PATH"

REQUESTED_CMD="$1"
TOKEN="${FRIDAY_APPROVAL_TOKEN:-}"
DB="${FRIDAY_DB:-}"

STORED_CMD=$(sqlite3 "$DB" \
  "SELECT command FROM pending_approvals WHERE token = '$TOKEN' AND status = 'approved';")

if [ -z "$STORED_CMD" ]; then
    echo "Error: No approved token found" >&2
    exit 1
fi

if [ "$STORED_CMD" != "$REQUESTED_CMD" ]; then
    echo "Error: Command does not match approved command" >&2
    exit 1
fi

# Check the 5-minute expiry window
REQUESTED_AT=$(sqlite3 "$DB" \
  "SELECT requested_at FROM pending_approvals WHERE token = '$TOKEN';")
NOW_EPOCH=$(date +%s)
REQUESTED_EPOCH=$(date -d "$REQUESTED_AT" +%s)
AGE=$((NOW_EPOCH - REQUESTED_EPOCH))

if [ "$AGE" -gt 300 ]; then
    sqlite3 "$DB" \
      "UPDATE pending_approvals SET status = 'expired' WHERE token = '$TOKEN';"
    echo "Error: Approval window expired (>5 minutes)" >&2
    exit 1
fi

sqlite3 "$DB" \
  "UPDATE pending_approvals SET status = 'used' WHERE token = '$TOKEN';"

eval "$REQUESTED_CMD"

This is from /usr/local/bin/friday-privileged

The approval lifecycle looks like this:

  1. Something requests approval. A token gets written to the database as pending.
  2. You approve it via Telegram (more on how in a moment).
  3. The token flips to approved.
  4. friday-privileged checks that exact token before running the command.
  5. The token is marked used – it can't be replayed.

If nobody approves within five minutes, the token expires automatically.
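The lifecycle implies a small table. This schema is reconstructed from the columns the scripts query – the actual migration may differ in types and constraints:

```typescript
// Reconstructed from the columns queried by friday-privileged and the
// TypeScript code; an assumption, not the actual migration. Status moves
// pending → approved → used, or pending → expired.
export const pendingApprovalsSchema = `
CREATE TABLE IF NOT EXISTS pending_approvals (
    token        TEXT PRIMARY KEY,  -- 32 hex chars from randomBytes(16)
    command      TEXT NOT NULL,     -- the exact command string to run
    requested_at TEXT NOT NULL,     -- ISO 8601, drives the 5-minute expiry
    approved_at  TEXT,              -- set on approval (immediately, on the direct path)
    status       TEXT NOT NULL DEFAULT 'pending'
                 CHECK (status IN ('pending', 'approved', 'used', 'expired'))
);`;
```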

Notice the PATH line at the top. When sudo runs a command, it resets PATH to a minimal set that doesn't include Homebrew. On this machine, nginx, sqlite3, and mkcert all live at /home/linuxbrew/.linuxbrew/bin. Without that line, the script exits 127 – "command not found."

I should be honest about what this is. The token check is a UX gate, not a security boundary. Friday writes to its own database, so a compromised process could forge a token and execute anything.

The approval flow keeps you informed – you see what's being run and can say no. That's the right trade-off for a personal dev machine.

The sudoers fragment is two lines:

Defaults env_keep += "FRIDAY_DB FRIDAY_APPROVAL_TOKEN"
assertchris ALL=(ALL) NOPASSWD: /usr/local/bin/friday-privileged

env_keep passes the database path and token through sudo's environment sanitisation. Only friday-privileged gets passwordless execution. Everything else still requires a password.

Two Approval Paths

The approval flow from the previous section was designed for background tasks. A task needs to run something privileged, so it inserts a pending row, sends a Telegram message, and waits for the user to tap /approve or /deny. That works when Friday's main process is running and can receive Telegram callbacks.

But skills run in a different context. When Friday runs /custom-site-setup, it spawns a subprocess – a separate process with no access to the Telegram bot. It can't send a message and wait for a callback.

We ended up with two approval paths sharing the same friday-privileged wrapper.

Background tasks use the async Telegram flow. Here's the heart of it:

export async function requestApproval(
    command: string,
    context: string,
    timeoutMs = 5 * 60 * 1000
): Promise<string> {
    const token = randomBytes(16).toString('hex');
    const db = getDb();
    db.prepare(
        `INSERT INTO pending_approvals (token, command, requested_at, status)
         VALUES (?, ?, ?, 'pending')`
    ).run(token, command, new Date().toISOString());

    await _sendApproval(token, command, context);

    const taskCtx = approvalContextStorage.getStore();
    taskCtx?.onPending();

    return new Promise((resolve, reject) => {
        const timer = setTimeout(() => {
            pendingCallbacks.delete(token);
            // ...mark expired in DB...
            taskCtx?.onTimeout?.();
            reject(new Error('Approval window expired'));
        }, timeoutMs);

        pendingCallbacks.set(token, {
            resolve: (t) => {
                clearTimeout(timer);
                taskCtx?.onResolved();
                resolve(t);
            },
            reject: (err) => {
                clearTimeout(timer);
                taskCtx?.onResolved();
                reject(err);
            },
        });
    });
}

This is from src/system/privileged.ts

approvalContextStorage is an AsyncLocalStorage instance carrying three callbacks through the async chain:

  • onPending pauses the background task runner.
  • onResolved resumes it when you respond.
  • onTimeout fires if nobody answers within five minutes.

On timeout, onResolved is intentionally not called – the task stays paused, and you get a Telegram notification so you can re-queue it.
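The AsyncLocalStorage wiring can be hard to picture. A minimal sketch of the pattern – runTask and somethingPrivileged are illustrative names standing in for Friday's task runner, not its actual code:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Minimal sketch of the pattern: the store carries callbacks, and any
// function awaited inside run() can reach them without parameter drilling.
interface ApprovalCallbacks {
    onPending: () => void;
    onResolved: () => void;
    onTimeout?: () => void;
}

const approvalContextStorage = new AsyncLocalStorage<ApprovalCallbacks>();

async function somethingPrivileged(): Promise<void> {
    // Deep in the async chain, no callbacks were passed in explicitly:
    approvalContextStorage.getStore()?.onPending();
    await Promise.resolve(); // stand-in for awaiting user approval
    approvalContextStorage.getStore()?.onResolved();
}

async function runTask(task: () => Promise<void>, callbacks: ApprovalCallbacks) {
    // Everything awaited inside this run() sees the same store.
    await approvalContextStorage.run(callbacks, task);
}
```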

The original plan used Telegram inline keyboard buttons for approve and deny. That hit a wall – inline callbacks are dispatched to the bot process, but the approval might come from a background task in a separate process. The callback map is in-memory, so the background process has no way to receive it. We dropped inline keyboards and reused the existing /approve and /deny text commands instead.

Interactive skills use the direct path. User consent is implied by running the skill manually, so we skip the Telegram dance:

export function runPrivilegedDirect(
    command: string,
    dbPath = process.env.FRIDAY_DB ?? 'meta/friday.db',
    executor: DirectExecutor = defaultExecutor
): string {
    const db = getDb(dbPath);
    const token = randomBytes(16).toString('hex');
    const now = new Date().toISOString();

    db.prepare(
        `INSERT INTO pending_approvals (token, command, requested_at, approved_at, status)
         VALUES (?, ?, ?, ?, 'approved')`
    ).run(token, command, now, now);

    return executor(command, token, dbPath);
}

This is from src/system/run-privileged-direct.ts

It generates a token, writes it directly as approved, and calls friday-privileged immediately. No waiting. The wrapper script doesn't know or care which path created the token – it just checks the database for a valid, approved, unexpired row.

The isMain guard at the bottom of the file lets it run as both a library import and a CLI tool:

const isMain =
    process.argv[1] === fileURLToPath(import.meta.url) ||
    process.argv[1]?.endsWith('run-privileged-direct.ts') ||
    process.argv[1]?.endsWith('run-privileged-direct.js');

This is from src/system/run-privileged-direct.ts

Skills call it from the command line: node --import tsx src/system/run-privileged-direct.ts "nginx -t && systemctl reload nginx". The Friday server imports it as a module when it needs the direct path internally.

Two paths, one token table, one wrapper script. The complexity is in the approval flow, not the execution.

The Symlink Pattern

Every project gets a symlink at ~/Sites/<project-name>--live. nginx always points at the symlink. Switching the live worktree means updating where the symlink points – a single ln -sfn call. No sudo needed, because the symlink lives in the user's home directory.

Here's how it looks for a Laravel project:

~/Sites/floaty.dev--live  →  ~/Sites/floaty.dev

nginx's root directive points at ~/Sites/floaty.dev--live/public. When you switch to a feature branch, the symlink changes:

~/Sites/floaty.dev--live  →  ~/Sites/floaty.dev/.worktrees/feature-auth

nginx doesn't need to be reconfigured. It still reads ~/Sites/floaty.dev--live/public, but now that resolves to the worktree directory. One reload to clear any cached paths and you're serving the feature branch.

The alternative was updating the nginx config on every switch – changing the root or proxy_pass directive and reloading. That's more fragile: it requires sudo on every switch (because the config lives in /etc/nginx/), and it differs per project type.

A symlink works identically for PHP, static, and Node projects. For Node, the symlink points at the directory containing .env, so the port is resolved from there.

The ln -sfn flags matter:

  • -s makes it a symlink.
  • -f removes the existing one.
  • -n treats the destination link (the existing symlink) as a normal file instead of dereferencing it – critical for atomic symlink updates.

Without -n, if the symlink already exists and points to a directory, ln would create the new symlink inside that directory instead of replacing it.

When there are no worktrees – the default state for most projects – the symlink just points at the project root: ~/Sites/writing.assertchris.dev--live → ~/Sites/writing.assertchris.dev. nginx serves main through the symlink without knowing or caring that the indirection exists.

Later revision: the symlinks were removed. They cluttered ~/Sites/ with --live entries next to every project, and when nginx returned a 404 you had to chase the symlink before you could chase the files. Switching now regenerates the nginx config directly with the new worktree path and reloads. More work per switch (it needs friday-privileged), but the system state is always visible in the config itself.

Nginx Templates

The custom-site-setup skill reads LIVE.md and generates an nginx config from a template. Three templates, one per project type. Here's the Laravel one:

server {
    listen 80;
    server_name floaty.dev.test;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name floaty.dev.test;
    root /home/assertchris/Sites/floaty.dev--live/public;
    ssl_certificate /home/assertchris/.local/share/mkcert/floaty.dev.test.pem;
    ssl_certificate_key /home/assertchris/.local/share/mkcert/floaty.dev.test-key.pem;
    index index.php;
    location / { try_files $uri $uri/ /index.php?$query_string; }
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

The root directive points at the --live symlink, not the project directory. That's the whole trick. The config never changes when you switch worktrees. nginx resolves through the symlink on every request.

The Node template swaps the PHP-FPM block for a reverse proxy:

location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

The port comes from the worktree's .env at config generation time. If you switch to a worktree that serves on a different port, the skill regenerates the nginx config with the new port and reloads. That's the one case where switching does require a config change – but it's automated, and it only applies to Node projects.

The static template is the simplest: root points at the symlink, try_files serves what's there, no application server involved.

One skill call sets up everything a project needs to be browser-accessible:

  1. Check if the mkcert CA is trusted (mkcert -install is a one-time operation that requires friday-privileged).
  2. Generate a per-domain SSL cert if one doesn't exist.
  3. Add the domain to /etc/hosts.
  4. Write the nginx config to /etc/nginx/sites-available/.
  5. Symlink it into sites-enabled.
  6. Reload nginx.

Every privileged step goes through run-privileged-direct.ts. The skill never calls sudo directly.

Worktree Plumbing

The worktree system is split across custom-worktree-manage for create and delete, custom-worktree-list for showing what exists, and custom-worktree-switch for changing what the browser points at.

Worktrees nest inside the project at .worktrees/<branch-name> rather than sitting as siblings in ~/Sites/. Sibling directories would break the fuzzy matching from the project-switching chapter – "floaty" would match floaty.dev, floaty.dev-feature-auth, and floaty.dev-fix-nav equally. Nesting them inside .worktrees/ keeps the ~/Sites/ directory clean and the fuzzy matcher unambiguous.

Create

Creating a worktree handles three branch cases: new branches, existing local branches, and existing remote branches. Each needs a different git worktree add invocation. The skill figures out which case applies by checking git branch --list and git ls-remote --heads origin.
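The three cases reduce to a decision over two booleans. A sketch of that decision as a pure function – the function name and flags are illustrative, and the exact git invocations are one reasonable choice, not necessarily the skill's:

```typescript
// The three branch cases as a pure decision. Inputs would come from
// `git branch --list <name>` and `git ls-remote --heads origin <name>`;
// this sketch only picks the invocation, it doesn't run git.
export function worktreeAddCommand(
    branch: string,
    path: string,
    existsLocally: boolean,
    existsOnRemote: boolean
): string {
    if (existsLocally) {
        // Existing local branch: check it out into the new worktree.
        return `git worktree add ${path} ${branch}`;
    }
    if (existsOnRemote) {
        // Existing remote branch: create a local tracking branch.
        return `git worktree add --track -b ${branch} ${path} origin/${branch}`;
    }
    // New branch: create it at the current HEAD.
    return `git worktree add -b ${branch} ${path}`;
}
```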

After creating the worktree, the skill copies .env from the project root if one exists. You'll probably need to change the port for Node projects, but having a working .env with all the other variables pre-filled beats starting from scratch.

It also appends .worktrees/ to .gitignore on first create if it's not already there. Worktree directories should never be accidentally staged.

List

The list skill parses git worktree list --porcelain, which outputs machine-readable blocks:

worktree /home/assertchris/Sites/floaty.dev
HEAD abc123...
branch refs/heads/main

worktree /home/assertchris/Sites/floaty.dev/.worktrees/feature-auth
HEAD def456...
branch refs/heads/feature-auth
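Parsing these blocks is a split on blank lines plus a prefix match per line. A sketch covering only the fields shown above – detached worktrees emit a bare detached line instead of branch, which this sketch ignores:

```typescript
// Sketch of a parser for `git worktree list --porcelain` output,
// covering only the worktree/HEAD/branch fields shown above.
export interface WorktreeEntry {
    path: string;
    head: string;
    branch?: string; // short name, e.g. "feature-auth"
}

export function parsePorcelain(output: string): WorktreeEntry[] {
    const entries: WorktreeEntry[] = [];
    for (const block of output.trim().split(/\n\n+/)) {
        const entry: Partial<WorktreeEntry> = {};
        for (const line of block.split('\n')) {
            if (line.startsWith('worktree ')) entry.path = line.slice(9);
            else if (line.startsWith('HEAD ')) entry.head = line.slice(5);
            else if (line.startsWith('branch ')) {
                entry.branch = line.slice(7).replace('refs/heads/', '');
            }
        }
        if (entry.path && entry.head) entries.push(entry as WorktreeEntry);
    }
    return entries;
}
```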

It cross-references each worktree path against the --live symlink to mark which one is currently serving. It reads LIVE.md for the domain. It checks git status --short in each worktree to report dirty or clean state. The output looks like this:

project: floaty.dev  (~/Sites/floaty.dev)
live → ~/Sites/floaty.dev  (main)  https://floaty.dev.test

LIVE   main              ~/Sites/floaty.dev                    clean
       feature/auth      ~/Sites/floaty.dev/.worktrees/auth    dirty

Switch

Switching updates the --live symlink and reloads nginx:

ln -sfn <target-path> ~/Sites/<project-name>--live

For Node projects, the skill reads the port from the target worktree's .env and compares it to the current live worktree's port. If they differ, it regenerates the nginx config with the new port before reloading. It also warns you to stop the old process: "Check that the old process on port 3000 is stopped before starting the new one on port 3001."

For Laravel and static projects, no config regeneration is needed. The symlink changes, nginx gets a reload to clear cached paths, and you're done.

Delete

Deleting checks whether the target worktree is currently live. If it is, the skill switches the symlink back to the project root first – you never end up with a broken live site because you deleted what was serving. If the worktree has uncommitted changes, the skill refuses unless you explicitly pass --force. Then git worktree remove cleans up the directory and git worktree prune removes stale references.
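The ordering of those safety checks can be expressed as a small pure function – planDelete and its inputs are illustrative names, and the step strings are descriptions rather than exact commands:

```typescript
// Sketch of the delete-ordering logic as a pure plan. Inputs would come
// from the live-target check and `git status --short`; names are
// illustrative, not Friday's actual skill.
export function planDelete(
    isLive: boolean,
    isDirty: boolean,
    force: boolean
): string[] {
    if (isDirty && !force) {
        throw new Error('Worktree has uncommitted changes; pass --force to delete anyway');
    }
    const steps: string[] = [];
    if (isLive) {
        // Never leave the browser pointing at a deleted directory.
        steps.push('switch live symlink back to project root');
    }
    steps.push('git worktree remove' + (force ? ' --force' : ''));
    steps.push('git worktree prune');
    return steps;
}
```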

One Live Slot

Only one worktree is browser-reachable at a time. The others are headless – fine for background tasks that run tests or read files.

We could have supported multiple live worktrees on different ports or subdomains. But that means port allocation, multiple nginx upstreams, and subdomain routing. I don't need that. One branch in the browser, two or three headless for parallel background tasks. That's the actual workflow.

The Migration

Building the tools is the easy part. Making them real means migrating every existing project.

Each project needed a LIVE.md and a --live symlink. Projects that already had an nginx config also needed that config rewritten to resolve through the symlink.

The per-project steps were mechanical:

  1. Inspect the root to determine the type (artisan means Laravel, package.json means Node, otherwise static).
  2. Write LIVE.md with the domain and type.
  3. Create the --live symlink pointing at the project root.
  4. Regenerate the nginx config through custom-site-setup.
  5. Verify the .test URL still loads.

For projects with no browser presence, LIVE.md gets type: none and steps 4 and 5 are skipped.

Conclusion

The workflow commands from the previous chapter all use worktrees under the hood now. Starting a new feature creates a branch and a worktree. Merging a PR switches back to main and cleans up after itself.

You never think about worktrees directly. You say "start a new feature" and a worktree appears. You say "merge it" and the worktree disappears. The .test domain follows along. The browser always shows whatever you're working on.

Background tasks get their own headless worktrees. Friday can run tests on a feature branch while main keeps serving in the browser. Neither knows the other is there.