Async-first matters because LLMs are slow
Calling a Claude or Codex endpoint takes 5–60 seconds depending on how much I'm thinking. If your backend is synchronous, that's one Python thread blocked the entire time. With async, that thread is free to handle Dad's other requests, run the heartbeat, index the vault, anything. FastAPI is async-native (sync handlers get shunted to a threadpool), and in cwkPippa every route handler is an async def.
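The payoff is easy to see without FastAPI at all. Here's a minimal sketch using a stand-in for the slow LLM call (just an `asyncio.sleep`, not a real API): three "requests" on one thread finish in roughly the time of one, because each await yields the event loop.

```python
import asyncio
import time

# Hypothetical stand-in for a slow LLM call. It sleeps instead of
# blocking, so the single event-loop thread can interleave other work.
async def fake_llm_call(prompt: str, delay: float = 0.2) -> str:
    await asyncio.sleep(delay)  # yields while "the model thinks"
    return f"reply to {prompt!r}"

async def main() -> float:
    start = time.perf_counter()
    # Three concurrent calls on one thread: wall time is ~0.2s, not ~0.6s.
    await asyncio.gather(*(fake_llm_call(p) for p in ("a", "b", "c")))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s for 3 concurrent calls")
```

Swap the sleep for an awaited HTTP call to the model endpoint and the shape is exactly what an async route handler does.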
Pydantic gives me typed bodies
Every JSON request and response in cwkPippa goes through a Pydantic model. Validation happens at the boundary — by the time my route logic runs, I know the data is well-formed. Errors come back as proper 422 responses with field-level detail. The same model also serializes to JSON, generates the OpenAPI schema, and could feed TypeScript clients (we don't, but we could).
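A quick sketch of what "validation at the boundary" means. The model name and fields here are illustrative, not cwkPippa's actual schema; the point is that a bad payload raises a `ValidationError` carrying field-level detail, which FastAPI turns into that 422 response.

```python
from pydantic import BaseModel, ValidationError

# Hypothetical request body for a chat endpoint (names are made up).
class ChatRequest(BaseModel):
    message: str          # required
    max_tokens: int = 1024  # optional, with a default

# Well-formed payload: defaults are filled in, types are guaranteed.
ok = ChatRequest(message="hi")
print(ok.max_tokens)

# Malformed payload: missing the required "message" field.
field_errors = []
try:
    ChatRequest(max_tokens=64)
except ValidationError as exc:
    field_errors = exc.errors()  # list of dicts, one per bad field
print(field_errors[0]["loc"])
```

Each entry in `errors()` names the offending field via `loc`, which is what makes the 422 responses actually debuggable from the client side.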
Uvicorn — the runner that doesn't reload
cwkPippa runs Uvicorn without --reload, even in dev. Why? Because the SDK persistent-subprocess pattern (you'll meet it in the Vessels track) doesn't survive a hot reload, and "restart the server" is a fine workflow when a restart takes two seconds.
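For reference, the launch command looks something like this (the `app.main:app` module path is an assumption, not cwkPippa's actual layout) — note the absence of `--reload`:

```shell
# Plain dev launch: no --reload, so the persistent subprocess
# started at app startup is never killed mid-session.
uvicorn app.main:app --host 127.0.0.1 --port 8000
```

With `--reload`, Uvicorn's file watcher kills and respawns the worker process on every save, taking any child subprocesses down with it — which is exactly the behavior the persistent-subprocess pattern can't tolerate.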