LLM Instructions Block Editor: Drag-and-Drop Context Management

I built a visual editor for managing LLM system instructions. Drag blocks between profiles. See token counts update live. Export context-aware configurations.
The Problem
Most AI coding assistants read custom instructions from a config file. Mine has 12 sections covering coding standards, deployment patterns, personal tools, and workflow rules.
The catch: my home and work contexts need different instructions. Home uses personal knowledge management tools, custom automation, and homelab deployment. Work doesn’t.
Maintaining two separate files means duplicate work, and forgetting to sync a universal rule leaves one context quietly running on stale instructions.
The Solution
The editor parses your instructions file into blocks. Each `##` section becomes a draggable tile. Assign blocks to profiles: Home, Work, or both.
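Under the hood, the parse is a split on `##` headings. A minimal sketch, assuming a standard markdown file; the `Block` shape and names here are illustrative, not the project's actual API:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Block:
    """One draggable tile: a single ## section of the instructions file."""
    title: str
    body: str
    profiles: set[str] = field(default_factory=set)              # e.g. {"home", "work"}
    profile_order: dict[str, int] = field(default_factory=dict)  # position per profile

def parse_blocks(markdown: str) -> list[Block]:
    """Split an instructions file on ## headings; each section becomes a block."""
    pattern = r"^## (.+?)\n(.*?)(?=^## |\Z)"  # heading line, then body up to the next ##
    return [Block(m.group(1).strip(), m.group(2).strip())
            for m in re.finditer(pattern, markdown, re.M | re.S)]
```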
The UI:
- Left sidebar: Profile buckets (Home/Work) plus an “All Blocks” tray
- Right panel: Markdown editor for the selected block
- Header: Live token counts per profile
Blocks show as emoji tiles. Hover for title. Click to edit. Drag to assign.
Key Features
Token counter — See exactly how heavy each profile is. My home config runs 3,807 tokens. Work runs 1,416. The “Bench” button measures actual latency impact.
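The post doesn't say which tokenizer backs the counter; a sketch with OpenAI's tiktoken as a stand-in, reusing the `Block` type from the parsing sketch:

```python
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")  # assumption: a GPT-4-family tokenizer

def profile_tokens(blocks: list[Block], profile: str) -> int:
    """Sum the token counts of every block assigned to one profile."""
    return sum(len(ENC.encode(b.body)) for b in blocks if profile in b.profiles)
```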
Profile-aware ordering — Same block, different priority per profile. Work might put coding standards first. Home might prioritize personal tool references.
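With the per-profile `profile_order` map from the `Block` sketch above, this is just a filter plus a sort:

```python
def ordered_blocks(blocks: list[Block], profile: str) -> list[Block]:
    """Blocks assigned to a profile, sorted by that profile's own positions."""
    assigned = [b for b in blocks if profile in b.profiles]
    return sorted(assigned, key=lambda b: b.profile_order.get(profile, 0))
```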
LLM-powered block creation — Describe what you want (“never use console.log for debugging”). The LLM generates a properly formatted block matching your existing style.
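The post doesn't name a model or provider; the interesting part is the prompt shape, which pairs the request with an existing block as a style exemplar. A sketch using the OpenAI Python client as a stand-in:

```python
from openai import OpenAI

client = OpenAI()  # assumption: any chat-completions provider would do

def generate_block(description: str, example_block: str) -> str:
    """Ask the LLM for a new ## section styled after an existing one."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write markdown instruction blocks. Match the heading "
                        "style, tone, and length of the example exactly."},
            {"role": "user",
             "content": f"Example block:\n{example_block}\n\n"
                        f"Write a new ## block for: {description}"},
        ],
    )
    return resp.choices[0].message.content
```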
Non-destructive editing — Changes save to a JSON store. Your original file stays untouched until you explicitly export.
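Roughly, the flow looks like this (the directory is from the post; the filename and field names are guesses). Saves rewrite the JSON store, and export assembles a fresh string per profile using `ordered_blocks` from above:

```python
import json
from pathlib import Path

STORE = Path.home() / ".llm-blocks" / "blocks.json"  # filename is a guess

def save(blocks: list[Block]) -> None:
    """Persist edits to the JSON store; the original file is never rewritten."""
    STORE.parent.mkdir(parents=True, exist_ok=True)
    data = [{"title": b.title, "body": b.body,
             "profiles": sorted(b.profiles), "order": b.profile_order}
            for b in blocks]
    STORE.write_text(json.dumps(data, indent=2))

def export(blocks: list[Block], profile: str) -> str:
    """Assemble one profile's instructions file from its blocks, in order."""
    return "\n\n".join(f"## {b.title}\n{b.body}"
                       for b in ordered_blocks(blocks, profile))
```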
The Stack
FastAPI backend for parsing and storage. React + Tailwind frontend with dnd-kit for drag-and-drop. Dark orange theme because I like it.
No database—just a JSON file in ~/.llm-blocks/.
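For a feel of the backend surface, a hypothetical pair of routes; the post doesn't document the actual API, and this reuses names from the sketches above, with `load_blocks` as an invented helper:

```python
import json
from fastapi import FastAPI

app = FastAPI()

def load_blocks() -> list[Block]:
    """Invented helper: hydrate Block objects from the JSON store."""
    raw = json.loads(STORE.read_text()) if STORE.exists() else []
    return [Block(r["title"], r["body"], set(r["profiles"]), r["order"]) for r in raw]

@app.get("/blocks")
def list_blocks() -> list[dict]:
    """Raw blocks with profile assignments, for the React frontend."""
    return json.loads(STORE.read_text()) if STORE.exists() else []

@app.get("/export/{profile}")
def export_profile(profile: str) -> dict:
    """Assembled instructions for one profile, plus its token count."""
    blocks = load_blocks()
    return {"markdown": export(blocks, profile),
            "tokens": profile_tokens(blocks, profile)}
```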
Why It Matters
Configuration as data. AI instructions deserve better tooling than raw text editing.
Visual feedback makes profile management intuitive. Token awareness keeps instructions lean. Export with confidence knowing exactly what context your LLM sees.
One source of truth. Multiple context-aware outputs.