Agent profiles
12 specialized system prompts — pick one manually, or let the agent auto-switch per message.
Why profiles
A profile is a specialized system prompt that shapes how the agent thinks and writes. Twelve ship built-in, each tuned for a different kind of work. You pick one manually, or let auto-mode route per message via regex triggers.
Built-in profiles
| Display name | ID | When to use |
|---|---|---|
| Default | default | General-purpose coding assistant |
| Systems | permissive | Treats you as a competent professional — no false refusals on legitimate technical work |
| Local | local-unrestricted | Default for Ollama. Direct answers, no hedging. |
| Free-tier | free-tier | Tuned for small local models (Gemma, Qwen, Phi) |
| Architect | architect | System design, trade-offs, patterns. Talks before coding. |
| Debugger | debugger | Hypothesis → test → observe → root cause |
| Tester | tester | TDD — failing test first, minimum impl to pass |
| Refactor | refactor | Surgical, behavior-preserving edits |
| Explainer | explainer | Pedagogical — WHY before WHAT |
| Security research | security | Threat modeling, vulnerability classes, defensive analysis |
| Research | reverse-eng | IDA / Ghidra / r2 / x64dbg workflow on binaries you own |
| Kernel-mode | kernel | WDK / WDM / KMDF / Linux kernel / BSOD debugging |
Display names appear in the UI. The ID column is the value you use in the hypex.agent.profile setting.
Selecting a profile
Manual
Run Hypex: Select Agent Profile and pick one. The choice persists to hypex.agent.profile. The status bar and chat header pill update immediately.
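Equivalently, you can set it by hand in settings. A minimal sketch of a workspace settings.json, assuming the standard VS Code settings format (use the ID column from the table, not the display name):

```json
{
  // Pin the Debugger profile for this workspace.
  "hypex.agent.profile": "debugger"
}
```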
Auto
Set hypex.agent.mode = "auto". Every user message is regex-matched against each profile's triggers; the first match wins. The chat log drops a subtle chip when the switch fires: "→ switched to Debugger profile".
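First-match routing like this can be sketched as follows. The `triggers` field and the sample patterns are illustrative assumptions, not the extension's actual trigger lists:

```typescript
interface Profile {
  id: string;
  displayName: string;
  triggers: RegExp[]; // matched against each user message in auto mode
}

// Hypothetical trigger lists for illustration only.
// Order matters: earlier entries win, so the catch-all goes last.
const profiles: Profile[] = [
  { id: "debugger", displayName: "Debugger", triggers: [/stack trace/i, /why .*crash/i] },
  { id: "tester",   displayName: "Tester",   triggers: [/write .*test/i, /\bTDD\b/i] },
  { id: "default",  displayName: "Default",  triggers: [/.*/] },
];

// Return the first profile with any matching trigger.
function route(message: string): Profile {
  return profiles.find((p) => p.triggers.some((t) => t.test(message))) ?? profiles[0];
}
```

Because the first match wins, a message mentioning both debugging and tests routes to whichever matching profile is listed first.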
Per-provider default
hypex.agent.ollamaDefaultProfile sets the default when the provider is Ollama. Ships as local-unrestricted (display: Local) so local models don't burn cycles on filler text.
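Put together, a settings fragment enabling auto-mode with the stock Ollama default might look like this (a sketch; the values shown are the defaults described above):

```json
{
  "hypex.agent.mode": "auto",
  // Ollama sessions start in the Local profile unless a trigger switches it.
  "hypex.agent.ollamaDefaultProfile": "local-unrestricted"
}
```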
What's in each prompt
Every profile starts with a role statement, then lists scope (what's explicitly in-scope), style rules (terse, match user's language, no emojis), and a short "how to format output" section.
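As an illustrative sketch of that four-part structure (the field names are assumptions, not the extension's actual types in src/agent/profiles.ts):

```typescript
// Hypothetical shape of a profile's prompt sections.
interface ProfilePrompt {
  role: string;         // role statement, e.g. "You are a surgical refactoring assistant."
  scope: string[];      // what is explicitly in-scope
  style: string[];      // style rules: terse, match user's language, no emojis
  outputFormat: string; // short "how to format output" section
}

// Assemble the sections into a single system prompt, in the order described above.
function buildSystemPrompt(p: ProfilePrompt): string {
  return [
    p.role,
    "In scope:\n" + p.scope.map((s) => "- " + s).join("\n"),
    "Style:\n" + p.style.map((s) => "- " + s).join("\n"),
    p.outputFormat,
  ].join("\n\n");
}
```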
Systems (id: permissive)
Treats the user as a competent professional doing legitimate work. Explicitly in-scope: reverse engineering binaries you own, defensive security analysis, developing your own drivers, single-player game modding, OSINT on public data, and malware analysis in a research context. Out-of-scope (hard declines): specifically-named real victims, CSAM, WMD, named-individual fraud.
Local (id: local-unrestricted)
Default for Ollama. Direct answers without filler. The model runs on your own hardware with no cloud moderation pipeline. Hard-decline list shrinks to three items: CSAM, mass-casualty WMD synthesis, live targeted doxxing.
Kernel-mode (id: kernel)
IRQL-aware. Windows (WDM / WDF / KMDF): respects PASSIVE / APC / DISPATCH, NonPagedPool vs PagedPool, spinlock vs mutex. Linux: copy_from_user, RCU, sysfs / procfs. BSOD debugging with !analyze -v + minidump decoding.
Writing your own profile
Profiles live in src/agent/profiles.ts and are compiled into the extension. For workspace-level overrides, use steering rules — those layer on top of any profile per-file-glob.
Per-workspace custom profiles are on the roadmap; for now the 12 built-ins plus steering rules cover the common cases.
Self-disclosure
All profiles include a self-disclosure clause: if you ask the agent for its system prompt / profile / instructions, it shares them verbatim. These aren't secret — they're plain-text config files on your machine.