Remote development
SSH, Dev Containers, WSL. The agent runs on the remote extension host.
What works out of the box
Hypex is built on Code OSS, so the standard remote-dev extensions work unchanged:
- Remote-SSH — install from the Open VSX marketplace built into Hypex. Connect to any SSH-accessible machine.
- Dev Containers — `.devcontainer/devcontainer.json` support.
- WSL — Windows Subsystem for Linux.
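For Dev Containers, a minimal `.devcontainer/devcontainer.json` is all that's needed to reopen the folder inside a container; a sketch, with an illustrative image choice:

```json
{
  "name": "node-dev",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20"
}
```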
Where the agent runs
Hypex's `run_command` tool runs on the extension host. In a remote workspace that's the remote machine — which is usually what you want: `ls` lists the remote directory, `npm install` installs on the remote, etc.
The IDE warns explicitly when a run_command fires in a remote workspace so you don't confuse local vs remote shells.
API keys in remote sessions
SecretStorage is scoped per-machine. When you SSH to a new remote for the first time, your Anthropic / OpenAI / xAI key isn't copied over — you'll need to paste it once via Hypex: Set Provider API Key. The key then lives in the remote's keychain (or the Code Server's secrets store).
Ollama on a remote GPU box
You don't need to use Remote-SSH to route LLM calls through a GPU machine. Just set:
"hypex.provider": "ollama",
"hypex.ollamaBaseUrl": "http://gpu-box.local:11434/v1" Hypex sends OpenAI-compatible requests to that URL. Any server speaking the dialect works — Ollama, llama.cpp, LM Studio, vLLM, OpenLLM.
Claude Code CLI on a remote
The `claude-cli` provider spawns the `claude` binary on whatever machine the extension host is on. Install `claude` on the remote and authenticate once; the provider then works the same as it does locally.
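Switching to this provider is the same one-line settings change as above; a sketch, assuming the provider id matches the name used in this section:

```json
"hypex.provider": "claude-cli"
```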
Dev Containers specifics
When opening a folder in a container, Hypex copies its extension into the container so the agent runs inside. Your keys and settings follow via the standard Code OSS sync. Note: Ollama inside a container usually requires `--gpus all` plus the NVIDIA Container Toolkit.
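To pass that GPU flag through in a Dev Containers setup, the standard `runArgs` field in `.devcontainer/devcontainer.json` works; a sketch, with an illustrative CUDA image:

```json
{
  "name": "gpu-dev",
  "image": "nvidia/cuda:12.4.1-runtime-ubuntu22.04",
  "runArgs": ["--gpus", "all"]
}
```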