LLM Keys
Agents on your mesh run inference against keys you provide. Appgrammar stores your keys only to distribute them to your nodes; it never sits in the inference path, never proxies your model calls, and never sees your prompts or completions — the gram agent talks to your chosen providers directly from each node.
Supported providers
- Anthropic
- OpenAI
- Google (Gemini)
- Together
- Nebius
- Any OpenAI-compatible endpoint (self-hosted, local LM Studio, alternative gateways)
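For the last case, any endpoint that speaks the OpenAI chat-completions wire format works. As a minimal sketch — the base URL, model name, and environment variable below are placeholders for illustration, not Appgrammar settings — here is how such a request is shaped, using only the Python standard library:

```python
import json
import os
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request in the
    OpenAI-compatible wire format that self-hosted gateways and
    local servers such as LM Studio also accept."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Example: a local LM Studio server (placeholder URL, model, and env var).
req = build_chat_request(
    base_url="http://localhost:1234",
    api_key=os.environ.get("LOCAL_LLM_API_KEY", "not-needed-locally"),
    model="local-model",
    prompt="hello",
)
```

Because the wire format is shared, switching a node between a hosted provider and a local endpoint is a matter of changing the base URL and key.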
You can mix providers across a single mesh — different nodes, different models, different vendors — or standardize on one.
Configuring keys
In the mesh dashboard, open LLM Providers. For each provider you want to enable:
- Paste the API key.
- Optionally set a default model.
- Use the Test button to verify the key authenticates against the provider.
Keys are stored encrypted at rest and pushed to each mesh node as environment variables at session start. Rotating a key in the dashboard rotates it on every node at the next session boundary — no redeploy.
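Because keys arrive as environment variables, an agent-side process simply reads them at session start — which is also why a rotated key is picked up at the next session boundary. A sketch (the variable name below is a stand-in; the exact names Appgrammar sets are not specified here):

```python
import os

def load_provider_key(env_var: str) -> str:
    """Read a provider key from the environment at session start,
    failing fast with a clear error if the key was never pushed
    to this node."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set on this node; check the mesh dashboard")
    return key

os.environ["EXAMPLE_PROVIDER_KEY"] = "sk-demo"  # stand-in for a key pushed by the mesh
print(load_provider_key("EXAMPLE_PROVIDER_KEY"))  # -> sk-demo
```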
Scoping
- Per-mesh keys — the default. Every agent on the mesh uses the same keys.
- Per-node overrides — set a different provider or model for a specific node (for example, a high-throughput node running a cheaper model for batch work).
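The resolution order above — node override first, then the mesh-wide default — can be sketched as a small lookup. The config shape here is illustrative, not Appgrammar's actual schema:

```python
# Hypothetical mesh config: a per-mesh default plus per-node overrides.
MESH_CONFIG = {
    "default": {"provider": "anthropic", "model": "claude-sonnet"},
    "node_overrides": {
        # high-throughput node running a cheaper model for batch work
        "batch-worker-1": {"provider": "together", "model": "cheap-batch-model"},
    },
}

def resolve(node: str) -> dict:
    """A node-specific override wins; otherwise fall back to the mesh default."""
    return MESH_CONFIG["node_overrides"].get(node, MESH_CONFIG["default"])

print(resolve("batch-worker-1")["provider"])  # together
print(resolve("chat-node")["provider"])       # anthropic
```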
Cost visibility
Usage is aggregated per provider and per node in the mesh dashboard, so you can see which nodes and which models are driving your spend without cross-referencing invoices.
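That aggregation amounts to a group-by over usage records along two axes. The record fields below are hypothetical, just to picture the roll-up (costs in integer cents to avoid float drift):

```python
from collections import defaultdict

# Hypothetical usage records: (node, provider, cost in cents).
records = [
    ("node-a", "anthropic", 120),
    ("node-a", "openai", 40),
    ("node-b", "anthropic", 210),
]

by_provider: dict[str, int] = defaultdict(int)
by_node: dict[str, int] = defaultdict(int)
for node, provider, cost in records:
    by_provider[provider] += cost
    by_node[node] += cost

print(dict(by_provider))  # {'anthropic': 330, 'openai': 40}
print(dict(by_node))      # {'node-a': 160, 'node-b': 210}
```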
Why BYO keys
- Your budget, your contract. You own the relationship with your LLM provider.
- Compliance. If your data must stay under a specific vendor agreement (zero data retention, BAA, regional residency), use that vendor's key directly.
- No markup. You pay provider rates.
Next
- The Gram Agent — The agent that uses these keys
- Loops — Scheduled agent behaviors also run on your keys