appgrammar_swarm_partition
Axiom: II — Context | Category: Execution
This tool uses a multi-turn protocol. The initial call returns a prompt and token. Run the prompt on your LLM, then submit the structured output via appgrammar_tool_submit (or the tool-specific submit variant) to advance the operation.
Partition an appgrammar into parallel execution swarms. Analyzes module dependencies to find optimal parallelization boundaries.
Because an appgrammar pre-calculates exact API contracts and data models early in its sequence, execution can be parallelized cleanly. This tool slices the blueprint into independent shards that specialized AI agents can execute simultaneously: a DB agent, a UI agent, an Auth agent, and an Infrastructure agent, for example. Each agent holds only its own shard in context, so quality stays high, and the shards integrate smoothly because the contracts were defined before any agent started writing code.
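The dependency analysis behind sharding can be illustrated with a minimal sketch. This is not the tool's actual algorithm, just one simple way to find parallelization boundaries: treat the module dependency graph as undirected and take its connected components; modules in different components share no contract path and can be built by separate agents. The module names below are hypothetical.

```python
from collections import defaultdict

def partition_swarms(dependencies):
    """Group modules into independent shards. `dependencies` maps each
    module to the set of modules it depends on. Modules with no
    dependency path between them (in either direction) land in
    different shards and can be executed in parallel."""
    # Build an undirected adjacency map; its connected components
    # become the shards.
    adj = defaultdict(set)
    for mod, deps in dependencies.items():
        adj[mod]  # ensure modules with no dependencies still appear
        for dep in deps:
            adj[mod].add(dep)
            adj[dep].add(mod)

    seen, shards = set(), []
    for start in adj:
        if start in seen:
            continue
        # Depth-first flood fill collects one connected component.
        stack, shard = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            shard.add(node)
            stack.extend(adj[node] - seen)
        shards.append(shard)
    return shards

# Hypothetical module graph: auth and ui both depend on shared API
# models, so they form one shard; infra is independent.
deps = {
    "auth": {"api_models"},
    "ui": {"api_models"},
    "infra": set(),
}
print(sorted(sorted(s) for s in partition_swarms(deps)))
# → [['api_models', 'auth', 'ui'], ['infra']]
```

In practice the real partitioner would also weigh shard size and contract boundaries, but connected components capture the core idea: agents only need coordination where a contract edge crosses shards, and the appgrammar fixes those contracts up front.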
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source_appgrammar_id | string (UUID) | Yes | ID of the appgrammar to partition. |
| params | object | No | Optional tool-specific parameters. |
Example
```json
{
  "method": "tools/call",
  "params": {
    "name": "appgrammar_swarm_partition",
    "arguments": {
      "source_appgrammar_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
    }
  }
}
```
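After running the returned prompt on your LLM, the structured output is submitted via appgrammar_tool_submit to advance the operation. A hypothetical follow-up call might look like the sketch below; the `token` and `result` argument names are illustrative assumptions, so use the fields actually returned by the initial call:

```json
{
  "method": "tools/call",
  "params": {
    "name": "appgrammar_tool_submit",
    "arguments": {
      "token": "<token from the initial call>",
      "result": "<structured LLM output>"
    }
  }
}
```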