
# runTask()

Run a coding task inside a sandboxed container.

## Signature

```ts
function runTask(config: RunConfig, options?: RunOptions): Promise<TaskHandle>
```

```ts
interface RunOptions {
  onProgress?: (message: string) => void;
  onEvent?: (event: ParsedLogEntry) => void;
  onComplete?: (result: RunResult) => void;  // fired when container exits
  onError?: (error: Error) => void;          // fired for infrastructure failures (spawn, volume setup)
}

interface TaskHandle {
  taskId: string;
  logPath: string;
  shadowVolumes: string[];   // dep cache volumes, available immediately after spawn
  wait(): Promise<RunResult>;
  stop(): Promise<void>;
}
```

`runTask` returns a `TaskHandle` immediately after the container spawns; the container may still be running. Use `handle.wait()` to block until completion (same as the old behaviour), or pass `onComplete` to be notified asynchronously.

`onError` fires only for infrastructure failures before/during spawn (can't create a volume, sandbox won't start). `handle.wait()` always resolves; it never rejects. Task-level failures (max turns, agent abort, non-zero exit) come through `onComplete`/`wait()` with the appropriate `status` and `failure_reason`.
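Since `wait()` never rejects, failure handling reduces to inspecting the result. A minimal sketch, assuming only the documented `status`/`failure_reason`/`error` fields; the `summarize` helper and the `RunResultLike` shape are illustrative, not part of the API:

```ts
// Illustrative: mirrors only the RunResult fields this helper reads.
interface RunResultLike {
  status: "completed" | "failed" | "stopped";
  failure_reason: "max_turns" | "infrastructure" | "agent_aborted" | null;
  error: string | null;
}

// Turn a result into a human-readable log line.
function summarize(result: RunResultLike): string {
  if (result.status === "completed") return "task completed";
  if (result.status === "stopped") return "task stopped by caller";
  // status === "failed": failure_reason says why
  switch (result.failure_reason) {
    case "max_turns":
      return "agent hit the turn limit";
    case "agent_aborted":
      return "agent aborted the task";
    default:
      return `infrastructure failure: ${result.error ?? "unknown"}`;
  }
}

// const result = await handle.wait();
// console.log(summarize(result));
```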

## Minimal example

```ts
import { runTask } from "@ysa-ai/ysa/runtime";

const handle = await runTask({
  taskId: crypto.randomUUID(),
  prompt: "refactor the database connection pool",
  branch: "refactor/db-pool",
  projectRoot: "/home/user/myapp",
  worktreePrefix: "/home/user/myapp/.ysa/worktrees/",
});

const result = await handle.wait();
```

## Non-blocking example

```ts
const handle = await runTask(config, {
  onComplete: (result) => {
    console.log("done:", result.status);
  },
  onError: (err) => {
    console.error("spawn failed:", err.message);
  },
});

// handle.shadowVolumes available here, before the container finishes
console.log("volumes:", handle.shadowVolumes);

// Stop from a signal handler or timeout
process.once("SIGINT", () => handle.stop());
```
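A common variant of the pattern above is bounding a task's wall-clock time: race `handle.wait()` against a timer that calls `handle.stop()`. A sketch assuming only the documented `wait`/`stop` methods; the `waitWithDeadline` helper is illustrative, not part of the API:

```ts
// Anything with wait() and stop() works; the real TaskHandle qualifies.
interface Stoppable<R> {
  wait(): Promise<R>;
  stop(): Promise<void>;
}

// Resolve with the task result, or stop the task and reject after `ms`.
async function waitWithDeadline<R>(handle: Stoppable<R>, ms: number): Promise<R> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(async () => {
      await handle.stop(); // stop the container, then surface a timeout
      reject(new Error(`task exceeded ${ms}ms`));
    }, ms);
  });
  try {
    return await Promise.race([handle.wait(), deadline]);
  } finally {
    clearTimeout(timer); // no-op if the deadline already fired
  }
}
```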

## RunConfig fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `taskId` | `string` | required | Caller-assigned UUID for this task |
| `prompt` | `string` | required | Instructions for the agent |
| `branch` | `string` | required | Base branch to create the worktree from. The actual worktree branch is always `task/<taskId[:8]>` |
| `projectRoot` | `string` | required | Absolute path to the project root |
| `worktreePrefix` | `string` | required | Directory where worktrees are created (e.g. `<root>/.ysa/worktrees/`) |
| `provider` | `string` | `"claude"` | Provider name. See Providers |
| `model` | `string` | provider default | Model ID within the provider |
| `maxTurns` | `number` | `60` | Maximum agent turns before stopping with `failure_reason: "max_turns"` |
| `allowedTools` | `string[]` | provider default | Override the tool whitelist |
| `resumeSessionId` | `string` | | Resume an existing session (for refine/continue) |
| `resumePrompt` | `string` | | Custom prompt when resuming a session |
| `resumeWorktree` | `string` | | Reuse an existing worktree path (skips creation) |
| `networkPolicy` | `"none" \| "strict"` | `"none"` | Container network policy. See Network guide |
| `promptUrl` | `string` | | URL the container fetches the prompt from (used by the platform) |
| `shadowDirs` | `string[]` | `["node_modules"]` | Directories shadowed with per-task volumes |
| `depInstallCmd` | `string` | | Command to install dependencies before starting the agent (e.g. `"bun install"`). Runs in an isolated container and installs into the shadow volume, so dependencies are available when the agent starts |
| `depsCacheKey` | `string` | | Stable cache key for the deps shadow volume. When set, the volume is named `shadow-<dir>-<depsCacheKey>` and reused across tasks with the same key, skipping reinstall if the volume already exists. Pass a hash of your lockfiles to invalidate the cache when deps change |
| `miseVolume` | `string` | | Pre-populated mise-installs volume to mount |
| `worktreeFiles` | `string[]` | | Untracked files to copy from the project root into the worktree |
| `extraEnv` | `Record<string, string>` | | Extra environment variables injected into the container |
| `extraLabels` | `Record<string, string>` | | Additional Podman labels on the container. Used by `stopContainer`/`teardownContainer` to target specific containers |
| `proxyRules` | `ScopedAllowRule[]` | | Per-task proxy allow rules. Each rule has `host` and `pathPrefix` fields |
| `serverPort` | `number` | | Host server port to bypass in the network proxy (e.g. dashboard port) |
| `allowCommit` | `boolean` | `true` | Whether the agent can commit to git |
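The `depsCacheKey` row above suggests hashing your lockfiles. A minimal sketch using Node's `crypto`; the helper name and the lockfile paths are examples, not part of the API:

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Derive a stable cache key from lockfile contents. Any change to a
// lockfile changes the key, which selects a fresh shadow-<dir>-<key>
// volume and forces a dependency reinstall.
function depsCacheKeyFromLockfiles(paths: string[]): string {
  const hash = createHash("sha256");
  for (const p of paths) {
    hash.update(readFileSync(p)); // hash contents, not mtimes
  }
  return hash.digest("hex").slice(0, 16); // short, stable prefix
}

// e.g. depsCacheKey: depsCacheKeyFromLockfiles(["bun.lock", "package.json"])
```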

## RunResult fields

| Field | Type | Description |
| --- | --- | --- |
| `task_id` | `string` | The task UUID |
| `status` | `TaskStatus` | Final status: `"completed"`, `"failed"`, or `"stopped"` |
| `session_id` | `string \| null` | Agent session ID (useful as `resumeSessionId` in a follow-up) |
| `error` | `string \| null` | Error message if `status === "failed"` |
| `failure_reason` | `"max_turns" \| "infrastructure" \| "agent_aborted" \| null` | Structured failure reason |
| `log_path` | `string` | Absolute path to the NDJSON log file |
| `duration_ms` | `number` | Wall-clock duration in milliseconds |

## Streaming output

```ts
await runTask(config, {
  onProgress: (msg) => {
    // Lifecycle messages: "creating worktree", "starting container", etc.
    console.log("[progress]", msg);
  },
  onEvent: (event) => {
    // Structured log entries from the agent
    if (event.type === "assistant" && event.text) {
      process.stdout.write(event.text);
    }
    if (event.type === "tool_call") {
      console.log(`[tool] ${event.tool}`);
    }
  },
});
```

`ParsedLogEntry` has `type: "assistant" | "tool_call" | "tool_result" | "system"`, plus optional `text` and `tool` fields.
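Since `ParsedLogEntry` is plain data, a renderer can be a small pure function. A sketch over the shape just described; the `renderEntry` helper and `ParsedLogEntryLike` interface are illustrative, not part of the API:

```ts
// Illustrative: mirrors the documented ParsedLogEntry shape.
interface ParsedLogEntryLike {
  type: "assistant" | "tool_call" | "tool_result" | "system";
  text?: string;
  tool?: string;
}

// Return a printable line, or null for entries a terse view skips.
function renderEntry(entry: ParsedLogEntryLike): string | null {
  switch (entry.type) {
    case "assistant":
      return entry.text ?? null; // agent prose, streamed as-is
    case "tool_call":
      return entry.tool ? `[tool] ${entry.tool}` : null;
    default:
      return null; // tool_result / system: noise in a terse view
  }
}

// usage inside onEvent:
// onEvent: (e) => { const line = renderEntry(e); if (line) console.log(line); }
```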

## Container lifecycle

Two utilities let you manage running containers from outside `runTask()`.

### stopContainer()

Stop and remove a running container, returning the agent session ID (for later resume).

```ts
import { stopContainer } from "@ysa-ai/ysa/runtime";

const sessionId = await stopContainer(taskId, {
  logPath: "/path/to/task.log",   // used to extract sessionId
  provider: "claude",              // defaults to "claude"
  labels: { issue: "42", project: "my-project" },  // match by labels
});

// sessionId can be passed as resumeSessionId in a follow-up runTask()
```

| Param | Type | Description |
| --- | --- | --- |
| `id` | `string` | Task ID (used as a fallback label filter if `labels` is not provided) |
| `opts.logPath` | `string` | Path to the task log file, used to extract the session ID |
| `opts.provider` | `string` | Provider name for session ID extraction (default `"claude"`) |
| `opts.labels` | `Record<string, string>` | Match containers by these Podman labels. If omitted, filters by `label=task=<id>` |
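The returned session ID slots straight into a follow-up run. A sketch of deriving the resume config; the `resumeConfig` helper and `ConfigLike` shape are illustrative, while the field names follow the RunConfig table above:

```ts
// Illustrative: carries only the fields this helper touches; the rest
// of the RunConfig passes through the index signature.
interface ConfigLike {
  taskId: string;
  prompt: string;
  resumeSessionId?: string;
  resumePrompt?: string;
  [key: string]: unknown;
}

// Copy a config and set the resume fields from the RunConfig table.
function resumeConfig(
  original: ConfigLike,
  sessionId: string,
  followUpPrompt: string,
): ConfigLike {
  return {
    ...original,                // keep branch, projectRoot, worktreePrefix, ...
    resumeSessionId: sessionId, // continue the stopped agent session
    resumePrompt: followUpPrompt,
  };
}

// const sessionId = await stopContainer(taskId, { logPath: handle.logPath });
// if (sessionId) await runTask(resumeConfig(config, sessionId, "address review comments"));
```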

### teardownContainer()

Remove a stopped or running container and its associated volumes.

```ts
import { teardownContainer } from "@ysa-ai/ysa/runtime";

await teardownContainer(taskId, {
  labels: { issue: "42", project: "my-project" },
});
```

| Param | Type | Description |
| --- | --- | --- |
| `id` | `string` | Task ID, also used to match volumes (volumes named `*-<id>`) |
| `opts.labels` | `Record<string, string>` | Match containers by these Podman labels. If omitted, filters by `label=task=<id>` |

### Using `extraLabels` for lifecycle management

Pass `extraLabels` to `runTask()` so you can later target that container by your own identifiers:

```ts
const handle = await runTask({
  taskId,
  // ...
  extraLabels: { issue: "42", phase: "analyze", project: "my-project" },
});

// Later, stop just the analyze container for issue 42:
await stopContainer(taskId, {
  labels: { issue: "42", phase: "analyze" },
});
```

Containers always have a `task=<taskId>` label set automatically. `extraLabels` are additive.