Windmill

Build AI agents with full control and observability

For platform teams who deploy AI agents to production and need full control over permissions, monitoring and audit logs.

Build agents with any LLM provider, including self-hosted models
Give your agents tools via MCP or custom scripts in 20+ languages
Run Claude Code, Codex or Opencode in sandboxes at full power
Track every decision, tool call and code execution with full observability

Trusted by 4,000+ organizations, including 300+ EE customers at scale:

Zoom, Kahoot, Investing.com, CFA Institute, Axians, Photoroom, Pave, Panther Labs, Nocd

Two ways to build AI agents

Use the visual AI builder with tools for structured agent loops, or run full Claude Code, Codex or Opencode inside sandboxes for maximum autonomy. Both can be used as steps of a flow.

Visual AI builder with tools

Build AI agents as loops with tools using Windmill's visual builder. The agent reasons, picks tools (scripts or MCP servers) and executes them. Use agent steps in any flow alongside branching, approvals and error handling.
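
The reason-pick-execute loop described above can be sketched in plain Python. This is a generic illustration of the pattern, not Windmill's implementation: `call_llm` and the tool registry are stand-ins.

```python
# Minimal sketch of an agent loop: the model reasons over the message
# history, picks a tool, the runtime executes it and feeds the result back.
# `call_llm` and `tools` are hypothetical stand-ins, not Windmill APIs.

def run_agent(user_message, tools, call_llm, max_turns=5):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = call_llm(messages)            # model reasons over the history
        if reply.get("tool") is None:         # no tool requested: final answer
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])  # execute chosen tool
        messages.append({"role": "tool", "content": str(result)})
    return None  # turn budget exhausted
```

In Windmill the loop, branching and error handling around it are handled by the flow engine rather than hand-written code.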

AI Agent step

Windmill flows are DAG workflows made of steps. The AI agent step is a ready-to-use node you drop into any flow. It handles LLM calls, tool execution and reasoning loops out of the box, so you focus on your business logic.

Any LLM provider

Use AI agent steps with the LLM of your choice, including self-hosted models. Just provide an API key as a resource and switch providers or models without changing your agent logic.

Built-in presets

Each agent step exposes configurable presets: system_prompt, user_message, output_schema, memory, temperature, max_tokens and more. Set them in the UI or pass them dynamically from previous steps. No boilerplate code to write.
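
As an illustration only (the preset names come from the list above; the values are invented), the configuration an agent step carries might look like:

```python
# Illustrative shape of an agent step's presets. In Windmill these are set
# in the UI or passed dynamically from previous steps, not written as code.
agent_presets = {
    "system_prompt": "You are a support triage assistant.",
    "user_message": "Summarize the incoming ticket.",
    "output_schema": {
        "type": "object",
        "properties": {"priority": {"type": "string"}},
    },
    "memory": True,          # carry conversation history across runs
    "temperature": 0.2,      # low randomness for consistent triage
    "max_tokens": 1024,
}
```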

Scripts as tools

Write tools in 20+ languages (TypeScript, Python, Go, Bash, SQL, and more). Each script's typed parameters and descriptions are automatically used as the tool schema. No separate SDK or adapter needed.
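
A minimal sketch of such a script tool in Python: the pattern of a typed `main` function whose signature and docstring double as the tool schema is the Windmill convention, though the weather lookup here is a hypothetical stub.

```python
# Windmill-style script tool: the typed parameters of `main` and this
# docstring are what generates the tool schema. The body is a stub.

def main(city: str, unit: str = "celsius") -> dict:
    """Return the current temperature for a city."""
    return {"city": city, "unit": unit, "temperature": 21}  # stubbed value
```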

MCP integration

Connect to MCP servers and auto-discover tools. Supports OAuth and custom auth. Use MCP tools alongside native Windmill scripts in the same agent step.

Deploy in one click

Build your agent flow in the editor, test it with live previews and deploy to production in one click. Every deployment is versioned with full history, so you can compare changes and roll back instantly. Integrate with Git sync for CI/CD workflows.

Built on a production-grade workflow engine

Windmill is not an AI framework. It is an open-source workflow engine already used by teams to orchestrate scripts, APIs and infrastructure at scale. AI agents plug directly into that existing platform, with enterprise-grade security, observability and integration built in.

Part of your orchestration layer

  • AI agents are flow steps alongside HTTP calls, database queries and approval gates
  • Retries, error handling and conditional branching apply to agent steps like any other
  • Schedule agents, chain them with existing workflows, trigger them from webhooks or events
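
The retry behavior in the second bullet is ordinary workflow-engine semantics. A generic sketch of the idea, not Windmill's implementation:

```python
# Generic retry-with-exponential-backoff wrapper of the kind a workflow
# engine applies to any step, agent steps included. Illustrative only.
import time

def run_with_retries(step, attempts=3, backoff=0.01):
    for i in range(attempts):
        try:
            return step()
        except Exception:
            if i == attempts - 1:
                raise                       # out of attempts: surface the error
            time.sleep(backoff * 2 ** i)    # wait longer before each retry
```

In Windmill, retry counts and backoff are configured per step in the flow editor rather than coded by hand.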

Full observability out of the box

  • Every agent run is a job with structured logs, timing and resource usage
  • Track each tool call, LLM request and token count in the run detail view
  • Audit trails, run history and flow-level debugging across all steps

Enterprise-grade security

  • SSO, RBAC and granular permissions control who can build and trigger agents
  • Secrets and API keys are managed centrally and never exposed to the LLM
  • Self-host on your infrastructure, air-gapped deployments supported

Start building AI agents today

Get started for free on Windmill Cloud or self-host the open-source version.