Windmill vs Kestra
Eight questions teams ask during a bake-off, with an honest answer to each on where Windmill or Kestra is the better pick.
- 01 What you can build: which solution fits your specific use case
- 02 Target: who the platform is built for
- 03 Build experience: how you build on each platform
- 04 Integrations: how the platform integrates with your existing stack
- 05 Migration & lock-in: how hard to get in, how hard to get out
- 06 Enterprise requirements: audit logs, observability, security, performance
- 07 Licensing & pricing: open source, pricing, self-hosting
- 08 Verdict
An open-source workflow engine and developer platform to build and orchestrate all your internal software: scripts, workflows, data pipelines, AI agents and internal apps. Made for engineers who want full code flexibility and local dev experience, both to build internal software and to manage the underlying infrastructure.
An open-source workflow orchestrator for data, AI and infrastructure pipelines. Declarative YAML with a 1,200-plugin catalog. Best for teams that prefer configuration over imperative code and already think in YAML DAGs (Airflow, dbt, Kubernetes).
Which internal software can you build and orchestrate?
Windmill is built to centralize and orchestrate all your internal software in one place. Scripts, workflows, data pipelines, full-code apps, AI agents and operator UIs run on a single runtime, with shared resources, auth and observability. Kestra focuses on workflow orchestration, so you bring your own stack for dashboards, internal apps and agent front-ends.
| Primitive | Windmill | Kestra |
|---|---|---|
| Chain scripts into flows with branching, retries and approval steps | ✓ | ✓ |
| ETL, syncs and scheduled data jobs with parallel branches | ✓ | ✓ |
| Standalone functions exposed as APIs, webhooks or cron jobs without a wrapping workflow | ✓ | — |
| Build agents that call tools, branch on outputs and run as workflows | ✓ | — |
| Isolated environments with persistent volumes for running agents | ✓ | — |
| Drag-and-drop dashboards and admin tools with built-in components | ✓ | Enterprise only |
| Custom dashboards and admin tools built in React or Svelte | ✓ | — |
| Cron jobs with retries, error handling and alerting | ✓ | ✓ |
Who is each platform built for?
Windmill is built for developer-led teams where engineers own the platform end-to-end. Kestra is designed for teams that prefer declarative YAML over imperative code, including data, ops and SRE engineers used to config-driven tools.
Primary audience
Developer-led teams. Engineers own the platform end-to-end: code, Git, local dev, workspace forks, code review, CI/CD, AI coding tools, infrastructure as code.
Teams that prefer declarative YAML over imperative code. Data, ops and SRE engineers already comfortable with config files (Kubernetes manifests, dbt configs, Airflow DAGs) onboard faster than they would on a code-first platform.
Reviewing and editing flows
Non-developers consume through auto-generated UIs, custom apps and an operator role. Reading or editing the flow logic itself still requires comfort with code.
Anyone comfortable reading YAML can inspect flows. Meaningful edits still require understanding plugin task types, Pebble templating and trigger configs, which is its own technical skill.
How do you build on each platform?
Windmill gives you full code flexibility with a local dev loop and whole-workspace Git sync. Scripts, flows, apps, resources, permissions and infrastructure all live in Git like application code. Kestra uses a declarative YAML layer familiar to teams that work with config files (Kubernetes, dbt, Airflow DAGs).
Compose steps visually as a DAG with branches, loops and approval steps. Each step can be written in 20+ languages.
Authoring
Build scripts in 20+ languages in a dedicated script editor. Windmill parses the function signature to auto-generate the input UI, JSON schema and argument types. Dependencies resolve automatically with a per-script lockfile, and relative imports work across Python and TypeScript scripts in the same workspace. Arguments pass between flow steps as direct return values (no templating). Any public or private package is a first-class import.
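As a concrete illustration, a Windmill-style script is just a module exposing a `main` function: the parameter names, type hints and defaults are what the platform parses to generate the input form and JSON schema. A minimal sketch (the user-record logic here is hypothetical):

```python
# A Windmill-style script: the platform reads main()'s signature
# (names, type hints, defaults) to auto-generate the input UI and
# argument schema. The body itself is plain Python.
def main(email: str, plan: str = "free", notify: bool = True) -> dict:
    # Hypothetical step logic: build a normalized user record.
    user = {"email": email.lower().strip(), "plan": plan}
    if notify:
        user["welcome_sent"] = True
    # The return value is handed to the next flow step directly,
    # with no templating layer in between.
    return user
```

In a flow, a downstream step would receive this dict as a direct input argument rather than through a templating expression.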
Kestra also has a script editor for Python, Node.js, R, Shell and other language scripts, saved as namespace files. But a script can't run on its own: every execution happens as a task inside a YAML flow, which defines inputs, outputs, dependencies and triggers. No standalone script primitive with its own UI or API endpoint.
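For contrast, a minimal Kestra flow has roughly this shape (a sketch based on Kestra's documented YAML model; the ids, namespace and cron expression are illustrative):

```yaml
id: hello_report
namespace: company.team

tasks:
  - id: build_report
    type: io.kestra.plugin.scripts.python.Script
    script: |
      print("rows processed: 42")

triggers:
  - id: every_morning
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *"
```

The Python body is ordinary code, but it only executes as a task inside this flow definition; there is no way to invoke `build_report` on its own as an API endpoint.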
Local dev & IDE
Run scripts locally with the Windmill CLI. VS Code extension, native language tooling (LSP, type-checking, linting) and AI coding tools (Claude Code, Codex).
YAML schema with validation in VS Code and JetBrains plugins. Local server for running flows before deploy.
Language runtimes
TypeScript (Deno, Bun, Node.js), Python, Go, Bash, SQL, PowerShell, PHP, Rust and more (20+ languages total). Any npm, PyPI, Go, Maven or Cargo package is a first-class import with automatic dependency resolution.
Python, Node.js, R, Shell, PowerShell, Julia and Ruby via io.kestra.plugin.scripts.* plugins. Each runs inside a Docker container with a configurable image, so packages install via pip / npm / gem / etc. during the task.
Data handling
First-class data tables with SQL queries, versioning and a browse / edit UI, plus native S3 integration for large payloads, files and persistent storage across scripts, flows and apps.
S3 access goes through the io.kestra.plugin.aws.s3.* plugin. Per-task input/output files and a built-in KV store cover smaller values. No dedicated data tables primitive.
Resources & secrets
Resources (typed JSON for credentials, connection info, configs) and Variables (individual secrets) are first-class and reusable across scripts, flows and apps. They're encrypted at rest, scoped via folders and groups for access control, and versioned in Git. External secret backends (Vault, AWS Secrets Manager) are Enterprise only.
Open source ships basic secrets as base64-encoded environment variables. The UI secret manager, namespace- and tenant-scoped secrets, and external backends (Vault, AWS Secrets Manager) are Enterprise only.
Git & CI
Full IaC: scripts, flows, apps, resources, variables, secrets, schedules, folders, groups and permissions are all files in Git, deployed via the CLI and Git sync. Internal tooling runs like application code: PRs, reviews, rollbacks, CI/CD. Git sync is free for up to 2 users; beyond that is Enterprise only.
The Kestra CLI + Git sync deploy flows and namespace files, both available in open source. Users, RBAC and tenant settings still live in the UI or behind the API.
How does the platform integrate with your existing stack?
Windmill lets you import any public package (npm, PyPI, Go or Maven) or your own, and call the vendor's real SDK directly. Kestra ships a curated catalog of 1,200+ plugins for mainstream data tools like Snowflake, dbt, Airbyte and Databricks.
Type `import stripe` in the script editor and run. Windmill detects the import, resolves the version and pins it in a per-script lockfile automatically. No venv, no pip install, no restart: the platform manages dependencies for you.
Connecting out
Any npm, PyPI, Go or Maven package is a first-class import with automatic per-script dependency resolution and lockfiles, and no plugin layer in between: you call the vendor's real SDK with full type inference and auto-completion in the editor. 50+ pre-built resource types cover common databases (Postgres, Snowflake, BigQuery), SaaS (Slack, Stripe, GitHub, Notion, OpenAI) and infrastructure (S3, Redis, Kafka): paste an API key into a typed form and the resource is shared across scripts, flows and apps, encrypted at rest. A community Hub has reusable scripts and flows for common tasks.
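The difference is easiest to see in code. In the Windmill model a step calls the library directly; below, Python's built-in `sqlite3` stands in for a vendor SDK so the sketch stays self-contained (with a real SDK such as `stripe`, Windmill would resolve and pin the package automatically):

```python
import sqlite3

def main(min_amount: int = 100) -> list:
    # Direct SDK calls with no plugin task type in between: the step
    # is ordinary Python against the library's own API. sqlite3 is a
    # stand-in here for a real vendor SDK.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE payments (id INTEGER, amount INTEGER)")
    con.executemany("INSERT INTO payments VALUES (?, ?)",
                    [(1, 50), (2, 150), (3, 300)])
    rows = con.execute(
        "SELECT id, amount FROM payments WHERE amount >= ? ORDER BY id",
        (min_amount,),
    ).fetchall()
    con.close()
    return rows
```

In the plugin model, the same query would be expressed as a task type plus configuration keys, with the SQL surfaced through the plugin's own schema rather than the library's API.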
A curated catalog of 1,200+ plugins with official connectors for Snowflake, BigQuery, dbt, Airbyte, Salesforce, Databricks and most common data-stack tools. Each plugin bundles tasks, triggers and connection types referenced from the flow YAML, with credentials supplied through pluggable secret backends and configuration that lives in Git alongside the flow. Internally a task can wrap SQL, Python, shell or an API call; from the flow definition's perspective the surface is uniform YAML.
Receiving events
Native triggers for HTTP, cron, Kafka, NATS, Postgres CDC, SQS, MQTT, SMTP and WebSocket. Each trigger is a typed primitive in the UI: configure topic, subject or path, attach it to a script or flow, and Windmill handles the connection, replay and consumer-group state. Every script also gets an HTTP endpoint and a webhook URL for free, so most receive paths need zero glue code.
Five core trigger types: Schedule (cron), Flow (run on another flow's completion), Webhook, Polling (check an external system on a cadence) and Realtime (react to events with millisecond latency). Plugin-based triggers extend the same model to Kafka, AMQP, JMS, MQTT and S3 events.
Extending
A new integration is a script with a new import: save and run, no restart, no image rebuild. The loop is hot-reload all the way through, in the web editor, the CLI and Git sync. Custom logic can be exposed as a typed script, packaged as a resource type for shared use, or published to the private workspace Hub for the team.
Three escape hatches. Write a full Java plugin against the Kestra SDK and ship it as a JAR alongside the worker for first-class integration. Drop to an inline Python, Node or shell script inside a flow for one-off logic. Or save code as namespace files and call them from any flow in that namespace, the closest analogue to shared scripts.
How hard to get in, and how hard to get out?
Windmill keeps switching cost low. Step code is standard TS, Python, Go, Bash or SQL that runs anywhere, including after you leave Windmill. Kestra fits naturally if you already think in YAML DAGs (from Airflow or similar) and plan to stay.
Getting in
Paste a function body into the script editor: Windmill infers args, generates a UI and handles dependencies. No YAML to learn. Triggers, schedules, variables and apps migrate one by one.
If you already think in YAML DAGs (Airflow), the mental model transfers. Each task becomes a plugin reference, with scripts inline or saved as namespace files.
Getting out
Step logic is already standard code (TS, Python, Go, Bash, SQL) that runs anywhere. The CLI exports the full workspace as plain files. What you lose leaving Windmill is the runtime, not the step code.
Migration difficulty depends on what your flows use. Inline Python, Node or Shell scripts port cleanly, since the code body is standard. Ecosystem plugins (io.kestra.plugin.snowflake.*, stripe.*, etc.) need rewriting against the vendor's SDK. Built-in flow primitives (subflows, retries, conditionals) and Pebble templating ({{ inputs.x }}) also need reimplementing on the new platform.
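To make the porting claim concrete: the body of an inline Kestra Python task is plain code, so moving it elsewhere means little more than wrapping it in a function. The record-cleaning logic below is a hypothetical example:

```python
# Hypothetical inline-task body, unchanged apart from the function
# wrapper: this runs identically as a Kestra Script task, a Windmill
# script, or a bare `python clean.py`.
def clean_records(records: list) -> list:
    cleaned = []
    for r in records:
        if not r.get("email"):
            continue  # drop rows with no contact address
        cleaned.append({
            "email": r["email"].lower(),
            "active": bool(r.get("active")),
        })
    return cleaned
```

What does not port this way is anything expressed through plugin task types or Pebble expressions like `{{ inputs.x }}`: that part of the flow has to be rebuilt in the target platform's own primitives.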
Audit logs, observability, security, performance
Both cover the enterprise basics: RBAC, SSO, audit logs and SOC 2. Windmill is faster for short, high-volume workloads like webhooks, alerts and small scripts, with roughly 3× lower per-task overhead. Kestra is a good fit for long-running data tasks where per-task overhead doesn't matter.
Observability
Real-time streaming logs, per-run inputs / outputs / duration, built-in worker queue metrics and a Prometheus exporter. Trace ID on every job.
Execution logs, task-level timings, metrics via OpenTelemetry. Dashboards per namespace.
Audit logs
Full trail of who ran / edited / deployed what. Extended retention is Enterprise only.
Tracks platform events and user activity. Enterprise only.
Security
SOC 2 Type II compliant. RBAC, SSO (up to 10 users), encrypted secrets at rest and sandboxed script execution in open source. Uncapped SSO, audit logs and advanced access controls (SCIM, SAML) are Enterprise only.
SOC 2 compliance, RBAC, SSO (SAML / OIDC) and secret manager integrations are all Enterprise only.
Multi-tenancy & isolation
Multiple isolated workspaces on the same instance, each with their own users, resources, secrets and access controls. Free tier is capped at 3 workspaces; unlimited is Enterprise only.
Single tenant in open source. Namespaces organize flows but share the same users, secrets and access controls across the whole instance. True tenant isolation is Enterprise only.
Performance
~10ms cold starts. Dedicated-worker mode (Enterprise only) keeps runtimes and dependencies pre-warmed. Roughly 3× faster than Kestra on lightweight tasks on equivalent hardware.
Queue-based, horizontally scalable. Per-task overhead from JVM startup, plugin loading and state persistence. Negligible for multi-minute tasks.
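Per-task overhead is easy to measure yourself during a bake-off. The harness below only shows the shape of the measurement: submit N trivial tasks, time the batch, and divide. Here the "task" is an in-process no-op, so the numbers illustrate the method; against a real engine you would submit N one-line flows over its API and compare wall-clock time per task:

```python
import time

def measure_overhead(run_task, n: int = 1000) -> float:
    """Average wall-clock seconds per task over n runs of a trivial task."""
    start = time.perf_counter()
    for _ in range(n):
        run_task()
    return (time.perf_counter() - start) / n

# In-process stand-in for "submit a trivial flow and wait for it".
noop = lambda: None
per_task = measure_overhead(noop)
```

Run the same harness against each engine with an identical no-op flow; the spread between the two averages is the per-task overhead the verdict section refers to.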
Open source, pricing and self-hosting
Windmill publishes upfront per-seat and per-worker Enterprise pricing, no sales call needed. Kestra's OSS core is Apache 2.0, more permissive than Windmill's AGPLv3 for redistribution. Both ship Enterprise features in separate proprietary codebases and both are free to self-host the core.
Open-source license
AGPLv3 core, free and unlimited self-hosted. Enterprise features (SSO, dedicated workers, audit logs, external secret backends) ship in a separate proprietary codebase. Managed cloud available.
Apache 2.0 core, free and unlimited self-hosted. Enterprise features (RBAC, SSO, audit logs, secret manager backends, multi-tenancy) ship in a separate proprietary codebase. Managed cloud available.
Enterprise pricing
Enterprise adds SSO, audit logs, dedicated workers, advanced worker groups. Public per-seat and per-worker pricing on the pricing page.
Enterprise adds SSO, multi-tenancy, audit logs, advanced RBAC. Pricing not public; sales conversation required.
The verdict
Windmill and Kestra solve orchestration with two different grammars. Kestra models a workflow as a declarative YAML flow where every task is a plugin reference. Windmill models a workflow as a graph of real scripts where each script is also a standalone primitive with its own UI, API endpoint and scheduler.
Kestra can be a good fit if your team already thinks in YAML DAGs, your workload is mainstream data orchestration that its plugin catalog already covers, and Apache 2.0 licensing of the core matters for redistribution.
Windmill also works very well for those use cases, and goes considerably deeper. The developer experience is built around real language runtimes with a local-dev loop, AI coding tools against real source files, and whole-workspace Git sync for everything the platform manages. The runtime scope is wider too: the same platform that runs your workflows also powers data pipelines, AI agents and both low-code and full-code internal apps, with shared auth, secrets and observability across all of them. For short, high-volume workloads, Windmill is roughly 3× faster than Kestra on equivalent hardware. More of the enterprise foundation ships in open source, and Enterprise pricing is published upfront instead of gated behind a sales call.
The switching cost is also asymmetric: Windmill step logic is standard code that runs anywhere after you leave, while Kestra flows built on ecosystem plugins and Pebble templating have to be rewritten on any other platform. If you're deciding between the two, the fastest way to judge is to spend an afternoon in each.
Build your internal platform on Windmill
Scripts, flows, apps, and infrastructure in one place.