Declarative workflows. Sub-second observability. No more 4am DAG forensics.
Most data orchestration platforms are 'Airflow but easier.' That's not enough. Logiciel is built for engineering teams that need declarative pipelines, real lineage, multi-language tasks (SQL, Python, Spark, dbt), and observability that actually works - without inheriting Airflow's operational debt.
If your orchestration setup is held together by tribal knowledge, you're not alone. Teams shopping for orchestration typically need:
Real lineage and observability - platform-level capabilities, not pip-installed plugins held together with hope and best intentions.
Multi-language support - SQL, Python, Spark, dbt - in one orchestrator. That is structurally different from running three orchestrators with shared metadata, where the integration costs are non-trivial.
A managed control plane that scales without an SRE on the team, eliminating a class of operational risk that DIY orchestration creates without obvious benefit to most data teams.
Orchestration that engineers actually want to use.
Trading data, risk models, regulatory reporting - sub-second SLAs and audit-ready governance.
Listing data, transaction pipelines, geospatial analytics - multi-source consolidation.
EHR integration, claims pipelines, clinical analytics - HIPAA-aware infrastructure.
Product analytics, customer 360, usage-based billing - embedded and operational data.
Inventory, pricing, order, and customer pipelines - real-time and high-throughput.
IoT, project, and supply-chain data - operational analytics on hybrid stacks.
| Dedicated Pod | Staff Augmentation | Project-Based Delivery |
|---|---|---|
| Embedded data engineering pod aligned to your sprint cadence - typically 3–6 engineers + a US lead. | Senior data engineers, architects, and SMEs slotted into your team to unblock specific work. | Fixed-scope, milestone-driven engagements with clear deliverables and outcomes. |
We map your stack, workloads, team, and constraints in a working session - not an RFP response.
Reference architecture grounded in your reality, with capacity, cost, and migration plans.
Iterative implementation with weekly demos, code reviews, and your team in the loop.
Managed operations or knowledge transfer - your choice. Both with US-aligned coverage.
Continuous tuning of cost, performance, and reliability against measurable SLAs.
Asset-based orchestration with code-as-config.
Auto-generated, queryable, dashboard-ready.
Define, monitor, alert on per-asset and per-pipeline SLAs.
SQL, Python, Spark, dbt, shell - same scheduler, same observability.
One-click reruns with idempotency and partitioning.
Tooling and playbooks to migrate from Airflow to Logiciel in weeks.
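A rough sketch of what asset-based, code-as-config orchestration looks like in practice. This is an illustration of the model (a Dagster-style decorator API, with dependencies inferred from function parameters), not Logiciel's published SDK - the `@asset` decorator, registry, and `materialize` function here are assumptions defined inline for the example.

```python
# Minimal, self-contained sketch of the asset-first model: the dependency
# graph is derived from function signatures, so the code IS the config.
# The @asset decorator and registry are illustrative, not a real Logiciel API.
import inspect

REGISTRY = {}  # asset name -> (function, upstream asset names)

def asset(fn):
    # Upstream assets are whatever parameters the function declares.
    upstream = list(inspect.signature(fn).parameters)
    REGISTRY[fn.__name__] = (fn, upstream)
    return fn

@asset
def raw_orders():
    return [{"id": 1, "amount": 120}, {"id": 2, "amount": 80}]

@asset
def daily_revenue(raw_orders):
    return sum(o["amount"] for o in raw_orders)

def materialize(name, cache=None):
    """Materialize an asset after recursively materializing its upstreams."""
    cache = {} if cache is None else cache
    if name not in cache:
        fn, upstream = REGISTRY[name]
        cache[name] = fn(*[materialize(u, cache) for u in upstream])
    return cache[name]

print(materialize("daily_revenue"))  # -> 200
```

Because dependencies live in code rather than in a separate scheduler config, lineage, reruns, and partitioned backfills can all be derived from the same graph.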
Airflow is the incumbent - capable, mature, but operationally heavy and DAG-centric, which doesn't match how modern data teams think about assets. Dagster pioneered asset-first orchestration with strong lineage; Prefect simplified the operational model with a managed control plane. Logiciel combines the asset-first model of Dagster with the managed operations of Prefect, plus first-class multi-language support (SQL, Python, Spark, dbt, shell) and built-in observability that none of the three offer in their base tier. For US teams running >100 pipelines, Logiciel typically replaces Airflow plus a separate observability tool (Monte Carlo) plus a separate dbt runner - three line items become one.
Yes - dbt models become Logiciel assets with shared lineage, observability, SLA tracking, and testing. Drop your existing dbt project (manifest, profiles, tests) into Logiciel and the platform orchestrates dbt runs natively, no Airflow shim required. Column-level lineage extends from dbt models out into upstream ingestion sources and downstream BI dashboards, giving you a unified lineage graph instead of dbt-only lineage. dbt tests run as part of Logiciel's pipeline DAG with failures routing through unified incident management. About 80% of our customers run dbt; we treat it as a peer technology, not a tool we're trying to replace.
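To make the "dbt models become assets" idea concrete: dbt already emits a `manifest.json` artifact describing every model and its dependencies, which is enough to build an asset lineage graph. The tiny manifest below is a hand-written stand-in and the mapping function is an illustration of the approach, not Logiciel's actual importer.

```python
# Sketch: derive an asset lineage graph from a (hand-written, simplified)
# dbt manifest. Real manifest.json files have the same nodes/depends_on shape.
manifest = {
    "nodes": {
        "model.shop.stg_orders": {"depends_on": {"nodes": ["source.shop.raw_orders"]}},
        "model.shop.daily_revenue": {"depends_on": {"nodes": ["model.shop.stg_orders"]}},
    }
}

def dbt_lineage(manifest):
    """Return {model_name: [upstream node ids]} for every model in the manifest."""
    return {
        node_id.split(".")[-1]: deps["depends_on"]["nodes"]
        for node_id, deps in manifest["nodes"].items()
    }

lineage = dbt_lineage(manifest)
print(lineage["daily_revenue"])  # -> ['model.shop.stg_orders']
```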
Per active asset, not per task run - predictable when you backfill, replay, or scale. An 'asset' is a managed table, a model, a feature, or a dataset that Logiciel orchestrates and observes. Mid-market customers (5-30 data engineers, 200-500 assets) typically pay $40-90K ARR. Enterprise tiers (1,000+ assets, advanced governance, dedicated TAM) start at $200K ARR. There are no surprise overage bills when you trigger a 12-hour backfill or replay a year of historical data. Pricing is published transparently with workload-grounded TCO comparisons against Airflow + Monte Carlo + dbt Cloud at evaluation time.
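A back-of-envelope check on what per-asset (rather than per-task-run) pricing implies. The tier boundaries come from the figures above; the implied per-asset rates are derived here for illustration and the `per_asset_rate` default is a hypothetical number, not a published price.

```python
# Implied per-asset rate band for the mid-market tier quoted above
# ($40-90K ARR across 200-500 assets). Derived, not published pricing.
def implied_per_asset_rate(arr_low, arr_high, assets_low, assets_high):
    # Cheapest: high asset count at the low end of the ARR band;
    # most expensive: low asset count at the high end.
    return arr_low / assets_high, arr_high / assets_low

low, high = implied_per_asset_rate(40_000, 90_000, 200, 500)
print(low, high)  # -> 80.0 450.0

# Key property of per-asset pricing: run volume does not appear in the bill.
def annual_cost(active_assets, task_runs=0, per_asset_rate=200):
    # task_runs is deliberately unused - a 12-hour backfill changes run
    # counts, not asset counts. per_asset_rate=200 is a hypothetical figure.
    return active_assets * per_asset_rate

assert annual_cost(300, task_runs=1_000) == annual_cost(300, task_runs=1_000_000)
```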
Yes - up to 50 assets in the free tier, no credit card required. The free tier includes asset-based orchestration, declarative pipelines, lineage, basic observability, and Slack alerting. Free-tier limits: no enterprise governance (advanced RBAC, audit logging), no dedicated support, a single environment (no separate dev/staging/prod), and 7-day retention of observability metrics. Most customers start free, prove the pattern on a meaningful workload, and upgrade when their asset count outgrows 50 or when they need enterprise features. About 40% of free-tier users convert to paid within 6 months. The free tier is genuinely useful, not a crippled trial.

Yes - we provide tooling that translates Airflow DAGs into Logiciel assets, typically 80% automated. The remaining 20% is judgment work: collapsing DAG-style dependencies into asset graphs, resolving operator-specific behaviors, and modernizing patterns that Airflow forced (e.g., XCom hacks, manual idempotency). Migration is incremental - most teams move 10-20 DAGs in the first sprint, prove the pattern, then accelerate. Full migration of a 200-DAG Airflow installation typically takes 8-16 weeks. Critically, we don't force a hard cutover; Airflow stays live alongside Logiciel until each DAG's parity is signed off. Migration is fixed-fee per-DAG-bucket so the budget is predictable.
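The core of the automated translation step can be sketched in a few lines: given Airflow-style task dependencies and a mapping from tasks to the tables they produce, collapse the task graph into an asset graph. The DAG and mapping below are made up for illustration; Logiciel's actual migration tooling is not shown here.

```python
# Illustrative sketch of DAG-to-asset translation: collapse a task-level
# dependency graph into asset-level dependencies. Example data is invented.
airflow_deps = {                 # task -> upstream tasks
    "extract_orders": [],
    "transform_orders": ["extract_orders"],
    "load_revenue": ["transform_orders"],
}
task_to_asset = {                # task -> asset it materializes
    "extract_orders": "raw_orders",
    "transform_orders": "stg_orders",
    "load_revenue": "daily_revenue",
}

def to_asset_graph(deps, mapping):
    """Collapse task-level edges into asset-level edges."""
    return {
        mapping[task]: sorted({mapping[u] for u in upstream})
        for task, upstream in deps.items()
    }

print(to_asset_graph(airflow_deps, task_to_asset))
# -> {'raw_orders': [], 'stg_orders': ['raw_orders'], 'daily_revenue': ['stg_orders']}
```

The remaining 20% judgment work shows up exactly where this sketch is too simple: tasks that produce no asset, operators with side effects, and XCom-style data passing that has no clean asset equivalent.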
Native - submit Spark jobs to EMR, Databricks, Glue, AWS-managed Spark, or your own on-prem cluster, all orchestrated as Logiciel assets with full task-level observability. We handle Spark's operational quirks (executor sizing, dynamic allocation, shuffle tuning) through declarative configuration rather than imperative tuning code. Lineage extends through Spark transformations into upstream sources and downstream sinks, including PySpark, Spark SQL, and Scala Spark patterns. For ML workloads, we integrate with feature stores and model serving infrastructure on top of Spark. Customers with heavy Spark workloads (typically Databricks-centric) report Logiciel adds the operational discipline Databricks Workflows lacks at scale.
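"Declarative configuration rather than imperative tuning code" can be pictured as a plain config map rendered into submission flags. The Spark property keys below are real Spark settings; the render function and job path are illustrative, not Logiciel's actual Spark integration.

```python
# Sketch: a declarative Spark config rendered into spark-submit --conf flags.
# Property keys are real Spark settings; the renderer is an illustration.
spark_config = {
    "spark.executor.memory": "8g",
    "spark.dynamicAllocation.enabled": "true",
    "spark.sql.shuffle.partitions": "400",
}

def spark_submit_args(app, config):
    """Render a declarative config dict into a spark-submit command line."""
    flags = [f"--conf {k}={v}" for k, v in sorted(config.items())]
    return " ".join(["spark-submit", *flags, app])

cmd = spark_submit_args("jobs/daily_revenue.py", spark_config)
print(cmd)
```

Keeping tuning in config rather than code means the same pipeline definition can target EMR, Databricks, Glue, or an on-prem cluster by swapping the config, not the logic.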
Yes - run tasks on Kubernetes (EKS, AKS, GKE, on-prem), ECS, Lambda, your own Docker host, or our managed compute pool. Custom executors are written in Python or Go using our open SDK, with full local testing support. Common patterns: GPU workloads on EKS for ML inference, Lambda for low-cost batch jobs, dedicated K8s for high-throughput streaming, on-prem for regulated data that can't leave the perimeter. Executor selection is per-asset, not per-pipeline, so you can mix executor types within a single workflow. Custom executors maintain Logiciel's observability and governance contract - you don't lose lineage or SLA tracking by going custom.
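A hypothetical shape for the custom-executor SDK: executors implement one `run` method, and selection happens per asset, so one workflow can mix executor types while every result flows through the same observability contract. Class and method names here are assumptions, not Logiciel's published API.

```python
# Sketch of a custom-executor interface with per-asset executor selection.
# Names are illustrative; a real executor would launch pods/containers.
from abc import ABC, abstractmethod

class Executor(ABC):
    @abstractmethod
    def run(self, asset_name: str, fn) -> dict:
        """Execute an asset, returning a result record (the observability contract)."""

class LocalExecutor(Executor):
    def run(self, asset_name, fn):
        return {"asset": asset_name, "executor": "local", "value": fn()}

class KubernetesExecutor(Executor):
    def __init__(self, namespace):
        self.namespace = namespace
    def run(self, asset_name, fn):
        # A real implementation would launch a pod; here we just tag the result.
        return {"asset": asset_name, "executor": f"k8s/{self.namespace}", "value": fn()}

# Per-asset selection: mix executor types within a single workflow.
executors = {
    "raw_orders": LocalExecutor(),
    "gpu_features": KubernetesExecutor("ml-gpu"),
}

result = executors["gpu_features"].run("gpu_features", lambda: 42)
print(result)  # -> {'asset': 'gpu_features', 'executor': 'k8s/ml-gpu', 'value': 42}
```

Because every executor returns the same result record, lineage and SLA tracking stay intact regardless of where the task ran.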
Bring 10 of your most-painful Airflow DAGs. We'll convert them into Logiciel assets in a 2-week working session - and you'll see what asset-first orchestration actually feels like.