Tests + anomaly detection + lineage. Built for data that the C-suite actually trusts.
The data quality problem usually isn't 'not enough tests.' It's that the tests you have don't catch what actually goes wrong. Logiciel combines rule-based testing, anomaly detection, and lineage-aware alerting - so when something breaks upstream, the right team finds out in minutes, not when the CMO opens the Monday dashboard.
What 'data quality' really looks like in most US teams:
Teams shopping data quality software typically have:
A dbt suite with 1,000+ tests that catches schema issues but misses business-meaningful changes - a sign that rule-based coverage has hit its ceiling.
An ad-hoc set of SQL alerts in Slack that everybody mutes - volume without signal. The fix is fewer, higher-quality alerts, not more rules.
An exec who has asked 'why don't we have data quality monitoring?' three quarters in a row - the trigger most leaders need to move from documentation-only to enforcement-grade quality.
Quality that travels with the data.
Trading data, risk models, regulatory reporting - sub-second SLAs and audit-ready governance.
Listing data, transaction pipelines, geospatial analytics - multi-source consolidation.
EHR integration, claims pipelines, clinical analytics - HIPAA-aware infrastructure.
Product analytics, customer 360, usage-based billing - embedded and operational data.
Inventory, pricing, order, and customer pipelines - real-time and high-throughput.
IoT, project, and supply-chain data - operational analytics on hybrid stacks.
| Dedicated Pod | Staff Augmentation | Project-Based Delivery |
|---|---|---|
| Embedded data engineering pod aligned to your sprint cadence - typically 3–6 engineers + a US lead. | Senior data engineers, architects, and SMEs slotted into your team to unblock specific work. | Fixed-scope, milestone-driven engagements with clear deliverables and outcomes. |
We map your stack, workloads, team, and constraints in a working session - not an RFP response.
Reference architecture grounded in your reality, with capacity, cost, and migration plans.
Iterative implementation with weekly demos, code reviews, and your team in the loop.
Managed operations or knowledge transfer - your choice. Both with US-aligned coverage.
Continuous tuning of cost, performance, and reliability against measurable SLAs.
Schema, freshness, row-level, custom SQL - versioned in Git.
Domain-specific rules co-owned with stewards.
Source-to-warehouse reconciliation for revenue, inventory, customer counts.
Volume, distribution, and freshness anomaly models trained per dataset.
Lineage-aware alert routing with severity tiers.
Per-domain quality SLAs reported to business owners.
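Two of the rule primitives above - freshness and volume floors - can be sketched in a few lines. This is an illustrative sketch only; the function names and thresholds are hypothetical, not Logiciel's actual API.

```python
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, max_lag: timedelta) -> bool:
    """Pass if the dataset was loaded within the allowed lag window."""
    return datetime.now(timezone.utc) - last_loaded_at <= max_lag

def check_row_count(row_count: int, floor: int) -> bool:
    """Pass if today's load meets the minimum expected volume."""
    return row_count >= floor

# Example: orders must land within 2 hours and carry at least 10k rows.
fresh = check_freshness(datetime.now(timezone.utc) - timedelta(minutes=30),
                        timedelta(hours=2))
voluminous = check_row_count(12_450, 10_000)
print(fresh and voluminous)  # True: both checks pass
```

In practice these definitions live in Git next to the dbt project, so quality rules are reviewed and versioned like any other code.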
Logiciel includes the same rule primitives as Great Expectations and Soda (schema, custom SQL, freshness, distribution) plus ML-based anomaly detection, lineage-aware routing, stakeholder dashboards, and steward workflows - all managed, not self-hosted. Great Expectations is open-source and capable but operationally heavy: you run the runtime, the metadata store, and the alerting yourself. Soda is managed but rule-only, and the issues that hurt are rarely the ones you wrote rules for. Logiciel layers anomaly detection on top of rules, catching the 'this number changed 30% and nobody knows why' patterns that pure rule-based systems miss entirely. For US mid-market and enterprise customers, Logiciel typically replaces Great Expectations plus a separate alerting layer plus a separate stakeholder dashboard.
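The 'changed 30% and nobody knows why' pattern is the kind of thing a simple statistical baseline catches without anyone writing a rule. A minimal sketch, assuming a z-score detector over a trailing window (Logiciel's actual models are not specified here):

```python
import statistics

def is_anomalous(history: list[float], latest: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag `latest` if it sits more than `z_threshold` standard
    deviations from the mean of its own trailing baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

# A daily metric that suddenly drops ~30% trips the detector even
# though no hand-written rule mentions this metric at all.
baseline = [100.0, 102.0, 98.0, 101.0, 99.0, 103.0, 97.0]
print(is_anomalous(baseline, 70.0))   # True
print(is_anomalous(baseline, 100.5))  # False
```

The point of layering this over rules is coverage: rules encode what you already know can break; per-dataset baselines catch what you didn't think to write down.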
Per-domain dashboards and signoff workflows let stewards co-own quality without writing SQL. Stewards see their domain's quality SLAs (freshness, accuracy, completeness, timeliness), can author business-rule quality checks via templates (no SQL required for common patterns), approve anomaly investigations, and sign off on schema changes. Engineering writes the technical primitives; stewards govern the meaning. This split - instead of forcing stewards to learn SQL or forcing engineers to manage business rules - is what makes data quality programs sustainable. For regulated customers (SOX, HIPAA, GDPR), steward signoff is auditable evidence of data governance.
Quality monitoring runs on metadata (schema, row counts, distributions over hashed values, freshness timestamps) and sampled non-PII data; PII stays masked or left in place. For deeper analysis on PII-containing fields, we support customer-managed encryption keys and tokenization patterns where the platform sees only obfuscated values. Auto-classification identifies PII columns (name, email, SSN, payment data) and applies appropriate masking automatically. For HIPAA, GDPR, CCPA, and other regimes, we configure region-specific PII rules by default and provide auditable evidence of masking enforcement. PII handling is a frequent regulated-customer concern, and we have specific reference architectures for healthcare and financial services.
Anomaly detection live on your top datasets within 24 hours. Connect your warehouse and we auto-profile your top 100 datasets, bootstrapping 30-60 day baselines from query history - no waiting period - so anomaly detection starts immediately. The first surfaced issue typically arrives within 48-72 hours and often catches a real problem the team hadn't noticed. Week 1 is baseline stabilization; weeks 2-4 are routing and stakeholder dashboards; by day 30, most teams have eliminated 60-80% of 'is the data right?' Slack threads and see measurable improvement in stakeholder trust. First-quarter ROI is typically expressed as engineer hours regained plus reduced financial close cycle time.
Yes - Logiciel runs your existing dbt tests as part of unified pipeline monitoring and adds anomaly detection on top. Drop your dbt project (manifest, tests, profiles) into Logiciel and the platform orchestrates dbt runs, surfaces test results in unified observability, and routes failures through lineage-aware alerting. dbt's `not_null`, `unique`, `accepted_values`, and custom tests all flow through naturally. Logiciel adds the layer dbt tests can't reach: anomaly detection on volumes and distributions, schema drift detection, freshness lag monitoring, and stakeholder SLA dashboards. About 80% of our customers run dbt; we make dbt's quality story complete rather than competing with it.
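After each dbt run, the results are available in dbt's standard `run_results.json` artifact, which is the natural integration point for routing failures. A minimal sketch of that parsing step - the `results[].unique_id` / `status` fields are dbt's own artifact schema, but the routing itself is a placeholder, not Logiciel's implementation:

```python
import json

def failed_tests(run_results_path: str) -> list[str]:
    """Return unique_ids of dbt tests that failed or errored,
    ready to hand off to lineage-aware alert routing."""
    with open(run_results_path) as f:
        artifact = json.load(f)
    return [
        r["unique_id"]
        for r in artifact["results"]
        if r["status"] in ("fail", "error")
    ]

# Example artifact, trimmed to the fields this sketch reads.
sample = {
    "results": [
        {"unique_id": "test.analytics.not_null_orders_id", "status": "pass"},
        {"unique_id": "test.analytics.unique_orders_id", "status": "fail"},
    ]
}
with open("run_results.json", "w") as f:
    json.dump(sample, f)

print(failed_tests("run_results.json"))
# ['test.analytics.unique_orders_id']
```

Because the artifact carries test unique_ids, each failure can be matched to the model it guards and routed to the team that owns that node in the lineage graph.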
Yes - operational databases (Postgres, MySQL, MongoDB, SQL Server, Oracle), data lakes (S3, ADLS, GCS with Iceberg/Delta/Hudi), streaming sources (Kafka, Kinesis, Pub/Sub), and SaaS source systems (Salesforce, HubSpot, NetSuite) all support quality monitoring. The depth of monitoring depends on the source: operational DBs and lakes get full anomaly detection; streaming sources get latency and throughput SLAs; SaaS sources get schema and freshness monitoring. Many customers monitor source-system quality (catching upstream issues before they propagate to the warehouse) in addition to warehouse-side monitoring - one of the patterns that most reliably shifts quality work from reactive to proactive.
Yes - up to 25 datasets monitored free, forever, with full anomaly detection, freshness monitoring, schema drift, lineage routing, and Slack alerting on those datasets. No credit card, no time limit, no feature gating within that scope. About 30% of free-tier users upgrade within 6 months, when their dataset count outgrows 25 or enterprise governance becomes important. The other 70% stay on free, which is the design goal: making data quality accessible to teams that can't budget enterprise tooling but still need trustworthy pipelines. The free tier is functionally complete for small teams, not a crippled marketing trial.
25 datasets, anomaly detection included, no credit card. Find out whether we catch the next data quality issue before your stakeholders do.