Coalesce Automation Inc.

03/09/2026 | Press release | Distributed by Public on 03/09/2026 13:58

Coalesce Quality: Data Observability as Part of the Data Operating Layer

Today, Coalesce announced the acquisition of SYNQ. As part of the acquisition, SYNQ becomes Coalesce Quality, bringing data observability into the Coalesce data operating layer.

The announcement explains why the companies are coming together. This post focuses on what that looks like inside the product.

Coalesce Quality now operates alongside Coalesce Transform and Coalesce Catalog. Together, these allow teams to build pipelines, discover and govern data assets, and monitor their reliability within a single platform.

Transform handles the development and execution of data pipelines that power analytics and AI workloads. Catalog documents datasets, tracks lineage, and captures ownership and governance metadata so teams can understand how data is produced and used. Quality monitors those data assets, detecting anomalies and helping teams investigate potential root causes.

Bringing these capabilities together allows teams to build, understand, and operate their data with clearer ownership and faster issue detection, without the gaps that come from stitching tools together.

A unified data operating layer

Instead of running in separate systems, pipeline development, governance, and monitoring now operate with shared context.

This allows teams to understand how data flows through the platform, which assets are most important, who is responsible for them, and how issues propagate across upstream pipelines and downstream consumers.

Here are a few workflows this enables for data teams:

Monitoring under one roof - sync your key data assets and products from the catalog and define monitoring blueprints that specify expectations. These blueprints automatically deploy the relevant monitors and tests across your data platform.

Centrally managed ownership model - use ownership definitions from the catalog to control who gets notified, and how. Test failures, anomaly alerts, and other issues can be routed alongside other data incidents without fragmenting alerting workflows.

Always-on asset status - consumers can see a live status overview directly in the catalog, including whether there are active quality issues or ongoing incidents affecting the asset.

Manage data incidents - treat test failures or anomaly detections as incidents, track downstream impact, and manage communication from detection through resolution.
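To make the blueprint idea concrete, here is a minimal sketch of how a monitoring blueprint could expand into per-asset monitors. All names and the data shapes here are illustrative assumptions, not Coalesce Quality's actual API.

```python
# Hypothetical sketch: a monitoring "blueprint" that expands into
# concrete monitors for every catalog asset matching a tag.
# Names and structures are illustrative, not the product's API.
from dataclasses import dataclass

@dataclass
class Blueprint:
    match_tag: str       # which catalog assets this blueprint targets
    monitors: list[str]  # monitor types to deploy (e.g. freshness, volume)

def deploy(blueprint, catalog_assets):
    """Return the (asset, monitor) pairs the blueprint would create."""
    return [
        (asset["name"], monitor)
        for asset in catalog_assets
        if blueprint.match_tag in asset["tags"]
        for monitor in blueprint.monitors
    ]

assets = [
    {"name": "orders", "tags": ["P1"]},
    {"name": "staging_events", "tags": []},
]
bp = Blueprint(match_tag="P1", monitors=["freshness", "row_count"])
print(deploy(bp, assets))  # [('orders', 'freshness'), ('orders', 'row_count')]
```

The point of the pattern is that expectations are declared once, centrally, and the concrete monitors fall out of the catalog's tags.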

Building reliable data with built-in data observability

As part of building pipelines in Transform, teams already define tests alongside their transformation logic to catch issues such as null values appearing in a column, missing relationships between tables, or violations of business rules.

These tests are good at catching issues that can be expressed explicitly in SQL. You can think of these as catching "known unknowns."
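A minimal sketch of such an explicit, SQL-expressible test, using an in-memory SQLite database for illustration (table and column names are made up):

```python
# Minimal sketch of an explicit data test: assert that a column
# contains no NULLs. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)]
)

# The test: count rows violating the expectation; zero means pass.
violations = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL"
).fetchone()[0]

assert violations == 0, f"{violations} NULL customer_id rows found"
print("not_null test passed")
```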

However, many real-world data issues are harder to express as static tests. Examples include a drop in order volume for a specific segment, unexpected changes in row counts over the weekend, or maintaining freshness thresholds across dozens of data sources with different update patterns.

These types of issues are better understood as "unknown unknowns."

This is where Coalesce Quality's built-in time-series anomaly detection comes in. Instead of relying only on predefined rules, it continuously analyzes patterns across datasets and detects changes that would otherwise be difficult to capture through tests alone.
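To illustrate the idea (not the product's actual model), a simple z-score rule flags values that deviate sharply from recent history; a real detector would also account for seasonality and trend:

```python
# Illustrative time-series anomaly check: flag points more than
# 3 standard deviations from recent history (a simple z-score rule).
import statistics

def is_anomaly(history, value, threshold=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

daily_orders = [1020, 980, 1005, 995, 1010, 990, 1000]
print(is_anomaly(daily_orders, 1003))  # normal day -> False
print(is_anomaly(daily_orders, 400))   # sudden drop -> True
```

Even this toy version shows why anomaly detection complements static tests: no one would have written a rule saying "orders must exceed 960," yet the drop is obvious against the learned baseline.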


In practice, monitoring can be deployed automatically across critical assets. For example, a set of "P1" assets tagged in Catalog can automatically have anomaly monitors deployed through centrally managed deployment rules.

Because Quality understands lineage across pipelines built in Transform, monitoring can be applied across the upstream dependencies feeding those assets rather than isolated tables or models. If a monitor detects a change in expected patterns, such as a drop in weekend order volume caused by a pipeline failure, the issue is surfaced immediately, allowing teams to investigate the pipeline producing the data rather than debugging individual downstream datasets.
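Lineage-aware coverage can be sketched as a graph traversal: given a critical asset, walk its upstream dependency graph so monitors cover the pipelines feeding it, not just the asset itself. The graph and asset names below are hypothetical.

```python
# Hypothetical sketch of lineage-aware monitor deployment: compute the
# transitive upstream dependencies of a critical asset so monitoring
# covers the whole feeding pipeline. Asset names are made up.

def upstream_closure(lineage, asset):
    """All transitive upstream dependencies of `asset`."""
    seen, stack = set(), list(lineage.get(asset, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(lineage.get(dep, []))
    return seen

lineage = {  # asset -> direct upstream dependencies
    "orders_dashboard": ["fct_orders"],
    "fct_orders": ["stg_orders", "stg_customers"],
    "stg_orders": ["raw_orders"],
}
targets = {"orders_dashboard"} | upstream_closure(lineage, "orders_dashboard")
print(sorted(targets))
```

Monitoring the closure rather than the leaf asset is what lets an alert point at the pipeline that broke instead of the dataset that merely inherited the damage.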

Making data quality visible across the organization

All data quality issues, whether detected by anomaly monitors or by failing tests, are automatically surfaced in Catalog.

Assets with active issues display a real-time warning badge. Anyone searching for or using a dataset can immediately see whether the asset currently has known data quality issues.


Instead of discovering problems downstream in dashboards or reports, users can view an asset's health status before using it. When an issue is detected, the relevant owner can be automatically notified via tools such as Slack, Teams, or PagerDuty using the ownership definitions stored in Catalog, while the issue remains visible to anyone viewing the asset.
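The routing described above amounts to a lookup from catalog ownership to a notification channel. A minimal sketch, with hypothetical owners, teams, and channel names:

```python
# Illustrative alert routing: map catalog ownership definitions to
# notification channels so alerts reach the owning team.
# Owners, teams, and channels below are hypothetical examples.

owners = {"fct_orders": "commerce-team"}           # from the catalog
channels = {"commerce-team": "#alerts-commerce"}   # team -> channel

def route_alert(asset, message):
    team = owners.get(asset, "data-platform")      # fallback owner
    channel = channels.get(team, "#data-alerts")   # fallback channel
    return f"{channel}: [{asset}] {message}"

print(route_alert("fct_orders", "anomaly: weekend order volume dropped"))
```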

Shared metadata unlocks data quality AI workflows

Because Quality operates alongside Transform and Catalog, it has access to much richer context when issues occur. This includes lineage relationships, transformation logic, asset tags, historical monitor results, recent Git commits, and the code structures and documentation stored alongside assets in Catalog. This context is what makes AI-assisted investigation possible. When an issue is detected, data engineers are guided directly to the transformation or dependency most likely responsible, rather than having to trace it manually.


The same context can also be used proactively. By analyzing transformation logic, schema structure, lineage relationships, and historical incidents, the system can recommend testing and monitoring strategies for new datasets or highlight gaps in existing coverage.

How to get started

We're excited to begin integrating Coalesce Quality into the Coalesce platform and will share more details on rollout and availability in the coming weeks.

For existing customers, Coalesce Quality will become an extension of the workflows already used in Transform and Catalog, allowing teams to monitor pipelines, investigate issues, and manage reliability without leaving the platform.

If you'd like to see how Coalesce Quality works in practice, you can request a demo, reach out to your Coalesce account team for early access, or join one of our upcoming virtual sessions.

Coalesce Quality: Closing the Gap Between Data Execution and Observability
The Data T Podcast | March 31, 2026 | 9 a.m. PT

Join Coalesce CEO & Co-Founder Armon Petrossian as he sits down with SYNQ CEO & Founder Petr Janda to discuss Coalesce's acquisition of SYNQ and what it signals for the future of modern data platforms.


Stop Chasing Data Incidents: Built-In Observability With Coalesce Quality
Live Demo Dive | April 9, 2026 | 9 a.m. PT

Join us for a hands-on walkthrough to see how built-in data observability helps teams detect issues sooner, resolve them faster, and deliver more reliable data products.

