04/28/2026 | Press release | Distributed by Public on 04/29/2026 10:03
MCPs are incredibly useful, but taking them from experimental to something that's used across teams requires a more deliberate approach.
With the acquisition of SYNQ, the Coalesce toolset now spans the full data operating layer: Transform for building data models, Catalog for governance and discovery, and Quality for data observability. We've shipped MCPs for all three, and some are already in production with customers.
This post kicks off a three-part series on using them in practice. Each part covers a different level of working with the MCPs: basic prompting (Level 1), agent skills (Level 2), and playbooks with built-in guardrails (Level 3).
Each level adds structure, trades some flexibility for reliability, and solves a different class of problem. This post covers Level 1.
Getting started with prompt data engineering in Coalesce
The power of the Coalesce MCPs lies in their ability to understand context and metadata across the entire DataOps lifecycle, from transformations to governance to quality monitoring. If you use only one of the three products, you can still get started with that MCP, but the more context and metadata you add, the more powerful the workflows become.
How to understand MCP tools
Before we dig in, it's key to understand the basics of how MCPs work. In short, MCPs access the inner workings of the platforms through 'tools'. A tool is an API call the MCP can make on your behalf. It can fetch data, change state, or trigger a workflow. The set of tools defines the boundary of what the MCP can and can't do. Knowing what's available helps you understand where you can get the most out of the MCP.
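To make that concrete, here is a minimal sketch of what a tool looks like from the server's side: a name, a description the model reads, an input schema, and a handler. The tool name, fields, and return shape are illustrative assumptions, not the real Coalesce API, and the lineage data is stubbed.

```python
# Hypothetical sketch of an MCP tool: name, description, input schema, handler.
# Tool and field names are illustrative, not the real Coalesce API.

def get_column_lineage(table: str, column: str) -> dict:
    """Return downstream dependencies for a column (stubbed here)."""
    return {"table": table, "column": column, "downstream": ["dash.revenue_kpi"]}

TOOLS = {
    "get_column_lineage": {
        "description": "Walk the column lineage graph downstream from a column.",
        "input_schema": {
            "type": "object",
            "properties": {"table": {"type": "string"}, "column": {"type": "string"}},
            "required": ["table", "column"],
        },
        "handler": get_column_lineage,
    }
}

def call_tool(name: str, args: dict) -> dict:
    """Dispatch a tool call the way an MCP server would on the model's behalf."""
    if name not in TOOLS:  # the tool set is the hard boundary of what the MCP can do
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name]["handler"](**args)

result = call_tool("get_column_lineage", {"table": "fct_orders", "column": "revenue"})
```

The registry is the point: anything not in `TOOLS` simply cannot happen, which is why reading the tool list tells you the MCP's boundary.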
Every Coalesce MCP tool ships with a description. For day-to-day prompting you don't need to read every description, but it's worth going a step deeper when you start using these tools in production workflows. The descriptions tell you not just what each tool does, but where its limits sit. Take an example from the Catalog MCP: its column lineage tool walks the lineage graph exhaustively with no depth cap, but stops at 10,000 nodes by default to avoid pathological graphs. Knowing that ceiling exists helps explain why the tool behaves as it does in some edge cases.
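A rough sketch of that pattern, with a tiny made-up graph, shows why a node cap changes behavior: the traversal is still depth-unbounded, but once the visited set hits the ceiling it stops adding nodes and flags the result as truncated. The cap default mirrors the 10,000 mentioned above; everything else is illustrative.

```python
from collections import deque

# Illustrative node-capped lineage walk: exhaustive BFS with no depth cap,
# but a ceiling on total nodes to avoid pathological graphs.

def walk_lineage(graph: dict, start: str, max_nodes: int = 10_000) -> tuple[list, bool]:
    """BFS the full downstream graph. Returns (visit order, truncated?)."""
    seen, queue, order = {start}, deque([start]), []
    truncated = False
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph.get(node, []):
            if child in seen:
                continue
            if len(seen) >= max_nodes:  # ceiling hit: stop growing the frontier
                truncated = True
                continue
            seen.add(child)
            queue.append(child)
    return order, truncated

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
nodes, truncated = walk_lineage(graph, "a", max_nodes=3)  # truncated result
```

When the flag comes back true, the "missing downstream asset" in an edge case isn't a lineage gap, it's the cap.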
The same logic applies to how the tools themselves are designed. The more specific the instructions baked into a tool, the more deterministic the output. Open-ended API calls leave all the interpretation to the LLM, which can quickly take things off the rails.
Managing permissions and data access
If you're using the Coalesce MCPs in production workflows, it's worth being deliberate about permissions, especially around read vs. write access. Read access lets the MCP inspect lineage, catalog metadata, quality status, and code. Write access lets it make changes directly, whether that's assigning owners, creating tests and monitors, or committing model code.
A good starting point is read-only. You can manage this at the token access level to entirely block write access, and we'd recommend keeping it like that for most users. Write scopes are worth granting more deliberately, to specific users or workflows rather than as a blanket setting. A useful rule of thumb is to treat MCP write tokens the same way you'd treat production database credentials. The same person who approves one should approve the other.
On the client side, MCP-compatible tools like Claude Code let you allow, deny, or gate individual tools behind per-call approval. Approval gates are especially worth it for tools that query raw warehouse data rather than metadata. An unbounded query can run up warehouse costs or pull customer PII into a context window where you didn't expect it.
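The allow/deny/gate split can be sketched as a small policy function. This is not Claude Code's actual implementation or configuration format, and the tool names are made up; it just shows the decision logic a client applies before a tool call reaches the server.

```python
# Illustrative client-side gate (not Claude Code's real implementation):
# each tool is allowed outright, denied, or held for per-call approval.
ALLOW = {"catalog.search", "quality.list_issues"}  # metadata-only: low risk
DENY  = {"transform.commit_model"}                 # write tool: blocked
GATED = {"warehouse.run_query"}                    # raw data: ask the human first

def gate_tool_call(tool: str, approve) -> str:
    """Decide a tool call's fate. `approve` is a callback standing in
    for the client's per-call approval prompt."""
    if tool in DENY:
        return "denied"
    if tool in GATED:
        return "allowed" if approve(tool) else "denied"
    if tool in ALLOW:
        return "allowed"
    return "denied"  # default-deny anything unrecognized
```

Default-denying unknown tools is the conservative choice here: a newly shipped write tool stays blocked until someone explicitly allows it.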
Extend and chain MCPs to automate key workflows
The Coalesce MCPs become more useful when you combine them with other MCPs your team already uses. Most clients let you connect several at once, and the model will call across them in a single prompt.
A common pattern we've seen is pairing Coalesce Quality with the Slack MCP. You can ask for a summary of open issues older than seven days and have it posted to a specific channel. Or you can have the model investigate an anomaly, write up the root cause, and drop the summary into the on-call thread. The investigation work stays in Coalesce, and the communication work moves to where your team already is.
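The Quality-to-Slack pattern looks roughly like this when written out. Both "MCP" calls are stubbed with local functions and invented names; in practice the model itself makes the equivalent calls across the two connected servers in one prompt.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the Quality + Slack chaining pattern. Both calls are stubs
# with hypothetical names; real traffic goes through the two MCPs.

def quality_list_open_issues():
    """Stub for a Quality MCP call returning open issues and their ages."""
    now = datetime.now(timezone.utc)
    return [
        {"asset": "fct_orders", "opened": now - timedelta(days=12)},
        {"asset": "dim_vendor", "opened": now - timedelta(days=2)},
    ]

def slack_post(channel: str, text: str) -> dict:
    """Stub for a Slack MCP call posting a message to a channel."""
    return {"channel": channel, "text": text, "ok": True}

def summarize_stale_issues(channel: str, older_than_days: int = 7) -> dict:
    """Summarize issues open longer than the threshold and post them."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=older_than_days)
    stale = [i["asset"] for i in quality_list_open_issues() if i["opened"] < cutoff]
    summary = f"{len(stale)} open issue(s) older than {older_than_days}d: " + ", ".join(stale)
    return slack_post(channel, summary)

msg = summarize_stale_issues("#data-quality")
```

The split matches the prose: the investigation side stays in Quality, and only the human-readable summary crosses over to Slack.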
The same approach works with Linear, GitHub, Notion, or whatever else your team runs on. The Coalesce MCPs handle the data lifecycle, and the other MCPs handle everything that surrounds it.
Putting MCPs into practice
Below we'll look at three common use cases where our MCPs can come in handy: planning a change, debugging a data issue, and assigning owners to assets.
Planning a change
A common scenario is wanting to change a field, in this case in a fact table, and assessing the downstream impact ahead of time.
Access to a wide range of metadata makes this much easier. Column-level lineage connects the field to downstream tables and dashboards. Catalog metadata shows which dashboards are important. Quality checks show which assets are affected. Stitching the parts together by hand across hundreds of downstream dependencies is tedious.
The MCP can significantly speed up this workflow. It traverses the lineage, reads the catalog metadata, checks the quality status, and gives you a single, readable impact summary directly in Slack.
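The stitching step can be sketched as follows. The three data sources are stubbed dictionaries here, and the asset names, tier field, and check labels are assumptions; in the real workflow each lookup is an MCP call against lineage, Catalog, and Quality respectively.

```python
# Hedged sketch of the impact-analysis flow: traverse lineage, enrich with
# catalog importance, attach quality status, emit one readable summary.
# All three sources are stubbed with illustrative names.

LINEAGE = {"fct_orders.revenue": ["dash.exec_kpis", "mart.vendor_payouts"]}
CATALOG = {"dash.exec_kpis": {"tier": 1}, "mart.vendor_payouts": {"tier": 3}}
QUALITY = {"mart.vendor_payouts": "monitored"}

def impact_summary(column: str) -> str:
    """One line per downstream asset, annotated with tier and quality status."""
    lines = [f"Impact of changing {column}:"]
    for asset in LINEAGE.get(column, []):
        tier = CATALOG.get(asset, {}).get("tier", "unknown")
        checks = QUALITY.get(asset, "no checks")
        lines.append(f"- {asset} (tier {tier}, {checks})")
    return "\n".join(lines)
```

The value is in the join: any one source answers a narrow question, but the combined summary is what a reviewer can actually act on.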
Root cause analysis
Following on from the previous example, a natural next step is to understand the root cause of a data issue. In this example, we've defined an anomaly monitor in SYNQ that tracks revenue for each vendor in the field. Recently, the revenue for one vendor dropped well below the expected range, triggering an anomaly alert.
The MCP runs an investigation akin to what a data engineer would do: it inspects upstream issues, looks for recent code changes and patterns, then works backward to the change that caused the anomaly.
Data governance workflows
MCPs are also well suited for common data governance workflows such as: Which tier-1 assets are missing owners? Which have no tests? Which are flagged important but undocumented? You can easily get answers to these questions, and you can also ask the MCP to fix the issues directly in the catalog.
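Those three questions reduce to simple filters over catalog metadata. The sketch below runs them against a stubbed asset list; the field names (`tier`, `owner`, `tested`, `documented`) are assumptions for illustration, not the Catalog schema.

```python
# Illustrative version of the governance questions, over a stubbed catalog.
# Field names are assumptions, not the real Catalog schema.

ASSETS = [
    {"name": "fct_orders", "tier": 1, "owner": None,       "tested": True,  "documented": True},
    {"name": "dim_vendor", "tier": 1, "owner": "data-eng", "tested": False, "documented": False},
    {"name": "stg_events", "tier": 3, "owner": None,       "tested": False, "documented": False},
]

def tier1_gaps(assets):
    """Which tier-1 assets are missing owners, tests, or documentation?"""
    t1 = [a for a in assets if a["tier"] == 1]
    return {
        "missing_owner": [a["name"] for a in t1 if not a["owner"]],
        "untested":      [a["name"] for a in t1 if not a["tested"]],
        "undocumented":  [a["name"] for a in t1 if not a["documented"]],
    }
```

Answering is the read half; the write half, where the MCP assigns the missing owners itself, is exactly where the permissions discussion earlier applies.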
This is where the read-vs-write distinction from earlier becomes relevant. While you can roll out small ad-hoc fixes like this safely, you should be more cautious with larger fixes that may affect many data assets at once, and build the relevant guardrails first. We'll cover this in Part 3, where we introduce playbooks with built-in guardrails.
Limitations of prompting with MCPs
Starting with basic prompting (Level 1) is a good way to get familiar with MCP capabilities and see results right away, but it's worth keeping a few common limitations in mind.
Level 1 works best when the person asking can verify the answer themselves: an engineer who can open the model code in Transform, or a governance lead who can check the Catalog. Most teams we've seen implementing MCP use cases stop here, which leaves a lot of value untapped. In Part 2, we'll look at how to go further with agent skills, and in Part 3, how to build repeatability with playbooks.
Get started with Coalesce MCPs
If you're already a Coalesce customer, reach out to your Coalesce account lead or submit a contact request to learn more about the MCPs.
If you're not a Coalesce customer, book a demo to see the platform in action.