
AI Tools for Builders — Confluent's MCP Server & Agent Skills

Written By Erick Lee

Your AI coding assistant just learned to speak Confluent.


Developers live in their editors. The best platform tools meet them there—and increasingly, that means their AI assistants meet them there too. AI coding tools are already reshaping how developers build, debug, and operate software, but most of them are generalists. They can write an Apache Kafka® producer, but they won't know your Schema Registry subjects. They can set up a change data capture (CDC) connector, but they won't know how to wire it through Apache Flink® and Tableflow to land cleanly in your data lake.

What if your AI coding assistant actually understood Confluent? Not just generic code completion but real operational knowledge, such as how to discover your topics, diagnose a lagging consumer group, or build a CDC pipeline that follows Confluent best practices?

Today we're announcing three generally available capabilities that make this real: an open source local Model Context Protocol (MCP) server, a managed MCP server hosted by Confluent, and Agent Skills that package Confluent domain expertise for any AI coding tool. Together, they give AI agents direct access to your streaming platform—with the tools to act on it and the domain knowledge to reason about it.

Quick Refresher: MCP and Agent Skills

Before diving into the specifics, here's how the two capabilities work—and why you need both.

MCP: The Connection

MCP is an open standard that connects AI coding tools to external systems. Any platform—a database, a cloud provider, a software-as-a-service (SaaS) API—can expose an MCP server, and any AI assistant that speaks the protocol can connect to it. MCP provides a structured, bidirectional bridge: The assistant can discover what's available, read data, and take action, all through a standardized interface.

Agent Skills: The Knowledge

Connection alone isn't always enough. MCP gives AI assistants access to a platform but not the expertise to use it well. Agent Skills fill that gap. They're plugins that package domain knowledge, best practices, guardrails, and guided workflows for a specific platform so that your AI assistant produces correct, production-ready output instead of generic suggestions.

MCP provides the connection to your environment. Skills provide the knowledge to use it correctly. Together, they turn a general-purpose AI assistant into a platform-aware one.

Local MCP Server: Open Source, Run Anywhere

Confluent's open source MCP server implements the MCP standard to give AI agents direct access to your Confluent environment—both Confluent Cloud and local Kafka clusters.

Install the server locally and point it at your Confluent environment, and your AI assistant can do the following.

  • Discover: List topics, schemas, connectors, and Flink compute pools

  • Build: Create topics, produce messages, deploy Flink SQL statements, configure connectors

  • Manage: Update topic configurations, alter connector settings, manage schemas and environments

  • Debug: Query metrics, check connector status, profile Flink statement performance, diagnose issues

The local MCP server supports both read and write operations, giving developers and platform engineers full control over their environments from their editors. Every tool is annotated with structured metadata, including descriptions, input schemas, and operational context, so that AI assistants can assess the impact of an action before executing it. Your AI coding tool's built-in permission model provides an additional layer of control over which operations are allowed.
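As a rough illustration, here is what that metadata looks like on the wire. The shape follows MCP's standard tool descriptor, and recent revisions of the spec also allow behavioral annotations such as readOnlyHint; the specific tool name and fields below are invented for illustration, not taken from the actual server:

    {
      "name": "create_topic",
      "description": "Create a Kafka topic in the target cluster",
      "inputSchema": {
        "type": "object",
        "properties": {
          "topic_name": { "type": "string" },
          "partitions": { "type": "integer", "default": 6 }
        },
        "required": ["topic_name"]
      },
      "annotations": { "readOnlyHint": false, "destructiveHint": false }
    }

Because every tool carries a description and an input schema like this, the assistant can explain what an operation will do, and your editor's permission model can gate it, before anything runs.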

The server supports both Confluent Cloud and Apache Kafka, so whether you're building against a local Docker cluster or a production Confluent Cloud environment, the same tools work. And because it's open source, the community can contribute by submitting issues, proposing new tools, and helping shape the project's direction.

The server is ready to install via npx -y @confluentinc/mcp-confluent -e /path/to/.env, or you can visit github.com/confluentinc/mcp-confluent to get started.
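Registering the server with an MCP-capable assistant is typically a few lines of configuration. As a sketch, tools such as Claude Code and Cursor accept an entry along these lines in their MCP settings file (the exact file name and location vary by tool):

    {
      "mcpServers": {
        "confluent": {
          "command": "npx",
          "args": ["-y", "@confluentinc/mcp-confluent", "-e", "/path/to/.env"]
        }
      }
    }

The .env file referenced here holds your Confluent credentials, so they stay out of the client configuration itself.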

Managed MCP Server: Zero Configuration, Hosted by Confluent

The managed MCP server is hosted directly in Confluent Cloud and complements the local server with a read-only entry point, which is ideal for exploration, inspection, and diagnostics. Connect your AI assistant to Confluent Cloud and start querying your environment immediately, with tools that stay current as the platform evolves. Tools are organized into two tiers:

  1. Global server: Tools that span your entire Confluent Cloud organization include environment and cluster discovery, telemetry metrics, and a full connector troubleshooting suite with connector status, configuration, offsets, error summaries, fix recommendations, logs, and connector-level metrics.

  2. Regional server: Tools scoped to a specific cluster include list and inspect topics, browse schema subjects, and consume messages.

The connector troubleshooting suite is purpose-built for diagnosing and resolving connector issues. When a connector fails, your AI assistant can read its status, inspect its configuration, review logs, and surface error summaries with fix recommendations—all without leaving your editor.

The managed server handles authentication, rate limiting, and API versioning so you don't have to. As Confluent ships new platform capabilities, corresponding MCP tools will appear automatically.

The managed server is available now in Confluent Cloud. Connect your AI assistant and start exploring.

Agent Skills: Domain Knowledge for AI Agents

AI coding assistants are powerful generalists, but they don't know the specifics of your platform. They'll generate Flink SQL that works in open source Apache Flink but fails on Confluent Cloud because it's missing required table properties. They'll build a Kafka producer without Schema Registry serialization, leaving you with ungoverned schemas in production. They'll configure a connector with parameters that were deprecated two versions ago.

Agent Skills solve this by packaging Confluent domain expertise into reusable knowledge modules that any AI coding tool can use. Each skill encodes best practices, guardrails, and workflows for a specific domain. And for complex operations like infrastructure provisioning, Agent Skills present a detailed execution plan for your review before taking any action. They work standalone, with no MCP server required—but when one is connected, they can combine domain knowledge with live environment access for even more powerful workflows.

Four skills are generally available at launch.

  1. Schema Registry: Scans your project, extracts schemas from your data models, tags personally identifiable information (PII) fields, and generates Terraform to register them in Schema Registry. Takes you from ungoverned schemas to proper governance in one workflow.

  2. Kafka Streams: Designs, builds, and debugs Kafka Streams applications end to end, from topology design and pattern selection to troubleshooting production issues.

  3. Python Kafka Client: Scaffolds a production-ready Python producer/consumer project with Schema Registry serialization configured for your target environment (see the sketch after this list).

  4. CDC to Tableflow: Builds end-to-end CDC pipelines on Confluent Cloud—from database source through Debezium, Flink, and Tableflow—to Apache Iceberg™ or Delta Lake tables.
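To make the Python Kafka Client skill concrete, here is a minimal sketch of the kind of producer it scaffolds, using the confluent-kafka client with Schema Registry Avro serialization. The endpoints, credentials, topic, and schema below are placeholders; the skill generates code tailored to your project and environment:

    # Minimal sketch of a producer with Schema Registry serialization.
    # All endpoints and credentials are placeholders.
    from confluent_kafka import Producer
    from confluent_kafka.schema_registry import SchemaRegistryClient
    from confluent_kafka.schema_registry.avro import AvroSerializer
    from confluent_kafka.serialization import SerializationContext, MessageField

    SCHEMA_STR = """
    {
      "type": "record",
      "name": "Trade",
      "fields": [
        {"name": "symbol", "type": "string"},
        {"name": "price", "type": "double"}
      ]
    }
    """

    # Register and fetch schemas through Schema Registry, so every
    # message this producer emits is governed by a versioned schema.
    schema_registry = SchemaRegistryClient({
        "url": "https://<sr-endpoint>.confluent.cloud",          # placeholder
        "basic.auth.user.info": "<sr-api-key>:<sr-api-secret>",  # placeholder
    })
    avro_serializer = AvroSerializer(schema_registry, SCHEMA_STR)

    producer = Producer({
        "bootstrap.servers": "<bootstrap-endpoint>:9092",  # placeholder
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<api-key>",     # placeholder
        "sasl.password": "<api-secret>",  # placeholder
    })

    value = {"symbol": "CFLT", "price": 42.0}
    producer.produce(
        topic="stock_trades",
        value=avro_serializer(
            value, SerializationContext("stock_trades", MessageField.VALUE)
        ),
    )
    producer.flush()

This is exactly the step generic assistants tend to skip: wiring the serializer through Schema Registry instead of shipping raw, ungoverned JSON to the topic.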

Agent Skills work with any AI coding tool that supports them, including Claude Code, Cursor, Windsurf, and others. Install them once, and every conversation with your AI assistant becomes Confluent-aware.

You can browse the full set of available Agent Skills at github.com/confluentinc/agent-skills or simply install them via npx skills add confluentinc/agent-skills.

Let’s See It in Action: Agent Skills and MCP Server

Here's what Agent Skills and the MCP server look like working together. In this demo, a developer goes from exploring their Confluent Cloud environment to building a full CDC pipeline from MySQL to Iceberg, entirely through natural-language prompts.

Building a CDC Pipeline to Iceberg with Natural Language

Explore and Inspect

The developer starts by asking what's available. The MCP server inventories the organization—10 environments, plus their topics, schemas, and connectors—and returns the full picture without ever leaving the editor.

"What resources are available to me in Confluent Cloud?"

Then they inspect live data:

"Show me the most recent data in stock_trades."

The agent consumes messages from the topic and displays them in a table—simulated trade data with symbols, accounts, and users. Two prompts, instant situational awareness.

Build a CDC Pipeline 

Now the developer asks for something ambitious: an end-to-end CDC pipeline from MySQL to an Iceberg data lake.

"I want to CDC data from MySQL into Iceberg using Confluent Cloud."

The CDC to Tableflow skill activates based on trigger words (CDC + Iceberg + Confluent Cloud), without requiring the user to invoke the skill directly, and provides an outline and execution plan for the full pipeline architecture.

It asks for Confluent Cloud and MySQL connection details. The developer points to local config files, and the skill reads them, discovers the existing environment, and presents a detailed execution plan—connector config, Flink SQL strategy, and Tableflow settings—for review before taking any action. Once approved, the agent executes the full build, with no manual intervention required.

Monitor and Diagnose

With the pipeline running, the developer checks on it:

"Can you please check on the health of the pipeline?"

The agent runs health checks across all five components—CDC connector, two Flink INSERT jobs, and two Tableflow materializations—and reports status in a single table. Then the developer digs deeper:

"What's the associated throughput for target_pageviews?"

The agent queries the Telemetry API for received bytes, received records, sent bytes, and retained bytes. It identifies three distinct phases: steady-state traffic before the pipeline, a gap during connector provisioning, and snapshot bursts peaking at ~240K records/minute as Flink processes the initial MySQL snapshot.
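Under the hood, that data comes from the Confluent Cloud Metrics API. Here is a hedged Python sketch of the kind of query behind that prompt; the cluster ID, credentials, and time interval are placeholders:

    # Hedged sketch: query per-topic received-record throughput from the
    # Confluent Cloud Metrics API, roughly what the agent does behind the
    # prompt above. Cluster ID, credentials, and interval are placeholders.
    import requests

    query = {
        "aggregations": [{"metric": "io.confluent.kafka.server/received_records"}],
        "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-xxxxx"},
        "granularity": "PT1M",
        "group_by": ["metric.topic"],
        "intervals": ["2025-01-01T00:00:00Z/2025-01-01T01:00:00Z"],
    }
    resp = requests.post(
        "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query",
        json=query,
        auth=("<cloud-api-key>", "<cloud-api-secret>"),  # placeholder credentials
    )
    resp.raise_for_status()
    # Each data point carries a timestamp, the grouped label, and a value.
    for point in resp.json().get("data", []):
        print(point)

The agent assembles queries like this across several metrics, then interprets the resulting time series into the three-phase narrative above.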

Seven natural-language prompts, one end-to-end workflow—from exploration to pipeline build to production monitoring within minutes. The MCP server provided the live environment access; the CDC to Tableflow skill provided the domain expertise to build it correctly.

Get Started

All three capabilities are generally available today. Whether you're a developer who wants to stop context-switching to the cloud user interface or a platform engineer setting up shared tooling for your team, pick the path that fits your workflow and build from there.


To learn more about Confluent's AI developer tools, visit confluent.io or join the discussion in our community forum.

Apache®, Apache Kafka®, Kafka®, Apache Flink®, Flink®, Apache Iceberg™, and Iceberg™ are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.

  • Erick Lee is a Product Manager at Confluent focused on Connect. Prior to product management, he worked as a Solutions Engineer partnering with Digital Native customers on building out their data streaming platforms. He started his career at Oracle within the analytics and data warehouse space. Outside of work, you'll often find him rock climbing and enjoying the outdoors.
