New in Confluent Cloud: Queues for Kafka, new migration tooling, & more
The ideal data streaming platform empowers you to handle every type of data movement with confidence. In our first Confluent Cloud launch of 2026, we’re delivering on the promise of Apache Kafka® for any workload, at any scale.
Whether you’re an enterprise needing to consolidate queuing workloads with data streaming, a retailer preparing for massive Black Friday–level traffic spikes, or a startup looking to power artificial intelligence (AI) applications with real-time data, Confluent is the platform to satisfy your data needs. With additional scalability, cost savings, and analytics features, we’re making it easier than ever to bring every workload home to Confluent Cloud.
Check out the Kafka Copy Paste migration demo video to see how you can migrate to Confluent Cloud and start taking advantage of these new features.
We’re excited to announce the general availability of Queues for Kafka (KIP-932) on Confluent Cloud, coinciding with the release of Apache Kafka 4.2.
Historically, organizations have often maintained two separate technology estates: a modern data streaming platform for real-time workloads and a legacy queuing estate for task distribution. This split led to infrastructure sprawl and fragmented governance. By introducing native queue semantics to Kafka, Queues for Kafka eliminates the need to manage these separate platforms, allowing organizations to consolidate their messaging infrastructure onto a single, governed data streaming platform.
As a result, organizations leveraging Queues for Kafka can reduce total cost of ownership (TCO) while maintaining the durability and scalability they’ve come to expect from Kafka and Confluent.
The secret sauce that brings queues to Kafka is a pair of new primitives: the share group abstraction, which enables scaling, and the share consumer, which provides queue semantics. Unlike traditional consumer groups, which are restricted by a strict 1:1 partition-to-consumer mapping, share groups allow multiple consumers to cooperatively process messages from the same topic regardless of the number of partitions. This enables consumers to scale elastically to meet the demands of bursty, parallel workloads.
We recommend that you use share groups for operational and application workloads that require per-message acknowledgment, parallel partition consumption, and elastic scaling beyond partition count—including command invocation, service communication, task execution, work queues, and job processing.
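To make the contrast concrete, here is a minimal in-memory Python sketch of share-group-style dispatch. This is a toy model, not the Kafka client API: it only illustrates how any consumer can take the next record from a partition, acknowledge it on success, or release it for redelivery on failure, which is exactly what a strict partition-to-consumer mapping cannot do.

```python
from collections import deque

class ShareGroupQueue:
    """Toy model of share-group dispatch (illustrative only -- not the
    Kafka share consumer API). Any consumer may take the next record,
    and records that are released instead of acknowledged get redelivered."""

    def __init__(self, records):
        self._pending = deque(records)  # not yet delivered
        self._in_flight = {}            # delivery id -> record awaiting ack
        self._next_id = 0

    def poll(self):
        """Deliver the next available record to whichever consumer asks."""
        if not self._pending:
            return None
        record = self._pending.popleft()
        self._next_id += 1
        self._in_flight[self._next_id] = record
        return self._next_id, record

    def acknowledge(self, delivery_id):
        """Accept: processing succeeded, the record is done."""
        self._in_flight.pop(delivery_id)

    def release(self, delivery_id):
        """Release: processing failed, make the record deliverable again."""
        self._pending.append(self._in_flight.pop(delivery_id))

# Three consumers drain a single "partition" cooperatively -- something a
# classic consumer group cannot do, since one partition maps to one consumer.
queue = ShareGroupQueue(["task-a", "task-b", "task-c"])

id_a, rec_a = queue.poll()  # consumer 1 takes task-a
id_b, rec_b = queue.poll()  # consumer 2 takes task-b
queue.acknowledge(id_a)     # consumer 1 succeeded
queue.release(id_b)         # consumer 2 failed; task-b becomes deliverable again

id_c, rec_c = queue.poll()  # consumer 3 takes task-c
id_d, rec_d = queue.poll()  # the redelivered task-b goes to the next poller
```

In the real feature, the share consumer exposes the same ideas (per-record acknowledgment and redelivery) while Kafka's share group coordinator manages delivery state durably across the group.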
Queues for Kafka on Confluent Cloud takes this further with a dedicated share group user interface (UI) directly in the Confluent Cloud console, providing deep visibility into consumer health and group management that isn't available in open source distributions. Confluent Cloud also provides critical, queue-specific metrics through our Metrics API, allowing you to make autoscaling decisions. Combined with programmatic management via the Confluent CLI and REST API, you get a production-ready queuing service that’s fully integrated into our data streaming platform.
Consolidate workloads from traditional messaging systems onto a single, unified Kafka platform
Improve scalability with share groups that solve Kafka's traditional partition-based scaling limitations by allowing dynamic consumer scaling independent of partition assignments
Unlock new use cases, including work distribution and task queue processing patterns that were previously difficult or impossible with traditional Kafka consumer groups
Enhance observability with real-time monitoring of consumer health and queue-specific metrics through the Metrics API
Queues for Kafka is available for Enterprise and Dedicated clusters on Confluent Cloud, with support for Standard clusters coming in the second half of 2026. Read the documentation, download and run the qfk-demo, and join the Apache Kafka community discussion about KIP-932. For questions, reach out to your account team.
We’re thrilled to introduce Kafka Copy Paste (KCP), a free open source CLI tool designed to automate migration from hosted Kafka (and eventually open source Kafka) to Confluent Cloud. This new migration tooling streamlines the entire process by eliminating much of the manual work traditionally involved, cutting migration times from months to days and helping you achieve a stress-free, near-zero-downtime migration.
Moving workloads from hosted Kafka to a fully managed, cloud‑native data streaming platform can unlock major cost savings and agility. With KCP, that process is easier than ever. KCP orchestrates the full journey from hosted Kafka to Confluent Cloud, including:
Discovery and Planning: KCP scans your existing Kafka environments to detect cluster configuration, gather costs based on actual usage, and provide accurate inputs for Confluent’s TCO calculator.
Provisioning Infrastructure: KCP generates pre-filled Terraform scripts to automatically provision the equivalent Confluent Cloud clusters, networking, and necessary migration infrastructure.
Data Migration: KCP enables end-to-end automation for replicating data via secure external Cluster Linking and automates the conversion and migration of associated components, such as access control lists (ACLs), connectors, and Schema Registry data.
Client Migration: Coming soon, KCP will make use of Confluent Cloud Gateway, a cloud-native Kafka proxy solution, to simplify client cutovers during migration.
Explore KCP on GitHub, try out the migration workshop to see it in action, and check out the deep-dive blog post to walk through the key steps of migration with KCP.
Confluent Cloud is introducing new capabilities for Kafka clusters to help you scale confidently while balancing cost efficiency and predictability.
Operators can now configure a capacity limit on elastic Confluent Units for Kafka (eCKUs) across all serverless cluster types (Basic, Standard, Enterprise, and Freight) for better cost control. Teams can experiment and onboard without the risk of exceeding their budget.
Enterprise clusters on all major clouds can now autoscale up to 32 eCKUs, delivering more than 7.5 GB/s of combined throughput—more than 3x the previous capacity. All clusters retain exceptionally fast scaling (in seconds) for up to 10 eCKUs. Scaling beyond this threshold shifts to an on-demand model that may take up to 20 minutes per eCKU. If your workload requires rapid expansion at these higher volumes, contact your account team to enable faster scaling.
We’re extending client quotas to Enterprise and Freight clusters, matching the functionality that previously existed only in Dedicated clusters. Client quotas enable you to enforce precise ingress and egress throughput limits on specific principals, making it possible to safely consolidate diverse workloads onto shared resources to optimize costs.
By establishing these guardrails, you prevent “noisy neighbor” applications from monopolizing throughput or degrading overall cluster performance. The result is a cost-effective, multi-tenant environment where every application maintains predictable performance regardless of traffic spikes from others.
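A quota is defined per principal with ingress and egress limits. As an illustrative sketch of what this looks like with the Confluent CLI (the service account and cluster IDs below are placeholders, and you should confirm exact flags and units with `confluent kafka quota create --help` for your CLI version):

```shell
# Illustrative sketch -- sa-abc123 and lkc-xyz789 are placeholder IDs.
# Ingress/egress throughput limits are expressed in bytes per second,
# so the values below cap this principal at roughly 10 MB/s in, 20 MB/s out.
confluent kafka quota create checkout-service-quota \
  --ingress 10485760 \
  --egress 20971520 \
  --principals sa-abc123 \
  --cluster lkc-xyz789
```

Once the quota is in place, the cluster throttles that principal at the configured limits rather than letting it crowd out other tenants.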
Fetch-from-follower (KIP-392) is now available for Enterprise clusters in addition to Freight clusters. For Enterprise clusters using Private Network Interface (PNI), you can configure your clients to consume from the closest replica in the same availability zone (AZ) rather than a leader replica in a different AZ, cutting out cross-AZ traffic and slashing egress charges on Amazon Web Services (AWS) networking bills.
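On the client side, enabling fetch-from-follower comes down to one consumer setting: per KIP-392, set `client.rack` to the availability zone where the consumer runs, and fetches are served from the co-located replica instead of a leader in another AZ. A consumer configuration sketch (the AZ ID below is a placeholder; use the actual AZ IDs for your deployment):

```properties
# Consumer configuration sketch -- "use1-az1" is a placeholder AZ ID.
# Set client.rack to the AZ this consumer runs in; fetches are then served
# from the replica in the same AZ, avoiding cross-AZ data transfer.
client.rack=use1-az1
```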
Last year, we introduced Confluent Intelligence, our fully managed service for building real-time, context-rich, trustworthy AI. Today, we’re expanding Confluent Intelligence with new capabilities across Streaming Agents, built-in ML functions, and Model Context Protocol (MCP) server support. These features enable you to connect existing agents, detect anomalies more accurately, use more vector stores for retrieval-augmented generation (RAG), secure your networking, and standardize how agents access real-time data on Confluent Cloud.
Learn about the latest in Confluent Intelligence in the deep-dive blog post or check out the demo video.