
Apache Kafka


Documentation

Kafka 3.5 Documentation

Previous releases: 0.7.x, 0.8.0, 0.8.1.X, 0.8.2.X, 0.9.0.X, 0.10.0.X, 0.10.1.X, 0.10.2.X, 0.11.0.X, 1.0.X, 1.1.X, 2.0.X, 2.1.X, 2.2.X, 2.3.X, 2.4.X, 2.5.X, 2.6.X, 2.7.X, 2.8.X, 3.0.X, 3.1.X, 3.2.X, 3.3.X, 3.4.X.

1. Getting Started

1.1 Introduction

What is event streaming?

Event streaming is the digital equivalent of the human body's central nervous system. It is the technological foundation for the "always-on" world where businesses are increasingly software-defined and automated, and where the user of software is more software.

Technically speaking, event streaming is the practice of capturing data in real-time from event sources like databases, sensors, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real-time as well as retrospectively; and routing the event streams to different destination technologies as needed. Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.

What can I use event streaming for?

Event streaming is applied to a wide variety of use cases across a plethora of industries and organizations. Its many examples include:

  • Processing payments and financial transactions in real-time, such as in stock exchanges, banks, and insurances.
  • Tracking and monitoring cars, trucks, fleets, and shipments in real-time, such as in logistics and the automotive industry.
  • Continuously capturing and analyzing sensor data from IoT devices or other equipment, such as in factories and wind parks.
  • Collecting and immediately reacting to customer interactions and orders, such as in retail, the hotel and travel industry, and mobile applications.
  • Monitoring patients in hospital care and predicting changes in condition to ensure timely treatment in emergencies.
  • Connecting, storing, and making available data produced by different divisions of a company.
  • Serving as the foundation for data platforms, event-driven architectures, and microservices.

Apache Kafka® is an event streaming platform. What does that mean?

Kafka combines three key capabilities so you can implement your use cases for event streaming end-to-end with a single battle-tested solution:

  1. To publish (write) and subscribe to (read) streams of events, including continuous import/export of your data from other systems.
  2. To store streams of events durably and reliably for as long as you want.
  3. To process streams of events as they occur or retrospectively.

And all of this functionality is provided in a distributed, highly scalable, elastic, fault-tolerant, and secure manner. Kafka can be deployed on bare-metal hardware, virtual machines, and containers, on-premises as well as in the cloud. You can choose between self-managing your Kafka environments and using fully managed services offered by a variety of vendors.

How does Kafka work in a nutshell?

Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers in on-premise as well as cloud environments.

Servers: Kafka is run as a cluster of one or more servers that can span multiple datacenters or cloud regions. Some of these servers form the storage layer, called the brokers. Other servers run Kafka Connect to continuously import and export data as event streams, integrating Kafka with your existing systems such as relational databases as well as other Kafka clusters. To let you implement mission-critical use cases, a Kafka cluster is highly scalable and fault-tolerant: if any of its servers fails, the other servers will take over their work to ensure continuous operations without any data loss.

Clients: They allow you to write distributed applications and microservices that read, write, and process streams of events in parallel, at scale, and in a fault-tolerant manner even in the case of network problems or machine failures. Kafka ships with some such clients included, which are augmented by dozens of clients provided by the Kafka community: clients are available for Java and Scala, including the higher-level Kafka Streams library, as well as for Go, Python, C/C++, and many other programming languages, plus REST APIs.

Main Concepts and Terminology

An event records the fact that "something happened" in the world or in your business. It is also called a record or message in the documentation. When you read or write data to Kafka, you do this in the form of events. Conceptually, an event has a key, value, timestamp, and optional metadata headers. Here's an example event:

  • Event key: "Alice"
  • Event value: "Made a payment of $200 to Bob"
  • Event timestamp: "Jun. 25, 2020 at 2:06 p.m."

Producers are those client applications that publish (write) events to Kafka, and consumers are those that subscribe to (read and process) these events. In Kafka, producers and consumers are fully decoupled and agnostic of each other, which is a key design element to achieve the high scalability that Kafka is known for. For example, producers never need to wait for consumers. Kafka provides various guarantees such as the ability to process events exactly-once.
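
To make the producer side concrete, here is a minimal Java sketch (not part of the official documentation) that publishes the example event above using the kafka-clients producer; the broker address localhost:9092 and the topic name "payments" are assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PaymentsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key "Alice" mirrors the example event; events with the same key
            // are written to the same partition.
            producer.send(new ProducerRecord<>("payments", "Alice", "Made a payment of $200 to Bob"));
            producer.flush();
        }
    }
}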

Events are organized and durably stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder. An example topic name could be "payments". Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events. Events in a topic can be read as often as needed—unlike traditional messaging systems, events are not deleted after consumption. Instead, you define for how long Kafka should retain your events through a per-topic configuration setting, after which old events will be discarded. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is perfectly fine.
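
For example, the per-topic retention mentioned above can be adjusted with the kafka-configs.sh tool that ships with Kafka; the topic name and the seven-day value (in milliseconds) below are purely illustrative:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics \
    --entity-name payments --alter --add-config retention.ms=604800000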

Topics are partitioned, meaning a topic is spread over a number of "buckets" located on different Kafka brokers. This distributed placement of your data is very important for scalability because it allows client applications to both read and write the data from/to many brokers at the same time. When a new event is published to a topic, it is actually appended to one of the topic's partitions. Events with the same event key (e.g., a customer or vehicle ID) are written to the same partition, and Kafka guarantees that any consumer of a given topic-partition will always read that partition's events in exactly the same order as they were written.

Figure: This example topic has four partitions P1–P4. Two different producer clients are publishing, independently from each other, new events to the topic by writing events over the network to the topic's partitions. Events with the same key (denoted by their color in the figure) are written to the same partition. Note that both producers can write to the same partition if appropriate.

To make your data fault-tolerant and highly-available, every topic can be replicated, even across geo-regions or datacenters, so that there are always multiple brokers that have a copy of the data just in case things go wrong, you want to do maintenance on the brokers, and so on. A common production setting is a replication factor of 3, i.e., there will always be three copies of your data. This replication is performed at the level of topic-partitions.
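
As a hedged sketch of how partition count and replication factor can be chosen when creating a topic programmatically (the topic name, sizing, and broker address are assumptions; a replication factor of 3 requires at least three brokers):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (Admin admin = Admin.create(props)) {
            // 4 partitions spread the topic across brokers; replication factor 3
            // keeps three copies of every partition for fault tolerance.
            NewTopic topic = new NewTopic("payments", 4, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}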

This primer should be sufficient for an introduction. The Design section of the documentation explains Kafka's various concepts in full detail, if you are interested.

Kafka APIs

In addition to command line tooling for management and administration tasks, Kafka has five core APIs for Java and Scala:

  • The Admin API to manage and inspect topics, brokers, and other Kafka objects.
  • The Producer API to publish (write) a stream of events to one or more Kafka topics.
  • The Consumer API to subscribe to (read) one or more topics and to process the stream of events produced to them.
  • The Kafka Streams API to implement stream processing applications and microservices. It provides higher-level functions to process event streams, including transformations, stateful operations like aggregations and joins, windowing, processing based on event-time, and more. Input is read from one or more topics in order to generate output to one or more topics, effectively transforming the input streams to output streams.
  • The Kafka Connect API to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka. For example, a connector to a relational database like PostgreSQL might capture every change to a set of tables. However, in practice, you typically don't need to implement your own connectors because the Kafka community already provides hundreds of ready-to-use connectors.

Where to go from here

1.2 Use Cases

Here is a description of a few of the popular use cases for Apache Kafka®. For an overview of a number of these areas in action, see this blog post.

Messaging

Kafka works well as a replacement for a more traditional message broker. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc). In comparison to most messaging systems Kafka has better throughput, built-in partitioning, replication, and fault-tolerance which makes it a good solution for large scale message processing applications.

In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong durability guarantees Kafka provides.

In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

Website Activity Tracking

The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

Activity tracking is often very high volume as many activity messages are generated for each user page view.

Metrics

Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

Log Aggregation

Many people use Kafka as a replacement for a log aggregation solution. Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption. In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication, and much lower end-to-end latency.

Stream Processing

Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and Apache Samza.

Event Sourcing

Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka's support for very large stored log data makes it an excellent backend for an application built in this style.

Commit Log

Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The log compaction feature in Kafka helps support this usage. In this usage Kafka is similar to Apache BookKeeper project.

1.3 Quick Start

Step 1: Get Kafka

Download the latest Kafka release and extract it:

$ tar -xzf kafka_2.13-3.5.0.tgz
$ cd kafka_2.13-3.5.0

Step 2: Start the Kafka environment

NOTE: Your local environment must have Java 8+ installed.

Apache Kafka can be started using ZooKeeper or KRaft. To get started with either configuration, follow one of the sections below but not both.

Kafka with ZooKeeper

Run the following commands in order to start all services in the correct order:

# Start the ZooKeeper service
$ bin/zookeeper-server-start.sh config/zookeeper.properties

Open another terminal session and run:

# Start the Kafka broker service
$ bin/kafka-server-start.sh config/server.properties

Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.

Kafka with KRaft

Generate a Cluster UUID

$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Format Log Directories

$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties

Start the Kafka Server

$ bin/kafka-server-start.sh config/kraft/server.properties

Once the Kafka server has successfully launched, you will have a basic Kafka environment running and ready to use.

Step 3: Create a topic to store your events

Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines.

Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements from IoT devices or medical equipment, and much more. These events are organized and stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.

So before you can write your first events, you must create a topic. Open another terminal session and run:

$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

All of Kafka's command line tools have additional options: run the kafka-topics.sh command without any arguments to display usage information. For example, it can also show you details such as the partition count of the new topic:

$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
Topic: quickstart-events        TopicId: NPmZHyhbR9y00wMglMH2sg PartitionCount: 1       ReplicationFactor: 1	Configs:
    Topic: quickstart-events Partition: 0    Leader: 0   Replicas: 0 Isr: 0

Step 4: Write some events into the topic

A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events. Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you need, even forever.

Run the console producer client to write a few events into your topic. By default, each line you enter will result in a separate event being written to the topic.

$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
This is my first event
This is my second event

You can stop the producer client with Ctrl-C at any time.

Step 5: Read the events

Open another terminal session and run the console consumer client to read the events you just created:

$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
This is my first event
This is my second event

You can stop the consumer client with Ctrl-C at any time.

Feel free to experiment: for example, switch back to your producer terminal (previous step) to write additional events, and see how the events immediately show up in your consumer terminal.

Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want. You can easily verify this by opening yet another terminal session and re-running the previous command again.
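
As a rough illustration of reading the same events programmatically (a sketch, not part of the quickstart; the consumer group id and the endless poll loop are assumptions):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QuickstartConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "quickstart-group");   // assumed group id
        props.put("auto.offset.reset", "earliest");  // read the topic from the beginning
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("quickstart-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}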

Step 6: Import/export your data as streams of events with Kafka Connect

You probably have lots of data in existing systems like relational databases or traditional messaging systems, along with many applications that already use these systems. Kafka Connect allows you to continuously ingest data from external systems into Kafka, and vice versa. It is an extensible tool that runs connectors, which implement the custom logic for interacting with an external system. It is thus very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such connectors readily available.

In this quickstart we'll see how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a Kafka topic to a file.

First, make sure to add connect-file-3.5.0.jar to the plugin.path property in the Connect worker's configuration. For the purpose of this quickstart we'll use a relative path and consider the connectors' package as an uber jar, which works when the quickstart commands are run from the installation directory. However, it's worth noting that for production deployments using absolute paths is always preferable. See plugin.path for a detailed description of how to set this config.

Edit the config/connect-standalone.properties file, add or change the plugin.path configuration property to match the following, and save the file:

> echo "plugin.path=libs/connect-file-3.5.0.jar"

Then, start by creating some seed data to test with:

> echo -e "foo\nbar" > test.txt
Or on Windows:
> echo foo> test.txt
> echo bar>> test.txt

Next, we'll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.

> bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties

These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic, and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
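
For reference, the two bundled connector configuration files typically look roughly like the following (illustrative; check the files shipped with your release for the exact contents):

# config/connect-file-source.properties (illustrative)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test

# config/connect-file-sink.properties (illustrative)
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test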

During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated. Once the Kafka Connect process has started, the source connector should start reading lines from test.txt and producing them to the topic connect-test, and the sink connector should start reading messages from the topic connect-test and writing them to the file test.sink.txt. We can verify the data has been delivered through the entire pipeline by examining the contents of the output file:

> more test.sink.txt
foo
bar

Note that the data is being stored in the Kafka topic connect-test, so we can also run a console consumer to see the data in the topic (or use custom consumer code to process it):

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
...

The connectors continue to process data, so we can add data to the file and see it move through the pipeline:

> echo Another line>> test.txt

You should see the line appear in the console consumer output and in the sink file.

Step 7: Process your events with Kafka Streams

Once your data is stored in Kafka as events, you can process the data with the Kafka Streams client library for Java/Scala. It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful operations and aggregations, windowing, joins, processing based on event-time, and much more.

To give you a first taste, here's how one would implement the popular WordCount algorithm:

KStream<String, String> textLines = builder.stream("quickstart-events");

KTable<String, Long> wordCounts = textLines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
            .groupBy((keyIgnored, word) -> word)
            .count();

wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
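
The snippet above omits the surrounding setup. A self-contained version could look roughly like the following sketch; the application id and broker address are assumptions rather than part of the official snippet, and error handling is omitted:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-app");      // assumed id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> textLines = builder.stream("quickstart-events");
        KTable<String, Long> wordCounts = textLines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
                .groupBy((keyIgnored, word) -> word)
                .count();
        wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly on shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}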

The Kafka Streams demo and the app development tutorial demonstrate how to code and run such a streaming application from start to finish.

Step 8: Terminate the Kafka environment

Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment, or continue playing around.

  1. Stop the producer and consumer clients with Ctrl-C, if you haven't done so already.
  2. Stop the Kafka broker with Ctrl-C.
  3. Lastly, if the Kafka with ZooKeeper section was followed, stop the ZooKeeper server with Ctrl-C.

If you also want to delete any data of your local Kafka environment, including any events you have created along the way, run the command:

$ rm -rf /tmp/kafka-logs /tmp/zookeeper /tmp/kraft-combined-logs

Congratulations!

You have successfully finished the Apache Kafka quickstart.

To learn more, we suggest the following next steps:

1.4 Ecosystem

There are a plethora of tools that integrate with Kafka outside the main distribution. The ecosystem page lists many of these, including stream processing systems, Hadoop integration, monitoring, and deployment tools.

1.5 Upgrading From Previous Versions

Upgrading to 3.5.1 from any version 0.8.x through 3.4.x

All upgrade steps remain the same as when upgrading to 3.5.0.
Notable changes in 3.5.1
  • Upgraded the dependency, snappy-java, to a version which is not vulnerable to CVE-2023-34455. You can find more information about the CVE at Kafka CVE list.
  • Fixed a regression introduced in 3.3.0, which caused security.protocol configuration values to be restricted to upper case only. After the fix, security.protocol values are case insensitive. See KAFKA-15053 for details.

Upgrading to 3.5.0 from any version 0.8.x through 3.4.x

Notable changes in 3.5.0
  • Kafka Streams has introduced a new state store type, versioned key-value stores, for storing multiple record versions per key, thereby enabling timestamped retrieval operations to return the latest record (per key) as of a specified timestamp. See KIP-889 and KIP-914 for more details. If the new store type is used in the DSL, improved processing semantics are applied as described in KIP-914.
  • KTable aggregation semantics got further improved via KIP-904, now avoiding spurious intermediate results.
  • Kafka Streams' ProductionExceptionHandler is improved via KIP-399, now also covering serialization errors.
  • MirrorMaker now uses the incrementalAlterConfigs API by default to synchronize topic configurations instead of the deprecated alterConfigs API. A new setting called use.incremental.alter.configs is introduced to allow users to control which API to use. This new setting is marked deprecated and will be removed in the next major release, when the incrementalAlterConfigs API is always used. See KIP-894 for more details.
  • The JmxTool, EndToEndLatency, StreamsResetter, ConsumerPerformance and ClusterTool have been migrated to the tools module. The 'kafka.tools' package is deprecated and will change to 'org.apache.kafka.tools' in the next major release. See KAFKA-14525 for more details.
Upgrading ZooKeeper-based clusters

If you are upgrading from a version prior to 2.1.x, please see the note in step 5 below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 3.4, 3.3, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 3.5 (an illustrative server.properties snippet appears after this list).
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 3.5 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
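
For instance, during such a rolling upgrade from 3.4 the relevant server.properties lines might look like this (version numbers are illustrative):

# Step 1: pin the existing versions before swapping in the new code
inter.broker.protocol.version=3.4
log.message.format.version=3.4

# Step 3 (after the new code is verified): bump the protocol version
inter.broker.protocol.version=3.5
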
Upgrading KRaft-based clusters

If you are upgrading from a version prior to 3.3.0, please see the note in step 3 below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.

For a rolling upgrade:

  1. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
  2. Once the cluster's behavior and performance has been verified, bump the metadata.version by running ./bin/kafka-features.sh upgrade --metadata 3.5
  3. Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.

Upgrading to 3.4.0 from any version 0.8.x through 3.3.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 3.3, 3.2, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 3.4.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 3.4 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.

Upgrading a KRaft-based cluster to 3.4.0 from any version 3.0.x through 3.3.x

If you are upgrading from a version prior to 3.3.0, please see the note below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.

For a rolling upgrade:

  1. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
  2. Once the cluster's behavior and performance has been verified, bump the metadata.version by running ./bin/kafka-features.sh upgrade --metadata 3.4
  3. Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.
Notable changes in 3.4.0
  • Since Apache Kafka 3.4.0, we have added a system property ("org.apache.kafka.disallowed.login.modules") to disable the problematic login modules usage in SASL JAAS configuration. Also by default "com.sun.security.auth.module.JndiLoginModule" is disabled from Apache Kafka 3.4.0.

Upgrading to 3.3.1 from any version 0.8.x through 3.2.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 3.2, 3.1, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 3.3.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 3.3 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.

Upgrading a KRaft-based cluster to 3.3.1 from any version 3.0.x through 3.2.x

If you are upgrading from a version prior to 3.3.1, please see the note below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.

For a rolling upgrade:

  1. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
  2. Once the cluster's behavior and performance has been verified, bump the metadata.version by running ./bin/kafka-features.sh upgrade --metadata 3.3
  3. Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.
Notable changes in 3.3.1
  • KRaft mode is production ready for new clusters. See KIP-833 for more details (including limitations).
  • The partitioner used by default for records with no keys has been improved to avoid pathological behavior when one or more brokers are slow. The new logic may affect the batching behavior, which can be tuned using the batch.size and/or linger.ms configuration settings. The previous behavior can be restored by setting partitioner.class=org.apache.kafka.clients.producer.internals.DefaultPartitioner. See KIP-794 for more details. A sample producer configuration illustrating these settings appears after this list.
  • There is now a slightly different upgrade process for KRaft clusters than for ZK-based clusters, as described above.
  • Introduced a new API addMetricIfAbsent to Metrics which would create a new Metric if not existing or return the same metric if already registered. Note that this behaviour is different from the addMetric API, which throws an IllegalArgumentException when trying to create an already existing metric. (See KIP-843 for more details.)
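
A hedged example of the producer settings referenced above (the values are illustrative, not recommendations):

# Tune batching behavior under the improved default partitioner (illustrative values)
batch.size=32768
linger.ms=10

# Or restore the pre-3.3 behavior explicitly
partitioner.class=org.apache.kafka.clients.producer.internals.DefaultPartitioner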

Upgrading to 3.2.0 from any version 0.8.x through 3.1.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 3.1, 3.0, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 3.2.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 3.2 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 3.2.0
  • Idempotence for the producer is enabled by default if no conflicting configurations are set. When producing to brokers older than 2.8.0, the IDEMPOTENT_WRITE permission is required. Check the compatibility section of KIP-679 for details. In 3.0.0 and 3.1.0, a bug prevented this default from being applied, which meant that idempotence remained disabled unless the user had explicitly set enable.idempotence to true (see KAFKA-13598 for more details). This issue was fixed and the default is properly applied in 3.0.1, 3.1.1, and 3.2.0. An illustrative snippet of the resulting producer defaults appears after this list.
  • A notable exception is Connect that by default disables idempotent behavior for all of its producers in order to uniformly support using a wide range of Kafka broker versions. Users can change this behavior to enable idempotence for some or all producers via Connect worker and/or connector configuration. Connect may enable idempotent producers by default in a future major release.
  • Kafka has replaced log4j with reload4j due to security concerns. This only affects modules that specify a logging backend (connect-runtime and kafka-tools are two such examples). A number of modules, including kafka-clients, leave it to the application to specify the logging backend. More information can be found at reload4j. Projects that depend on the affected modules from the Kafka project should use slf4j-log4j12 version 1.7.35 or above or slf4j-reload4j to avoid possible compatibility issues originating from the logging framework.
  • The example connectors, FileStreamSourceConnector and FileStreamSinkConnector, have been removed from the default classpath. To use them in Kafka Connect standalone or distributed mode they need to be explicitly added, for example CLASSPATH=./lib/connect-file-3.2.0.jar ./bin/connect-distributed.sh.
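
As an illustration of the 3.2.0 producer defaults described above, an explicit producer configuration roughly equivalent to the new defaults would contain:

# Producer defaults in 3.2.0 when no conflicting settings are present
enable.idempotence=true
acks=all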

Upgrading to 3.1.0 from any version 0.8.x through 3.0.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 3.0, 2.8, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 3.1.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 3.1 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 3.1.1
  • Idempotence for the producer is enabled by default if no conflicting configurations are set. When producing to brokers older than 2.8.0, the IDEMPOTENT_WRITE permission is required. Check the compatibility section of KIP-679 for details. A bug prevented the producer idempotence default from being applied, which meant that it remained disabled unless the user had explicitly set enable.idempotence to true. See KAFKA-13598 for more details. This issue was fixed and the default is properly applied.
  • A notable exception is Connect that by default disables idempotent behavior for all of its producers in order to uniformly support using a wide range of Kafka broker versions. Users can change this behavior to enable idempotence for some or all producers via Connect worker and/or connector configuration. Connect may enable idempotent producers by default in a future major release.
  • Kafka has replaced log4j with reload4j due to security concerns. This only affects modules that specify a logging backend (connect-runtime and kafka-tools are two such examples). A number of modules, including kafka-clients, leave it to the application to specify the logging backend. More information can be found at reload4j. Projects that depend on the affected modules from the Kafka project should use slf4j-log4j12 version 1.7.35 or above or slf4j-reload4j to avoid possible compatibility issues originating from the logging framework.
Notable changes in 3.1.0
  • Apache Kafka supports Java 17.
  • The following metrics have been deprecated: bufferpool-wait-time-total, io-waittime-total, and iotime-total. Please use bufferpool-wait-time-ns-total, io-wait-time-ns-total, and io-time-ns-total instead. See KIP-773 for more details.
  • IBP 3.1 introduces topic IDs to FetchRequest as a part of KIP-516.

Upgrading to 3.0.1 from any version 0.8.x through 2.8.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 2.8, 2.7, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 3.0.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 3.0 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 3.0.1
  • Idempotence for the producer is enabled by default if no conflicting configurations are set. When producing to brokers older than 2.8.0, the IDEMPOTENT_WRITE permission is required. Check the compatibility section of KIP-679 for details. A bug prevented the producer idempotence default from being applied, which meant that it remained disabled unless the user had explicitly set enable.idempotence to true. See KAFKA-13598 for more details. This issue was fixed and the default is properly applied.
Notable changes in 3.0.0
  • The producer has stronger delivery guarantees by default: idempotence is enabled and acks is set to all instead of 1. See KIP-679 for details. In 3.0.0 and 3.1.0, a bug prevented the idempotence default from being applied, which meant that it remained disabled unless the user had explicitly set enable.idempotence to true. Note that the bug did not affect the acks=all change. See KAFKA-13598 for more details. This issue was fixed and the default is properly applied in 3.0.1, 3.1.1, and 3.2.0.
  • Java 8 and Scala 2.12 support have been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0. See KIP-750 and KIP-751 for more details.
  • ZooKeeper has been upgraded to version 3.6.3.
  • A preview of KRaft mode is available, though upgrading to it from the 2.8 Early Access release is not possible. See the config/kraft/README.md file for details.
  • The release tarball no longer includes test, sources, javadoc and test sources jars. These are still published to the Maven Central repository.
  • A number of implementation dependency jars are now available in the runtime classpath instead of compile and runtime classpaths. Compilation errors after the upgrade can be fixed by adding the missing dependency jar(s) explicitly or updating the application not to use internal classes.
  • The default value for the consumer configuration session.timeout.ms was increased from 10s to 45s. See KIP-735 for more details.
  • The broker configuration log.message.format.version and topic configuration message.format.version have been deprecated. The value of both configurations is always assumed to be 3.0 if inter.broker.protocol.version is 3.0 or higher. If log.message.format.version or message.format.version are set, we recommend clearing them at the same time as the inter.broker.protocol.version upgrade to 3.0. This will avoid potential compatibility issues if the inter.broker.protocol.version is downgraded. See KIP-724 for more details.
  • The Streams API removed all deprecated APIs that were deprecated in version 2.5.0 or earlier. For a complete list of removed APIs compare the detailed Kafka Streams upgrade notes.
  • Kafka Streams no longer has a compile time dependency on "connect:json" module (KAFKA-5146). Projects that were relying on this transitive dependency will have to explicitly declare it.
  • Custom principal builder implementations specified through principal.builder.class must now implement the KafkaPrincipalSerde interface to allow for forwarding between brokers. See KIP-590 for more details about the usage of KafkaPrincipalSerde.
  • A number of deprecated classes, methods and tools have been removed from the clients, connect, core and tools modules:
    • The Scala Authorizer, SimpleAclAuthorizer and related classes have been removed. Please use the Java Authorizer and AclAuthorizer instead.
    • The Metric#value() method was removed (KAFKA-12573).
    • The Sum and Total classes were removed (KAFKA-12584). Please use WindowedSum and CumulativeSum instead.
    • The Count and SampledTotal classes were removed. Please use WindowedCount and WindowedSum respectively instead.
    • The PrincipalBuilder, DefaultPrincipalBuilder and ResourceFilter classes were removed.
    • Various constants and constructors were removed from SslConfigs, SaslConfigs, AclBinding and AclBindingFilter.
    • The Admin.electPreferredLeaders() methods were removed. Please use Admin.electLeaders instead.
    • The kafka-preferred-replica-election command line tool was removed. Please use kafka-leader-election instead.
    • The --zookeeper option was removed from the kafka-topics and kafka-reassign-partitions command line tools. Please use --bootstrap-server instead.
    • In the kafka-configs command line tool, the --zookeeper option is only supported for updating SCRAM Credentials configuration and describing/updating dynamic broker configs when brokers are not running. Please use --bootstrap-server for other configuration operations.
    • The ConfigEntry constructor was removed (KAFKA-12577). Please use the remaining public constructor instead.
    • The config value default for the client config client.dns.lookup has been removed. In the unlikely event that you set this config explicitly, we recommend leaving the config unset (use_all_dns_ips is used by default).
    • The ExtendedDeserializer and ExtendedSerializer classes have been removed. Please use Deserializer and Serializer instead.
    • The close(long, TimeUnit) method was removed from the producer, consumer and admin client. Please use close(Duration).
    • The ConsumerConfig.addDeserializerToConfig and ProducerConfig.addSerializerToConfig methods were removed. These methods were not intended to be public API and there is no replacement.
    • The NoOffsetForPartitionException.partition() method was removed. Please use partitions() instead.
    • The default partition.assignment.strategy is changed to "[RangeAssignor, CooperativeStickyAssignor]", which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list. Please check the client upgrade path guide here for more detail.
    • The Scala kafka.common.MessageFormatter was removed. Please use the Java org.apache.kafka.common.MessageFormatter.
    • The MessageFormatter.init(Properties) method was removed. Please use configure(Map) instead.
    • The checksum() method has been removed from ConsumerRecord and RecordMetadata. The message format v2, which has been the default since 0.11, moved the checksum from the record to the record batch. As such, these methods don't make sense and no replacements exist.
    • The ChecksumMessageFormatter class was removed. It is not part of the public API, but it may have been used with kafka-console-consumer.sh. It reported the checksum of each record, which has not been supported since message format v2.
    • The org.apache.kafka.clients.consumer.internals.PartitionAssignor class has been removed. Please use org.apache.kafka.clients.consumer.ConsumerPartitionAssignor instead.
    • The quota.producer.default and quota.consumer.default configurations were removed (KAFKA-12591). Dynamic quota defaults must be used instead.
    • The port and host.name configurations were removed. Please use listeners instead.
    • The advertised.port and advertised.host.name configurations were removed. Please use advertised.listeners instead.
    • The deprecated worker configurations rest.host.name and rest.port were removed (KAFKA-12482) from the Kafka Connect worker configuration. Please use listeners instead.
  • The Producer#sendOffsetsToTransaction(Map offsets, String consumerGroupId) method has been deprecated. Please use Producer#sendOffsetsToTransaction(Map offsets, ConsumerGroupMetadata metadata) instead, where the ConsumerGroupMetadata can be retrieved via KafkaConsumer#groupMetadata() for stronger semantics. Note that the full set of consumer group metadata is only understood by brokers of version 2.5 or higher, so you must upgrade your kafka cluster to get the stronger semantics. Otherwise, you can just pass in new ConsumerGroupMetadata(consumerGroupId) to work with older brokers. See KIP-732 for more details.
  • The Connect internal.key.converter and internal.value.converter properties have been completely removed. The use of these Connect worker properties has been deprecated since version 2.0.0. Workers are now hardcoded to use the JSON converter with schemas.enable set to false. If your cluster has been using a different internal key or value converter, you can follow the migration steps outlined in KIP-738 to safely upgrade your Connect cluster to 3.0.
  • The Connect-based MirrorMaker (MM2) includes changes to support IdentityReplicationPolicy, enabling replication without renaming topics. The existing DefaultReplicationPolicy is still used by default, but identity replication can be enabled via the replication.policy configuration property. This is especially useful for users migrating from the older MirrorMaker (MM1), or for use-cases with simple one-way replication topologies where topic renaming is undesirable. Note that IdentityReplicationPolicy, unlike DefaultReplicationPolicy, cannot prevent replication cycles based on topic names, so take care to avoid cycles when constructing your replication topology.
  • The original MirrorMaker (MM1) and related classes have been deprecated. Please use the Connect-based MirrorMaker (MM2), as described in the Geo-Replication section.

Upgrading to 2.8.1 from any version 0.8.x through 2.7.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 2.7, 2.6, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.8.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.8 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 2.8.0
  • The 2.8.0 release added a new method to the Authorizer Interface introduced in KIP-679. The motivation is to unblock our future plan to enable the strongest message delivery guarantee by default. Custom authorizer should consider providing a more efficient implementation that supports audit logging and any custom configs or access rules.
  • IBP 2.8 introduces topic IDs to topics as a part of KIP-516. When using ZooKeeper, this information is stored in the TopicZNode. If the cluster is downgraded to a previous IBP or version, future topics will not get topic IDs and it is not guaranteed that topics will retain their topic IDs in ZooKeeper. This means that upon upgrading again, some topics or all topics will be assigned new IDs.
  • Kafka Streams introduce a type-safe split() operator as a substitution for the deprecated KStream#branch() method (cf. KIP-418).

Upgrading to 2.7.0 from any version 0.8.x through 2.6.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 2.6, 2.5, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.7.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.7 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 2.7.0
  • The 2.7.0 release includes the core Raft implementation specified in KIP-595. There is a separate "raft" module containing most of the logic. Until integration with the controller is complete, there is a standalone server that users can use for testing the performance of the Raft implementation. See the README.md in the raft module for details
  • KIP-651 adds support for using PEM files for key and trust stores.
  • KIP-612 adds support for enforcing broker-wide and per-listener connection create rates. The 2.7.0 release contains the first part of KIP-612 with dynamic configuration coming in the 2.8.0 release.
  • The ability to throttle topic and partition creations or topic deletions to prevent a cluster from being harmed via KIP-599.
  • When new features become available in Kafka there are two main issues:
    1. How do Kafka clients become aware of broker capabilities?
    2. How does the broker decide which features to enable?
    KIP-584 provides a flexible and operationally easy solution for client discovery, feature gating and rolling upgrades using a single restart.
  • The ability to print record offsets and headers with the ConsoleConsumer is now possible via KIP-431.
  • The addition of KIP-554 continues progress towards the goal of Zookeeper removal from Kafka. The addition of KIP-554 means you don't have to connect directly to ZooKeeper anymore for managing SCRAM credentials.
  • Altering non-reconfigurable configs of existent listeners causes InvalidRequestException. By contrast, the previous (unintended) behavior would have caused the updated configuration to be persisted, but it wouldn't take effect until the broker was restarted. See KAFKA-10479 for more discussion. See DynamicBrokerConfig.DynamicSecurityConfigs and SocketServer.ListenerReconfigurableConfigs for the supported reconfigurable configs of existent listeners.
  • Kafka Streams adds support for Sliding Windows Aggregations in the KStreams DSL.
  • Reverse iteration over state stores enabling more efficient most recent update searches with KIP-617
  • End-to-End latency metrics in Kafka Streams; see KIP-613 for more details.
  • Kafka Streams added metrics reporting default RocksDB properties with KIP-607
  • Better Scala implicit Serdes support from KIP-616
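To make the sliding-window addition concrete, here is a minimal Kafka Streams sketch; the topic name "page-views" and the window sizes are invented for this example and are not part of the release notes:

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.SlidingWindows;
    import org.apache.kafka.streams.kstream.Windowed;

    public class SlidingWindowCountExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            // Hypothetical input topic; counts events per key over a 5-minute sliding window
            KStream<String, String> views = builder.stream("page-views");
            KTable<Windowed<String>, Long> counts = views
                    .groupByKey()
                    .windowedBy(SlidingWindows.withTimeDifferenceAndGrace(
                            Duration.ofMinutes(5), Duration.ofSeconds(30)))
                    .count();
            // The windowed counts can now be written to an output topic or queried interactively
            counts.toStream().foreach((windowedKey, count) ->
                    System.out.println(windowedKey + " -> " + count));
        }
    }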

Upgrading to 2.6.0 from any version 0.8.x through 2.5.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 2.5, 2.4, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.6.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.6 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 2.6.0
  • Kafka Streams adds a new processing mode (requires broker 2.5 or newer) that improves application scalability using exactly-once guarantees (cf. KIP-447; a brief configuration sketch follows this list)
  • TLSv1.3 has been enabled by default for Java 11 or newer. The client and server will negotiate TLSv1.3 if both support it and fallback to TLSv1.2 otherwise. See KIP-573 for more details.
  • The default value for the client.dns.lookup configuration has been changed from default to use_all_dns_ips. If a hostname resolves to multiple IP addresses, clients and brokers will now attempt to connect to each IP in sequence until the connection is successfully established. See KIP-602 for more details.
  • NotLeaderForPartitionException has been deprecated and replaced with NotLeaderOrFollowerException. Fetch requests and other requests intended only for the leader or follower return NOT_LEADER_OR_FOLLOWER(6) instead of REPLICA_NOT_AVAILABLE(9) if the broker is not a replica, ensuring that this transient error during reassignments is handled by all clients as a retriable exception.
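A minimal fragment showing how a Streams application might opt into the new processing mode, assuming all brokers are already on 2.5 or newer; the application id and bootstrap address are placeholders:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-eos-app");           // hypothetical id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // hypothetical address
    // New processing mode from KIP-447 (named exactly_once_beta in 2.6)
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_BETA);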

Upgrading to 2.5.0 from any version 0.8.x through 2.4.x

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION. If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 2.4, 2.3, etc.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.5.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.5 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
  6. There are several notable changes to the reassignment tool kafka-reassign-partitions.sh following the completion of KIP-455. This tool now requires the --additional flag to be provided when changing the throttle of an active reassignment. Reassignment cancellation is now possible using the --cancel command. Finally, reassignment with --zookeeper has been deprecated in favor of --bootstrap-server. See the KIP for more detail.
Notable changes in 2.5.0
  • When RebalanceProtocol#COOPERATIVE is used, Consumer#poll can still return data while it is in the middle of a rebalance for those partitions still owned by the consumer; in addition Consumer#commitSync now may throw a non-fatal RebalanceInProgressException to notify users of such an event, in order to distinguish it from the fatal CommitFailedException and allow users to complete the ongoing rebalance and then reattempt committing offsets for those still-owned partitions.
  • New DSL operator cogroup() has been added for aggregating multiple streams together at once.
  • Added a new KStream.toTable() API to translate an input event stream into a KTable (a brief sketch follows this list).
  • Added a new Serde type Void to represent null keys or null values from input topic.
  • Deprecated UsePreviousTimeOnInvalidTimestamp and replaced it with UsePartitionTimeOnInvalidTimeStamp.
  • Improved exactly-once semantics by adding a pending offset fencing mechanism and stronger transactional commit consistency check, which greatly simplifies the implementation of a scalable exactly-once application. We also added a new exactly-once semantics code example under examples folder. Check out KIP-447 for the full details.
  • Added a new public api KafkaStreams.queryMetadataForKey(String, K, Serializer) to get detailed information on the key being queried. It provides information about the partition number where the key resides in addition to hosts containing the active and standby partitions for the key.
  • Provided support to query stale stores (for high availability) and the stores belonging to a specific partition by deprecating KafkaStreams.store(String, QueryableStoreType) and replacing it with KafkaStreams.store(StoreQueryParameters).
  • Added a new public API to access lag information for stores local to an instance with KafkaStreams.allLocalStorePartitionLags().
  • Scala 2.11 is no longer supported. See KIP-531 for details.
  • All Scala classes from the kafka.security.auth package have been deprecated. See KIP-504 for details of the new Java authorizer API added in 2.4.0. Note that kafka.security.auth.Authorizer and kafka.security.auth.SimpleAclAuthorizer were deprecated in 2.4.0.
  • TLSv1 and TLSv1.1 have been disabled by default since these have known security vulnerabilities. Only TLSv1.2 is now enabled by default. You can continue to use TLSv1 and TLSv1.1 by explicitly enabling these in the configuration options ssl.protocol and ssl.enabled.protocols.
  • ZooKeeper has been upgraded to 3.5.7, and a ZooKeeper upgrade from 3.4.X to 3.5.7 can fail if there are no snapshot files in the 3.4 data directory. This usually happens in test upgrades where ZooKeeper 3.5.7 is trying to load an existing 3.4 data dir in which no snapshot file has been created. For more details about the issue please refer to ZOOKEEPER-3056. A fix is given in ZOOKEEPER-3056, which is to set config snapshot.trust.empty=true in zookeeper.properties before the upgrade.
  • ZooKeeper version 3.5.7 supports TLS-encrypted connectivity to ZooKeeper both with or without client certificates, and additional Kafka configurations are available to take advantage of this. See KIP-515 for details.
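A minimal fragment illustrating the new KStream.toTable() API; the topic name is a placeholder used only for this sketch:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    StreamsBuilder builder = new StreamsBuilder();
    // Hypothetical changelog-style topic where each record for a key is an update
    KStream<String, String> updates = builder.stream("account-updates");
    // New in 2.5: materialize the stream as a table of the latest value per key
    KTable<String, String> latest = updates.toTable();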

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, 2.0.x or 2.1.x or 2.2.x or 2.3.x to 2.4.0

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.10.0, 0.11.0, 1.0, 2.0, 2.2).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3).
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.4.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.4 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.

Additional Upgrade Notes:

  1. ZooKeeper has been upgraded to 3.5.6. ZooKeeper upgrade from 3.4.X to 3.5.6 can fail if there are no snapshot files in the 3.4 data directory. This usually happens in test upgrades where ZooKeeper 3.5.6 is trying to load an existing 3.4 data dir in which no snapshot file has been created. For more details about the issue please refer to ZOOKEEPER-3056. A fix is given in ZOOKEEPER-3056, which is to set config snapshot.trust.empty=true in zookeeper.properties before the upgrade. But we have observed data loss in standalone cluster upgrades when using the snapshot.trust.empty=true config. For more details about the issue please refer to ZOOKEEPER-3644. So we recommend the safe workaround of copying an empty snapshot file to the 3.4 data directory, if there are no snapshot files in the 3.4 data directory. For more details about the workaround please refer to the ZooKeeper Upgrade FAQ.
  2. An embedded Jetty based AdminServer was added in ZooKeeper 3.5. AdminServer is enabled by default in ZooKeeper and is started on port 8080. AdminServer is disabled by default in the ZooKeeper config (zookeeper.properties) provided by the Apache Kafka distribution. Make sure to update your local zookeeper.properties file with admin.enableServer=false if you wish to disable the AdminServer. Please refer to the AdminServer config to configure the AdminServer. (A combined zookeeper.properties sketch follows this list.)
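A sketch of the zookeeper.properties entries these two notes refer to; whether you need anything beyond the first line depends on your upgrade situation, as described above:

    # Keep ZooKeeper's embedded Jetty AdminServer (default port 8080) disabled
    admin.enableServer=false
    # snapshot.trust.empty=true is mentioned in ZOOKEEPER-3056 but, per the note above,
    # the safer workaround is to copy an empty snapshot file into the 3.4 data directory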
Notable changes in 2.4.0
  • A new Admin API has been added for partition reassignments. Due to changing the way Kafka propagates reassignment information, it is possible to lose reassignment state in failure edge cases while upgrading to the new version. It is not recommended to start reassignments while upgrading.
  • ZooKeeper has been upgraded from 3.4.14 to 3.5.6. TLS and dynamic reconfiguration are supported by the new version.
  • The bin/kafka-preferred-replica-election.sh command line tool has been deprecated. It has been replaced by bin/kafka-leader-election.sh.
  • The electPreferredLeaders methods in the Java AdminClient class have been deprecated in favor of the electLeaders methods.
  • Scala code leveraging the NewTopic(String, int, short) constructor with literal values will need to explicitly call toShort on the second literal.
  • The argument in the GroupAuthorizationException(String) constructor is now used to specify an exception message. Previously it referred to the group that failed authorization. This was done for consistency with other exception types and to avoid potential misuse. The TopicAuthorizationException(String) constructor which was previously used for a single unauthorized topic was changed similarly.
  • The internal PartitionAssignor interface has been deprecated and replaced with a new ConsumerPartitionAssignor in the public API. Some methods/signatures are slightly different between the two interfaces. Users implementing a custom PartitionAssignor should migrate to the new interface as soon as possible.
  • The DefaultPartitioner now uses a sticky partitioning strategy. This means that records for a specific topic with null keys and no assigned partition will be sent to the same partition until the batch is ready to be sent. When a new batch is created, a new partition is chosen. This decreases latency to produce, but it may result in uneven distribution of records across partitions in edge cases. Generally users will not be impacted, but this difference may be noticeable in tests and other situations producing records for a very short amount of time.
  • The blocking KafkaConsumer#committed methods have been extended to allow a list of partitions as input parameters rather than a single partition. This enables fewer request/response iterations between clients and brokers fetching the committed offsets for the consumer group. The old overloaded functions are deprecated and we would recommend users make their code changes to leverage the new methods (details can be found in KIP-520).
  • We've introduced a new INVALID_RECORD error in the produce response to distinguish it from the CORRUPT_MESSAGE error. To be more concrete, previously when a batch of records was sent as part of a single request to the broker and one or more of the records failed validation due to various causes (mismatched magic bytes, crc checksum errors, null key for log compacted topics, etc), the whole batch would be rejected with the same and misleading CORRUPT_MESSAGE, and the caller of the producer client would see the corresponding exception either from the future object of RecordMetadata returned from the send call or in the Callback#onCompletion(RecordMetadata metadata, Exception exception). Now with the new error code and improved error messages of the exception, producer callers are better informed about the root cause of why their sent records failed.
  • We are introducing incremental cooperative rebalancing to the clients' group protocol, which allows consumers to keep all of their assigned partitions during a rebalance and at the end revoke only those which must be migrated to another consumer for overall cluster balance. The ConsumerCoordinator will choose the latest RebalanceProtocol that is commonly supported by all of the consumer's supported assignors. You can use the new built-in CooperativeStickyAssignor or plug in your own custom cooperative assignor. To do so you must implement the ConsumerPartitionAssignor interface and include RebalanceProtocol.COOPERATIVE in the list returned by ConsumerPartitionAssignor#supportedProtocols. Your custom assignor can then leverage the ownedPartitions field in each consumer's Subscription to give partitions back to their previous owners whenever possible. Note that when a partition is to be reassigned to another consumer, it must be removed from the new assignment until it has been revoked from its original owner. Any consumer that has to revoke a partition will trigger a followup rebalance to allow the revoked partition to safely be assigned to its new owner. See the ConsumerPartitionAssignor RebalanceProtocol javadocs for more information.
    To upgrade from the old (eager) protocol, which always revokes all partitions before rebalancing, to cooperative rebalancing, you must follow a specific upgrade path to get all clients on the same ConsumerPartitionAssignor that supports the cooperative protocol. This can be done with two rolling bounces, using the CooperativeStickyAssignor for the example: during the first one, add "cooperative-sticky" to the list of supported assignors for each member (without removing the previous assignor -- note that if previously using the default, you must include that explicitly as well). You then bounce and/or upgrade it. Once the entire group is on 2.4+ and all members have the "cooperative-sticky" among their supported assignors, remove the other assignor(s) and perform a second rolling bounce so that by the end all members support only the cooperative protocol. For further details on the cooperative rebalancing protocol and upgrade path, see KIP-429.
  • There are some behavioral changes to the ConsumerRebalanceListener, as well as a new API. Exceptions thrown during any of the listener's three callbacks will no longer be swallowed, and will instead be re-thrown all the way up to the Consumer.poll() call. The onPartitionsLost method has been added to allow users to react to abnormal circumstances where a consumer may have lost ownership of its partitions (such as a missed rebalance) and cannot commit offsets. By default, this will simply call the existing onPartitionsRevoked API to align with previous behavior. Note however that onPartitionsLost will not be called when the set of lost partitions is empty. This means that no callback will be invoked at the beginning of the first rebalance of a new consumer joining the group.
    The semantics of the ConsumerRebalanceListener's callbacks are further changed when following the cooperative rebalancing protocol described above. In addition to onPartitionsLost, onPartitionsRevoked will also never be called when the set of revoked partitions is empty. The callback will generally be invoked only at the end of a rebalance, and only on the set of partitions that are being moved to another consumer. The onPartitionsAssigned callback will however always be called, even with an empty set of partitions, as a way to notify users of a rebalance event (this is true for both cooperative and eager). For details on the new callback semantics, see the ConsumerRebalanceListener javadocs. (A brief listener sketch follows this list.)
  • The Scala trait kafka.security.auth.Authorizer has been deprecated and replaced with a new Java API org.apache.kafka.server.authorizer.Authorizer. The authorizer implementation class kafka.security.auth.SimpleAclAuthorizer has also been deprecated and replaced with a new implementation kafka.security.authorizer.AclAuthorizer. AclAuthorizer uses features supported by the new API to improve authorization logging and is compatible with SimpleAclAuthorizer. For more details, see KIP-504.
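A minimal sketch of a listener that treats the three callbacks differently; the commit logic is intentionally omitted and the subscribed topic name in the usage comment is a placeholder:

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    public class ExampleRebalanceListener implements ConsumerRebalanceListener {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Cooperative hand-off: still safe to commit offsets for these partitions here
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Always invoked at the end of a rebalance, possibly with an empty set
        }

        @Override
        public void onPartitionsLost(Collection<TopicPartition> partitions) {
            // Ownership was lost abnormally (e.g. after a missed rebalance); do not attempt to commit
        }
    }
    // Usage (hypothetical topic name):
    // consumer.subscribe(java.util.Collections.singletonList("orders"), new ExampleRebalanceListener());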

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, 2.0.x or 2.1.x or 2.2.x to 2.3.0

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, 2.0.x, or 2.1.x, and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2).
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.3.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.3 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 2.3.0
  • We are introducing a new rebalancing protocol for Kafka Connect based on incremental cooperative rebalancing. The new protocol does not require stopping all the tasks during a rebalancing phase between Connect workers. Instead, only the tasks that need to be exchanged between workers are stopped and they are started in a follow up rebalance. The new Connect protocol is enabled by default beginning with 2.3.0. For more details on how it works and how to enable the old behavior of eager rebalancing, checkout incremental cooperative rebalancing design.
  • We are introducing static membership for consumers. This feature reduces unnecessary rebalances during normal application upgrades or rolling bounces. For more details on how to use it, check out the static membership design (a brief configuration sketch follows this list).
  • Kafka Streams DSL switches its used store types. While this change is mainly transparent to users, there are some corner cases that may require code changes. See the Kafka Streams upgrade section for more details.
  • Kafka Streams 2.3.0 requires 0.11 message format or higher and does not work with older message format.
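A minimal fragment showing how a consumer opts into static membership; the bootstrap address, group id and instance id below are placeholders invented for this sketch:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // hypothetical address
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");          // hypothetical group
    // Static membership (KIP-345): a stable, unique id per instance so that a quick
    // restart within the session timeout does not trigger a rebalance
    props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "order-processor-1");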

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, 2.0.x or 2.1.x to 2.2.0

If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets. Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, or 2.0.x and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 2.0).
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.2.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.2 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.
Notable changes in 2.2.1
  • Kafka Streams 2.2.1 requires 0.11 message format or higher and does not work with older message format.
Notable changes in 2.2.0
  • The default consumer group id has been changed from the empty string ("") to null. Consumers who use the new default group id will not be able to subscribe to topics, and fetch or commit offsets. The empty string as consumer group id is deprecated but will be supported until a future major release. Old clients that rely on the empty string group id will now have to explicitly provide it as part of their consumer config. For more information see KIP-289.
  • The bin/kafka-topics.sh command line tool is now able to connect directly to brokers with --bootstrap-server instead of ZooKeeper. The old --zookeeper option is still available for now. Please read KIP-377 for more information.
  • Kafka Streams depends on a newer version of RocksDB that requires macOS 10.13 or higher.

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, or 2.0.0 to 2.1.0

Note that 2.1.x contains a change to the internal schema used to store consumer offsets. Once the upgrade is complete, it will not be possible to downgrade to previous versions. See the rolling upgrade notes below for more detail.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, or 2.0.x and you have not overridden the message format, then you only need to override the inter-broker protocol version.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 2.0).
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations. It is still possible to downgrade at this point if there are any problems.
  3. Once the cluster's behavior and performance has been verified, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.1.
  4. Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest protocol version, it will no longer be possible to downgrade the cluster to an older version.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.1 on each broker and restart them one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly once semantics), the newer Java clients must be used.

Additional Upgrade Notes:

  1. Offset expiration semantics have changed slightly in this version. According to the new semantics, offsets of partitions in a group will not be removed while the group is subscribed to the corresponding topic and is still active (has active consumers). If a group becomes empty, all its offsets will be removed after the default offset retention period (or the one set by the broker) has passed (unless the group becomes active again). Offsets associated with standalone (simple) consumers, that do not use Kafka group management, will be removed after the default offset retention period (or the one set by the broker) has passed since their last commit.
  2. The default for the console consumer's enable.auto.commit property when no group.id is provided is now set to false. This is to avoid polluting the consumer coordinator cache as the auto-generated group is not likely to be used by other consumers.
  3. The default value for the producer's retries config was changed to Integer.MAX_VALUE, as we introduced delivery.timeout.ms in KIP-91, which sets an upper bound on the total time between sending a record and receiving acknowledgement from the broker. By default, the delivery timeout is set to 2 minutes. (A brief producer configuration sketch follows this list.)
  4. By default, MirrorMaker now overrides delivery.timeout.ms to Integer.MAX_VALUE when configuring the producer. If you have overridden the value of retries in order to fail faster, you will instead need to override delivery.timeout.ms.
  5. The ListGroup API now expects, as a recommended alternative, Describe Group access to the groups a user should be able to list. Even though the old Describe Cluster access is still supported for backward compatibility, using it for this API is not advised.
  6. KIP-336 deprecates the ExtendedSerializer and ExtendedDeserializer interfaces and propagates the usage of Serializer and Deserializer. ExtendedSerializer and ExtendedDeserializer were introduced with KIP-82 to provide record headers for serializers and deserializers in a Java 7 compatible fashion. Now we consolidated these interfaces as Java 7 support has been dropped since.
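A minimal producer-configuration fragment reflecting notes 3 and 4 above; the bootstrap address and the 30-second timeout are placeholders chosen only for illustration:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // hypothetical address
    // Since 2.1 retries defaults to Integer.MAX_VALUE; to fail faster, bound the overall
    // send time via delivery.timeout.ms rather than lowering retries
    props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 30000);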
Notable changes in 2.1.0
  • Jetty has been upgraded to 9.4.12, which excludes TLS_RSA_* ciphers by default because they do not support forward secrecy, see https://github.com/eclipse/jetty.project/issues/2807 for more information.
  • Unclean leader election is automatically enabled by the controller when the unclean.leader.election.enable config is dynamically updated by using per-topic config override.
  • The AdminClient has added a method AdminClient#metrics(). Now any application using the AdminClient can gain more information and insight by viewing the metrics captured from the AdminClient. For more information see KIP-324
  • Kafka now supports Zstandard compression from KIP-110. You must upgrade the broker as well as clients to make use of it. Consumers prior to 2.1.0 will not be able to read from topics which use Zstandard compression, so you should not enable it for a topic until all downstream consumers are upgraded. See the KIP for more detail.
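A minimal fragment enabling Zstandard compression on a producer, which per the note above should only be done once brokers and all downstream consumers are on 2.1.0 or newer; the bootstrap address is a placeholder:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // hypothetical address
    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");              // new codec from KIP-110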

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, or 1.1.x to 2.0.0

Kafka 2.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade. However, please review the notable changes in 2.0.0 before upgrading.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from 0.11.0.x, 1.0.x, or 1.1.x and you have not overridden the message format, then you only need to override the inter-broker protocol format.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1).
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.0.
  4. Restart the brokers one by one for the new protocol version to take effect.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 2.0 on each broker and restart them one by one. Note that the older Scala consumer does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to take advantage of exactly once semantics), the newer Java consumer must be used.

Additional Upgrade Notes:

  1. If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start with the new protocol by default.
  2. Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after. Similarly for the message format version.
  3. If you are using Java8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities. Hot-swapping the jar-file only might not work.
  4. ACLs should not be added to prefixed resources, (added in KIP-290), until all brokers in the cluster have been updated.

    NOTE: any prefixed ACLs added to a cluster, even after the cluster is fully upgraded, will be ignored should the cluster be downgraded again.

Notable changes in 2.0.0
  • KIP-186 increases the default offset retention time from 1 day to 7 days. This makes it less likely to "lose" offsets in an application that commits infrequently. It also increases the active set of offsets and therefore can increase memory usage on the broker. Note that the console consumer currently enables offset commit by default and can be the source of a large number of offsets which this change will now preserve for 7 days instead of 1. You can preserve the existing behavior by setting the broker config offsets.retention.minutes to 1440.
  • Support for Java 7 has been dropped, Java 8 is now the minimum version required.
  • The default value for ssl.endpoint.identification.algorithm was changed to https, which performs hostname verification (man-in-the-middle attacks are possible otherwise). Set ssl.endpoint.identification.algorithm to an empty string to restore the previous behaviour.
  • KAFKA-5674 extends the lower interval of max.connections.per.ip minimum to zero and therefore allows IP-based filtering of inbound connections.
  • KIP-272 added API version tag to the metric kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower|...}. This metric now becomes kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower|...},version={0|1|2|3|...}. This will impact JMX monitoring tools that do not automatically aggregate. To get the total count for a specific request type, the tool needs to be updated to aggregate across different versions.
  • KIP-225 changed the metric "records.lag" to use tags for topic and partition. The original version with the name format "{topic}-{partition}.records-lag" has been removed.
  • The Scala consumers, which have been deprecated since 0.11.0.0, have been removed. The Java consumer has been the recommended option since 0.10.0.0. Note that the Scala consumers in 1.1.0 (and older) will continue to work even if the brokers are upgraded to 2.0.0.
  • The Scala producers, which have been deprecated since 0.10.0.0, have been removed. The Java producer has been the recommended option since 0.9.0.0. Note that the behaviour of the default partitioner in the Java producer differs from the default partitioner in the Scala producers. Users migrating should consider configuring a custom partitioner that retains the previous behaviour. Note that the Scala producers in 1.1.0 (and older) will continue to work even if the brokers are upgraded to 2.0.0.
  • MirrorMaker and ConsoleConsumer no longer support the Scala consumer, they always use the Java consumer.
  • The ConsoleProducer no longer supports the Scala producer, it always uses the Java producer.
  • A number of deprecated tools that rely on the Scala clients have been removed: ReplayLogProducer, SimpleConsumerPerformance, SimpleConsumerShell, ExportZkOffsets, ImportZkOffsets, UpdateOffsetsInZK, VerifyConsumerRebalance.
  • The deprecated kafka.tools.ProducerPerformance has been removed, please use org.apache.kafka.tools.ProducerPerformance.
  • New Kafka Streams configuration parameter upgrade.from added that allows rolling bounce upgrade from older version.
  • KIP-284 changed the retention time for Kafka Streams repartition topics by setting its default value to Long.MAX_VALUE.
  • Updated ProcessorStateManager APIs in Kafka Streams for registering state stores to the processor topology. For more details please read the Streams Upgrade Guide.
  • In earlier releases, Connect's worker configuration required the internal.key.converter and internal.value.converter properties. In 2.0, these are no longer required and default to the JSON converter. You may safely remove these properties from your Connect standalone and distributed worker configurations:
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable=false
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter.schemas.enable=false
  • KIP-266 adds a new consumer configuration default.api.timeout.ms to specify the default timeout to use for KafkaConsumer APIs that could block. The KIP also adds overloads for such blocking APIs to support specifying a specific timeout to use for each of them instead of using the default timeout set by default.api.timeout.ms. In particular, a new poll(Duration) API has been added which does not block for dynamic partition assignment. The old poll(long) API has been deprecated and will be removed in a future version. Overloads have also been added for other KafkaConsumer methods like partitionsFor, listTopics, offsetsForTimes, beginningOffsets, endOffsets and close that take in a Duration. (A brief poll(Duration) sketch follows this list.)
  • Also as part of KIP-266, the default value of request.timeout.ms has been changed to 30 seconds. The previous value was a little higher than 5 minutes to account for the maximum time that a rebalance would take. Now we treat the JoinGroup request in the rebalance as a special case and use a value derived from max.poll.interval.ms for the request timeout. All other request types use the timeout defined by request.timeout.ms.
  • The internal method kafka.admin.AdminClient.deleteRecordsBefore has been removed. Users are encouraged to migrate to org.apache.kafka.clients.admin.AdminClient.deleteRecords.
  • The AclCommand tool --producer convenience option uses the KIP-277 finer grained ACL on the given topic.
  • KIP-176 removes the --new-consumer option for all consumer based tools. This option is redundant since the new consumer is automatically used if --bootstrap-server is defined.
  • KIP-290 adds the ability to define ACLs on prefixed resources, e.g. any topic starting with 'foo'.
  • KIP-283 improves message down-conversion handling on Kafka broker, which has typically been a memory-intensive operation. The KIP adds a mechanism by which the operation becomes less memory intensive by down-converting chunks of partition data at a time which helps put an upper bound on memory consumption. With this improvement, there is a change in FetchResponse protocol behavior where the broker could send an oversized message batch towards the end of the response with an invalid offset. Such oversized messages must be ignored by consumer clients, as is done by KafkaConsumer.

    KIP-283 also adds new topic and broker configurations message.downconversion.enable and log.message.downconversion.enable respectively to control whether down-conversion is enabled. When disabled, broker does not perform any down-conversion and instead sends an UNSUPPORTED_VERSION error to the client.

  • Dynamic broker configuration options can be stored in ZooKeeper using kafka-configs.sh before brokers are started. This option can be used to avoid storing clear passwords in server.properties as all password configs may be stored encrypted in ZooKeeper.
  • ZooKeeper hosts are now re-resolved if connection attempt fails. But if your ZooKeeper host names resolve to multiple addresses and some of them are not reachable, then you may need to increase the connection timeout zookeeper.connection.timeout.ms.
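A minimal sketch of the poll(Duration) API from KIP-266; the bootstrap address, group id and topic name are placeholders invented for this example:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PollDurationExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // hypothetical
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"));   // hypothetical
                // Unlike the deprecated poll(long), poll(Duration) does not block beyond the
                // given timeout waiting for partition assignment
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }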
New Protocol Versions
  • KIP-279: OffsetsForLeaderEpochResponse v1 introduces a partition-level leader_epoch field.
  • KIP-219: Bump up the protocol versions of non-cluster action requests and responses that are throttled on quota violation.
  • KIP-290: Bump up the protocol versions of ACL create, describe and delete requests and responses.
Upgrading a 1.1 Kafka Streams Application
  • Upgrading your Streams application from 1.1 to 2.0 does not require a broker upgrade. A Kafka Streams 2.0 application can connect to 2.0, 1.1, 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • Note that in 2.0 we have removed the public APIs that are deprecated prior to 1.0; users leveraging on those deprecated APIs need to make code changes accordingly. See Streams API changes in 2.0.0 for more details.

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, or 1.0.x to 1.1.x

Kafka 1.1.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade. However, please review the notable changes in 1.1.0 before upgrading.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from 0.11.0.x or 1.0.x and you have not overridden the message format, then you only need to override the inter-broker protocol format.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0 or 1.0).
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 1.1.
  4. Restart the brokers one by one for the new protocol version to take effect.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 1.1 on each broker and restart them one by one. Note that the older Scala consumer does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to take advantage of exactly once semantics), the newer Java consumer must be used.

Additional Upgrade Notes:

  1. If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start with the new protocol by default.
  2. Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after. Similarly for the message format version.
  3. If you are using Java8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities. Hot-swapping the jar-file only might not work.
Notable changes in 1.1.1
  • New Kafka Streams configuration parameter upgrade.from added that allows rolling bounce upgrade from version 0.10.0.x
  • See the Kafka Streams upgrade guide for details about this new config.
Notable changes in 1.1.0
  • The kafka artifact in Maven no longer depends on log4j or slf4j-log4j12. Similarly to the kafka-clients artifact, users can now choose the logging back-end by including the appropriate slf4j module (slf4j-log4j12, logback, etc.). The release tarball still includes log4j and slf4j-log4j12.
  • KIP-225 changed the metric "records.lag" to use tags for topic and partition. The original version with the name format "{topic}-{partition}.records-lag" is deprecated and will be removed in 2.0.0.
  • Kafka Streams is more robust against broker communication errors. Instead of stopping the Kafka Streams client with a fatal exception, Kafka Streams tries to self-heal and reconnect to the cluster. Using the new AdminClient you have better control of how often Kafka Streams retries and can configure fine-grained timeouts (instead of hard coded retries as in older version).
  • Kafka Streams rebalance time was reduced further making Kafka Streams more responsive.
  • Kafka Connect now supports message headers in both sink and source connectors, and to manipulate them via simple message transforms. Connectors must be changed to explicitly use them. A new HeaderConverter is introduced to control how headers are (de)serialized, and the new "SimpleHeaderConverter" is used by default to use string representations of values.
  • kafka.tools.DumpLogSegments now automatically sets deep-iteration option if print-data-log is enabled explicitly or implicitly due to any of the other options like decoder.
New Protocol Versions
  • KIP-226 introduced DescribeConfigs Request/Response v1.
  • KIP-227 introduced Fetch Request/Response v7.
Upgrading a 1.0 Kafka Streams Application
  • Upgrading your Streams application from 1.0 to 1.1 does not require a broker upgrade. A Kafka Streams 1.1 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • See Streams API changes in 1.1.0 for more details.

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x or 0.11.0.x to 1.0.0

Kafka 1.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade. However, please review the notable changes in 1.0.0 before upgrading.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
    If you are upgrading from 0.11.0.x and you have not overridden the message format, you must set both the message format version and the inter-broker protocol version to 0.11.0.
    • inter.broker.protocol.version=0.11.0
    • log.message.format.version=0.11.0
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 1.0.
  4. Restart the brokers one by one for the new protocol version to take effect.
  5. If you have overridden the message format version as instructed above, then you need to do one more rolling restart to upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later, change log.message.format.version to 1.0 on each broker and restart them one by one. If you are upgrading from 0.11.0 and log.message.format.version is set to 0.11.0, you can update the config and skip the rolling restart. Note that the older Scala consumer does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to take advantage of exactly once semantics), the newer Java consumer must be used.

Additional Upgrade Notes:

  1. If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start with the new protocol by default.
  2. Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after. Similarly for the message format version.
Notable changes in 1.0.2
  • New Kafka Streams configuration parameter upgrade.from added that allows rolling bounce upgrade from version 0.10.0.x
  • See the Kafka Streams upgrade guide for details about this new config.
Notable changes in 1.0.1
  • Restored binary compatibility of AdminClient's Options classes (e.g. CreateTopicsOptions, DeleteTopicsOptions, etc.) with 0.11.0.x. Binary (but not source) compatibility had been broken inadvertently in 1.0.0.
Notable changes in 1.0.0
  • Topic deletion is now enabled by default, since the functionality is now stable. Users who wish to retain the previous behavior should set the broker config delete.topic.enable to false. Keep in mind that topic deletion removes data and the operation is not reversible (i.e. there is no "undelete" operation)
  • For topics that support timestamp search if no offset can be found for a partition, that partition is now included in the search result with a null offset value. Previously, the partition was not included in the map. This change was made to make the search behavior consistent with the case of topics not supporting timestamp search.
  • If the inter.broker.protocol.version is 1.0 or later, a broker will now stay online to serve replicas on live log directories even if there are offline log directories. A log directory may become offline due to IOException caused by hardware failure. Users need to monitor the per-broker metric offlineLogDirectoryCount to check whether there is offline log directory.
  • Added KafkaStorageException which is a retriable exception. KafkaStorageException will be converted to NotLeaderForPartitionException in the response if the version of the client's FetchRequest or ProducerRequest does not support KafkaStorageException.
  • -XX:+DisableExplicitGC was replaced by -XX:+ExplicitGCInvokesConcurrent in the default JVM settings. This helps avoid out of memory exceptions during allocation of native memory by direct buffers in some cases.
  • The overridden handleError method implementations have been removed from the following deprecated classes in the kafka.api package: FetchRequest, GroupCoordinatorRequest, OffsetCommitRequest, OffsetFetchRequest, OffsetRequest, ProducerRequest, and TopicMetadataRequest. This was only intended for use on the broker, but it is no longer in use and the implementations have not been maintained. A stub implementation has been retained for binary compatibility.
  • The Java clients and tools now accept any string as a client-id.
  • The deprecated kafka-consumer-offset-checker.sh tool has been removed. Use kafka-consumer-groups.sh to get consumer group details.
  • SimpleAclAuthorizer now logs access denials to the authorizer log by default.
  • Authentication failures are now reported to clients as one of the subclasses of AuthenticationException. No retries will be performed if a client connection fails authentication.
  • Custom SaslServer implementations may throw SaslAuthenticationException to provide an error message to return to clients indicating the reason for authentication failure. Implementors should take care not to include any security-critical information in the exception message that should not be leaked to unauthenticated clients.
  • The app-info mbean registered with JMX to provide version and commit id will be deprecated and replaced with metrics providing these attributes.
  • Kafka metrics may now contain non-numeric values. org.apache.kafka.common.Metric#value() has been deprecated and will return 0.0 in such cases to minimise the probability of breaking users who read the value of every client metric (via a MetricsReporter implementation or by calling the metrics() method). org.apache.kafka.common.Metric#metricValue() can be used to retrieve numeric and non-numeric metric values. (A brief sketch follows this list.)
  • Every Kafka rate metric now has a corresponding cumulative count metric with the suffix -total to simplify downstream processing. For example, records-consumed-rate has a corresponding metric named records-consumed-total.
  • Mx4j will only be enabled if the system property kafka_mx4jenable is set to true. Due to a logic inversion bug, it was previously enabled by default and disabled if kafka_mx4jenable was set to true.
  • The org.apache.kafka.common.security.auth package in the clients jar has been made public and added to the javadocs. Internal classes which had previously been located in this package have been moved elsewhere.
  • When using an Authorizer and a user doesn't have required permissions on a topic, the broker will return TOPIC_AUTHORIZATION_FAILED errors to requests irrespective of topic existence on the broker. If the user has the required permissions and the topic doesn't exist, then the UNKNOWN_TOPIC_OR_PARTITION error code will be returned.
  • The config/consumer.properties file has been updated to use new consumer config properties.
New Protocol Versions
  • KIP-112: LeaderAndIsrRequest v1 introduces a partition-level is_new field.
  • KIP-112: UpdateMetadataRequest v4 introduces a partition-level offline_replicas field.
  • KIP-112: MetadataResponse v5 introduces a partition-level offline_replicas field.
  • KIP-112: ProduceResponse v4 introduces error code for KafkaStorageException.
  • KIP-112: FetchResponse v6 introduces error code for KafkaStorageException.
  • KIP-152: SaslAuthenticate request has been added to enable reporting of authentication failures. This request will be used if the SaslHandshake request version is greater than 0.
Upgrading a 0.11.0 Kafka Streams Application
  • Upgrading your Streams application from 0.11.0 to 1.0 does not require a broker upgrade. A Kafka Streams 1.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). However, Kafka Streams 1.0 requires 0.10 message format or newer and does not work with older message formats.
  • If you are monitoring streams metrics, you will need to make some changes to the metrics names in your reporting and monitoring code, because the metrics sensor hierarchy was changed.
  • A few public APIs, including ProcessorContext#schedule(), Processor#punctuate(), KStreamBuilder, and TopologyBuilder, are being deprecated by new APIs. We recommend making the corresponding code changes, which should be very minor since the new APIs look quite similar, when you upgrade.
  • See Streams API changes in 1.0.0 for more details.
Upgrading a 0.10.2 Kafka Streams Application
  • Upgrading your Streams application from 0.10.2 to 1.0 does not require a broker upgrade. A Kafka Streams 1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • If you are monitoring streams metrics, you will need to make some changes to the metrics names in your reporting and monitoring code, because the metrics sensor hierarchy was changed.
  • A few public APIs, including ProcessorContext#schedule(), Processor#punctuate(), KStreamBuilder, and TopologyBuilder, are being deprecated by new APIs. We recommend making the corresponding code changes, which should be very minor since the new APIs look quite similar, when you upgrade.
  • If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement configuration parameters, as these configs are deprecated.
  • See Streams API changes in 0.11.0 for more details.
Upgrading a 0.10.1 Kafka Streams Application
  • Upgrading your Streams application from 0.10.1 to 1.0 does not require a broker upgrade. A Kafka Streams 1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • You need to recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • If you are monitoring streams metrics, you will need to make some changes to the metrics names in your reporting and monitoring code, because the metrics sensor hierarchy was changed.
  • A few public APIs, including ProcessorContext#schedule(), Processor#punctuate(), KStreamBuilder, and TopologyBuilder, are being deprecated by new APIs. We recommend making the corresponding code changes, which should be very minor since the new APIs look quite similar, when you upgrade.
  • If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement configuration parameters, as these configs are deprecated.
  • If you use a custom (i.e., user implemented) timestamp extractor, you will need to update this code, because the TimestampExtractor interface was changed.
  • If you register custom metrics, you will need to update this code, because the StreamsMetric interface was changed.
  • See Streams API changes in 1.0.0, Streams API changes in 0.11.0 and Streams API changes in 0.10.2 for more details.
Upgrading a 0.10.0 Kafka Streams Application
  • Upgrading your Streams application from 0.10.0 to 1.0 does require a broker upgrade because a Kafka Streams 1.0 application can only connect to 1.0, 0.11.0, 0.10.2, or 0.10.1 brokers.
  • There are a couple of API changes that are not backward compatible (cf. Streams API changes in 1.0.0, Streams API changes in 0.11.0, Streams API changes in 0.10.2, and Streams API changes in 0.10.1 for more details). Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • Upgrading from 0.10.0.x to 1.0.2 requires two rolling bounces with config upgrade.from="0.10.0" set for the first upgrade phase (cf. KIP-268). As an alternative, an offline upgrade is also possible.
    • prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 1.0.2
    • bounce each instance of your application once
    • prepare your newly deployed 1.0.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
    • bounce each instance of your application once more to complete the upgrade
  • Upgrading from 0.10.0.x to 1.0.0 or 1.0.1 requires an offline upgrade (rolling bounce upgrade is not supported)
    • stop all old (0.10.0.x) application instances
    • update your code and swap old code and jar file with new code and new jar file
    • restart all new (1.0.0 or 1.0.1) application instances

Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x or 0.10.2.x to 0.11.0.0

Kafka 0.11.0.0 introduces a new message format version as well as wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade. However, please review the notable changes in 0.11.0.0 before upgrading.

Starting with version 0.10.2, Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.11.0 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the Kafka cluster before upgrading your clients. Version 0.11.0 brokers support 0.8.x and newer clients.

For a rolling upgrade:

  1. Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the current message format version currently in use. If you have not overridden the message format previously, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
    • inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1 or 0.10.2).
    • log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION (See potential performance impact following the upgrade for the details on what this configuration does.)
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.11.0, but do not change log.message.format.version yet.
  4. Restart the brokers one by one for the new protocol version to take effect.
  5. Once all (or most) consumers have been upgraded to 0.11.0 or later, then change log.message.format.version to 0.11.0 on each broker and restart them one by one. Note that the older Scala consumer does not support the new message format, so to avoid the performance cost of down-conversion (or to take advantage of exactly once semantics), the new Java consumer must be used.

Additional Upgrade Notes:

  1. If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start with the new protocol by default.
  2. Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after. Similarly for the message format version.
  3. It is also possible to enable the 0.11.0 message format on individual topics using the topic admin tool (bin/kafka-topics.sh) prior to updating the global setting log.message.format.version.
  4. If you are upgrading from a version prior to 0.10.0, it is NOT necessary to first update the message format to 0.10.0 before you switch to 0.11.0.
Upgrading a 0.10.2 Kafka Streams Application
  • Upgrading your Streams application from 0.10.2 to 0.11.0 does not require a broker upgrade. A Kafka Streams 0.11.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement configuration parameters, as these configs are deprecated.
  • See Streams API changes in 0.11.0 for more details.
Upgrading a 0.10.1 Kafka Streams Application
  • Upgrading your Streams application from 0.10.1 to 0.11.0 does not require a broker upgrade. A Kafka Streams 0.11.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • You need to recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement configuration parameters, as these configs are deprecated.
  • If you use a custom (i.e., user implemented) timestamp extractor, you will need to update this code, because the TimestampExtractor interface was changed.
  • If you register custom metrics, you will need to update this code, because the StreamsMetric interface was changed.
  • See Streams API changes in 0.11.0 and Streams API changes in 0.10.2 for more details.
Upgrading a 0.10.0 Kafka Streams Application
  • Upgrading your Streams application from 0.10.0 to 0.11.0 does require a broker upgrade because a Kafka Streams 0.11.0 application can only connect to 0.11.0, 0.10.2, or 0.10.1 brokers.
  • There are a couple of API changes that are not backward compatible (cf. Streams API changes in 0.11.0, Streams API changes in 0.10.2, and Streams API changes in 0.10.1 for more details). Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • Upgrading from 0.10.0.x to 0.11.0.3 requires two rolling bounces with config upgrade.from="0.10.0" set for the first upgrade phase (cf. KIP-268). As an alternative, an offline upgrade is also possible.
    • prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.11.0.3
    • bounce each instance of your application once
    • prepare your newly deployed 0.11.0.3 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
    • bounce each instance of your application once more to complete the upgrade
  • Upgrading from 0.10.0.x to 0.11.0.0, 0.11.0.1, or 0.11.0.2 requires an offline upgrade (rolling bounce upgrade is not supported)
    • stop all old (0.10.0.x) application instances
    • update your code and swap old code and jar file with new code and new jar file
    • restart all new (0.11.0.0 , 0.11.0.1, or 0.11.0.2) application instances
Notable changes in 0.11.0.3
  • New Kafka Streams configuration parameter upgrade.from added that allows rolling bounce upgrade from version 0.10.0.x
  • See the Kafka Streams upgrade guide for details about this new config.
Notable changes in 0.11.0.0
  • Unclean leader election is now disabled by default. The new default favors durability over availability. Users who wish to retain the previous behavior should set the broker config unclean.leader.election.enable to true.
  • Producer configs block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms have been removed. They were initially deprecated in Kafka 0.9.0.0.
  • The broker config offsets.topic.replication.factor is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement.
  • When compressing data with snappy, the producer and broker will use the compression scheme's default block size (2 x 32 KB) instead of 1 KB in order to improve the compression ratio. There have been reports of data compressed with the smaller block size being 50% larger than when compressed with the larger block size. For the snappy case, a producer with 5000 partitions will require an additional 315 MB of JVM heap.
  • Similarly, when compressing data with gzip, the producer and broker will use 8 KB instead of 1 KB as the buffer size. The default for gzip is excessively low (512 bytes).
  • The broker configuration max.message.bytes now applies to the total size of a batch of messages. Previously the setting applied to batches of compressed messages, or to non-compressed messages individually. A message batch may consist of only a single message, so in most cases, the limitation on the size of individual messages is only reduced by the overhead of the batch format. However, there are some subtle implications for message format conversion (see below for more detail). Note also that while previously the broker would ensure that at least one message is returned in each fetch request (regardless of the total and partition-level fetch sizes), the same behavior now applies to one message batch.
  • GC log rotation is enabled by default, see KAFKA-3754 for details.
  • Deprecated constructors of RecordMetadata, MetricName and Cluster classes have been removed.
  • Added user headers support through a new Headers interface providing user headers read and write access.
  • ProducerRecord and ConsumerRecord expose the new Headers API via the Headers headers() method call (see the sketch after this list).
  • ExtendedSerializer and ExtendedDeserializer interfaces are introduced to support serialization and deserialization for headers. Headers will be ignored if the configured serializer and deserializer are not the above classes.
  • A new config, group.initial.rebalance.delay.ms, was introduced. This config specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance. The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms. The default value for this is 3 seconds. During development and testing it might be desirable to set this to 0 in order to not delay test execution time.
  • The org.apache.kafka.common.Cluster#partitionsForTopic, partitionsForNode and availablePartitionsForTopic methods will return an empty list instead of null (which is considered a bad practice) in case the metadata for the required topic does not exist.
  • Streams API configuration parameters timestamp.extractor, key.serde, and value.serde were deprecated and replaced by default.timestamp.extractor, default.key.serde, and default.value.serde, respectively.
  • For offset commit failures in the Java consumer's commitAsync APIs, we no longer expose the underlying cause when instances of RetriableCommitFailedException are passed to the commit callback. See KAFKA-5052 for more detail.
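As a brief illustration of the Headers API mentioned in the list above, the following is a minimal sketch (not part of the original release notes) that attaches a header to an outgoing record before sending it. The broker address, topic name, and "trace-id" header key are placeholder values chosen for the example.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HeadersExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("payments", "alice", "paid $200 to bob"); // hypothetical topic
            // headers() returns a mutable Headers collection; add a hypothetical tracing header
            record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}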
New Protocol Versions
  • KIP-107: FetchRequest v5 introduces a partition-level log_start_offset field.
  • KIP-107: FetchResponse v5 introduces a partition-level log_start_offset field.
  • KIP-82: ProduceRequest v3 introduces an array of header in the message protocol, containing a key field and a value field.
  • KIP-82: FetchResponse v5 introduces an array of header in the message protocol, containing a key field and a value field.
Notes on Exactly Once Semantics

Kafka 0.11.0 includes support for idempotent and transactional capabilities in the producer. Idempotent delivery ensures that messages are delivered exactly once to a particular topic partition during the lifetime of a single producer. Transactional delivery allows producers to send data to multiple partitions such that either all messages are successfully delivered, or none of them are. Together, these capabilities enable "exactly once semantics" in Kafka. More details on these features are available in the user guide, but below we add a few specific notes on enabling them in an upgraded cluster. Note that enabling EoS is not required and there is no impact on the broker's behavior if unused.

  1. Only the new Java producer and consumer support exactly once semantics.
  2. These features depend crucially on the 0.11.0 message format. Attempting to use them on an older format will result in unsupported version errors.
  3. Transaction state is stored in a new internal topic __transaction_state. This topic is not created until the first attempt to use a transactional request API. Similar to the consumer offsets topic, there are several settings to control the topic's configuration. For example, transaction.state.log.min.isr controls the minimum ISR for this topic. See the configuration section in the user guide for a full list of options.
  4. For secure clusters, the transactional APIs require new ACLs which can be turned on with the bin/kafka-acls.sh tool.
  5. EoS in Kafka introduces new request APIs and modifies several existing ones. See KIP-98 for the full details
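To make the transactional producer notes above more concrete, here is a minimal sketch of using the transactional APIs. The broker address, transactional id, and topic names are placeholder values; error handling follows the general pattern from the producer javadocs, with fatal errors closing the producer and other Kafka errors aborting the transaction.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-tx-1");   // hypothetical transactional id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            // Writes to multiple topics/partitions commit or abort atomically
            producer.send(new ProducerRecord<>("payments", "alice", "debit 200"));   // hypothetical topics
            producer.send(new ProducerRecord<>("audit", "alice", "payment recorded"));
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            // Fatal errors: the only option is to close the producer
            producer.close();
        } catch (KafkaException e) {
            // For other Kafka errors, abort the transaction and retry if desired
            producer.abortTransaction();
        }
        producer.close();
    }
}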
Notes on the new message format in 0.11.0

The 0.11.0 message format includes several major enhancements in order to support better delivery semantics for the producer (see KIP-98) and improved replication fault tolerance (see KIP-101). Although the new format contains more information to make these improvements possible, we have made the batch format much more efficient. As long as the number of messages per batch is more than 2, you can expect lower overall overhead. For smaller batches, however, there may be a small performance impact. See here for the results of our initial performance analysis of the new message format. You can also find more detail on the message format in the KIP-98 proposal.

One of the notable differences in the new message format is that even uncompressed messages are stored together as a single batch. This has a few implications for the broker configuration max.message.bytes, which limits the size of a single batch. First, if an older client produces messages to a topic partition using the old format, and the messages are individually smaller than max.message.bytes, the broker may still reject them after they are merged into a single batch during the up-conversion process. Generally this can happen when the aggregate size of the individual messages is larger than max.message.bytes. There is a similar effect for older consumers reading messages down-converted from the new format: if the fetch size is not set at least as large as max.message.bytes, the consumer may not be able to make progress even if the individual uncompressed messages are smaller than the configured fetch size. This behavior does not impact the Java client for 0.10.1.0 and later since it uses an updated fetch protocol which ensures that at least one message can be returned even if it exceeds the fetch size. To get around these problems, you should ensure 1) that the producer's batch size is not set larger than max.message.bytes, and 2) that the consumer's fetch size is set at least as large as max.message.bytes.

Most of the discussion on the performance impact of upgrading to the 0.10.0 message format remains pertinent to the 0.11.0 upgrade. This mainly affects clusters that are not secured with TLS since "zero-copy" transfer is already not possible in that case. In order to avoid the cost of down-conversion, you should ensure that consumer applications are upgraded to the latest 0.11.0 client. Significantly, since the old consumer has been deprecated in 0.11.0.0, it does not support the new message format. You must upgrade to use the new consumer to use the new message format without the cost of down-conversion. Note that 0.11.0 consumers support backwards compatibility with 0.10.0 brokers and upward, so it is possible to upgrade the clients first before the brokers.

Upgrading from 0.8.x, 0.9.x, 0.10.0.x or 0.10.1.x to 0.10.2.0

0.10.2.0 has wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade. However, please review the notable changes in 0.10.2.0 before upgrading.

Starting with version 0.10.2, Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.10.2 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients.

For a rolling upgrade:

  1. Update server.properties file on all brokers and add the following properties:
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.10.2.
  4. If your previous message format is 0.10.0, change log.message.format.version to 0.10.2 (this is a no-op as the message format is the same for 0.10.0, 0.10.1 and 0.10.2). If your previous message format version is lower than 0.10.0, do not change log.message.format.version yet - this parameter should only change once all consumers have been upgraded to 0.10.0.0 or later.
  5. Restart the brokers one by one for the new protocol version to take effect.
  6. If log.message.format.version is still lower than 0.10.0 at this point, wait until all consumers have been upgraded to 0.10.0 or later, then change log.message.format.version to 0.10.2 on each broker and restart them one by one.

Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.

Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.

Upgrading a 0.10.1 Kafka Streams Application
  • Upgrading your Streams application from 0.10.1 to 0.10.2 does not require a broker upgrade. A Kafka Streams 0.10.2 application can connect to 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • You need to recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • If you use a custom (i.e., user implemented) timestamp extractor, you will need to update this code, because the TimestampExtractor interface was changed.
  • If you register custom metrics, you will need to update this code, because the StreamsMetric interface was changed.
  • See Streams API changes in 0.10.2 for more details.
Upgrading a 0.10.0 Kafka Streams Application
  • Upgrading your Streams application from 0.10.0 to 0.10.2 does require a broker upgrade because a Kafka Streams 0.10.2 application can only connect to 0.10.2 or 0.10.1 brokers.
  • There are a couple of API changes that are not backward compatible (cf. Streams API changes in 0.10.2 for more details). Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • Upgrading from 0.10.0.x to 0.10.2.2 requires two rolling bounces with config upgrade.from="0.10.0" set for the first upgrade phase (cf. KIP-268). As an alternative, an offline upgrade is also possible.
    • prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.10.2.2
    • bounce each instance of your application once
    • prepare your newly deployed 0.10.2.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
    • bounce each instance of your application once more to complete the upgrade
  • Upgrading from 0.10.0.x to 0.10.2.0 or 0.10.2.1 requires an offline upgrade (rolling bounce upgrade is not supported)
    • stop all old (0.10.0.x) application instances
    • update your code and swap old code and jar file with new code and new jar file
    • restart all new (0.10.2.0 or 0.10.2.1) application instances
Notable changes in 0.10.2.2
  • New configuration parameter upgrade.from added that allows rolling bounce upgrade from version 0.10.0.x
Notable changes in 0.10.2.1
  • The default values for two configurations of the StreamsConfig class were changed to improve the resiliency of Kafka Streams applications. The internal Kafka Streams producer retries default value was changed from 0 to 10. The internal Kafka Streams consumer max.poll.interval.ms default value was changed from 300000 to Integer.MAX_VALUE.
Notable changes in 0.10.2.0
  • The Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.10.2 clients can talk to version 0.10.0 or newer brokers. Note that some features are not available or are limited when older brokers are used.
  • Several methods on the Java consumer may now throw InterruptException if the calling thread is interrupted. Please refer to the KafkaConsumer Javadoc for a more in-depth explanation of this change.
  • The Java consumer now shuts down gracefully. By default, the consumer waits up to 30 seconds to complete pending requests. A new close API with timeout has been added to KafkaConsumer to control the maximum wait time.
  • Multiple regular expressions separated by commas can be passed to MirrorMaker with the new Java consumer via the --whitelist option. This makes the behaviour consistent with MirrorMaker when the old Scala consumer is used.
  • Upgrading your Streams application from 0.10.1 to 0.10.2 does not require a broker upgrade. A Kafka Streams 0.10.2 application can connect to 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
  • The Zookeeper dependency was removed from the Streams API. The Streams API now uses the Kafka protocol to manage internal topics instead of modifying Zookeeper directly. This eliminates the need for privileges to access Zookeeper directly and "StreamsConfig.ZOOKEEPER_CONFIG" should not be set in the Streams app any more. If the Kafka cluster is secured, Streams apps must have the required security privileges to create new topics.
  • Several new fields including "security.protocol", "connections.max.idle.ms", "retry.backoff.ms", "reconnect.backoff.ms" and "request.timeout.ms" were added to the StreamsConfig class. Users should pay attention to the default values and set these if needed. For more details please refer to 3.5 Kafka Streams Configs.
New Protocol Versions
  • KIP-88: OffsetFetchRequest v2 supports retrieval of offsets for all topics if the topics array is set to null.
  • KIP-88: OffsetFetchResponse v2 introduces a top-level error_code field.
  • KIP-103: UpdateMetadataRequest v3 introduces a listener_name field to the elements of the end_points array.
  • KIP-108: CreateTopicsRequest v1 introduces a validate_only field.
  • KIP-108: CreateTopicsResponse v1 introduces an error_message field to the elements of the topic_errors array.

Upgrading from 0.8.x, 0.9.x or 0.10.0.X to 0.10.1.0

0.10.1.0 has wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade. However, please review the Potential breaking changes in 0.10.1.0 before upgrading.
Note: Because new protocols are introduced, it is important to upgrade your Kafka clusters before upgrading your clients (i.e. 0.10.1.x clients only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older clients).

For a rolling upgrade:

  1. Update server.properties file on all brokers and add the following properties:
  2. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.10.1.0.
  4. If your previous message format is 0.10.0, change log.message.format.version to 0.10.1 (this is a no-op as the message format is the same for both 0.10.0 and 0.10.1). If your previous message format version is lower than 0.10.0, do not change log.message.format.version yet - this parameter should only change once all consumers have been upgraded to 0.10.0.0 or later.
  5. Restart the brokers one by one for the new protocol version to take effect.
  6. If log.message.format.version is still lower than 0.10.0 at this point, wait until all consumers have been upgraded to 0.10.0 or later, then change log.message.format.version to 0.10.1 on each broker and restart them one by one.

Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.

Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.

Notable changes in 0.10.1.2
  • New configuration parameter upgrade.from added that allows rolling bounce upgrade from version 0.10.0.x
Potential breaking changes in 0.10.1.0
  • The log retention time is no longer based on last modified time of the log segments. Instead it will be based on the largest timestamp of the messages in a log segment.
  • The log rolling time no longer depends on the log segment create time. Instead it is now based on the timestamp in the messages. More specifically, if the timestamp of the first message in the segment is T, the log will be rolled out when a new message has a timestamp greater than or equal to T + log.roll.ms.
  • The number of open file handles in 0.10.0 will increase by ~33% because of the addition of time index files for each segment.
  • The time index and offset index share the same index size configuration. Since each time index entry is 1.5x the size of an offset index entry, users may need to increase log.index.size.max.bytes to avoid potentially frequent log rolling.
  • Due to the increased number of index files, on some brokers with a large number of log segments (e.g. >15K), the log loading process during broker startup could take longer. Based on our experiment, setting num.recovery.threads.per.data.dir to one may reduce the log loading time.
Upgrading a 0.10.0 Kafka Streams Application
  • Upgrading your Streams application from 0.10.0 to 0.10.1 does require a broker upgrade because a Kafka Streams 0.10.1 application can only connect to 0.10.1 brokers.
  • There are a couple of API changes that are not backward compatible (cf. Streams API changes in 0.10.1 for more details). Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • Upgrading from 0.10.0.x to 0.10.1.2 requires two rolling bounces with config upgrade.from="0.10.0" set for the first upgrade phase (cf. KIP-268). As an alternative, an offline upgrade is also possible.
    • prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.10.1.2
    • bounce each instance of your application once
    • prepare your newly deployed 0.10.1.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
    • bounce each instance of your application once more to complete the upgrade
  • Upgrading from 0.10.0.x to 0.10.1.0 or 0.10.1.1 requires an offline upgrade (rolling bounce upgrade is not supported)
    • stop all old (0.10.0.x) application instances
    • update your code and swap old code and jar file with new code and new jar file
    • restart all new (0.10.1.0 or 0.10.1.1) application instances
Notable changes in 0.10.1.0
  • The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release and will be removed in a future major release.
  • The --new-consumer/--new.consumer switch is no longer required to use tools like MirrorMaker and the Console Consumer with the new consumer; one simply needs to pass a Kafka broker to connect to instead of the ZooKeeper ensemble. In addition, usage of the Console Consumer with the old consumer has been deprecated and it will be removed in a future major release.
  • Kafka clusters can now be uniquely identified by a cluster id. It will be automatically generated when a broker is upgraded to 0.10.1.0. The cluster id is available via the kafka.server:type=KafkaServer,name=ClusterId metric and it is part of the Metadata response. Serializers, client interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface.
  • The BrokerState "RunningAsController" (value 4) has been removed. Due to a bug, a broker would only be in this state briefly before transitioning out of it and hence the impact of the removal should be minimal. The recommended way to detect if a given broker is the controller is via the kafka.controller:type=KafkaController,name=ActiveControllerCount metric.
  • The new Java Consumer now allows users to search offsets by timestamp on partitions (see the sketch after this list).
  • The new Java Consumer now supports heartbeating from a background thread. There is a new configuration max.poll.interval.ms which controls the maximum time between poll invocations before the consumer will proactively leave the group (5 minutes by default). The value of the configuration request.timeout.ms (default to 30 seconds) must always be smaller than max.poll.interval.ms (default to 5 minutes), since that is the maximum time that a JoinGroup request can block on the server while the consumer is rebalancing. Finally, the default value of session.timeout.ms has been adjusted down to 10 seconds, and the default value of max.poll.records has been changed to 500.
  • When using an Authorizer and a user doesn't have Describe authorization on a topic, the broker will no longer return TOPIC_AUTHORIZATION_FAILED errors to requests since this leaks topic names. Instead, the UNKNOWN_TOPIC_OR_PARTITION error code will be returned. This may cause unexpected timeouts or delays when using the producer and consumer since Kafka clients will typically retry automatically on unknown topic errors. You should consult the client logs if you suspect this could be happening.
  • Fetch responses have a size limit by default (50 MB for consumers and 10 MB for replication). The existing per partition limits also apply (1 MB for consumers and replication). Note that neither of these limits is an absolute maximum as explained in the next point.
  • Consumers and replicas can make progress if a message larger than the response/partition size limit is found. More concretely, if the first message in the first non-empty partition of the fetch is larger than either or both limits, the message will still be returned.
  • Overloaded constructors were added to kafka.api.FetchRequest and kafka.javaapi.FetchRequest to allow the caller to specify the order of the partitions (since order is significant in v3). The previously existing constructors were deprecated and the partitions are shuffled before the request is sent to avoid starvation issues.
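Since the list above mentions timestamp-based offset search, here is a rough sketch (not from the original release notes) that uses the consumer's offsetsForTimes method to seek to the first offset at or after a given timestamp. The broker address, topic-partition, and one-hour look-back are placeholder choices for the example.

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class OffsetsForTimesSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("payments", 0); // hypothetical topic-partition
            consumer.assign(Collections.singletonList(tp));

            long oneHourAgo = System.currentTimeMillis() - Duration.ofHours(1).toMillis();
            Map<TopicPartition, OffsetAndTimestamp> result =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));

            OffsetAndTimestamp ot = result.get(tp);
            if (ot != null) {
                // Seek to the earliest offset whose timestamp is >= the target time
                consumer.seek(tp, ot.offset());
            }
        }
    }
}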
New Protocol Versions
  • ListOffsetRequest v1 supports accurate offset search based on timestamps.
  • MetadataResponse v2 introduces a new field: "cluster_id".
  • FetchRequest v3 supports limiting the response size (in addition to the existing per partition limit), it returns messages bigger than the limits if required to make progress and the order of partitions in the request is now significant.
  • JoinGroup v1 introduces a new field: "rebalance_timeout".

Upgrading from 0.8.x or 0.9.x to 0.10.0.0

0.10.0.0 has potential breaking changes (please review before upgrading) and possible performance impact following the upgrade. By following the recommended rolling upgrade plan below, you guarantee no downtime and no performance impact during and following the upgrade.
Note: Because new protocols are introduced, it is important to upgrade your Kafka clusters before upgrading your clients.

Notes to clients with version 0.9.0.0: Due to a bug introduced in 0.9.0.0, clients that depend on ZooKeeper (old Scala high-level Consumer and MirrorMaker if used with the old consumer) will not work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9.0.1 before brokers are upgraded to 0.10.0.x. This step is not necessary for 0.8.X or 0.9.0.1 clients.

For a rolling upgrade:

  1. Update server.properties file on all brokers and add the following properties:
  2. Upgrade the brokers. This can be done a broker at a time by simply bringing it down, updating the code, and restarting it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.10.0.0. NOTE: You shouldn't touch log.message.format.version yet - this parameter should only change once all consumers have been upgraded to 0.10.0.0
  4. Restart the brokers one by one for the new protocol version to take effect.
  5. Once all consumers have been upgraded to 0.10.0, change log.message.format.version to 0.10.0 on each broker and restart them one by one.

Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.

Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.

Potential performance impact following upgrade to 0.10.0.0

The message format in 0.10.0 includes a new timestamp field and uses relative offsets for compressed messages. The on disk message format can be configured through log.message.format.version in the server.properties file. The default on-disk message format is 0.10.0. If a consumer client is on a version before 0.10.0.0, it only understands message formats before 0.10.0. In this case, the broker is able to convert messages from the 0.10.0 format to an earlier format before sending the response to the consumer on an older version. However, the broker can't use zero-copy transfer in this case. Reports from the Kafka community on the performance impact have shown CPU utilization going from 20% before to 100% after an upgrade, which forced an immediate upgrade of all clients to bring performance back to normal. To avoid such message conversion before consumers are upgraded to 0.10.0.0, one can set log.message.format.version to 0.8.2 or 0.9.0 when upgrading the broker to 0.10.0.0. This way, the broker can still use zero-copy transfer to send the data to the old consumers. Once consumers are upgraded, one can change the message format to 0.10.0 on the broker and enjoy the new message format that includes new timestamp and improved compression. The conversion is supported to ensure compatibility and can be useful to support a few apps that have not updated to newer clients yet, but is impractical to support all consumer traffic on even an overprovisioned cluster. Therefore, it is critical to avoid the message conversion as much as possible when brokers have been upgraded but the majority of clients have not.

For clients that are upgraded to 0.10.0.0, there is no performance impact.

Note: By setting the message format version, one certifies that all existing messages are on or below that message format version. Otherwise consumers before 0.10.0.0 might break. In particular, after the message format is set to 0.10.0, one should not change it back to an earlier format as it may break consumers on versions before 0.10.0.0.

Note: Due to the additional timestamp introduced in each message, producers sending small messages may see a message throughput degradation because of the increased overhead. Likewise, replication now transmits an additional 8 bytes per message. If you're running close to the network capacity of your cluster, it's possible that you'll overwhelm the network cards and see failures and performance issues due to the overload.

Note: If you have enabled compression on producers, you may notice reduced producer throughput and/or lower compression rate on the broker in some cases. When receiving compressed messages, 0.10.0 brokers avoid recompressing the messages, which in general reduces the latency and improves the throughput. In certain cases, however, this may reduce the batching size on the producer, which could lead to worse throughput. If this happens, users can tune linger.ms and batch.size of the producer for better throughput. In addition, the producer buffer used for compressing messages with snappy is smaller than the one used by the broker, which may have a negative impact on the compression ratio for the messages on disk. We intend to make this configurable in a future Kafka release.

Potential breaking changes in 0.10.0.0
  • Starting from Kafka 0.10.0.0, the message format version in Kafka is represented as the Kafka version. For example, message format 0.9.0 refers to the highest message version supported by Kafka 0.9.0.
  • Message format 0.10.0 has been introduced and it is used by default. It includes a timestamp field in the messages and relative offsets are used for compressed messages.
  • ProduceRequest/Response v2 has been introduced and it is used by default to support message format 0.10.0
  • FetchRequest/Response v2 has been introduced and it is used by default to support message format 0.10.0
  • The MessageFormatter interface was changed from def writeTo(key: Array[Byte], value: Array[Byte], output: PrintStream) to def writeTo(consumerRecord: ConsumerRecord[Array[Byte], Array[Byte]], output: PrintStream)
  • The MessageReader interface was changed from def readMessage(): KeyedMessage[Array[Byte], Array[Byte]] to def readMessage(): ProducerRecord[Array[Byte], Array[Byte]]
  • MessageFormatter's package was changed from kafka.tools to kafka.common
  • MessageReader's package was changed from kafka.tools to kafka.common
  • MirrorMakerMessageHandler no longer exposes the handle(record: MessageAndMetadata[Array[Byte], Array[Byte]]) method as it was never called.
  • The 0.7 KafkaMigrationTool is no longer packaged with Kafka. If you need to migrate from 0.7 to 0.10.0, please migrate to 0.8 first and then follow the documented upgrade process to upgrade from 0.8 to 0.10.0.
  • The new consumer has standardized its APIs to accept java.util.Collection as the sequence type for method parameters. Existing code may have to be updated to work with the 0.10.0 client library.
  • LZ4-compressed message handling was changed to use an interoperable framing specification (LZ4f v1.5.1). To maintain compatibility with old clients, this change only applies to Message format 0.10.0 and later. Clients that Produce/Fetch LZ4-compressed messages using v0/v1 (Message format 0.9.0) should continue to use the 0.9.0 framing implementation. Clients that use Produce/Fetch protocols v2 or later should use interoperable LZ4f framing. A list of interoperable LZ4 libraries is available at https://www./
Notable changes in 0.10.0.0
  • Starting from Kafka 0.10.0.0, a new client library named Kafka Streams is available for stream processing on data stored in Kafka topics. This new client library only works with 0.10.x and upward versioned brokers due to message format changes mentioned above. For more information please read Streams documentation.
  • The default value of the configuration parameter receive.buffer.bytes is now 64K for the new consumer.
  • The new consumer now exposes the configuration parameter exclude.internal.topics to restrict internal topics (such as the consumer offsets topic) from accidentally being included in regular expression subscriptions. By default, it is enabled.
  • The old Scala producer has been deprecated. Users should migrate their code to the Java producer included in the kafka-clients JAR as soon as possible.
  • The new consumer API has been marked stable.

Upgrading from 0.8.0, 0.8.1.X, or 0.8.2.X to 0.9.0.0

0.9.0.0 has potential breaking changes (please review before upgrading) and an inter-broker protocol change from previous versions. This means that upgraded brokers and clients may not be compatible with older versions. It is important that you upgrade your Kafka cluster before upgrading your clients. If you are using MirrorMaker downstream clusters should be upgraded first as well.

For a rolling upgrade:

  1. Update server.properties file on all brokers and add the following property: inter.broker.protocol.version=0.8.2.X
  2. Upgrade the brokers. This can be done a broker at a time by simply bringing it down, updating the code, and restarting it.
  3. Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.9.0.0.
  4. Restart the brokers one by one for the new protocol version to take effect

Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.

Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.

Potential breaking changes in 0.9.0.0
  • Java 1.6 is no longer supported.
  • Scala 2.9 is no longer supported.
  • Broker IDs above 1000 are now reserved by default to automatically assigned broker IDs. If your cluster has existing broker IDs above that threshold make sure to increase the reserved.broker.max.id broker configuration property accordingly.
  • Configuration parameter replica.lag.max.messages was removed. Partition leaders will no longer consider the number of lagging messages when deciding which replicas are in sync.
  • Configuration parameter replica.lag.time.max.ms now refers not just to the time passed since last fetch request from replica, but also to time since the replica last caught up. Replicas that are still fetching messages from leaders but did not catch up to the latest messages in replica.lag.time.max.ms will be considered out of sync.
  • Compacted topics no longer accept messages without key and an exception is thrown by the producer if this is attempted. In 0.8.x, a message without key would cause the log compaction thread to subsequently complain and quit (and stop compacting all compacted topics).
  • MirrorMaker no longer supports multiple target clusters. As a result it will only accept a single --consumer.config parameter. To mirror multiple source clusters, you will need at least one MirrorMaker instance per source cluster, each with its own consumer configuration.
  • Tools packaged under org.apache.kafka.clients.tools.* have been moved to org.apache.kafka.tools.*. All included scripts will still function as usual, only custom code directly importing these classes will be affected.
  • The default Kafka JVM performance options (KAFKA_JVM_PERFORMANCE_OPTS) have been changed in kafka-run-class.sh.
  • The kafka-topics.sh script (kafka.admin.TopicCommand) now exits with non-zero exit code on failure.
  • The kafka-topics.sh script (kafka.admin.TopicCommand) will now print a warning when topic names risk metric collisions due to the use of a '.' or '_' in the topic name, and error in the case of an actual collision.
  • The kafka-console-producer.sh script (kafka.tools.ConsoleProducer) will use the Java producer instead of the old Scala producer by default, and users have to specify 'old-producer' to use the old producer.
  • By default, all command line tools will print all logging messages to stderr instead of stdout.
Notable changes in 0.9.0.1
  • The new broker id generation feature can be disabled by setting broker.id.generation.enable to false.
  • Configuration parameter log.cleaner.enable is now true by default. This means topics with a cleanup.policy=compact will now be compacted by default, and 128 MB of heap will be allocated to the cleaner process via log.cleaner.dedupe.buffer.size. You may want to review log.cleaner.dedupe.buffer.size and the other log.cleaner configuration values based on your usage of compacted topics.
  • Default value of configuration parameter fetch.min.bytes for the new consumer is now 1 by default.
Deprecations in 0.9.0.0
  • Altering topic configuration from the kafka-topics.sh script (kafka.admin.TopicCommand) has been deprecated. Going forward, please use the kafka-configs.sh script (kafka.admin.ConfigCommand) for this functionality.
  • The kafka-consumer-offset-checker.sh (kafka.tools.ConsumerOffsetChecker) has been deprecated. Going forward, please use kafka-consumer-groups.sh (kafka.admin.ConsumerGroupCommand) for this functionality.
  • The kafka.tools.ProducerPerformance class has been deprecated. Going forward, please use org.apache.kafka.tools.ProducerPerformance for this functionality (kafka-producer-perf-test.sh will also be changed to use the new class).
  • The producer config block.on.buffer.full has been deprecated and will be removed in future release. Currently its default value has been changed to false. The KafkaProducer will no longer throw BufferExhaustedException but instead will use max.block.ms value to block, after which it will throw a TimeoutException. If block.on.buffer.full property is set to true explicitly, it will set the max.block.ms to Long.MAX_VALUE and metadata.fetch.timeout.ms will not be honoured

Upgrading from 0.8.1 to 0.8.2

0.8.2 is fully compatible with 0.8.1. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.

Upgrading from 0.8.0 to 0.8.1

0.8.1 is fully compatible with 0.8. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.

Upgrading from 0.7

Release 0.7 is incompatible with newer releases. Major changes were made to the API, ZooKeeper data structures, protocol, and configuration in order to add replication (which was missing in 0.7). The upgrade from 0.7 to later versions requires a special tool for migration. This migration can be done without downtime.

2. APIs

Kafka includes five core apis:
  1. The Producer API allows applications to send streams of data to topics in the Kafka cluster.
  2. The Consumer API allows applications to read streams of data from topics in the Kafka cluster.
  3. The Streams API allows transforming streams of data from input topics to output topics.
  4. The Connect API allows implementing connectors that continually pull from some source system or application into Kafka or push from Kafka into some sink system or application.
  5. The Admin API allows managing and inspecting topics, brokers, and other Kafka objects.
Kafka exposes all its functionality over a language-independent protocol which has clients available in many programming languages. However, only the Java clients are maintained as part of the main Kafka project; the others are available as independent open source projects. A list of non-Java clients is available here.

2.1 Producer API

The Producer API allows applications to send streams of data to topics in the Kafka cluster.

Examples showing how to use the producer are given in the javadocs.

To use the producer, you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>3.5.0</version>
</dependency>
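As a quick illustration of the Producer API, the following is a minimal sketch (not taken from the javadocs) that sends a single record asynchronously and prints the resulting partition and offset. The broker address localhost:9092, the topic name "payments", and the key/value strings are assumptions made for the example.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; the callback reports where the record was written
            producer.send(new ProducerRecord<>("payments", "alice", "paid $200 to bob"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("wrote to %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
            producer.flush();
        }
    }
}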

2.2 Consumer API

The Consumer API allows applications to read streams of data from topics in the Kafka cluster.

Examples showing how to use the consumer are given in the javadocs.

To use the consumer, you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>3.5.0</version>
</dependency>
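For orientation, here is a minimal consumer sketch (not taken from the javadocs) that subscribes to one topic and polls in a loop. The broker address, group id, and topic name are placeholder values for the example.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "example-group");           // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("payments")); // hypothetical topic
            while (true) {
                // poll() fetches whatever records are available within the timeout
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}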

2.3 Streams API

The Streams API allows transforming streams of data from input topics to output topics.

Examples showing how to use this library are given in the javadocs

Additional documentation on using the Streams API is available here.

To use Kafka Streams you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-streams</artifactId>
	<version>3.5.0</version>
</dependency>
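To give a feel for the library, here is a minimal, hypothetical Streams topology that upper-cases the values of one topic and writes them to another. The application id, broker address, and topic names are placeholders chosen for the example.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-example"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from a hypothetical input topic, transform each value, and write to an output topic
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}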

When using Scala you may optionally include the kafka-streams-scala library. Additional documentation on using the Kafka Streams DSL for Scala is available in the developer guide.

To use the Kafka Streams DSL for Scala with Scala 2.13 you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-streams-scala_2.13</artifactId>
	<version>3.5.0</version>
</dependency>

2.4 Connect API

The Connect API allows implementing connectors that continually pull from some source data system into Kafka or push from Kafka into some sink data system.

Many users of Connect won't need to use this API directly, though; they can use pre-built connectors without needing to write any code. Additional information on using Connect is available here.

Those who want to implement custom connectors can see the javadoc.
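To give a rough feel for what a custom connector involves, below is a simplified sketch of a SourceTask (built against the connect-api artifact) that emits the current wall-clock time once per second. The class name, the default topic, and the "topic" config key are hypothetical; a complete connector would also provide a SourceConnector class and a proper ConfigDef.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class ClockSourceTask extends SourceTask {
    private String topic;

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> config) {
        // "topic" is a hypothetical connector config key with a hypothetical default
        topic = config.getOrDefault("topic", "clock-events");
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        Thread.sleep(1000);
        // Source partition/offset maps let Connect track progress across restarts
        Map<String, ?> sourcePartition = Collections.singletonMap("source", "system-clock");
        Map<String, ?> sourceOffset = Collections.singletonMap("ts", System.currentTimeMillis());
        SourceRecord record = new SourceRecord(sourcePartition, sourceOffset, topic,
                Schema.STRING_SCHEMA, String.valueOf(System.currentTimeMillis()));
        return Collections.singletonList(record);
    }

    @Override
    public void stop() {
        // nothing to clean up in this sketch
    }
}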

2.5 Admin API

The Admin API supports managing and inspecting topics, brokers, acls, and other Kafka objects.

To use the Admin API, add the following Maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>3.5.0</version>
</dependency>
For more information about the Admin APIs, see the javadoc.
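As a small illustration, the sketch below (not from the javadocs) uses the Admin API to create a topic and then list topic names. The broker address, topic name, partition count, and replication factor are placeholder values for the example.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Create a hypothetical topic with 4 partitions and replication factor 1
            NewTopic topic = new NewTopic("payments", 4, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
            // List topic names to confirm the topic was created
            System.out.println(admin.listTopics().names().get());
        }
    }
}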

3. Configuration

Kafka uses key-value pairs in the property file format for configuration. These values can be supplied either from a file or programmatically.
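As a small sketch of the two approaches mentioned above, a client can load key-value pairs from a property file and then set or override individual values programmatically before handing them to a client constructor. The file path and the overridden value are placeholders for the example.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ConfigLoadingExample {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // Load key-value pairs from a property file (hypothetical path)
        try (FileInputStream in = new FileInputStream("client.properties")) {
            props.load(in);
        }
        // Values can also be set or overridden programmatically
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        System.out.println(props);
    }
}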

3.1 Broker Configs

The essential configurations are the following:
  • broker.id
  • log.dirs
  • zookeeper.connect
Topic-level configurations and defaults are discussed in more detail below.
  • advertised.listeners

    Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address.
    Also unlike listeners, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:per-broker
  • auto.create.topics.enable

    Enable auto creation of topic on the server

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:read-only
  • auto.leader.rebalance.enable

    Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by `leader.imbalance.check.interval.seconds`. If the leader imbalance exceeds `leader.imbalance.per.broker.percentage`, leader rebalance to the preferred leader for partitions is triggered.

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:read-only
  • background.threads

    The number of threads to use for various background processing tasks

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • broker.id

    The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper-generated broker ids and user-configured broker ids, generated broker ids start from reserved.broker.max.id + 1.

    Type:int
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • compression.type

    Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.

    Type:string
    Default:producer
    Valid Values:[uncompressed, zstd, lz4, snappy, gzip, producer]
    Importance:high
    Update Mode:cluster-wide
  • control.plane.listener.name

    Name of listener used for communication between controller and brokers. Broker will use the control.plane.listener.name to locate the endpoint in listeners list, to listen for connections from the controller. For example, if a broker's config is :
    listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094
    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER
    On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL".
    On controller side, when it discovers a broker's published endpoints through zookeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish connection to the broker.
    For example, if the broker's published endpoints on zookeeper are :
    "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"]
    and the controller's config is :
    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER
    then controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker.
    If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections.
    If explicitly configured, the value cannot be the same as the value of inter.broker.listener.name.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.listener.names

    A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When communicating with the controller quorum, the broker will always use the first listener in this list.
    Note: The ZK-based controller should not set this configuration.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.election.backoff.max.ms

    Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections

    Type:int
    Default:1000 (1 second)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.election.timeout.ms

    Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election

    Type:int
    Default:1000 (1 second)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.fetch.timeout.ms

    Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; Maximum time without receiving fetch from a majority of the quorum before asking around to see if there's a new epoch for leader

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.voters

    Map of id/endpoint information for the set of voters in a comma-separated list of `{id}@{host}:{port}` entries. For example: `1@localhost:9092,2@localhost:9093,3@localhost:9094`

    Type:list
    Default:""
    Valid Values:non-empty list
    Importance:high
    Update Mode:read-only
  • delete.topic.enable

    Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:read-only
  • early.start.listeners

    A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful when the authorizer is dependent on the cluster itself for bootstrapping, as is the case for the StandardAuthorizer (which stores ACLs in the metadata log.) By default, all listeners included in controller.listener.names will also be early start listeners. A listener should not appear in this list if it accepts external traffic.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • leader.imbalance.check.interval.seconds

    The frequency with which the partition rebalance check is triggered by the controller

    Type:long
    Default:300
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • leader.imbalance.per.broker.percentage

    The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage.

    Type:int
    Default:10
    Valid Values:
    Importance:high
    Update Mode:read-only
  • listeners

    Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set.
    Listener names and port numbers must be unique.
    Specify hostname as 0.0.0.0 to bind to all interfaces.
    Leave hostname empty to bind to default interface.
    Examples of legal listener lists:
    PLAINTEXT://myhost:9092,SSL://:9091
    CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093

    Type:string
    Default:PLAINTEXT://:9092
    Valid Values:
    Importance:high
    Update Mode:per-broker
  • log.dir

    The directory in which the log data is kept (supplemental for log.dirs property)

    Type:string
    Default:/tmp/kafka-logs
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.dirs

    A comma-separated list of the directories where the log data is stored. If not set, the value in log.dir is used.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.flush.interval.messages

    The number of messages accumulated on a log partition before messages are flushed to disk

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • log.flush.interval.ms

    The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.flush.offset.checkpoint.interval.ms

    The frequency with which we update the persistent record of the last flush which acts as the log recovery point

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • log.flush.scheduler.interval.ms

    The frequency in ms that the log flusher checks whether any log needs to be flushed to disk

    Type:long
    Default:9223372036854775807
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.flush.start.offset.checkpoint.interval.ms

    The frequency with which we update the persistent record of log start offset

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • log.retention.bytes

    The maximum size of the log before deleting it

    Type:long
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.retention.hours

    The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property

    Type:int
    Default:168
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.retention.minutes

    The number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used

    Type:int
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.retention.ms

    The number of milliseconds to keep a log file before deleting it (in milliseconds). If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.roll.hours

    The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property

    Type:int
    Default:168
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • log.roll.jitter.hours

    The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property

    Type:int
    Default:0
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • log.roll.jitter.ms

    The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.roll.ms

    The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.segment.bytes

    The maximum size of a single log file

    Type:int
    Default:1073741824 (1 gibibyte)
    Valid Values:[14,...]
    Importance:high
    Update Mode:cluster-wide
  • log.segment.delete.delay.ms

    The amount of time to wait before deleting a file from the filesystem

    Type:long
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:high
    Update Mode:cluster-wide
  • message.max.bytes

    The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.

    Type:int
    Default:1048588
    Valid Values:[0,...]
    Importance:high
    Update Mode:cluster-wide
  • metadata.log.dir

    This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is placed in the first log directory from log.dirs.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • metadata.log.max.record.bytes.between.snapshots

    This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot. The default value is 20971520. To generate snapshots based on the time elapsed, see the metadata.log.max.snapshot.interval.ms configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.

    Type:long
    Default:20971520
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • metadata.log.max.snapshot.interval.ms

    This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not included in the latest snapshot. A value of zero disables time based snapshot generation. The default value is 3600000. To generate snapshots based on the number of metadata bytes, see the metadata.log.max.record.bytes.between.snapshots configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • metadata.log.segment.bytes

    The maximum size of a single metadata log file.

    Type:int
    Default:1073741824 (1 gibibyte)
    Valid Values:[12,...]
    Importance:high
    Update Mode:read-only
  • metadata.log.segment.ms

    The maximum time before a new metadata log file is rolled out (in milliseconds).

    Type:long
    Default:604800000 (7 days)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • metadata.max.retention.bytes

    The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

    Type:long
    Default:104857600 (100 mebibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • metadata.max.retention.ms

    The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

    Type:long
    Default:604800000 (7 days)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • min.insync.replicas

    When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
    When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • node.id

    The node ID associated with the roles this process is playing when `process.roles` is non-empty. This is required configuration when running in KRaft mode.

    Type:int
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • num.io.threads

    The number of threads that the server uses for processing requests, which may include disk I/O

    Type:int
    Default:8
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • num.network.threads

    The number of threads that the server uses for receiving requests from the network and sending responses to the network. Noted: each listener (except for controller listener) creates its own thread pool.

    Type:int
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • num.recovery.threads.per.data.dir

    The number of threads per data directory to be used for log recovery at startup and flushing at shutdown

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • num.replica.alter.log.dirs.threads

    The number of threads that can move replicas between log directories, which may include disk I/O

    Type:int
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • num.replica.fetchers

    Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound by num.replica.fetchers multiplied by the number of brokers in the cluster. Increasing this value can increase the degree of I/O parallelism in the follower and leader broker at the cost of higher CPU and memory utilization.

    Type:int
    Default:1
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • offset.metadata.max.bytes

    The maximum size for a metadata entry associated with an offset commit

    Type:int
    Default:4096 (4 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • offsets.commit.required.acks

    The required acks before the commit can be accepted. In general, the default (-1) should not be overridden

    Type:short
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • offsets.commit.timeout.ms

    Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.

    Type:int
    Default:5000 (5 seconds)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.load.buffer.size

    Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large).

    Type:int
    Default:5242880
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.retention.check.interval.ms

    Frequency at which to check for stale offsets

    Type:long
    Default:600000 (10 minutes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.retention.minutes

    For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic. For standalone consumers (using manual assignment), offsets will be expired after this retention period has elapsed since the time of last commit. Note that when a group is deleted via the delete-group request, its committed offsets will also be deleted without extra retention period; also when a topic is deleted via the delete-topic request, upon propagated metadata update any group's committed offsets for that topic will also be deleted without extra retention period.

    Type:int
    Default:10080
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.topic.compression.codec

    Compression codec for the offsets topic - compression may be used to achieve "atomic" commits

    Type:int
    Default:0
    Valid Values:
    Importance:high
    Update Mode:read-only
  • offsets.topic.num.partitions

    The number of partitions for the offset commit topic (should not change after deployment)

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.topic.replication.factor

    The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.topic.segment.bytes

    The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads

    Type:int
    Default:104857600 (100 mebibytes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • process.roles

    The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applicable for clusters in KRaft (Kafka Raft) mode (instead of ZooKeeper). Leave this config undefined or empty for Zookeeper clusters.

    Type:list
    Default:""
    Valid Values:[broker, controller]
    Importance:high
    Update Mode:read-only
  • queued.max.requests

    The number of queued requests allowed for data-plane, before blocking the network threads

    Type:int
    Default:500
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • replica.fetch.min.bytes

    Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config).

    Type:int
    Default:1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.fetch.wait.max.ms

    The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics

    Type:int
    Default:500
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.high.watermark.checkpoint.interval.ms

    The frequency with which the high watermark is saved out to disk

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.lag.time.max.ms

    If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.socket.receive.buffer.bytes

    The socket receive buffer for network requests to the leader for replicating data

    Type:int
    Default:65536 (64 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.socket.timeout.ms

    The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms

    Type:int
    Default:30000 (30 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • sasl.mechanism.controller.protocol

    SASL mechanism used for communication with controllers. Default is GSSAPI.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:high
    Update Mode:read-only
  • socket.receive.buffer.bytes

    The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

    Type:int
    Default:102400 (100 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • socket.request.max.bytes

    The maximum number of bytes in a socket request

    Type:int
    Default:104857600 (100 mebibytes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • socket.send.buffer.bytes

    The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

    Type:int
    Default:102400 (100 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • transaction.max.timeout.ms

    The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, then the broker will return an error in InitProducerIdRequest. This prevents a client from too large of a timeout, which can stall consumers reading from topics included in the transaction.

    Type:int
    Default:900000 (15 minutes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.load.buffer.size

    Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).

    Type:int
    Default:5242880
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.min.isr

    Overridden min.insync.replicas config for the transaction topic.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.num.partitions

    The number of partitions for the transaction topic (should not change after deployment).

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.replication.factor

    The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.segment.bytes

    The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads

    Type:int
    Default:104857600 (100 mebibytes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transactional.id.expiration.ms

    The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. Transactional IDs will not expire while the transaction is still ongoing.

    Type:int
    Default:604800000 (7 days)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • unclean.leader.election.enable

    Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • zookeeper.connect

    Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3.
    The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.connection.timeout.ms

    The max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used

    Type:int
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.max.in.flight.requests

    The maximum number of unacknowledged requests the client will send to Zookeeper before blocking.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • zookeeper.metadata.migration.enable

    Enable ZK to KRaft migration

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.session.timeout.ms

    Zookeeper session timeout

    Type:int
    Default:18000 (18 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.set.acl

    Set client to use secure ACLs

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:read-only
  • broker.heartbeat.interval.ms

    The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode.

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • broker.id.generation.enable

    Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • broker.rack

    Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d`

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • broker.session.timeout.ms

    The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode.

    Type:int
    Default:9000 (9 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • connections.max.idle.ms

    Idle connections timeout: the server socket processor threads close the connections that idle more than this

    Type:long
    Default:600000 (10 minutes)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • connections.max.reauth.ms

    When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000

    Type:long
    Default:0
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controlled.shutdown.enable

    Enable controlled shutdown of the server

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controlled.shutdown.max.retries

    Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens

    Type:int
    Default:3
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controlled.shutdown.retry.backoff.ms

    Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying.

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controller.quorum.append.linger.ms

    The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk.

    Type:int
    Default:25
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controller.quorum.request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controller.socket.timeout.ms

    The socket timeout for controller-to-broker channels

    Type:int
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • default.replication.factor

    The default replication factors for automatically created topics

    Type:int
    Default:1
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • delegation.token.expiry.time.ms

    The token validity time in milliseconds before the token needs to be renewed. Default value 1 day.

    Type:long
    Default:86400000 (1 day)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • delegation.token.master.key

    DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • delegation.token.max.lifetime.ms

    The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days.

    Type:long
    Default:604800000 (7 days)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • delegation.token.secret.key

    Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or set to empty string, brokers will disable the delegation token support.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • delete.records.purgatory.purge.interval.requests

    The purge interval (in number of requests) of the delete records request purgatory

    Type:int
    Default:1
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • fetch.max.bytes

    The maximum number of bytes we will return for a fetch request. Must be at least 1024.

    Type:int
    Default:57671680 (55 mebibytes)
    Valid Values:[1024,...]
    Importance:medium
    Update Mode:read-only
  • fetch.purgatory.purge.interval.requests

    The purge interval (in number of requests) of the fetch request purgatory

    Type:int
    Default:1000
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • group.initial.rebalance.delay.ms

    The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • group.max.session.timeout.ms

    The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

    Type:int
    Default:1800000 (30 minutes)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • group.max.size

    The maximum number of consumers that a single consumer group can accommodate.

    Type:int
    Default:2147483647
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • group.min.session.timeout.ms

    The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.

    Type:int
    Default:6000 (6 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • initial.broker.registration.timeout.ms

    When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process.

    Type:int
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • inter.broker.listener.name

    Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • inter.broker.protocol.version

    Specify which version of the inter-broker protocol will be used.
    This is typically bumped after all brokers were upgraded to a new version.
    Examples of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check MetadataVersion for the full list.

    Type:string
    Default:3.5-IV2
    Valid Values:[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2]
    Importance:medium
    Update Mode:read-only
  • log.cleaner.backoff.ms

    The amount of time to sleep when there are no logs to clean

    Type:long
    Default:15000 (15 seconds)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.dedupe.buffer.size

    The total memory used for log deduplication across all cleaner threads

    Type:long
    Default:134217728
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.delete.retention.ms

    The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan).

    Type:long
    Default:86400000 (1 day)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.enable

    Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • log.cleaner.io.buffer.load.factor

    Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions

    Type:double
    Default:0.9
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.io.buffer.size

    The total memory used for log cleaner I/O buffers across all cleaner threads

    Type:int
    Default:524288
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.io.max.bytes.per.second

    The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average

    Type:double
    Default:1.7976931348623157E308
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.max.compaction.lag.ms

    The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.min.cleanable.ratio

    The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.

    Type:double
    Default:0.5
    Valid Values:[0,...,1]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.min.compaction.lag.ms

    The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:0
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.threads

    The number of background threads to use for log cleaning

    Type:int
    Default:1
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleanup.policy

    The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact"

    Type:list
    Default:delete
    Valid Values:[compact, delete]
    Importance:medium
    Update Mode:cluster-wide
  • log.index.interval.bytes

    The interval with which we add an entry to the offset index

    Type:int
    Default:4096 (4 kibibytes)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.index.size.max.bytes

    The maximum size in bytes of the offset index

    Type:int
    Default:10485760 (10 mebibytes)
    Valid Values:[4,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.message.format.version

    Specify the message format version the broker will use to append messages to the logs. The value should be a valid MetadataVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check MetadataVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.

    Type:string
    Default:3.0-IV1
    Valid Values:[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2]
    Importance:medium
    Update Mode:read-only
  • log.message.timestamp.difference.max.ms

    The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.

    Type:long
    Default:9223372036854775807
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.message.timestamp.type

    Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`

    Type:string
    Default:CreateTime
    Valid Values:[CreateTime, LogAppendTime]
    Importance:medium
    Update Mode:cluster-wide
  • log.preallocate

    Should the broker pre-allocate the file when creating a new segment? If you are using Kafka on Windows, you probably need to set it to true.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.retention.check.interval.ms

    The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • max.connection.creation.rate

    The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate. Broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached.

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • max.connections

    The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections. Broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if broker-wide limit is reached. The least recently used connection on another listener will be closed in this case.

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • max.connections.per.ip

    The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • max.connections.per.ip.overrides

    A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200"

    Type:string
    Default:""
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • max.incremental.fetch.session.cache.slots

    The maximum number of incremental fetch sessions that we will maintain.

    Type:int
    Default:1000
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • num.partitions

    The default number of log partitions per topic

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • password.encoder.old.secret

    The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • password.encoder.secret

    The secret used for encoding dynamically configured passwords for this broker.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • principal.builder.class

    The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.

    Type:class
    Default:org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • producer.purgatory.purge.interval.requests

    The purge interval (in number of requests) of the producer request purgatory

    Type:int
    Default:1000
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • queued.max.request.bytes

    The number of queued bytes allowed before no more requests are read

    Type:long
    Default:-1
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • replica.fetch.backoff.ms

    The amount of time to sleep when fetch partition error occurs.

    Type:int
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • replica.fetch.max.bytes

    The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

    Type:int
    Default:1048576 (1 mebibyte)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • replica.fetch.response.max.bytes

    Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

    Type:int
    Default:10485760 (10 mebibytes)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • replica.selector.class

    The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • reserved.broker.max.id

    Max number that can be used for a broker.id

    Type:int
    Default:1000
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.enabled.mechanisms

    The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.

    Type:list
    Default:GSSAPI
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.principal.to.local.rules

    A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

    Type:list
    Default:DEFAULT
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.mechanism.inter.broker.protocol

    SASL mechanism used for inter-broker communication. Default is GSSAPI.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.server.callback.handler.class

    The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.server.max.receive.size

    The maximum receive size allowed before and during initial SASL authentication. Default receive size is 512KB. GSSAPI limits requests to 64K, but we allow up to 512KB by default for custom SASL mechanisms. In practice, PLAIN, SCRAM and OAUTH mechanisms can use much smaller limits.

    Type:int
    Default:524288
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • security.inter.broker.protocol

    Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time.

    Type:string
    Default:PLAINTEXT
    Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
    Importance:medium
    Update Mode:read-only
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • socket.listen.backlog.size

    The maximum number of pending connections on the socket. In Linux, you may also need to configure the `somaxconn` and `tcp_max_syn_backlog` kernel parameters accordingly for the configuration to take effect.

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:""
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.client.auth

    Configures the Kafka broker to request client authentication. The following settings are common:

    • ssl.client.auth=required If set to required, client authentication is required.
    • ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself.
    • ssl.client.auth=none This means client authentication is not needed.
    Type:string
    Default:none
    Valid Values:[required, requested, none]
    Importance:medium
    Update Mode:per-broker
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.type

    The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • zookeeper.clientCnxnSocket

    Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.client.enable

    Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty); other values to set may include zookeeper.ssl.cipher.suites, zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm, zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type, zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location, zookeeper.ssl.truststore.password, zookeeper.ssl.truststore.type

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.keystore.location

    Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.keystore.password

    Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to Zookeeper will fail.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.keystore.type

    Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.truststore.location

    Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.truststore.password

    Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase).

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.truststore.type

    Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • alter.config.policy.class.name

    The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface.

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • alter.log.dirs.replication.quota.window.num

    The number of samples to retain in memory for alter log dirs replication quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • alter.log.dirs.replication.quota.window.size.seconds

    The time span of each sample for alter log dirs replication quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • authorizer.class.name

    The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization.

    Type:string
    Default:""
    Valid Values:non-null string
    Importance:low
    Update Mode:read-only
  • auto.include.jmx.reporter

    Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
    Update Mode:read-only
  • client.quota.callback.class

    The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, the <user> and <client-id> quotas that are stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • connection.failed.authentication.delay.ms

    Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.

    Type:int
    Default:100
    Valid Values:[0,...]
    Importance:low
    Update Mode:read-only
  • controller.quorum.retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:int
    Default:20
    Valid Values:
    Importance:low
    Update Mode:read-only
  • controller.quota.window.num

    The number of samples to retain in memory for controller mutation quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • controller.quota.window.size.seconds

    The time span of each sample for controller mutations quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • create.topic.policy.class.name

    The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface.

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • delegation.token.expiry.check.interval.ms

    Scan interval to remove expired delegation tokens.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • kafka.metrics.polling.interval.secs

    The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • kafka.metrics.reporters

    A list of classes to use as Yammer metrics custom reporters. The reporters should implement the kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends the kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention.

    Type:list
    Default:""
    Valid Values:
    Importance:low
    Update Mode:read-only
  • listener.security.protocol.map

    Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: `INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. ssl.keystore.location). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use.

    Type:string
    Default:PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • log.message.downconversion.enable

    This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • metadata.max.idle.interval.ms

    This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is 0, no-op records are not appended to the metadata partition. The default value is 500 milliseconds.

    Type:int
    Default:500
    Valid Values:[0,...]
    Importance:low
    Update Mode:read-only
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:
    Importance:low
    Update Mode:read-only
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • password.encoder.cipher.algorithm

    The Cipher algorithm used for encoding dynamically configured passwords.

    Type:string
    Default:AES/CBC/PKCS5Padding
    Valid Values:
    Importance:low
    Update Mode:read-only
  • password.encoder.iterations

    The iteration count used for encoding dynamically configured passwords.

    Type:int
    Default:4096
    Valid Values:[1024,...]
    Importance:low
    Update Mode:read-only
  • password.encoder.key.length

    The key length used for encoding dynamically configured passwords.

    Type:int
    Default:128
    Valid Values:[8,...]
    Importance:low
    Update Mode:read-only
  • password.encoder.keyfactory.algorithm

    The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • producer.id.expiration.ms

    The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transaction associated to them is still ongoing. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic's retention settings. Setting this value the same or higher than delivery.timeout.ms can help prevent expiration during retries and protect against message duplication, but the default should be reasonable for most use cases.

    Type:int
    Default:86400000 (1 day)
    Valid Values:[1,...]
    Importance:low
    Update Mode:cluster-wide
  • quota.window.num

    The number of samples to retain in memory for client quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • quota.window.size.seconds

    The time span of each sample for client quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • replication.quota.window.num

    The number of samples to retain in memory for replication quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • replication.quota.window.size.seconds

    The time span of each sample for replication quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
    Update Mode:read-only
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • ssl.principal.mapping.rules

    A list of rules for mapping from the distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, the distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

    Type:string
    Default:DEFAULT
    Valid Values:
    Importance:low
    Update Mode:read-only
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • transaction.abort.timed.out.transaction.cleanup.interval.ms

    The interval at which to roll back transactions that have timed out

    Type:int
    Default:10000 (10 seconds)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • transaction.remove.expired.transaction.cleanup.interval.ms

    The interval at which to remove transactions that have expired due to passing transactional.id.expiration.ms

    Type:int
    Default:3600000 (1 hour)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.cipher.suites

    Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.

    Type:list
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.crl.enable

    Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name).

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.enabled.protocols

    Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property.

    Type:list
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.endpoint.identification.algorithm

    Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank).

    Type:string
    Default:HTTPS
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.ocsp.enable

    Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name).

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.protocol

    Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:low
    Update Mode:read-only

More details about broker configuration can be found in the Scala class kafka.server.KafkaConfig.

3.1.1 Updating Broker Configs

From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker. See the Dynamic Update Mode column in Broker Configs for the update mode of each broker config.
  • read-only: Requires a broker restart for update
  • per-broker: May be updated dynamically for each broker
  • cluster-wide: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.
To alter the current broker configs for broker id 0 (for example, the number of log cleaner threads):
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2
To describe the current dynamic broker configs for broker id 0:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe
To delete a config override and revert to the statically configured or default value for broker id 0 (for example, the number of log cleaner threads):
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads
Some configs may be configured as a cluster-wide default to maintain consistent values across the whole cluster. All brokers in the cluster will process the cluster default update. For example, to update log cleaner threads on all brokers:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2
To describe the currently configured dynamic cluster-wide default configs:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe
All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing). If a config value is defined at different levels, the following order of precedence is used:
  • Dynamic per-broker config stored in ZooKeeper
  • Dynamic cluster-wide default config stored in ZooKeeper
  • Static broker config from server.properties
  • Kafka default, see broker configs
Updating Password Configs Dynamically

Password config values that are dynamically updated are encrypted before storing in ZooKeeper. The broker config password.encoder.secret must be configured in server.properties to enable dynamic update of password configs. The secret may be different on different brokers.

The secret used for password encoding may be rotated with a rolling restart of brokers. The old secret used for encoding passwords currently in ZooKeeper must be provided in the static broker config password.encoder.old.secret and the new secret must be provided in password.encoder.secret. All dynamic password configs stored in ZooKeeper will be re-encoded with the new secret when the broker starts up.
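As a minimal sketch of such a rotation, the static server.properties on each broker might carry both secrets while the rolling restart is in progress (the secret strings below are placeholders, not values from this documentation):
  password.encoder.secret=new-broker-secret
  password.encoder.old.secret=previous-broker-secret
Once all brokers have restarted and the dynamic password configs have been re-encoded, password.encoder.old.secret can be removed again.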

In Kafka 1.1.x, all dynamically updated password configs must be provided in every alter request when updating configs using kafka-configs.sh even if the password config is not being altered. This constraint will be removed in a future release.

Updating Password Configs in ZooKeeper Before Starting Brokers
From Kafka 2.0.0 onwards, kafka-configs.sh enables dynamic broker configs to be updated using ZooKeeper before starting brokers for bootstrapping. This enables all password configs to be stored in encrypted form, avoiding the need for clear passwords in server.properties. The broker config password.encoder.secret must also be specified if any password configs are included in the alter command. Additional encryption parameters may also be specified. Password encoder configs will not be persisted in ZooKeeper. For example, to store the SSL key password for listener INTERNAL on broker 0:
> bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --entity-type brokers --entity-name 0 --alter --add-config
    'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'
The configuration listener.name.internal.ssl.key.password will be persisted in ZooKeeper in encrypted form using the provided encoder configs. The encoder secret and iterations are not persisted in ZooKeeper.
Updating SSL Keystore of an Existing Listener
Brokers may be configured with SSL keystores with short validity periods to reduce the risk of compromised certificates. Keystores may be updated dynamically without restarting the broker. The config name must be prefixed with the listener prefix listener.name.{listenerName}. so that only the keystore config of a specific listener is updated. The following configs may be updated in a single alter request at per-broker level:
  • ssl.keystore.type
  • ssl.keystore.location
  • ssl.keystore.password
  • ssl.key.password
If the listener is the inter-broker listener, the update is allowed only if the new keystore is trusted by the truststore configured for that listener. For other listeners, no trust validation is performed on the keystore by the broker. Certificates must be signed by the same certificate authority that signed the old certificate to avoid any client authentication failures.
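For example, assuming a listener named INTERNAL on broker 0, the keystore for that listener alone might be swapped with a single alter request along these lines (file paths and passwords are placeholders):
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config 'listener.name.internal.ssl.keystore.location=/path/to/new-keystore.jks,listener.name.internal.ssl.keystore.password=keystore-password,listener.name.internal.ssl.key.password=key-password'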
Updating SSL Truststore of an Existing Listener
Broker truststores may be updated dynamically without restarting the broker to add or remove certificates. The updated truststore will be used to authenticate new client connections. The config name must be prefixed with the listener prefix listener.name.{listenerName}. so that only the truststore config of a specific listener is updated. The following configs may be updated in a single alter request at per-broker level:
  • ssl.truststore.type
  • ssl.truststore.location
  • ssl.truststore.password
If the listener is the inter-broker listener, the update is allowed only if the existing keystore for that listener is trusted by the new truststore. For other listeners, no trust validation is performed by the broker before the update. Removal of CA certificates used to sign client certificates from the new truststore can lead to client authentication failures.
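Similarly, a sketch of replacing the truststore of that same hypothetical INTERNAL listener on broker 0 (path and password are placeholders):
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config 'listener.name.internal.ssl.truststore.location=/path/to/new-truststore.jks,listener.name.internal.ssl.truststore.password=truststore-password'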
Updating Default Topic Configuration
Default topic configuration options used by brokers may be updated without broker restart. The configs are applied to topics without a topic config override for the equivalent per-topic config. One or more of these configs may be overridden at cluster-default level used by all brokers.
  • log.segment.bytes
  • log.roll.ms
  • log.roll.hours
  • log.roll.jitter.ms
  • log.roll.jitter.hours
  • log.index.size.max.bytes
  • log.flush.interval.messages
  • log.flush.interval.ms
  • log.retention.bytes
  • log.retention.ms
  • log.retention.minutes
  • log.retention.hours
  • log.index.interval.bytes
  • log.cleaner.delete.retention.ms
  • log.cleaner.min.compaction.lag.ms
  • log.cleaner.max.compaction.lag.ms
  • log.cleaner.min.cleanable.ratio
  • log.cleanup.policy
  • log.segment.delete.delay.ms
  • unclean.leader.election.enable
  • min.insync.replicas
  • max.message.bytes
  • compression.type
  • log.preallocate
  • log.message.timestamp.type
  • log.message.timestamp.difference.max.ms
From Kafka version 2.0.0 onwards, unclean leader election is automatically enabled by the controller when the config unclean.leader.election.enable is dynamically updated. In Kafka version 1.1.x, changes to unclean.leader.election.enable take effect only when a new controller is elected. Controller re-election may be forced by running:
> bin/zookeeper-shell.sh localhost
  rmr /controller
Updating Log Cleaner Configs
Log cleaner configs may be updated dynamically at cluster-default level used by all brokers. The changes take effect on the next iteration of log cleaning. One or more of these configs may be updated:
  • log.cleaner.threads
  • log.cleaner.io.max.bytes.per.second
  • log.cleaner.dedupe.buffer.size
  • log.cleaner.io.buffer.size
  • log.cleaner.io.buffer.load.factor
  • log.cleaner.backoff.ms
Updating Thread Configs
The size of various thread pools used by the broker may be updated dynamically at cluster-default level used by all brokers. Updates are restricted to the range currentSize / 2 to currentSize * 2 to ensure that config updates are handled gracefully.
  • num.network.threads
  • num.io.threads
  • num.replica.fetchers
  • num.recovery.threads.per.data.dir
  • log.cleaner.threads
  • background.threads
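As an illustration, assuming the current num.io.threads value is the default of 8, a cluster-wide update that stays within the allowed range might look like:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config num.io.threads=16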
Updating ConnectionQuota Configs
The maximum number of connections allowed for a given IP/host by the broker may be updated dynamically at cluster-default level used by all brokers. The changes will apply for new connection creations and the existing connections count will be taken into account by the new limits.
  • max.connections.per.ip
  • max.connections.per.ip.overrides
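For example, a cluster-wide per-IP connection limit could be set with a command of this form (the limit shown is only illustrative):
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config max.connections.per.ip=200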
Adding and Removing Listeners

Listeners may be added or removed dynamically. When a new listener is added, security configs of the listener must be provided as listener configs with the listener prefix listener.name.{listenerName}.. If the new listener uses SASL, the JAAS configuration of the listener must be provided using the JAAS configuration property sasl.jaas.config with the listener and mechanism prefix. See JAAS configuration for Kafka brokers for details.

In Kafka version 1.1.x, the listener used by the inter-broker listener may not be updated dynamically. To update the inter-broker listener to a new listener, the new listener may be added on all brokers without restarting the broker. A rolling restart is then required to update inter.broker.listener.name.

In addition to all the security configs of new listeners, the following configs may be updated dynamically at per-broker level:
  • listeners
  • advertised.listeners
  • listener.security.protocol.map
The inter-broker listener must be configured using the static broker configuration inter.broker.listener.name or security.inter.broker.protocol.
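As a rough sketch, adding a new SSL listener named EXTERNAL on broker 0 might combine the listener definition with its prefixed security configs in one alter request (host, port, file paths, and passwords are placeholders; square brackets group values that contain commas):
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config 'listeners=[PLAINTEXT://myhost:9092,EXTERNAL://myhost:9095],advertised.listeners=[PLAINTEXT://myhost:9092,EXTERNAL://myhost:9095],listener.security.protocol.map=[PLAINTEXT:PLAINTEXT,EXTERNAL:SSL],listener.name.external.ssl.keystore.location=/path/to/keystore.jks,listener.name.external.ssl.keystore.password=keystore-password,listener.name.external.ssl.key.password=key-password'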

3.2 Topic-Level Configs

Configurations pertinent to topics have both a server default as well as an optional per-topic override. If no per-topic configuration is given the server default is used. The override can be set at topic creation time by giving one or more --config options. This example creates a topic named my-topic with a custom max message size and flush rate:
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
Overrides can also be changed or set later using the alter configs command. This example updates the max message size for my-topic:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --add-config max.message.bytes=128000
To check overrides set on the topic you can do:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe
To remove an override you can do:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --delete-config max.message.bytes
The following are the topic-level configurations. The server's default configuration for this property is given under the Server Default Property heading. A given server default config value only applies to a topic if it does not have an explicit topic config override.
  • cleanup.policy

    This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old segments when their retention time or size limit has been reached. The "compact" policy will enable log compaction, which retains the latest value for each key. It is also possible to specify both policies in a comma-separated list (e.g. "delete,compact"). In this case, old segments will be discarded per the retention time and size configuration, while retained segments will be compacted.

    Type:list
    Default:delete
    Valid Values:[compact, delete]
    Server Default Property:log.cleanup.policy
    Importance:medium
  • compression.type

    Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.

    Type:string
    Default:producer
    Valid Values:[uncompressed, zstd, lz4, snappy, gzip, producer]
    Server Default Property:compression.type
    Importance:medium
  • delete.retention.ms

    The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan).

    Type:long
    Default:86400000 (1 day)
    Valid Values:[0,...]
    Server Default Property:log.cleaner.delete.retention.ms
    Importance:medium
  • file.delete.delay.ms

    The time to wait before deleting a file from the filesystem

    Type:long
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Server Default Property:log.segment.delete.delay.ms
    Importance:medium
  • flush.messages

    This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section).

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Server Default Property:log.flush.interval.messages
    Importance:medium
  • flush.ms

    This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient.

    Type:long
    Default:9223372036854775807
    Valid Values:[0,...]
    Server Default Property:log.flush.interval.ms
    Importance:medium
  • follower.replication.throttled.replicas

    A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.

    Type:list
    Default:""
    Valid Values:[partitionId]:[brokerId],[partitionId]:[brokerId],...
    Server Default Property:null
    Importance:medium
  • index.interval.bytes

    This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.

    Type:int
    Default:4096 (4 kibibytes)
    Valid Values:[0,...]
    Server Default Property:log.index.interval.bytes
    Importance:medium
  • leader.replication.throttled.replicas

    A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.

    Type:list
    Default:""
    Valid Values:[partitionId]:[brokerId],[partitionId]:[brokerId],...
    Server Default Property:null
    Importance:medium
  • max.compaction.lag.ms

    The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Server Default Property:log.cleaner.max.compaction.lag.ms
    Importance:medium
  • max.message.bytes

    The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.

    Type:int
    Default:1048588
    Valid Values:[0,...]
    Server Default Property:message.max.bytes
    Importance:medium
  • message.format.version

    [DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is always assumed to be `3.0` if `inter.broker.protocol.version` is 3.0 or higher (the actual config value is ignored). Otherwise, the value should be a valid ApiVersion. Some examples are: 0.10.0, 1.1, 2.8, 3.0. By setting a particular message format version, the user is certifying that all the existing messages on disk are at a version less than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.

    Type:string
    Default:3.0-IV1
    Valid Values:[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2]
    Server Default Property:log.message.format.version
    Importance:medium
  • message.timestamp.difference.max.ms

    The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.

    Type:long
    Default:9223372036854775807
    Valid Values:[0,...]
    Server Default Property:log.message.timestamp.difference.max.ms
    Importance:medium
  • message.timestamp.type

    Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`

    Type:string
    Default:CreateTime
    Valid Values:[CreateTime, LogAppendTime]
    Server Default Property:log.message.timestamp.type
    Importance:medium
  • min.cleanable.dirty.ratio

    This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period.

    Type:double
    Default:0.5
    Valid Values:[0,...,1]
    Server Default Property:log.cleaner.min.cleanable.ratio
    Importance:medium
  • min.compaction.lag.ms

    The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:0
    Valid Values:[0,...]
    Server Default Property:log.cleaner.min.compaction.lag.ms
    Importance:medium
  • min.insync.replicas

    When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
    When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.

    Type:int
    Default:1
    Valid Values:[1,...]
    Server Default Property:min.insync.replicas
    Importance:medium
  • preallocate

    True if we should preallocate the file on disk when creating a new log segment.

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:log.preallocate
    Importance:medium
  • retention.bytes

    This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.

    Type:long
    Default:-1
    Valid Values:
    Server Default Property:log.retention.bytes
    Importance:medium
  • retention.ms

    This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied.

    Type:long
    Default:604800000 (7 days)
    Valid Values:[-1,...]
    Server Default Property:log.retention.ms
    Importance:medium
  • segment.bytes

    This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention.

    Type:int
    Default:1073741824 (1 gibibyte)
    Valid Values:[14,...]
    Server Default Property:log.segment.bytes
    Importance:medium
  • segment.index.bytes

    This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.

    Type:int
    Default:10485760 (10 mebibytes)
    Valid Values:[4,...]
    Server Default Property:log.index.size.max.bytes
    Importance:medium
  • segment.jitter.ms

    The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling

    Type:long
    Default:0
    Valid Values:[0,...]
    Server Default Property:log.roll.jitter.ms
    Importance:medium
  • segment.ms

    This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.

    Type:long
    Default:604800000 (7 days)
    Valid Values:[1,...]
    Server Default Property:log.roll.ms
    Importance:medium
  • unclean.leader.election.enable

    Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:unclean.leader.election.enable
    Importance:medium
  • message.downconversion.enable

    This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.

    Type:boolean
    Default:true
    Valid Values:
    Server Default Property:log.message.downconversion.enable
    Importance:low

3.3 Producer Configs

Below is the configuration of the producer:
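As a minimal sketch before the full reference, a producer configuration file covering only the required key.serializer, value.serializer, and bootstrap.servers entries described below might look like this (broker addresses are placeholders):
  bootstrap.servers=broker1:9092,broker2:9092
  key.serializer=org.apache.kafka.common.serialization.StringSerializer
  value.serializer=org.apache.kafka.common.serialization.StringSerializer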
  • key.serializer

    Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • value.serializer

    Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:high
  • buffer.memory

    The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.

    This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.

    Type:long
    Default:33554432
    Valid Values:[0,...]
    Importance:high
  • compression.type

    The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, lz4, or zstd. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).

    Type:string
    Default:none
    Valid Values:[none, gzip, snappy, lz4, zstd]
    Importance:high
  • retries

    Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior.

    Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.

    Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.

    Type:int
    Default:2147483647
    Valid Values:[0,...,2147483647]
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • batch.size

    The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.

    No attempt will be made to batch records larger than this size.

    Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.

    A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.

    Note: This setting gives the upper bound of the batch size to be sent. If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up. This linger.ms setting defaults to 0, which means we'll immediately send out a record even if the accumulated batch size is under this batch.size setting.

    Type:int
    Default:16384
    Valid Values:[0,...]
    Importance:medium
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:medium
  • delivery.timeout.ms

    An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms.

    Type:int
    Default:120000 (2 minutes)
    Valid Values:[0,...]
    Importance:medium
  • linger.ms

    The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay—that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.

    Type:long
    Default:0
    Valid Values:[0,...]
    Importance:medium
  • max.block.ms

    The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout.

    Type:long
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:medium
  • max.request.size

    The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.

    Type:int
    Default:1048576
    Valid Values:[0,...]
    Importance:medium
  • partitioner.class

    A class to use to determine which partition a record should be sent to when producing records. Available options are:

    • If not set, the default partitioning logic is used. This strategy tries to stick to a partition until at least batch.size bytes is produced to the partition. It works with the strategy:
      • If no partition is specified but a key is present, choose a partition based on a hash of the key.
      • If no partition or key is present, choose the sticky partition that changes when at least batch.size bytes are produced to the partition.
    • org.apache.kafka.clients.producer.RoundRobinPartitioner: This partitioning strategy sends each record in a series of consecutive records to a different partition (whether or not a 'key' is provided), until we run out of partitions and start over again. Note: There's a known issue that will cause uneven distribution when a new batch is created. Please check KAFKA-9965 for more detail.

    Implementing the org.apache.kafka.clients.producer.Partitioner interface allows you to plug in a custom partitioner.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • partitioner.ignore.keys

    When set to 'true' the producer won't use record keys to choose a partition. If 'false', producer would choose a partition based on a hash of the key when a key is present. Note: this setting has no effect if a custom partitioner is used.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:32768 (32 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

    Type:string
    Default:PLAINTEXT
    Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • acks

    The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed:

    • acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
    • acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost.
    • acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting.

    Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.

    Type:string
    Default:all
    Valid Values:[all, -1, 0, 1]
    Importance:low
  • auto.include.jmx.reporter

    Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • enable.idempotence

    When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'.

    Idempotence is enabled by default if no conflicting configurations are set. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • interceptor.classes

    A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:low
  • max.in.flight.requests.per.connection

    The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this configuration is set to be greater than 1 and enable.idempotence is set to false, there is a risk of message reordering after a failed send due to retries (i.e., if retries are enabled); if retries are disabled or if enable.idempotence is set to true, ordering will be preserved. Additionally, enabling idempotence requires the value of this configuration to be less than or equal to 5. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.

    Type:int
    Default:5
    Valid Values:[1,...]
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.max.idle.ms

    Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[5000,...]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • partitioner.adaptive.partitioning.enable

    When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster brokers. If 'false', producer will try to distribute messages uniformly. Note: this setting has no effect if a custom partitioner is used

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • partitioner.availability.timeout.ms

    If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats that partition as not available. If the value is 0, this logic is disabled. Note: this setting has no effect if a custom partitioner is used or partitioner.adaptive.partitioning.enable is set to 'false'.

    Type:long
    Default:0
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
  • transaction.timeout.ms

    The maximum amount of time in milliseconds that a transaction will remain open before the coordinator proactively aborts it. The start of the transaction is set at the time that the first partition is added to it. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an InvalidTxnTimeoutException error.

    Type:int
    Default:60000 (1 minute)
    Valid Values:
    Importance:low
  • transactional.id

    The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionalId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers, which is the recommended setting for production; for development you can change this by adjusting the broker setting transaction.state.log.replication.factor. A transactional-producer sketch is shown after this list.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:low
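
To illustrate how several of the properties above interact, here is a hedged sketch of a transactional producer: acks=all plus enable.idempotence=true give the strongest per-partition guarantees, linger.ms and batch.size trade a little latency for better batching, and transactional.id turns on cross-session transactional delivery. The broker address, topic name and transactional id are placeholders, and the values are examples rather than recommendations.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionalProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ACKS_CONFIG, "all");                           // required when idempotence is enabled
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.LINGER_MS_CONFIG, "5");                        // wait up to 5 ms to fill batches
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, "32768");                   // example batch size in bytes
            props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-tx-id");     // placeholder; implies idempotence

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();              // fences older producers using the same transactional.id
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("my-topic", "some-key", "some-value"));
                producer.commitTransaction();             // or abortTransaction() on failure
            }
        }
    }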

3.4 Consumer Configs
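
Before the property-by-property reference, the sketch below shows the required consumer settings (bootstrap.servers, key.deserializer, value.deserializer) together with group.id, auto.offset.reset, enable.auto.commit and isolation.level, all described below. The broker address, group id and topic name are placeholders.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MinimalConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");              // placeholder group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");          // start from the beginning if no committed offset exists
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");            // offsets are committed manually below
            props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");      // skip aborted transactional messages

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));           // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s (offset %d)%n", record.key(), record.value(), record.offset());
                }
                consumer.commitSync();   // needed because enable.auto.commit is false
            }
        }
    }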

Below is the configuration for the consumer:
  • key.deserializer

    Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • value.deserializer

    Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:high
  • fetch.min.bytes

    The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as that many byte(s) of data is available or the fetch request times out waiting for data to arrive. Setting this to a larger value will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.

    Type:int
    Default:1
    Valid Values:[0,...]
    Importance:high
  • group.id

    A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • heartbeat.interval.ms

    The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:high
  • max.partition.fetch.bytes

    The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.

    Type:int
    Default:1048576 (1 mebibyte)
    Valid Values:[0,...]
    Importance:high
  • session.timeout.ms

    The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.

    Type:int
    Default:45000 (45 seconds)
    Valid Values:
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • allow.auto.create.topics

    Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it using `auto.create.topics.enable` broker configuration. This configuration must be set to `false` when using brokers older than 0.11.0

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
  • auto.offset.reset

    What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):

    • earliest: automatically reset the offset to the earliest offset
    • latest: automatically reset the offset to the latest offset
    • none: throw exception to the consumer if no previous offset is found for the consumer's group
    • anything else: throw exception to the consumer.
    Type:string
    Default:latest
    Valid Values:[latest, earliest, none]
    Importance:medium
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:medium
  • default.api.timeout.ms

    Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter.

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:medium
  • enable.auto.commit

    If true the consumer's offset will be periodically committed in the background.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
  • exclude.internal.topics

    Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
  • fetch.max.bytes

    The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.

    Type:int
    Default:52428800 (50 mebibytes)
    Valid Values:[0,...]
    Importance:medium
  • group.instance.id

    A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:medium
  • isolation.level

    Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.

    Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions.

    Further, when in read_committed the seekToEnd method will return the LSO.

    Type:string
    Default:read_uncommitted
    Valid Values:[read_committed, read_uncommitted]
    Importance:medium
  • max.poll.interval.ms

    The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown.

    Type:int
    Default:300000 (5 minutes)
    Valid Values:[1,...]
    Importance:medium
  • max.poll.records

    The maximum number of records returned in a single call to poll(). Note that max.poll.records does not impact the underlying fetching behavior. The consumer will cache the records from each fetch request and return them incrementally from each poll.

    Type:int
    Default:500
    Valid Values:[1,...]
    Importance:medium
  • partition.assignment.strategy

    A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used. Available options are:

    • org.apache.kafka.clients.consumer.RangeAssignor: Assigns partitions on a per-topic basis.
    • org.apache.kafka.clients.consumer.RoundRobinAssignor: Assigns partitions to consumers in a round-robin fashion.
    • org.apache.kafka.clients.consumer.StickyAssignor: Guarantees an assignment that is maximally balanced while preserving as many existing partition assignments as possible.
    • org.apache.kafka.clients.consumer.CooperativeStickyAssignor: Follows the same StickyAssignor logic, but allows for cooperative rebalancing.

    The default assignor is [RangeAssignor, CooperativeStickyAssignor], which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list.

    Implementing the org.apache.kafka.clients.consumer.ConsumerPartitionAssignor interface allows you to plug in a custom assignment strategy.

    Type:list
    Default:class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
    Valid Values:non-null string
    Importance:medium
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:65536 (64 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

    Type:string
    Default:PLAINTEXT
    Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:medium
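
    For example, to pin a client to TLSv1.3 only (assuming the broker supports it), the two TLS settings above can be set together; an illustrative sketch:
    ssl.protocol=TLSv1.3
    ssl.enabled.protocols=TLSv1.3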
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • auto.commit.interval.ms

    The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.

    Type:int
    Default:5000 (5 seconds)
    Valid Values:[0,...]
    Importance:low
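
    For example, to have a consumer auto-commit offsets roughly once per second, a hypothetical configuration could be:
    enable.auto.commit=true
    auto.commit.interval.ms=1000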
  • auto.include.jmx.reporter

    Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • check.crcs

    Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • client.rack

    A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • fetch.max.wait.ms

    The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

    Type:int
    Default:500
    Valid Values:[0,...]
    Importance:low
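
    fetch.max.wait.ms is usually tuned together with fetch.min.bytes; a hypothetical sketch that trades a little latency for larger fetches:
    fetch.min.bytes=65536
    fetch.max.wait.ms=500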
  • interceptor.classes

    A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low

3.5 Kafka Connect Configs

Below is the configuration of the Kafka Connect framework.
  • config.storage.topic

    The name of the Kafka topic where connector configurations are stored

    Type:string
    Default:
    Valid Values:
    Importance:high
  • group.id

    A unique string that identifies the Connect cluster group this worker belongs to.

    Type:string
    Default:
    Valid Values:
    Importance:high
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • offset.storage.topic

    The name of the Kafka topic where source connector offsets are stored

    Type:string
    Default:
    Valid Values:
    Importance:high
  • status.storage.topic

    The name of the Kafka topic where connector and task status are stored

    Type:string
    Default:
    Valid Values:
    Importance:high
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

    Type:list
    Default:localhost:9092
    Valid Values:
    Importance:high
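
    Taken together, the high-importance settings above form the core of a distributed worker configuration; a minimal sketch with illustrative topic and group names:
    bootstrap.servers=localhost:9092
    group.id=connect-cluster
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter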
  • exactly.once.source.support

    Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones.
    To enable exactly-once source support on a new cluster, set this property to 'enabled'. To enable support on an existing cluster, first set to 'preparing' on every worker in the cluster, then set to 'enabled'. A rolling upgrade may be used for both changes. For more information on this feature, see the exactly-once source support documentation.

    Type:string
    Default:disabled
    Valid Values:(case insensitive) [DISABLED, ENABLED, PREPARING]
    Importance:high
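
    The two-step rolling upgrade described above amounts to changing this single worker property in two passes; a sketch:
    # first rolling restart, applied to every worker:
    exactly.once.source.support=preparing
    # second rolling restart, once all workers are on 'preparing':
    exactly.once.source.support=enabled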
  • heartbeat.interval.ms

    The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:high
  • rebalance.timeout.ms

    The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.

    Type:int
    Default:60000 (1 minute)
    Valid Values:
    Importance:high
  • session.timeout.ms

    The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.

    Type:int
    Default:10000 (10 seconds)
    Valid Values:
    Importance:high
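
    As noted above, heartbeat.interval.ms should stay well below session.timeout.ms (typically no more than a third of it); an illustrative pairing of the group-membership timeouts:
    session.timeout.ms=10000
    heartbeat.interval.ms=3000
    rebalance.timeout.ms=60000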
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:medium
  • connector.client.config.override.policy

    Class name or alias of implementation of ConnectorClientConfigOverridePolicy. Defines what client configurations can be overridden by the connector. The default implementation is `All`, meaning connector configurations can override all client properties. The other possible policies in the framework include `None` to disallow connectors from overriding client properties, and `Principal` to allow connectors to override only client principals.

    Type:string
    Default:All
    Valid Values:
    Importance:medium
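
    With the default `All` policy, an individual connector configuration may override client settings through prefixed properties such as producer.override.*; a hypothetical example inside a connector config:
    producer.override.compression.type=lz4
    producer.override.linger.ms=50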
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:32768 (32 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:40000 (40 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

    Type:string
    Default:PLAINTEXT
    Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • worker.sync.timeout.ms

    When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:medium
  • worker.unsync.backoff.ms

    When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.

    Type:int
    Default:300000 (5 minutes)
    Valid Values:
    Importance:medium
  • access.control.allow.methods

    Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • access.control.allow.origin

    Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • admin.listeners

    List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property).

    Type:list
    Default:null
    Valid Values:List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443.
    Importance:low
  • auto.include.jmx.reporter

    Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • config.providers

    Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the ConfigProvider interface allows you to replace variable references in connector configurations, such as for externalized secrets.

    Type:list
    Default:""
    Valid Values:
    Importance:low
  • config.storage.replication.factor

    Replication factor used when creating the configuration storage topic

    Type:short
    Default:3
    Valid Values:Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default
    Importance:low
  • connect.protocol

    Compatibility mode for Kafka Connect Protocol

    Type:string
    Default:sessioned
    Valid Values:[eager, compatible, sessioned]
    Importance:low
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:org.apache.kafka.connect.storage.SimpleHeaderConverter
    Valid Values:
    Importance:low
  • inter.worker.key.generation.algorithm

    The algorithm to use for generating internal request keys. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.

    Type:string
    Default:HmacSHA256
    Valid Values:Any KeyGenerator algorithm supported by the worker JVM
    Importance:low
  • inter.worker.key.size

    The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • inter.worker.key.ttl.ms

    The TTL of generated session keys used for internal request validation (in milliseconds)

    Type:int
    Default:3600000 (1 hour)
    Valid Values:[0,...,2147483647]
    Importance:low
  • inter.worker.signature.algorithm

    The algorithm used to sign internal requests. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.

    Type:string
    Default:HmacSHA256
    Valid Values:Any MAC algorithm supported by the worker JVM
    Importance:low
  • inter.worker.verification.algorithms

    A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signature.algorithm property. The algorithm(s) '[HmacSHA256]' will be used as a default on JVMs that provide them; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.

    Type:list
    Default:HmacSHA256
    Valid Values:A list of one or more MAC algorithms, each supported by the worker JVM
    Importance:low
  • listeners

    List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS.
    Specify hostname as 0.0.0.0 to bind to all interfaces.
    Leave hostname empty to bind to default interface.
    Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084

    Type:list
    Default:http://:8083
    Valid Values:List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443.
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • offset.flush.interval.ms

    Interval at which to try committing offsets for tasks.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:low
  • offset.flush.timeout.ms

    Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. This property has no effect for source connectors running with exactly-once support.

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:low
  • offset.storage.partitions

    The number of partitions used when creating the offset storage topic

    Type:int
    Default:25
    Valid Values:Positive number, or -1 to use the broker's default
    Importance:low
  • offset.storage.replication.factor

    Replication factor used when creating the offset storage topic

    Type:short
    Default:3
    Valid Values:Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default
    Importance:low
  • plugin.path

    List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of:
    a) directories immediately containing jars with plugins and their dependencies
    b) uber-jars with plugins and their dependencies
    c) directories immediately containing the package directory structure of classes of plugins and their dependencies
    Note: symlinks will be followed to discover dependencies or plugins.
    Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors
    Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • response.http.headers.config

    Rules for REST API HTTP response headers

    Type:string
    Default:""
    Valid Values:Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma
    Importance:low
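
    For instance, two header rules that add security-related response headers could be written as follows (the header choices are illustrative):
    response.http.headers.config=add X-Frame-Options:DENY,add X-Content-Type-Options:nosniff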
  • rest.advertised.host.name

    If this is set, this is the hostname that will be given out to other workers to connect to.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • rest.advertised.listener

    Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • rest.advertised.port

    If this is set, this is the port that will be given out to other workers to connect to.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • rest.extension.classes

    Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the ConnectRestExtension interface allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc.

    Type:list
    Default:""
    Valid Values:
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • scheduled.rebalance.max.delay.ms

    The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned

    Type:int
    Default:300000 (5 minutes)
    Valid Values:[0,...,2147483647]
    Importance:low
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:[0,...]
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.client.auth

    Configures the Kafka broker to request client authentication. The following settings are common:

    • ssl.client.auth=required If set to required, client authentication is required.
    • ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself.
    • ssl.client.auth=none This means client authentication is not needed.
    Type:string
    Default:none
    Valid Values:[required, requested, none]
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
  • status.storage.partitions

    The number of partitions used when creating the status storage topic

    Type:int
    Default:5
    Valid Values:Positive number, or -1 to use the broker's default
    Importance:low
  • status.storage.replication.factor

    Replication factor used when creating the status storage topic

    Type:short
    Default:3
    Valid Values:Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default
    Importance:low
  • task.shutdown.graceful.timeout.ms

    Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, then they are waited on sequentially.

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:low
  • topic.creation.enable

    Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with `topic.creation.` properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • topic.tracking.allow.reset

    If set to true, it allows user requests to reset the set of active topics per connector.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • topic.tracking.enable

    Enable tracking the set of active topics per connector during runtime.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low

3.5.1 Source Connector Configs

Below is the configuration of a source connector.
  • name

    Globally unique name to use for this connector.

    Type:string
    Default:
    Valid Values:non-empty string without ISO control characters
    Importance:high
  • connector.class

    Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter

    Type:string
    Default:
    Valid Values:
    Importance:high
  • tasks.max

    Maximum number of tasks to use for this connector.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:
    Valid Values:
    Importance:low
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:
    Valid Values:
    Importance:low
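
    A connector may also override the worker-level converters in its own configuration; a hypothetical snippet using the built-in string and JSON converters without embedded schemas:
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=false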
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:
    Valid Values:
    Importance:low
  • config.action.reload

    The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.

    Type:string
    Default:restart
    Valid Values:[none, restart]
    Importance:low
  • transforms

    Aliases for the transformations to be applied to records.

    Type:list
    Default:""
    Valid Values:non-null string, unique transformation aliases
    Importance:low
  • predicates

    Aliases for the predicates used by transformations.

    Type:list
    Default:""
    Valid Values:non-null string, unique predicate aliases
    Importance:low
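
    The aliases declared in transforms and predicates are then configured through prefixed properties; a hedged sketch that filters out records from topics matching a pattern, using the built-in Filter transformation and TopicNameMatches predicate:
    transforms=dropTemp
    transforms.dropTemp.type=org.apache.kafka.connect.transforms.Filter
    transforms.dropTemp.predicate=isTemp
    predicates=isTemp
    predicates.isTemp.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
    predicates.isTemp.pattern=temp.*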
  • errors.retry.timeout

    The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • errors.retry.delay.max.ms

    The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
  • errors.tolerance

    Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.

    Type:string
    Default:none
    Valid Values:[none, all]
    Importance:medium
  • errors.log.enable

    If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.log.include.messages

    Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • topic.creation.groups

    Groups of configurations for topics created by source connectors

    Type:list
    Default:""
    Valid Values:non-null string, unique topic creation groups
    Importance:low
  • exactly.once.support

    Permitted values are requested, required. If set to "required", forces a preflight check for the connector to ensure that it can provide exactly-once semantics with the given configuration. Some connectors may be capable of providing exactly-once semantics but not signal to Connect that they support this; in that case, documentation for the connector should be consulted carefully before creating it, and the value for this property should be set to "requested". Additionally, if the value is set to "required" but the worker that performs preflight validation does not have exactly-once support enabled for source connectors, requests to create or validate the connector will fail.

    Type:string
    Default:requested
    Valid Values:(case insensitive) [REQUIRED, REQUESTED]
    Importance:medium
  • transaction.boundary

    Permitted values are: poll, interval, connector. If set to 'poll', a new producer transaction will be started and committed for every batch of records that each task from this connector provides to Connect. If set to 'connector', relies on connector-defined transaction boundaries; note that not all connectors are capable of defining their own transaction boundaries, and in that case, attempts to instantiate a connector with this value will fail. Finally, if set to 'interval', commits transactions only after a user-defined time interval has passed.

    Type:string
    Default:poll
    Valid Values:(case insensitive) [INTERVAL, POLL, CONNECTOR]
    Importance:medium
  • transaction.boundary.interval.ms

    If 'transaction.boundary' is set to 'interval', determines the interval for producer transaction commits by connector tasks. If unset, defaults to the value of the worker-level 'offset.flush.interval.ms' property. It has no effect if a different transaction.boundary is specified.

    Type:long
    Default:null
    Valid Values:[0,...]
    Importance:low
  • offsets.storage.topic

    The name of a separate offsets topic to use for this connector. If empty or not specified, the worker’s global offsets topic name will be used. If specified, the offsets topic will be created if it does not already exist on the Kafka cluster targeted by this connector (which may be different from the one used for the worker's global offsets topic if the bootstrap.servers property of the connector's producer has been overridden from the worker's). Only applicable in distributed mode; in standalone mode, setting this property will have no effect.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:low
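
As a rough illustration of how the properties above fit together, the following is a minimal, hedged sketch (not taken from this reference) that builds a source connector configuration as a Java map, using the same key/value pairs that would be submitted to the Connect REST API or placed in a standalone properties file. The connector class is the FileStreamSource example mentioned above; the file path and topic name are placeholders invented for this sketch.

    import java.util.HashMap;
    import java.util.Map;

    public class SourceConnectorConfigExample {
        public static Map<String, String> build() {
            Map<String, String> config = new HashMap<>();
            // Required identity and implementation class (the alias "FileStreamSource" would also work).
            config.put("name", "example-file-source");
            config.put("connector.class", "org.apache.kafka.connect.file.FileStreamSourceConnector");
            config.put("tasks.max", "1");
            // Error handling: retry failed operations for up to 5 minutes and log failures.
            config.put("errors.retry.timeout", "300000");
            config.put("errors.log.enable", "true");
            // Connector-specific settings; these two keys belong to the FileStream example connector.
            config.put("file", "/tmp/example-input.txt");
            config.put("topic", "example-topic");
            return config;
        }
    }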

3.5.2 Sink Connector Configs

Below is the configuration of a sink connector.
  • name

    Globally unique name to use for this connector.

    Type:string
    Default:
    Valid Values:non-empty string without ISO control characters
    Importance:high
  • connector.class

    Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter

    Type:string
    Default:
    Valid Values:
    Importance:high
  • tasks.max

    Maximum number of tasks to use for this connector.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
  • topics

    List of topics to consume, separated by commas

    Type:list
    Default:""
    Valid Values:
    Importance:high
  • topics.regex

    Regular expression giving topics to consume. Under the hood, the regex is compiled to a java.util.regex.Pattern. Only one of topics or topics.regex should be specified.

    Type:string
    Default:""
    Valid Values:valid regex
    Importance:high
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • config.action.reload

    The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.

    Type:string
    Default:restart
    Valid Values:[none, restart]
    Importance:low
  • transforms

    Aliases for the transformations to be applied to records.

    Type:list
    Default:""
    Valid Values:non-null string, unique transformation aliases
    Importance:low
  • predicates

    Aliases for the predicates used by transformations.

    Type:list
    Default:""
    Valid Values:non-null string, unique predicate aliases
    Importance:low
  • errors.retry.timeout

    The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • errors.retry.delay.max.ms

    The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
  • errors.tolerance

    Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.

    Type:string
    Default:none
    Valid Values:[none, all]
    Importance:medium
  • errors.log.enable

    If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.log.include.messages

    Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.deadletterqueue.topic.name

    The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. The topic name is blank by default, which means that no messages are to be recorded in the DLQ.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • errors.deadletterqueue.topic.replication.factor

    Replication factor used to create the dead letter queue topic when it doesn't already exist.

    Type:short
    Default:3
    Valid Values:
    Importance:medium
  • errors.deadletterqueue.context.headers.enable

    If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers from the original record, all error context header keys will start with __connect.errors.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
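
By way of a hedged sketch, the map below shows how the error-tolerance and dead letter queue properties above might be combined for the FileStreamSink example connector; the topic, DLQ name, and file path are illustrative placeholders only.

    import java.util.HashMap;
    import java.util.Map;

    public class SinkConnectorConfigExample {
        public static Map<String, String> build() {
            Map<String, String> config = new HashMap<>();
            config.put("name", "example-file-sink");
            // "FileStreamSink" or "FileStreamSinkConnector" would also be accepted as aliases.
            config.put("connector.class", "org.apache.kafka.connect.file.FileStreamSinkConnector");
            config.put("tasks.max", "1");
            config.put("topics", "example-topic");
            // Tolerate bad records and route them to a dead letter queue with error context headers.
            config.put("errors.tolerance", "all");
            config.put("errors.deadletterqueue.topic.name", "example-dlq");
            config.put("errors.deadletterqueue.topic.replication.factor", "1");
            config.put("errors.deadletterqueue.context.headers.enable", "true");
            // Connector-specific setting for the FileStream example connector.
            config.put("file", "/tmp/example-output.txt");
            return config;
        }
    }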

3.6 Kafka Streams Configs

Below is the configuration of the Kafka Streams client library.
  • application.id

    An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.

    Type:string
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

    Type:list
    Default:
    Valid Values:
    Importance:high
  • num.standby.replicas

    The number of standby replicas for each task.

    Type:int
    Default:0
    Valid Values:
    Importance:high
  • state.dir

    Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem.

    Type:string
    Default:/var/folders/6f/c4r3kvwd76l9c0k_59qzh_r00000gn/T//kafka-streams
    Valid Values:
    Importance:high
  • acceptable.recovery.lag

    The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active task assignment. Upon assignment, it will still restore the rest of the changelog before processing. To avoid a pause in processing during rebalances, this config should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.

    Type:long
    Default:10000
    Valid Values:[0,...]
    Importance:medium
  • cache.max.bytes.buffering

    Maximum number of memory bytes to be used for buffering across all threads

    Type:long
    Default:10485760
    Valid Values:[0,...]
    Importance:medium
  • client.id

    An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern <client.id>-StreamThread-<threadSequenceNumber>-<consumer|producer|restore-consumer>.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • default.deserialization.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.LogAndFailExceptionHandler
    Valid Values:
    Importance:medium
  • default.key.serde

    Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.key.serde.inner

    Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if the default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.key.serde.type

    Default class for key that implements the java.util.List interface. This configuration will be read if and only if the default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde. Note when list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.key.serde.inner'.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.value.serde.inner

    Default inner class of list serde for value that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if the default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.value.serde.type

    Default class for value that implements the java.util.List interface. This configuration will be read if and only if the default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde. Note when list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.value.serde.inner'.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.production.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
    Valid Values:
    Importance:medium
  • default.timestamp.extractor

    Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface.

    Type:class
    Default:org.apache.kafka.streams.processor.FailOnInvalidTimestamp
    Valid Values:
    Importance:medium
  • default.value.serde

    Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • max.task.idle.ms

    This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in milliseconds a stream task will stay idle when it is fully caught up on some (but not all) input partitions to wait for producers to send additional records and avoid potential out-of-order record processing across multiple input streams. The default (zero) does not wait for producers to send more records, but it does wait to fetch data that is already present on the brokers. This default means that for records that are already present on the brokers, Streams will process them in timestamp order. Set to -1 to disable idling entirely and process any locally available data, even though doing so may produce out-of-order processing.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • max.warmup.replicas

    The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Must be at least 1. Note that one warmup replica corresponds to one Stream Task. Furthermore, note that each warmup replica can only be promoted to an active task during a rebalance (normally during a so-called probing rebalance, which occurs at a frequency specified by the `probing.rebalance.interval.ms` config). This means that the maximum rate at which active tasks can be migrated from one Kafka Streams instance to another can be determined by (`max.warmup.replicas` / `probing.rebalance.interval.ms`).

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:medium
  • num.stream.threads

    The number of threads to execute stream processing.

    Type:int
    Default:1
    Valid Values:
    Importance:medium
  • processing.guarantee

    The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers version 2.5 or higher). Deprecated options are exactly_once (requires brokers version 0.11.0 or higher) and exactly_once_beta (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default, which is the recommended setting for production; for development you can change this by adjusting the broker settings transaction.state.log.replication.factor and transaction.state.log.min.isr.

    Type:string
    Default:at_least_once
    Valid Values:[at_least_once, exactly_once, exactly_once_beta, exactly_once_v2]
    Importance:medium
  • rack.aware.assignment.tags

    List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over each client tag dimension.

    Type:list
    Default:""
    Valid Values:List containing maximum of 5 elements
    Importance:medium
  • replication.factor

    The replication factor for change log topics and repartition topics created by the stream processing application. The default of -1 (meaning: use broker default replication factor) requires broker version 2.4 or newer.

    Type:int
    Default:-1
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

    Type:string
    Default:PLAINTEXT
    Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
    Importance:medium
  • statestore.cache.max.bytes

    Maximum number of memory bytes to be used for statestore cache across all threads

    Type:long
    Default:10485760 (10 mebibytes)
    Valid Values:[0,...]
    Importance:medium
  • task.timeout.ms

    The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0ms, a task would raise an error for the first internal error. For any timeout larger than 0ms, a task will retry at least once before an error is raised.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:medium
  • topology.optimization

    A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: "NO_OPTIMIZATION", "OPTIMIZE", or a comma separated list of specific optimizations: "REUSE_KTABLE_SOURCE_TOPICS", "MERGE_REPARTITION_TOPICS", "SINGLE_STORE_SELF_JOIN". "NO_OPTIMIZATION" by default.

    Type:string
    Default:none
    Valid Values:
    Importance:medium
  • application.server

    A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • auto.include.jmx.reporter

    Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • buffered.records.per.partition

    Maximum number of records to buffer per partition.

    Type:int
    Default:1000
    Valid Values:
    Importance:low
  • built.in.metrics.version

    Version of the built-in metrics to use.

    Type:string
    Default:latest
    Valid Values:[latest]
    Importance:low
  • commit.interval.ms

    The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the position (i.e., offsets) of the processor. For exactly-once processing, it means to commit the transaction, which includes saving the position and making the committed data in the output topic visible to consumers with isolation level read_committed. (Note, if processing.guarantee is set to exactly_once_v2 or exactly_once, the default value is 100, otherwise the default value is 30000.)

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:low
  • default.client.supplier

    Client supplier class that implements the org.apache.kafka.streams.KafkaClientSupplier interface.

    Type:class
    Default:org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier
    Valid Values:
    Importance:low
  • default.dsl.store

    The default state store type used by DSL operators.

    Type:string
    Default:rocksDB
    Valid Values:[rocksDB, in_memory]
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • poll.ms

    The amount of time in milliseconds to block waiting for input.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • probing.rebalance.interval.ms

    The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute.

    Type:long
    Default:600000 (10 minutes)
    Valid Values:[60000,...]
    Importance:low
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:32768 (32 kibibytes)
    Valid Values:[-1,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • repartition.purge.interval.ms

    The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at least this value since the last purge, but may be delayed until later. (Note, unlike commit.interval.ms, the default for this value remains unchanged when processing.guarantee is set to exactly_once_v2.)

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:40000 (40 seconds)
    Valid Values:[0,...]
    Importance:low
  • retries

    Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or `MAX_VALUE` and use corresponding timeout parameters to control how long a client should retry a request.

    Type:int
    Default:0
    Valid Values:[0,...,2147483647]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • rocksdb.config.setter

    A Rocks DB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:low
  • state.cleanup.delay.ms

    The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed.

    Type:long
    Default:600000 (10 minutes)
    Valid Values:
    Importance:low
  • upgrade.from

    Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4" (for upgrading from the corresponding old version).

    Type:string
    Default:null
    Valid Values:[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4]
    Importance:low
  • window.size.ms

    Sets window size for the deserializer in order to calculate window end times.

    Type:long
    Default:null
    Valid Values:
    Importance:low
  • windowed.inner.class.serde

    Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface. Note that setting this config in a KafkaStreams application would result in an error, as it is meant to be used only from a plain consumer client.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • windowstore.changelog.additional.retention.ms

    Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day.

    Type:long
    Default:86400000 (1 day)
    Valid Values:
    Importance:low
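
To show how a few of these settings are typically supplied, here is a minimal, hedged sketch of building a Streams configuration in Java; the application id, bootstrap servers, topic names, and chosen values are placeholders rather than recommendations.

    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsConfigExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // application.id doubles as the client-id prefix, group id, and changelog topic prefix.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-streams-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Default serdes for keys and values (default.key.serde / default.value.serde).
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            // processing.guarantee: exactly_once_v2 requires brokers 2.5 or newer.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
            // num.standby.replicas: keep one warm copy of each task's state.
            props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("example-input-topic").to("example-output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }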

3.7 Admin Configs

Below is the configuration of the Kafka Admin client library.
  • bootstrap.servers

    A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

    Type:list
    Default:
    Valid Values:
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:
    Importance:medium
  • default.api.timeout.ms

    Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter.

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:medium
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:65536 (64 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

    Type:string
    Default:PLAINTEXT
    Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • auto.include.jmx.reporter

    Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • retries

    Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or `MAX_VALUE` and use corresponding timeout parameters to control how long a client should retry a request.

    Type:int
    Default:2147483647
    Valid Values:[0,...,2147483647]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
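
As a hedged illustration of how the client-level settings above fit together, the snippet below creates an Admin client over SASL_SSL and performs one sample operation; the broker addresses, truststore path, credentials, and topic are placeholders invented for this sketch.

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class AdminClientConfigExample {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093,broker2:9093");
            // security.protocol, sasl.mechanism and sasl.jaas.config as described above.
            props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
            props.put("sasl.mechanism", "SCRAM-SHA-256");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.scram.ScramLoginModule required "
                            + "username=\"example-user\" password=\"example-password\";");
            props.put("ssl.truststore.location", "/path/to/truststore.jks");
            props.put("ssl.truststore.password", "example-truststore-password");
            // Client-side timeouts (default.api.timeout.ms and request.timeout.ms).
            props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 60000);
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);

            try (Admin admin = Admin.create(props)) {
                // Example operation: create a topic with 3 partitions and replication factor 3.
                admin.createTopics(Collections.singleton(new NewTopic("example-topic", 3, (short) 3)))
                     .all().get();
            }
        }
    }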

3.8 System Properties

Kafka supports some configuration that can be enabled through Java system properties. System properties are usually set by passing the -D flag to the Java virtual machine in which Kafka components are running. Below are the supported system properties.
  • org.apache.kafka.disallowed.login.modules

    This system property is used to disable the problematic login modules usage in SASL JAAS configuration. This property accepts comma-separated list of loginModule names. By default com.sun.security.auth.module.JndiLoginModule loginModule is disabled.

    If users want to enable JndiLoginModule, they need to explicitly reset the system property as shown below. We advise users to validate their configurations and only allow trusted JNDI configurations. For more details, see CVE-2023-25194.

     -Dorg.apache.kafka.disallowed.login.modules=

    To disable more loginModules, update the system property with comma-separated loginModule names. Make sure to explicitly add JndiLoginModule module name to the comma-separated list like below.

     -Dorg.apache.kafka.disallowed.login.modules=com.sun.security.auth.module.JndiLoginModule,com.ibm.security.auth.module.LdapLoginModule,com.ibm.security.auth.module.Krb5LoginModule
    Since:3.4.0
    Default:com.sun.security.auth.module.JndiLoginModule

4. 设计

4.1 动机

我们设计 Kafka 是为了能够充当一个统一的平台来处理大公司可能拥有的所有实时数据馈送。为此,我们必须考虑一系列相当广泛的用例。

它必须具有高吞吐量才能支持高容量事件流,例如实时日志聚合。

它需要优雅地处理大型数据积压工作,以便能够支持来自离线系统的定期数据加载。

这也意味着系统必须处理低延迟交付,以处理更传统的消息传递用例。

我们希望支持对这些源进行分区、分布式、实时处理,以创建新的派生源。这激发了我们的分区和消费者模型。

最后,在将流馈送到其他数据系统进行服务的情况下,我们知道系统必须能够在机器故障的情况下保证容错。

支持这些用途使我们的设计具有许多独特的元素,更类似于数据库日志,而不是传统的消息传递系统。我们将在以下部分中概述设计的一些元素。

4.2 持久性

不要害怕文件系统!

Kafka 严重依赖文件系统来存储和缓存消息。人们普遍认为“磁盘很慢”,这使人们怀疑持久结构能否提供有竞争力的性能。 事实上,磁盘的速度比人们预期的要慢得多,也快得多,这取决于它们的使用方式;设计得当的磁盘结构通常可以与网络一样快。

关于磁盘性能的关键事实是,在过去十年中,硬盘驱动器的吞吐量一直与磁盘寻道的延迟不同。因此,在具有六个 7200rpm SATA RAID-5 阵列的 JBOD 配置上,线性写入的性能约为 600MB/秒,但随机写入的性能仅为约 100k/秒,相差超过 6000 倍。这些线性读写是最多的 可预测所有使用模式,并经过操作系统的大量优化。现代操作系统提供预读和后写技术,以大块倍数和 将较小的逻辑写入分组为大型物理写入。有关此问题的进一步讨论,请参阅此 ACM 队列文章;他们实际上发现顺序磁盘访问在某些情况下可能比随机内存访问更快!

为了弥补这种性能差异,现代操作系统在使用主内存进行磁盘缓存方面变得越来越积极。现代操作系统将很乐意将所有可用内存转移到 磁盘缓存,回收内存时性能损失很小。所有磁盘读取和写入都将通过此统一缓存。如果不使用直接 I/O,则无法轻松关闭此功能,因此即使 如果进程维护数据的进程内缓存,则此数据可能会在操作系统页面缓存中复制,从而有效地将所有内容存储两次。

此外,我们正在 JVM 之上构建,任何花时间使用 Java 内存的人都知道两件事:

  1. 对象的内存开销非常高,通常会使存储的数据大小翻倍(或更糟)。
  2. 随着堆内数据的增加,Java 垃圾收集变得越来越繁琐和缓慢。

由于这些因素,使用文件系统并依赖页面缓存优于维护内存中缓存或其他结构——通过自动使用所有空闲内存,我们至少使可用缓存翻倍,而且由于存储的是紧凑的字节结构而不是单个对象,缓存容量很可能再翻一倍。这样做可以在一台 32GB 的机器上获得高达 28-30GB 的缓存,而不会有 GC 开销。此外,即使服务重新启动,此缓存也会保持热状态,而进程内缓存则需要在内存中重建(对于 10GB 的缓存可能需要 10 分钟),否则只能以完全冷的缓存启动(这很可能意味着糟糕的初始性能)。这也大大简化了代码,因为维护缓存与文件系统之间一致性的所有逻辑现在都在操作系统中,而操作系统往往比一次性的进程内实现更高效、更正确。如果您的磁盘使用模式偏向线性读取,则预读会在每次磁盘读取时用有用的数据有效地预填充此缓存。

这表明了一个非常简单的设计:当我们空间不足时,我们不是在内存中尽可能多地维护并将其全部刷新到文件系统,而是将其反转。所有数据立即 写入文件系统上的持久日志,而不必刷新到磁盘。实际上,这只是意味着它被传输到内核的页面缓存中。

这种以页面缓存为中心的设计风格在一篇关于 Varnish 设计的文章中进行了描述(以及健康的傲慢)。

恒定时间就足够了

消息传递系统中使用的持久数据结构通常是具有关联 BTree 或其他通用随机访问数据结构的每使用者队列,用于维护有关消息的元数据。 BTrees 是最通用的数据结构,可以支持消息传递系统中的各种事务和非事务语义。 不过,它们确实有相当高的成本:Btree操作是O(log N)。通常 O(log N) 基本上等同于常量时间,但对于磁盘操作并非如此。 磁盘寻道每次 10 毫秒,每个磁盘一次只能执行一个寻道,因此并行性受到限制。因此,即使是少量的磁盘寻道也会导致非常高的开销。 由于存储系统混合了非常快的缓存操作和非常慢的物理磁盘操作,因此随着数据随固定缓存而增加,树结构的观察到性能通常是超线性的,即加倍 您的数据使事情变得比两倍慢更糟糕。

直观地说,持久队列可以像日志记录解决方案通常所做的那样,基于对文件的简单读取和追加来构建。这种结构的优点是所有操作都是 O(1),并且读取不会阻塞写入,也不会相互阻塞。这具有明显的性能优势,因为性能与数据大小完全解耦——一台服务器现在可以充分利用许多便宜的低转速 1+TB SATA 驱动器。尽管它们的寻道性能很差,但这些驱动器对于大型读写具有可接受的性能,而且价格只有 1/3,容量是 3 倍。

可以访问几乎无限的磁盘空间而不损失性能,意味着我们可以提供消息传递系统中通常找不到的一些功能。例如,在 Kafka 中,我们不会在消息被消费后就尝试删除它,而是可以将消息保留相对较长的时间(例如一周)。正如我们将要描述的,这为消费者带来了很大的灵活性。

4.3 效率

我们在效率方面付出了巨大的努力。我们的主要用例之一是处理 Web 活动数据,这是非常大的:每个页面视图可能会生成数十次写入。此外,我们假设每个 发布的消息至少由一个消费者(通常是多个)阅读,因此我们努力使消费尽可能便宜。

我们还发现,从构建和运行许多类似系统的经验中,效率是有效多租户运营的关键。如果下游基础设施服务很容易成为 瓶颈 由于应用程序的使用量略有增加,这种小的更改通常会产生问题。通过非常快的速度,我们帮助确保应用程序在基础设施之前在负载下翻倒。 当尝试在集中式群集上运行支持数十或数百个应用程序的集中式服务时,这一点尤其重要,因为使用模式的变化几乎每天都会发生。

我们在上一节中讨论了磁盘效率。一旦消除了不良的磁盘访问模式,这种类型的系统效率低下有两个常见原因:太多的小型 I/O 操作,以及 字节复制过多。

小的 I/O 问题既发生在客户端和服务器之间,也发生在服务器自身的持久操作中。

为了避免这种情况,我们的协议是围绕“消息集”抽象构建的,该抽象自然地将消息组合在一起。这允许网络请求将消息分组在一起并摊销网络开销 往返而不是一次发送一条消息。服务器反过来一次性将消息块附加到其日志中,而使用者一次获取大型线性块。

这种简单的优化可以带来几个数量级的速度提升。批处理会带来更大的网络数据包、更大的顺序磁盘操作、连续的内存块等等,所有这些都使 Kafka 能够将突发的随机消息写入流转换为流向消费者的线性写入。

另一个低效率来源是字节复制。在低消息速率下这不是问题,但在负载下影响很大。为了避免这种情况,我们采用了由生产者、代理和消费者共享的标准化二进制消息格式(因此数据块可以在它们之间传输而无需修改)。

代理维护的消息日志本身只是一个文件目录,每个文件都由一系列消息集填充,这些消息集以生产者和使用者使用的相同格式写入磁盘。 保持这种通用格式可以优化最重要的操作:持久日志块的网络传输。现代 unix 操作系统为数据传输提供了高度优化的代码路径 从页面缓存到套接字;在 Linux 中,这是通过 sendfile system 调用完成的。

要了解 sendfile 的影响,了解将数据从文件传输到套接字的通用数据路径非常重要:

  1. 操作系统将数据从磁盘读取到内核空间中的页面缓存中
  2. 应用程序将数据从内核空间读取到用户空间缓冲区中
  3. 应用程序将数据写回内核空间到套接字缓冲区中
  4. 操作系统将数据从套接字缓冲区复制到 NIC 缓冲区,并通过网络发送数据

这显然是低效的,有四个副本和两个系统调用。使用 sendfile,通过允许操作系统将数据从页面缓存直接发送到网络来避免这种重新复制。所以在这个优化 路径,则只需要到 NIC 缓冲区的最终副本。

我们希望一个常见的用例是在一个主题上有多个使用者。使用上面的零拷贝优化,数据只复制到页面缓存中一次,并在每次使用时重复使用,而不是存储在内存中 并在每次读取时复制到用户空间。这允许以接近网络连接限制的速率使用消息。
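
下面是一个极简示意(并非 Kafka 的实际实现,文件路径与端口均为假设值),用于说明上文描述的零拷贝路径:Java 的 FileChannel.transferTo 在 Linux 上通常会映射到 sendfile(2),把页面缓存中的数据直接送往套接字,绕过用户空间缓冲区。

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    public static void main(String[] args) throws IOException {
        Path segment = Path.of("/tmp/demo-segment.log");               // 假设的日志段文件
        try (FileChannel file = FileChannel.open(segment, StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9999))) {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // 由操作系统直接把页面缓存中的字节发送到套接字,无需复制到用户空间
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}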

页面缓存和 sendfile 的这种组合意味着,在消费者大多已赶上进度的 Kafka 集群上,您将完全看不到磁盘上的读取活动,因为数据将完全由缓存提供。

TLS/SSL 库在用户空间运行(Kafka 目前不支持内核内的 SSL_sendfile)。由于此限制,在启用 SSL 时不会使用 sendfile。有关启用 SSL 的配置,请参阅 security.protocol 和 security.inter.broker.protocol。

有关 Java 中的发送文件和零拷贝支持的更多背景信息,请参阅本文。

端到端批量压缩

在某些情况下,瓶颈实际上不是 CPU 或磁盘,而是网络带宽。对于需要通过广域网在数据中心之间发送消息的数据管道尤其如此。当然,用户总是可以在不依赖 Kafka 任何支持的情况下一次压缩一条消息,但这可能导致非常差的压缩率,因为大部分冗余来自同类型消息之间的重复(例如 JSON 中的字段名称、Web 日志中的用户代理或常见字符串值)。高效压缩需要将多条消息压缩在一起,而不是单独压缩每条消息。

Kafka 通过高效的批处理格式支持这一点。一批消息可以压缩在一起,并以这种形式发送到服务器。这批消息将以压缩形式写入,并将 在日志中保持压缩状态,并且仅由使用者解压缩。

Kafka 支持 GZIP、Snappy、LZ4 和 ZStandard 压缩协议。有关压缩的更多详细信息,请参阅此处
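
作为说明(仅为示意性片段,代理地址与主题名为假设值),在 Java 生产者中启用端到端批量压缩只需设置 compression.type,整批消息会在生产者端压缩、以压缩形式写入日志,并由消费者解压:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressedProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // 假设的代理地址
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");               // 也可为 gzip、snappy、zstd
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));  // 假设的主题
        }
    }
}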

4.4 生产者

负载平衡

生产者将数据直接发送到作为分区领导者的代理,而无需任何中间路由层。为了帮助生产者做到这一点,所有 Kafka 节点都可以回答关于元数据的请求:哪些服务器处于活动状态,以及某个主题各分区的领导者在任意给定时刻位于何处,以便生产者能够适当地定向其请求。

客户端控制将消息发布到哪个分区。这可以随机完成,实现一种随机负载均衡,也可以通过一些语义分区函数来完成。我们公开接口 对于语义分区,允许用户指定要分区的键,并使用它来哈希到分区(如果需要,还有一个选项可以覆盖分区函数)。例如,如果键 选择的是用户 ID,然后给定用户的所有数据都将发送到同一分区。这反过来又将允许消费者对他们的消费做出局部假设。这种分区样式是显式的 旨在允许消费者进行对局部敏感的处理。
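
下面是一个极简片段(主题名与键为假设值),演示按键的语义分区:带相同键的记录被默认分区器哈希到同一分区,因此同一用户的事件有序地落在同一分区上:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducerDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // 相同的键(这里是用户 ID)总是被哈希到同一个分区
            RecordMetadata m1 = producer.send(new ProducerRecord<>("user-events", "user-123", "login")).get();
            RecordMetadata m2 = producer.send(new ProducerRecord<>("user-events", "user-123", "purchase")).get();
            System.out.println(m1.partition() + " == " + m2.partition());   // 两者打印同一个分区号
        }
    }
}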

异步发送

批处理是效率的主要驱动因素之一。为了启用批处理,Kafka 生产者会尝试在内存中积累数据,并在单个请求中发送更大的批次。批处理可以配置为累积不超过固定数量的消息,并且等待不超过某个固定的延迟上限(例如 64k 或 10 毫秒)。这样可以积累更多要发送的字节,并在服务器上执行更少但更大的 I/O 操作。这种缓冲是可配置的,提供了一种用少量额外延迟换取更高吞吐量的机制。
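
下面这个片段(数值仅作示意)展示了上文提到的两个批处理旋钮:batch.size 限制单个批次累积的字节数,linger.ms 设定为换取更大批次而愿意等待的时间上限:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class BatchingConfigDemo {
    public static Properties batchingProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);   // 每个分区批次最多累积约 64KB
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);       // 最多额外等待 10 毫秒以凑满更大的批次
        return props;
    }
}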

有关生产者的配置API 的详细信息,可以在文档的其他位置找到。

4.5 消费者

Kafka 使用者通过向领导它想要使用的分区的代理发出“获取”请求来工作。使用者在每个请求的日志中指定其偏移量,并接收回一个日志块 从那个位置开始。因此,使用者可以显著控制此位置,并可以在需要时将其倒带以重新使用数据。

推拉

我们考虑的第一个问题是,消费者应该从代理那里拉取数据,还是代理应该将数据推送给消费者。在这方面,Kafka 遵循大多数消息传递系统共享的更传统的设计:数据从生产者推送到代理,并由消费者从代理拉取。一些以日志记录为中心的系统(如 ScribeApache Flume)遵循截然不同的基于推送的路径,将数据推送到下游。这两种方法各有利弊。但是,基于推送的系统难以应对多样化的消费者,因为数据传输的速率由代理控制。目标通常是让消费者能够以最大可能的速度消费;不幸的是,在推送系统中,这意味着当消费者的消费速率低于生产速率时,消费者往往会不堪重负(本质上是一次拒绝服务攻击)。基于拉取的系统具有更好的属性:消费者只是落后,并在可能的时候赶上。这可以通过某种退避协议来缓解,即消费者可以表明自己已不堪重负,但要把传输速率调到充分利用(而又不过度利用)消费者,比看起来要棘手得多。以前以这种方式构建系统的尝试使我们选择了更传统的拉取模型。

基于拉取的系统的另一个优点是,它适合对发送给消费者的数据进行积极的批处理。基于推送的系统必须选择立即发送请求或累积更多数据 然后在不知道下游消费者是否能够立即处理它的情况下发送它。如果针对低延迟进行了优化,这将导致一次仅针对 无论如何,转移最终都会被缓冲,这是浪费。基于拉取的设计解决了这个问题,因为使用者总是在日志中的当前位置(或最多一些可配置的最大值)之后拉取所有可用消息 大小)。因此,人们可以在不引入不必要的延迟的情况下获得最佳批处理。

朴素的基于拉取的系统的不足之处在于,如果代理没有数据,消费者最终可能会在紧密循环中轮询,实际上是在忙等数据到达。为了避免这种情况,我们在拉取请求中提供了参数,允许消费者请求在“长轮询”中阻塞,等待数据到达(并可以选择等待直到给定数量的字节可用,以确保较大的传输大小)。
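
下面是一个消费者端的示意配置(数值为假设),对应上文的“长轮询”:fetch.min.bytes 要求代理积累到一定字节数才返回,fetch.max.wait.ms 限定阻塞等待的上限,poll() 则在没有数据时阻塞至多给定的超时时间:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LongPollConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024);   // 代理至少积累 64KB 再响应
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);       // 但最多等待 500 毫秒
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1)); // 无数据时阻塞
            System.out.println("fetched " + records.count() + " records");
        }
    }
}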

你可以想象其他可能的设计,这些设计只是端到端的拉动。生产者将在本地写入本地日志,经纪人将从中提取,消费者将从中提取。类似类型的 “存储转发”生产者经常被提议。这很有趣,但我们认为不太适合拥有数千个生产者的目标用例。我们在运行持久数据系统的经验 规模让我们觉得,跨许多应用程序在系统中涉及数千个磁盘实际上并不能使事情变得更可靠,而且操作起来将是一场噩梦。在实践中,我们发现我们 可以大规模运行具有强 SLA 的管道,而无需生产者持久性。

消费者位置

令人惊讶的是,跟踪已消费的内容是消息系统的关键性能点之一。

大多数消息传递系统保留有关代理上使用了哪些消息的元数据。也就是说,当消息分发给消费者时,代理要么立即在本地记录该事实,要么可以等待 用于消费者的确认。这是一个相当直观的选择,事实上,对于单机服务器,不清楚这种状态还能去哪里。由于用于存储的数据结构在许多 消息传递系统的扩展性很差,这也是一个务实的选择 - 因为代理知道消耗了什么,它可以立即删除它,保持较小的数据大小。

也许不明显的是,让经纪人和消费者就消费的内容达成一致并不是一个微不足道的问题。如果代理将消息记录为立即使用,则每隔一段时间 当它通过网络分发时,如果消费者未能处理消息(例如因为它崩溃或请求超时或其他什么),该消息将丢失。为了解决这个问题,许多消息传递 系统添加了确认功能,这意味着消息在发送时仅标记为已发送而不被使用;代理等待来自消费者的特定确认以记录 消息已使用。此策略解决了丢失消息的问题,但会产生新问题。首先,如果使用者处理消息但在发送确认之前失败,则 消息将被消耗两次。第二个问题是关于性能的,现在代理必须对每条消息保持多个状态(首先锁定它,这样它就不会第二次发出,然后标记 它被永久消耗,以便可以将其删除)。必须处理棘手的问题,例如如何处理已发送但从未确认的消息。

卡夫卡以不同的方式处理这个问题。我们的主题分为一组完全有序的分区,每个分区在任何给定时间都由每个订阅消费者组中的一个消费者使用。这意味着 消费者在每个分区中的位置只是一个整数,即要消耗的下一条消息的偏移量。这使得有关已消耗内容的状态非常小,每个分区只有一个数字。 可以定期检查此状态。这使得消息确认的等效物非常便宜。

这个决定还有一个附带的好处。使用者可以故意回退到旧的偏移量并重新使用数据。这违反了队列的通用协定,但事实证明这是一个基本功能 对于许多消费者来说。例如,如果使用者代码存在 bug,并且在使用某些消息后被发现,则使用者可以在修复错误后重新使用这些消息。
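
作为示意(主题、分区与偏移量均为假设值),消费者可以用 seek() 故意回退到旧的偏移量并重新消费数据,例如在修复了处理逻辑中的 bug 之后:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RewindConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("demo-topic", 0);
            consumer.assign(List.of(tp));
            consumer.seek(tp, 42L);   // 回退到旧偏移量 42,从该位置重新读取
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                System.out.println(record.offset() + " -> " + record.value()); // 重新处理
            }
        }
    }
}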

离线数据加载

可扩展的持久性使得消费者可以只定期消费数据,例如批处理数据加载:定期将数据批量加载到 Hadoop 等离线系统或关系数据仓库中。

在Hadoop的情况下,我们通过在各个映射任务上拆分负载来并行化数据加载,每个节点/主题/分区组合一个,允许加载完全并行。Hadoop 提供任务 管理和失败的任务可以重新启动,而不会有重复数据的危险 — 它们只是从其原始位置重新启动。

静态成员资格

静态成员资格旨在提高流应用程序、消费者组和其他建立在组重新平衡协议之上的应用程序的可用性。 再平衡协议依赖于组协调器将实体 ID 分配给组成员。这些生成的 ID 是临时的,当成员重新启动并重新加入时会发生变化。 对于基于使用者的应用,这种“动态成员资格”可能会导致在管理操作期间将大部分任务重新分配给不同的实例 例如代码部署、配置更新和定期重启。对于大型状态应用程序,随机任务在处理之前需要很长时间才能恢复其本地状态 并导致应用程序部分或完全不可用。受这一观察结果的启发,Kafka 的组管理协议允许组成员提供持久的实体 ID。 基于这些 ID 的组成员身份保持不变,因此不会触发重新平衡。

如果要使用静态成员资格,

  • 将代理集群和客户端应用程序升级到 2.3 或更高版本,并确保升级后的代理也使用 2.3 或更高版本的 inter.broker.protocol.version。
  • 将 ConsumerConfig#GROUP_INSTANCE_ID_CONFIG 配置设置为同一组下每个消费者实例的唯一值。
  • 对于 Kafka Streams 应用程序,为每个 KafkaStreams 实例设置一个唯一的 ConsumerConfig#GROUP_INSTANCE_ID_CONFIG 就足够了,与实例使用的线程数无关。
如果您的代理版本低于 2.3,但您选择在客户端设置 ConsumerConfig#GROUP_INSTANCE_ID_CONFIG,则应用程序将检测到代理版本并抛出 UnsupportedException。如果您不小心为不同实例配置了重复的 ID,代理端的屏蔽机制将通过触发 org.apache.kafka.common.errors.FencedInstanceIdException 通知重复的客户端关闭。有关更多详细信息,请参阅 KIP-345。
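
下面的片段(实例 ID 与主题名为假设值)演示如何为消费者启用静态成员资格:同一组内的每个实例设置各自唯一、且跨重启保持不变的 group.instance.id,这样重启就不会触发重新平衡:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StaticMembershipDemo {
    public static KafkaConsumer<String, String> newConsumer(String instanceId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");
        // 静态成员 ID:每个实例唯一,且在重启后保持不变(例如来自主机名或部署编号)
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, instanceId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("orders"));
        return consumer;
    }
}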

4.6 消息传递语义

现在我们已经对生产者和消费者的工作方式有了一定的了解,让我们讨论一下 Kafka 在生产者和消费者之间提供的语义保证。显然,有多种可能的消息传递 可以提供的保证:

  • 最多一次——消息可能会丢失,但永远不会被重新传递。
  • 至少一次——消息永远不会丢失,但可能会被重新传递。
  • 恰好一次——这是人们真正想要的;每条消息只被传递一次。
值得注意的是,这分为两个问题:发布消息的持久性保证和使用消息时的保证。

许多系统声称提供“恰好一次”的交付语义,但阅读细则很重要,这些声明中的大多数都是误导性的(即它们没有转化为消费者或生产者的情况。 可能会失败,如果有多个使用者进程,或者写入磁盘的数据可能会丢失)。

Kafka 的语义是直截了当的。发布消息时,我们有一个消息被“提交”到日志的概念。一旦已发布的消息被提交,只要复制了该消息所写入分区的某个代理仍保持“活动”状态,该消息就不会丢失。已提交消息的定义、活动分区,以及我们尝试处理的故障类型,将在下一节中更详细地描述。现在,让我们假设一个完美的、无损的代理,并尝试理解对生产者和消费者的保证。如果生产者尝试发布消息并且遇到网络错误,它无法确定此错误是在消息提交之前还是之后发生的。这类似于使用自动生成的键向数据库表插入记录的语义。

在 0.11.0.0 之前,如果生产者未能收到指示消息已提交的响应,则它别无选择,只能重新发送消息。这提供了至少一次传递语义,因为 如果原始请求实际上已成功,则在重新发送期间可能会再次将消息写入日志。从 0.11.0.0 开始,Kafka 生产者还支持幂等交付选项,该选项保证重新发送 不会导致日志中出现重复条目。为此,代理为每个生产者分配一个 ID,并使用生产者随每条消息一起发送的序列号对消息进行重复数据删除。 同样从 0.11.0.0 开始,生产者支持使用类似事务的语义将消息发送到多个主题分区的能力:即要么所有消息都成功写入,要么没有成功写入。 这样做的主要用例是 Kafka 主题之间的恰好一次处理(如下所述)。
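
下面是一个最小示意(事务 ID 与主题名为假设值):enable.idempotence=true 保证重发不会在日志中产生重复条目,transactional.id 则允许把对多个主题分区的写入作为一个整体提交或中止:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);          // 幂等:重发不会产生重复
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn-id");   // 启用事务语义
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("topic-a", "k", "v1"));
                producer.send(new ProducerRecord<>("topic-b", "k", "v2"));
                producer.commitTransaction();    // 两条消息要么都写入成功,要么都不可见
            } catch (KafkaException e) {
                producer.abortTransaction();
                throw e;
            }
        }
    }
}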

并非所有用例都需要如此强大的保证。对于延迟敏感的用途,我们允许生产者指定所需的持久性级别。如果生产者指定要等待消息 提交后,这可能需要 10 毫秒的数量级。但是,生产者也可以指定它希望完全异步执行发送,或者它只想等到领导者(但不是 必然是追随者)有信息。

现在让我们从消费者的角度描述语义。所有副本都具有完全相同的日志和相同的偏移量。使用者控制其在此日志中的位置。如果消费者从未崩溃过 可以只将此位置存储在内存中,但是如果使用者失败并且我们希望此主题分区由另一个进程接管,则新进程将需要选择一个适当的位置开始 加工。假设使用者读取了一些消息 - 它有几个选项来处理消息和更新其位置。

  1. 它可以读取消息,然后将其保存在日志中的位置,最后处理消息。在这种情况下,消费者进程有可能在保存其位置后但在保存之前崩溃 其消息处理的输出。在这种情况下,接管处理的进程将从保存的位置开始,即使该位置之前的一些消息尚未处理。这对应于 对于“最多一次”语义,例如在消费者失败的情况下,可能不会处理消息。
  2. 它可以读取消息,处理消息,最后保存其位置。在这种情况下,使用者进程可能会在处理消息之后但在保存其位置之前崩溃。 在这种情况下,当新进程接管它收到的前几条消息时,它已经处理完毕。这对应于使用者失败情况下的“至少一次”语义。在许多情况下 消息具有主键,因此更新是幂等的(接收两次相同的消息只是用其自身的另一个副本覆盖记录)。

那么恰好一次语义(即你真正想要的东西)呢?当从 Kafka 主题消费并生成到另一个主题时(如在 Kafka Streams 应用程序中),我们可以利用上面提到的 0.11.0.0 中的新事务生产者功能。消费者的位置作为消息存储在主题中,因此我们可以将偏移量写入 Kafka 与接收已处理数据的输出主题相同的事务。如果交易中止,消费者的仓位将恢复为其旧值,并且输出主题上生成的数据将不可见 给其他消费者,取决于他们的“隔离水平”。在默认的“read_uncommitted”隔离级别中,所有消息对使用者都是可见的,即使它们是中止事务的一部分, 但在“read_committed”中,消费者将只返回来自已提交的事务(以及不属于事务的任何消息)的消息。
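
下面是一个“消费-转换-生产”的极简草图(组名、主题名、事务 ID 均为假设值),对应上文的做法:把消费者偏移量与输出写入放进同一个事务,并让下游消费者以 read_committed 隔离级别读取:

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceRelayDemo {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "relay-group");
        cProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);          // 偏移量随事务一起提交
        cProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");  // 只读取已提交事务的消息
        cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties pProps = new Properties();
        pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "relay-txn");
        pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of("input-topic"));
            producer.initTransactions();
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if (!records.isEmpty()) {
                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> r : records) {
                    producer.send(new ProducerRecord<>("output-topic", r.key(), r.value().toUpperCase()));
                    offsets.put(new TopicPartition(r.topic(), r.partition()), new OffsetAndMetadata(r.offset() + 1));
                }
                producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata()); // 偏移量与输出同属一个事务
                producer.commitTransaction();
            }
        }
    }
}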

当写入外部系统时,限制在于需要协调使用者的位置与实际存储为输出的内容。实现这一目标的经典方法是引入两阶段 在消费者位置的存储和消费者输出的存储之间提交。但这可以通过让消费者将其偏移量存储在与 它的输出。这更好,因为使用者可能想要写入的许多输出系统不支持两阶段提交。例如,考虑一个 Kafka Connect 连接器,该连接器在 HDFS 中填充数据以及它读取的数据的偏移量,以便保证数据和 偏移量都已更新或均未更新。对于许多其他数据系统,我们遵循类似的模式,这些系统需要这些更强的语义,并且消息没有主键来允许重复数据删除。

因此,Kafka 在 Kafka Streams 中有效地支持精确一次交付,并且在 Kafka 主题之间传输和处理数据时,通常可以使用事务性生产者/消费者来提供精确一次交付。到其他目标系统的精确一次交付通常需要与此类系统配合,但 Kafka 提供的偏移量使得实现这一点变得可行(另请参阅 Kafka Connect)。否则,Kafka 默认保证至少一次交付,并允许用户通过在生产者上禁用重试、并在消费者中先提交偏移量再处理一批消息,来实现最多一次交付。

4.7 复制

Kafka 在可配置数量的服务器上复制每个主题分区的日志(您可以逐个主题设置此复制因子)。这允许在以下情况下自动故障转移到这些副本 群集中的服务器出现故障,因此消息在出现故障时仍然可用。

其他消息传递系统提供了一些与复制相关的功能,但是,在我们(完全有偏见的)看来,这似乎是一个附加的东西,没有大量使用,并且有很大的缺点:副本处于非活动状态, 吞吐量受到严重影响,需要繁琐的手动配置等。默认情况下,Kafka 旨在与复制一起使用 — 事实上,我们将未复制的主题实现为复制主题,其中 复制因子是其中之一。

复制单元是主题分区。在非故障条件下,Kafka 中的每个分区都有一个领导者和零个或多个追随者。包括领导者在内的副本总数构成 复制因子。所有写入都转到分区的领导者,读取可以转到分区的领导者或追随者。通常,分区比代理多得多,领导者在代理之间均匀分布。关注者上的日志是 与领导者的日志相同 - 它们都具有相同的偏移量和相同顺序的消息(当然,在任何给定时间,领导者的日志末尾都可能有一些尚未复制的消息)。

追随者像普通的 Kafka 消费者一样使用来自领导者的消息,并将其应用于他们自己的日志。让追随者从领导者那里拉出来有一个很好的特性,可以让追随者自然地 将应用于其日志的日志条目批处理在一起。

与大多数分布式系统一样,自动处理故障需要精确定义节点“处于活动状态”的含义。在卡夫卡中,一个特殊的节点 称为“控制器”,负责管理集群中代理的注册。经纪人活跃度有两个条件:

  1. 代理必须维护与控制器的活动会话,以便接收定期元数据更新。
  2. 充当追随者的经纪人必须复制领导者的写入,并且不能“落后太多”。

“活动会话”的含义取决于集群配置。对于 KRaft 集群,活动会话由代理向控制器发送定期心跳来维持。如果控制器在 broker.session.timeout.ms 配置的超时到期之前未能收到心跳,则该节点将被视为脱机。

对于使用 ZooKeeper 的集群,活动性通过代理在初始化其 ZooKeeper 会话时创建的临时节点间接确定。如果代理在 zookeeper.session.timeout.ms 到期之前未能向 ZooKeeper 发送心跳而丢失其会话,则该节点将被删除。控制器随后会通过 ZooKeeper watch 注意到节点删除,并将该代理标记为离线。

我们将满足这两个条件的节点称为“同步”副本,以避免“活动”或“失败”的模糊性。领导者跟踪“同步”副本集,这被称为 ISR。如果这些条件中的任何一个未能满足,则代理将从 ISR 中删除。例如,如果追随者宕机,控制器将通过其会话丢失注意到故障,并将该代理从 ISR 中删除。另一方面,如果追随者落后领导者太多但仍有活动会话,领导者也可以将其从 ISR 中删除。滞后副本的判定通过 replica.lag.time.max.ms 配置进行控制;在此配置设置的最大时间内无法赶上领导者日志末尾的副本将从 ISR 中删除。

在分布式系统术语中,我们只尝试处理“故障/恢复”故障模型,其中节点突然停止工作,然后恢复(可能不知道它们已经死亡)。卡夫卡没有 处理所谓的“拜占庭”故障,其中节点产生任意或恶意响应(可能是由于错误或犯规)。

现在,我们可以更精确地定义:当某个分区的 ISR 中的所有副本都已将某条消息应用到其日志时,该消息即被视为已提交。只有已提交的消息才会发送给消费者。这意味着消费者不必担心在领导者失败时看到可能会丢失的消息。另一方面,生产者可以根据其在延迟和持久性之间的权衡偏好,选择是否等待消息被提交。此偏好由生产者使用的 acks 设置控制。请注意,主题具有同步副本“最小数量”的设置;当生产者请求确认消息已写入完整的同步副本集时,会检查该设置。如果生产者请求不太严格的确认,则即使同步副本的数量低于该最小值(例如,可以低至仅领导者),消息也可以被提交和消费。
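
下面这个片段(数值仅作示意)展示生产者侧的持久性选择:acks=0 完全异步,acks=1 只等领导者,acks=all 则等待当前全部同步副本(通常与主题的 min.insync.replicas 配合使用):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AcksConfigDemo {
    public static Properties durabilityProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // "0" = 不等待确认;"1" = 仅等待领导者;"all"(即 -1)= 等待当前全部同步副本
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        return props;
    }
}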

Kafka 提供的保证是,只要同步副本中至少有一个处于活动状态,提交的消息就不会丢失。

在短暂的故障转移期后,Kafka 在节点发生故障时仍然可用,但在存在网络分区的情况下可能不可用。

复制的日志:仲裁、ISR 和状态机(天哪!)

Kafka 分区的核心是复制的日志。复制日志是分布式数据系统中最基本的原语之一,有许多方法可以实现它。复制的日志可以是 被其他系统用作以状态机样式实现其他分布式系统的原语。

复制的日志模拟就一系列值的顺序达成共识的过程(通常对日志条目 0、1、2 等进行编号)。有很多方法可以实现这一点,但最简单和最快 与选择提供给它的值的顺序的领导者在一起。只要领导者还活着,所有追随者只需要复制领导者选择的值和顺序。

当然,如果领导者没有失败,我们就不需要追随者了!当领导者去世时,我们需要从追随者中选出一个新的领导者。但是追随者本身可能会落后或崩溃,因此我们必须确保我们 选择最新的关注者。日志复制算法必须提供的基本保证是,如果我们告诉客户端一条消息被提交,并且领导者失败,我们选择的新领导者也必须具有 那条消息。这就产生了一个权衡:如果领导者在宣布承诺之前等待更多的追随者确认一个信息,那么就会有更多潜在的可选举领导者。

如果您选择所需的确认数和必须比较的日志数来选举领导者,以便保证存在重叠,则这称为仲裁。

这种权衡的常见方法是在提交决策和领导者选举中使用多数投票。这不是 Kafka 的做法,但无论如何让我们探索一下它以理解其中的权衡。假设我们有 2f+1 个副本。如果领导者在声明提交之前必须有 f+1 个副本收到消息,并且我们通过从至少 f+1 个副本中选出日志最完整的追随者来选举新的领导者,那么在失败次数不超过 f 的情况下,可以保证新领导者拥有所有已提交的消息。这是因为在任意 f+1 个副本中,必须至少有一个副本包含所有已提交的消息;该副本的日志将是最完整的,因此将被选为新的领导者。每种算法还必须处理许多剩余的细节(例如精确定义什么使一个日志更完整、如何在领导者故障期间确保日志一致性,或如何更改副本集中的服务器集合),但我们现在将忽略这些。

这种多数投票方法有一个非常好的属性:延迟仅取决于最快的服务器。也就是说,如果复制因子为 3,则延迟由较快的追随者而不是较慢的追随者决定。

这个家族中有各种各样的算法,包括 ZooKeeper 的 ZabRaft 和 Viewstamped Replication。我们所知道的与 Kafka 实际实现最相似的学术出版物是微软的 PacificA。

多数投票的缺点是,不需要太多次失败就会让你没有可选举的领导者。容忍一次故障需要三个数据副本,容忍两次故障需要五个数据副本。根据我们的经验,对于实际系统来说,只有刚好足以容忍单次故障的冗余是不够的;但每次写入都写五次、磁盘空间需求是 5 倍、吞吐量只有 1/5,对于大容量数据问题来说并不实用。这可能就是为什么仲裁算法更常见于共享集群配置(如 ZooKeeper),而在主数据存储中不太常见的原因。例如,在 HDFS 中,namenode 的高可用性功能建立在基于多数投票的日志之上,但这种更昂贵的方法并不用于数据本身。

Kafka 在选择其仲裁集时采用了略有不同的方法。Kafka 不使用多数投票,而是动态维护一组与领导者保持同步(caught-up)的同步副本 (ISR)。只有该集合的成员才有资格当选为领导者。在所有同步副本都收到写入之前,对 Kafka 分区的写入不会被视为已提交。每当 ISR 集合发生更改时,它都会持久化到集群元数据中。因此,ISR 中的任何副本都有资格当选为领导者。对于 Kafka 的使用模型来说,这是一个重要因素:分区很多,确保领导权平衡很重要。使用此 ISR 模型和 f+1 个副本,Kafka 主题可以容忍 f 次故障而不会丢失已提交的消息。

对于我们希望处理的大多数用例,我们认为这种权衡是合理的。在实践中,为了容忍 f 失败,多数投票和 ISR 方法都将等待相同数量的副本 在提交消息之前确认(例如,为了在一次故障中幸存下来,多数仲裁需要三个副本和一个确认,而 ISR 方法需要两个副本和一个确认)。 在没有最慢服务器的情况下提交的能力是多数投票方法的一个优势。但是,我们认为通过允许客户端选择是否阻止消息提交来改善它, 由于所需的复制因子较低而增加的吞吐量和磁盘空间是值得的。

另一个重要的设计区别是,Kafka 不要求崩溃的节点恢复所有数据完好无损。此空间中的复制算法依赖于 “稳定存储”,在任何故障恢复情况下都不会丢失,而不会发生潜在的一致性违规。这个假设有两个主要问题。首先,磁盘错误是我们最常见的问题 在持久数据系统的实际操作中观察,它们通常不会保持数据完整。其次,即使这不是问题,我们也不想要求在每次写入时使用 fsync 以确保我们的一致性。 保证,因为这会使性能降低两到三个数量级。我们允许副本重新加入 ISR 的协议确保在重新加入之前,它必须再次完全重新同步,即使它丢失了未刷新 崩溃中的数据。

不洁领袖选举:如果他们都死了怎么办?

请注意,Kafka 对数据丢失的保证基于至少一个保持同步的副本。如果复制分区的所有节点都死了,则此保证不再成立。

然而,当所有副本死亡时,一个实用的系统需要做一些合理的事情。如果您不幸发生这种情况,重要的是要考虑会发生什么。有两种行为可能是 实现:

  1. 等待 ISR 中的复制副本复活并选择此副本作为领导者(希望它仍然拥有所有数据)。
  2. 选择作为领导者复活的第一个副本(不一定在 ISR 中)。

这是可用性和一致性之间的简单权衡。如果我们在 ISR 中等待副本,那么只要这些副本关闭,我们就会保持不可用。如果此类副本或其数据被销毁 迷路了,那么我们就永远倒下了。另一方面,如果一个非同步副本复活并且我们允许它成为领导者,那么它的日志就会成为事实的来源,即使它不能保证 拥有每条承诺的消息。默认情况下,从版本 0.11.0.0 开始,Kafka 选择第一个策略并倾向于等待一致的副本。可以使用以下方法更改此行为 配置属性unclean.leader.election.enable,以支持正常运行时间优于一致性的用例。

这种困境并非卡夫卡所特有的。它存在于任何基于仲裁的方案中。例如,在多数投票方案中,如果大多数服务器遭受永久性故障,那么您必须选择失去 100% 的 您的数据或通过将现有服务器上剩余的内容作为新的事实来源来违反一致性。

可用性和耐用性保证

写入 Kafka 时,生产者可以选择是等待消息被 0 个、1 个还是所有(-1)同步副本确认。请注意,“所有副本的确认”并不保证已分配的完整副本集都已收到消息。默认情况下,当 acks=all 时,只要所有当前同步的副本都收到消息,确认就会发生。例如,如果主题仅配置了两个副本而其中一个失败(即同步副本中只剩下一个),则指定 acks=all 的写入仍会成功。然而,如果剩余的副本也失败,这些写入可能会丢失。尽管这确保了分区的最大可用性,但对于更看重持久性而非可用性的一些用户来说,此行为可能并不理想。因此,我们提供了两个主题级配置,可用于优先保证消息持久性而不是可用性:
  1. 禁用不干净的领导者选举 - 如果所有副本都不可用,则分区将保持不可用,直到最新的领导者再次可用。这实际上倾向于不可用 超过消息丢失的风险。有关说明,请参阅上一节关于不洁领袖选举。
  2. 指定最小 ISR 大小 - 仅当 ISR 的大小高于某个最小值时,分区才会接受写入,以防止仅写入单个副本的消息丢失, 随后变得不可用。仅当生成者使用 acks=all 并保证消息至少由这么多同步副本确认时,此设置才会生效。 此设置提供了一致性和可用性之间的权衡。最小 ISR 大小的设置越高,越能保证一致性越好,因为消息可以保证写入更多副本,从而减少 丢失的概率。但是,它会降低可用性,因为如果同步副本的数量低于最小阈值,分区将不可用于写入。
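作为一个极简示意(主题名为假设值,使用 kafka-clients 提供的 Admin API),可以像下面这样为某个主题同时设置上述两个与持久性相关的配置:

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DurabilityConfigDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payments");
            Collection<AlterConfigOp> ops = List.of(
                new AlterConfigOp(new ConfigEntry("min.insync.replicas", "2"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("unclean.leader.election.enable", "false"), AlterConfigOp.OpType.SET));
            // 与生产者端的 acks=all 配合,保证每条消息至少写入 2 个同步副本
            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}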

副本管理

上面关于复制日志的讨论实际上只涵盖了单个日志,即一个主题分区。但是,Kafka 集群将管理数百或数千个这样的分区。我们尝试平衡分区 在集群中以轮循机制方式进行,以避免在少量节点上为高容量主题聚集所有分区。同样,我们试图平衡领导,以便每个节点都是比例的领导者 其分区的份额。

优化领导者选举过程也很重要,因为这是不可用的关键窗口。领导者选举的朴素实现最终会在某个节点发生故障时,为该节点托管的所有分区逐一运行选举。如上文关于复制的部分所述,Kafka 集群具有称为“控制器”的特殊角色,负责管理代理的注册。如果控制器检测到某个代理发生故障,它将负责从 ISR 的剩余成员中选举一个新的领导者。这样,我们就能将许多必需的领导权变更通知批量处理在一起,使得在分区数量很大时,选举过程更便宜、更快捷。如果控制器本身发生故障,则会选出另一个控制器。

4.8 日志压缩

日志压缩可确保 Kafka 始终在单个主题分区的数据日志中至少保留每个消息键的最后一个已知值。它解决了用例和场景,例如还原 应用程序崩溃或系统故障后的状态,或在操作维护期间应用程序重新启动后重新加载缓存。让我们更详细地了解这些用例,然后描述压缩的工作原理。

到目前为止,我们只描述了更简单的数据保留方法,即在固定时间段后或当日志达到某个预定大小时丢弃旧的日志数据。这适用于时态事件 数据,例如每个记录独立存在的日志记录。但是,一类重要的数据流是对键控可变数据的更改日志(例如,对数据库表的更改)。

让我们讨论这样一个流的具体示例。假设我们有一个包含用户电子邮件地址的主题;每次用户更新其电子邮件地址时,我们都会使用其用户 ID 向本主题发送一条消息作为 主键。现在假设我们在某个时间段内为ID 为 123 的用户发送以下消息,每条消息对应于电子邮件地址的更改(省略其他 ID 的消息):

123 => bill@microsoft.com
        .
        .
        .
123 => bill@gatesfoundation.org
        .
        .
        .
123 => bill@gmail.com
日志压缩为我们提供了一种更精细的保留机制,从而保证至少保留每个主键的最后一次更新(例如 bill@gmail.com)。通过这样做,我们保证日志包含每个键最终值的完整快照,而不仅仅是最近更改的键。这意味着下游消费者可以从这个主题恢复自己的状态,而我们不必保留包含所有更改的完整日志。
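
下面是一个示意性片段(主题与地址均为假设值),演示向启用压缩的主题写入同一键的多次更新,以及用空值(墓碑)表示删除该键:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompactedTopicProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // 对同一键的多次更新:压缩后至少保留最后一次更新
            producer.send(new ProducerRecord<>("user-emails", "123", "bill@microsoft.com"));
            producer.send(new ProducerRecord<>("user-emails", "123", "bill@gatesfoundation.org"));
            producer.send(new ProducerRecord<>("user-emails", "123", "bill@gmail.com"));
            // 空值是墓碑:表示删除键 123,压缩时会清除它之前的所有记录
            producer.send(new ProducerRecord<>("user-emails", "123", null));
        }
    }
}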

让我们首先看一些有用的用例,然后我们将看看如何使用它。

  1. 数据库更改订阅。通常需要在多个数据系统中拥有一个数据集,并且这些系统通常之一是某种数据库(RDBMS或新奇的键值)。 商店)。例如,您可能有一个数据库、一个缓存、一个搜索集群和一个 Hadoop 集群。对数据库的每次更改都需要反映在缓存、搜索集群中,并最终反映在 Hadoop 中。 如果一个人只处理实时更新,你只需要最近的日志。但是,如果您希望能够重新加载缓存或还原失败的搜索节点,则可能需要完整的数据集。
  2. 事件溯源。这是一种应用程序设计风格,它将查询处理与应用程序设计放在一起,并使用更改日志作为应用程序的主要存储。
  3. 用于高可用性的日记功能。执行本地计算的进程可以通过注销对其本地状态所做的更改来实现容错,以便另一个进程可以重新加载这些更改和 如果它失败了,请继续。这方面的一个具体示例是在流查询系统中处理计数、聚合和其他类似“分组依据”的处理。实时流处理框架 Samza 正是为此目的使用此功能
在每种情况下,人们主要需要处理更改的实时馈送,但偶尔,当机器崩溃或需要重新加载或重新处理数据时,需要执行完全加载。 日志压缩允许将这两个用例从同一支持主题中馈送。 此博客文章中更详细地描述了日志的这种使用方式。

总体思路很简单。如果我们有无限的日志保留,并且记录了上述情况下的每个更改,那么我们将捕获系统从最初开始的每个时间的状态。 使用此完整日志,我们可以通过重播日志中的前 N 条记录来恢复到任何时间点。这个假设的完整日志对于多次更新单个记录的系统不是很实用 因为即使对于稳定的数据集,日志也会无限制地增长。丢弃旧更新的简单日志保留机制将绑定空间,但日志不再是恢复当前状态的方法 - 现在 从日志开头还原不再重新创建当前状态,因为可能根本不会捕获旧更新。

日志压缩是一种提供更细粒度的每条记录保留的机制,而不是基于时间的更粗粒度的保留。这个想法是有选择地删除我们有更新的记录 相同的主键。这样,可以保证日志至少具有每个键的最后一个状态。

可以按主题设置此保留策略,因此单个群集可以具有某些主题,其中保留按大小或时间强制实施,而其他主题则通过压缩强制实施保留。

此功能的灵感来自LinkedIn最古老、最成功的基础结构之一,即名为 Databus 的数据库更改日志缓存服务。 与大多数日志结构存储系统不同,Kafka 是为订阅而构建的,它组织数据以实现快速线性读写。与Databus不同,Kafka充当事实来源存储,因此即使在 上游数据源无法重播的情况。

日志压缩基础知识

下面是一张高级图片,显示了 Kafka 日志的逻辑结构以及每条消息的偏移量。

日志的头部与传统的卡夫卡日志相同。它具有密集的顺序偏移量并保留所有消息。日志压缩添加了一个用于处理日志尾部的选项。上图显示了一个日志 尾巴压实。请注意,日志尾部的消息保留首次写入时分配的原始偏移量,该偏移量永远不会更改。另请注意,所有偏移量在 日志,即使具有该偏移量的消息已被压缩掉;在这种情况下,此位置与日志中出现的下一个最高偏移量无法区分。例如,在上图中 偏移量 36、37 和 38 都是等效位置,从这些偏移量中的任何一个开始读取都会返回以 38 开头的消息集。

压缩还允许删除。包含键和空有效负载的消息将被视为从日志中删除。此类记录有时称为墓碑。此删除标记将导致删除具有该键的任何先前消息(就像任何新的消息一样 带有该键的消息),但删除标记很特殊,因为它们本身将在一段时间后从日志中清除以释放空间。不再保留删除的时间点是 在上图中标记为“删除保留点”。

压缩是通过定期重新复制日志段在后台完成的。清理不会阻止读取,并且可以限制使用不超过可配置量的 I/O 吞吐量以避免影响 生产者和消费者。压缩日志段的实际过程如下所示:

日志压缩提供哪些保证

日志压缩保证以下内容:
  1. 任何保持在日志头部范围内(赶上进度)的消费者都将看到写入的每条消息;这些消息将具有顺序偏移量。主题的 min.compaction.lag.ms 可用于保证消息写入后必须经过的最短时间,然后才能对其进行压缩;也就是说,它为每条消息在(未压缩的)头部中停留的时间提供了下限。主题的 max.compaction.lag.ms 可用于保证从消息写入到消息符合压缩条件之间的最大延迟。
  2. 始终保持消息的顺序。压缩永远不会对消息重新排序,只需删除一些消息即可。
  3. 消息的偏移量永远不会更改。它是日志中位置的永久标识符。
  4. 从日志开头开始的任何消费者都将至少看到所有记录的最终状态,其顺序与写入顺序一致。此外,只要消费者在小于主题的 delete.retention.ms 设置(默认值为 24 小时)的时间段内到达日志头部,就会看到已删除记录的所有删除标记。换句话说:由于删除标记的移除与读取同时发生,如果消费者滞后超过 delete.retention.ms,则可能会错过删除标记。

日志压缩详细信息

日志压缩由日志清理器(log cleaner)处理,它是一个后台线程池,负责重新复制日志段文件,删除其键出现在日志头部的旧记录。每个清理线程的工作方式如下:
  1. 它选择日志头部与日志尾部比率最高的日志
  2. 它为日志头部中的每个键创建最后一个偏移量的简洁摘要
  3. 它从头到尾重新复制日志,删除那些在日志中稍后还会再次出现其键的记录。新的干净段会立即交换到日志中,因此所需的额外磁盘空间只有一个额外的日志段(而不是日志的完整副本)。
  4. 日志头部的摘要本质上只是一个空间紧凑的哈希表,每个条目恰好使用 24 个字节。因此,使用 8GB 的清理缓冲区,一次清理迭代可以清理大约 366GB 的日志头部(假设消息为 1k)。

配置日志清理程序

默认情况下,日志清理器处于启用状态,这会启动清理线程池。要对特定主题启用日志清理,请添加特定于日志的属性 log.cleanup.policy=compact。log.cleanup.policy 是在代理的 server.properties 文件中定义的代理配置设置;如此处所述,它会影响集群中所有没有配置覆盖的主题。可以将日志清理器配置为保留最少数量的未压缩日志“头部”。这通过设置压缩时间延迟 log.cleaner.min.compaction.lag.ms 来实现,可用于防止比最小消息年龄更新的消息被压缩。如果未设置,则除最后一个段(即当前正在写入的段)外,所有日志段都符合压缩条件;即使活动段的所有消息都早于最小压缩时间延迟,也不会压缩该段。还可以配置日志清理器以确保一个最大延迟 log.cleaner.max.compaction.lag.ms,超过该延迟后,日志的未压缩“头部”即符合压缩条件。这可用于防止生产速率较低的日志在无限长的时间内都不符合压缩条件。如果未设置,则不会压缩未超过 min.cleanable.dirty.ratio 的日志。请注意,此压缩截止时间不是硬性保证,因为它仍受日志清理器线程的可用性和实际压缩时间的影响。您需要监视 uncleanable-partitions-count、max-clean-time-secs 和 max-compact-delay-secs 指标。
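
作为示意(主题名与参数值均为假设),也可以在创建主题时直接带上压缩相关配置,例如通过 AdminClient 设置 cleanup.policy=compact 与 min.compaction.lag.ms:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicCreateDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("user-emails", 3, (short) 3)
                .configs(Map.of(
                    "cleanup.policy", "compact",
                    "min.compaction.lag.ms", "86400000"));   // 至少保留 24 小时未压缩的“头部”
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}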

此处介绍了更多清理器(cleaner)配置。

4.9 配额

Kafka 集群能够对请求强制实施配额,以控制客户端使用的代理资源。两种类型 的客户端配额可由 Kafka 代理为共享配额的每组客户端强制执行:

  1. 网络带宽配额定义字节速率阈值(自 0.9 起)
  2. 请求速率配额将 CPU 利用率阈值定义为网络和 I/O 线程的百分比(自 0.11 起)

为什么需要配额

生产者和消费者有可能产生/消费非常大量的数据或以非常高的速度生成请求 率,从而垄断经纪人资源,导致网络饱和,并且通常DOS其他客户端和经纪人本身。 拥有配额可以防止这些问题,在大型多租户群集中更为重要,在大型多租户群集中,一小部分行为不良的客户端可能会降低行为良好的客户端的用户体验。 事实上,当将 Kafka 作为服务运行时,这甚至可以根据商定的合同强制执行 API 限制。

客户组

Kafka 客户端的标识是用户主体(user principal),它代表安全集群中经过身份验证的用户。在支持未经身份验证的客户端的集群中,用户主体是由代理使用可配置的 PrincipalBuilder 选择的一组未经身份验证的用户。客户端 ID(client-id)是客户端的逻辑分组,由客户端应用程序选择一个有意义的名称。元组(用户、客户端 ID)定义了共享用户主体和客户端 ID 的安全客户端逻辑组。

配额可以应用于(用户、客户端 ID)、用户或客户端 ID 组。对于给定连接,将应用与连接匹配的最具体配额。配额组的所有连接共享为该组配置的配额。 例如,如果 (user=“test-user”, client-id=“test-client”) 的 produce 配额为 10MB/秒,则在用户 “test-user” 的所有生产者实例与 client-id “test-client” 共享。

配额配置

可以为(用户、客户端 ID)、用户和客户端 ID 组定义配额配置。可以在需要更高(甚至更低)配额的任何配额级别覆盖默认配额。 该机制类似于每个主题的日志配置覆盖。 用户和(user,client-id)配额覆盖在 /config/users 下写入 ZooKeeper,客户端 ID 配额覆盖写入 /config/clients 下。 这些覆盖由所有代理读取并立即生效。这使我们能够更改配额,而无需滚动重新启动整个群集。详情请看这里。 也可以使用相同的机制动态更新每个组的默认配额。
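
作为示意(用户名、客户端 ID 与数值均为假设;这里使用的 Admin#alterClientQuotas API 据我们理解自 Kafka 2.6 起提供),也可以通过 AdminClient 为 (user, client-id) 组合动态设置配额,而不必直接操作 ZooKeeper:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class ClientQuotaDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // 配额实体:user=user1 且 client-id=clientA
            ClientQuotaEntity entity = new ClientQuotaEntity(Map.of(
                ClientQuotaEntity.USER, "user1",
                ClientQuotaEntity.CLIENT_ID, "clientA"));
            List<ClientQuotaAlteration.Op> ops = List.of(
                new ClientQuotaAlteration.Op("producer_byte_rate", 1024.0),
                new ClientQuotaAlteration.Op("consumer_byte_rate", 2048.0),
                new ClientQuotaAlteration.Op("request_percentage", 200.0));
            admin.alterClientQuotas(List.of(new ClientQuotaAlteration(entity, ops))).all().get();
        }
    }
}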

配额配置的优先级顺序为:

  1. /config/users/<user>/clients/<client-id>
  2. /config/users/<user>/clients/<default>
  3. /config/users/<user>
  4. /config/users/<default>/clients/<client-id>
  5. /config/users/<default>/clients/<default>
  6. /config/users/<default>
  7. /config/clients/<client-id>
  8. /config/clients/<default>

网络带宽配额

网络带宽配额定义为共享配额的每组客户端的字节速率阈值。 默认情况下,每个唯一的客户端组都会收到群集配置的固定配额(以字节/秒为单位)。 此配额基于每个代理定义。每组客户端最多可以发布/提取 X 字节/秒 在客户端受到限制之前,每个代理。

请求速率配额

请求速率配额定义为客户端可以在请求处理程序 I/O 上使用的时间百分比 配额窗口中每个代理的线程和网络线程。n% 的配额表示一个线程的 n%,因此配额超出总容量 ((num.io.threads + num.network.threads) * 100)%。 每组客户端可以在配额中的所有 I/O 和网络线程中使用最多 n% 的总百分比 窗口在被限制之前。由于为 I/O 和网络线程分配的线程数通常基于 在代理主机上可用的内核数上,请求速率配额表示 CPU 的总百分比 共享配额的每组客户端都可以使用。

强制执行

默认情况下,每个唯一的客户端组都会收到群集配置的固定配额。 此配额基于每个代理定义。每个客户端都可以在限制之前利用每个代理的此配额。我们决定为每个经纪人定义这些配额比 每个客户端具有固定的集群宽带宽,因为这需要一种在所有代理之间共享客户端配额使用情况的机制。这可能比配额实施本身更难正确!

当经纪商检测到配额违规时,它如何反应?在我们的解决方案中,代理首先计算将违规客户端置于其配额以下所需的延迟量 并立即返回带有延迟的响应。如果是提取请求,响应将不包含任何数据。然后,代理将客户端的通道静音, 不再处理来自客户端的请求,直到延迟结束。收到延迟持续时间为非零的响应后,Kafka 客户端也将避免 在延迟期间向经纪人发送进一步的请求。因此,来自受限制客户端的请求实际上被阻止了。 即使对于不尊重代理延迟响应的较旧客户端实现,代理通过静音其套接字通道施加的背压 仍然可以处理行为不良客户端的限制。向受限制的通道发送进一步请求的客户端只有在延迟结束后才会收到响应。

字节速率和线程利用率在多个小窗口(例如 30 个窗口,每个窗口 1 秒)上测量,以便快速检测和纠正配额违规。通常,具有较大的测量窗口 (例如,10 个窗口,每个窗口 30 秒)会导致大量流量突发,然后是长时间的延迟,这在用户体验方面不是很好。

5. 实施

5.1 网络层

网络层是一个相当简单的 NIO 服务器,这里不再详细描述。sendfile 的实现是通过为 TransferableRecords 接口提供一个 writeTo 方法来完成的。这允许由文件支持的消息集使用更高效的 transferTo 实现,而不是进程内的缓冲写入。线程模型是单个接受器(acceptor)线程和 N 个处理器(processor)线程,每个处理器线程处理固定数量的连接。这种设计已经在其他地方经过非常彻底的测试,被发现易于实现且速度很快。该协议保持得非常简单,以便将来可以用其他语言实现客户端。

5.2 消息

消息由可变长度标头、可变长度不透明键字节数组和可变长度不透明值字节数组组成。标头的格式将在下一节中介绍。保持键和值不透明是正确的决定:目前序列化库进展很快,任何特定的选择都不太可能适合所有用途。不用说,使用 Kafka 的特定应用程序可能会强制使用特定的序列化类型作为其使用的一部分。RecordBatch 接口只是消息的迭代器,并带有用于向 NIO Channel 批量读取和写入的专用方法。

5.3 消息格式

消息(也称为记录)始终分批写入。一批消息的技术术语是记录批,记录批包含一条或多条记录。在退化的情况下,我们可以有一个包含单个记录的记录批次。 记录批次和记录有自己的标头。每种格式如下所述。

5.3.1 记录批处理

以下是记录批处理的磁盘格式。

baseOffset: int64
batchLength: int32
partitionLeaderEpoch: int32
magic: int8 (current magic value is 2)
crc: int32
attributes: int16
    bit 0~2:
        0: no compression
        1: gzip
        2: snappy
        3: lz4
        4: zstd
    bit 3: timestampType
    bit 4: isTransactional (0 means not transactional)
    bit 5: isControlBatch (0 means not a control batch)
    bit 6: hasDeleteHorizonMs (0 means baseTimestamp is not set as the delete horizon for compaction)
    bit 7~15: unused
lastOffsetDelta: int32
baseTimestamp: int64
maxTimestamp: int64
producerId: int64
producerEpoch: int16
baseSequence: int32
records: [Record]

请注意,启用压缩后,压缩的记录数据将直接在记录数计数之后序列化。

CRC 涵盖从属性到批处理末尾的数据(即 CRC 后面的所有字节)。它位于魔术字节之后,它 意味着客户端必须先分析魔术字节,然后才能决定如何解释批长度和魔术字节之间的字节。分区领导者 epoch 字段不包括在 CRC 计算中,以避免在为接收的每个批次分配此字段时需要重新计算 CRC 经纪人。CRC-32C(卡斯塔尼奥利)多项式用于计算。

压缩时:与旧消息格式不同,magic v2 及更高版本在清理日志时保留原始批次中的第一个和最后一个偏移量/序列号。这是必需的,以便能够还原 重新加载日志时生产者的状态。例如,如果我们没有保留最后一个序列号,那么在分区前导失败后,生产者可能会看到 OutOfSequence 错误。基本序列号必须 保留以进行重复检查(代理通过验证传入批次的第一个和最后一个序列号是否与来自该生产者的最后一个序列号匹配来检查传入的 Produce 请求是否存在重复项)。结果, 当清理批次中的所有记录时,日志中可能会有空批次,但仍保留批次以保留生产者的最后一个序列号。这里的一个奇怪之处在于 baseTimestamp 字段在压缩期间不会保留,因此如果压缩批中的第一条记录,该字段将更改。

如果记录批处理包含具有空有效负载或中止事务标记的记录,则压缩也可能修改 baseTimestamp。baseTimestamp 将设置为应删除这些记录的时间戳 同时设置了删除地平线属性位。

5.3.1.1 控制批次

控制批次包含称为控制记录的单个记录。不应将控制记录传递给应用程序。相反,使用者使用它们来过滤掉中止的事务消息。

控制记录的键符合以下架构:

version: int16 (current version is 0)
type: int16 (0 indicates an abort marker, 1 indicates a commit)

控制记录值的架构取决于类型。该值对客户端不透明。

5.3.2 记录

记录级标头是在 Kafka 0.11.0 中引入的。下面描述了带有标头的记录的磁盘格式。

length: varint
attributes: int8
    bit 0~7: unused
timestampDelta: varlong
offsetDelta: varint
keyLength: varint
key: byte[]
valueLen: varint
value: byte[]
Headers => [Header]
5.3.2.1 记录标题
headerKeyLength: varint
headerKey: String
headerValueLength: varint
Value: byte[]

我们使用与Protobuf相同的变体编码。有关后者的更多信息,请参见此处。记录中的标头计数 也被编码为变量。
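
下面是一个与上述编码思路对应的简化示意(并非 Kafka 内部实现的精确拷贝):先做 zigzag 变换把有符号整数映射为无符号整数,再按每字节 7 位、最高位作为继续标志的方式写出:

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class VarintDemo {
    // zigzag + varint 编码一个 32 位有符号整数
    public static byte[] encode(int value) {
        int v = (value << 1) ^ (value >> 31);   // zigzag:绝对值小的负数也映射为小的无符号数
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & 0xFFFFFF80) != 0) {
            out.write((v & 0x7F) | 0x80);       // 低 7 位 + 继续标志位
            v >>>= 7;
        }
        out.write(v & 0x7F);
        return out.toByteArray();
    }

    // 从缓冲区解码一个 zigzag varint
    public static int decode(ByteBuffer buf) {
        int value = 0;
        int shift = 0;
        int b;
        do {
            b = buf.get() & 0xFF;
            value |= (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return (value >>> 1) ^ -(value & 1);    // 逆 zigzag
    }

    public static void main(String[] args) {
        byte[] bytes = encode(-3);
        System.out.println(decode(ByteBuffer.wrap(bytes)));   // 打印 -3
    }
}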

5.3.3 旧消息格式

在 Kafka 0.11 之前,消息被传输并存储在消息集中。在消息集中,每条消息都有自己的元数据。请注意,尽管消息集表示为数组, 它们前面不像协议中的其他数组元素那样具有 Int32 数组大小。

消息集:
MessageSet (Version: 0) => [offset message_size message]
offset => INT64
message_size => INT32
message => crc magic_byte attributes key value
    crc => INT32
    magic_byte => INT8
    attributes => INT8
        bit 0~2:
            0: no compression
            1: gzip
            2: snappy
        bit 3~7: unused
    key => BYTES
    value => BYTES
MessageSet (Version: 1) => [offset message_size message]
offset => INT64
message_size => INT32
message => crc magic_byte attributes timestamp key value
    crc => INT32
    magic_byte => INT8
    attributes => INT8
        bit 0~2:
            0: no compression
            1: gzip
            2: snappy
            3: lz4
        bit 3: timestampType
            0: create time
            1: log append time
        bit 4~7: unused
    timestamp => INT64
    key => BYTES
    value => BYTES

在 Kafka 0.10 之前的版本中,唯一支持的消息格式版本(在魔术值中指示)为 0。消息格式版本 1 在版本 0.10 中引入了时间戳支持。

  • 与上面的版本 2 类似,属性的最低位表示压缩类型。
  • 在版本 1 中,生产者应始终将时间戳类型位设置为 0。如果主题配置为使用日志追加时间, (通过代理级别配置 log.message.timestamp.type = LogAppendTime 或主题级别配置 message.timestamp.type = LogAppendTime), 代理将覆盖消息集中的时间戳类型和时间戳。
  • 属性的最高位必须设置为 0。

在消息格式版本 0 和 1 中,Kafka 支持递归消息以启用压缩。在这种情况下,必须设置消息的属性 以指示其中一种压缩类型,值字段将包含使用该类型压缩的消息集。我们经常参考 将嵌套消息作为“内部消息”,将包装消息作为“外部消息”。请注意,键应为空 对于外部消息,其偏移量将是最后一个内部消息的偏移量。

当接收递归版本 0 消息时,代理会解压缩它们,并且每个内部消息单独分配一个偏移量。 在版本 1 中,为了避免服务器端重新压缩,将仅为包装器消息分配偏移量。内在讯息 将具有相对偏移量。绝对偏移量可以使用外部消息的偏移量来计算,该偏移量对应于 到分配给最后一个内部消息的偏移量。

crc 字段包含后续消息字节(即从魔术字节到值)的 CRC32(而不是 CRC-32C)。

5.4 日志

具有两个分区的名为“my-topic”的主题的日志由两个目录(即 my-topic-0 和 my-topic-1)组成,其中填充了包含该主题消息的数据文件。日志文件的格式是一系列“日志条目”;每个日志条目是一个 4 字节整数 N(存储消息长度),后跟 N 个消息字节。每条消息都由一个 64 位整数偏移量唯一标识,该偏移量给出此消息开头在该分区上发送到该主题的所有消息流中的字节位置。下面给出了每条消息的磁盘格式。每个日志文件都以其包含的第一条消息的偏移量命名。因此,创建的第一个文件将是 00000000000000000000.log,每个后续文件的名称大约比前一个文件大 S,其中 S 是配置中给出的最大日志文件大小。

记录的确切二进制格式作为标准接口进行版本控制和维护,因此记录批处理可以在创建者、代理和客户端之间传输,而无需在需要时重新复制或转换。上一节包含有关记录的磁盘格式的详细信息。

使用消息偏移量作为消息 ID 是不寻常的。我们最初的想法是使用生产者生成的 GUID,并在每个代理上维护从 GUID 到偏移量的映射。但是,由于使用者必须维护每个服务器的 ID,因此 GUID 的全局唯一性不提供任何值。此外,维护从随机 id 到偏移量的映射的复杂性需要必须与磁盘同步的重权索引结构,本质上需要一个完整的持久随机访问数据结构。因此,为了简化查找结构,我们决定使用一个简单的每分区原子计数器,该计数器可以与分区 id 和节点 id 结合使用以唯一标识消息;这使得查找结构更简单,尽管每个使用者请求仍可能有多个查找。然而,一旦我们确定了计数器,直接使用偏移量的跳转似乎是很自然的——毕竟两者都是分区特有的单调递增整数。由于偏移量对消费者 API 是隐藏的,因此此决定最终是一个实现细节,我们采用了更有效的方法。

日志允许串行追加,这些附加始终转到最后一个文件。当此文件达到可配置大小(例如 1GB)时,该文件将滚动到新文件。日志采用两个配置参数:M,提供在强制操作系统将文件刷新到磁盘之前要写入的消息数,以及 S,提供强制刷新的秒数。这提供了持久性保证,在系统崩溃时最多丢失 M 条消息或 S 秒的数据。

读取是通过提供消息的 64 位逻辑偏移量和 S 字节最大块大小来完成的。这将在 S 字节缓冲区中包含的消息上返回迭代器。S 旨在大于任何单个消息,但如果出现异常大的消息,可以多次重试读取,每次将缓冲区大小加倍,直到成功读取消息。可以指定最大消息和缓冲区大小,以使服务器拒绝大于某个大小的消息,并按客户端获取完整消息所需的最大值向客户端提供绑定。读取缓冲区很可能以部分消息结尾,这很容易通过大小分隔来检测。

从偏移量读取的实际过程需要首先找到存储数据的日志段文件,从全局偏移量值计算特定于文件的偏移量,然后从该文件偏移量读取。搜索是针对为每个文件维护的内存中范围的简单二叉搜索变体完成的。

日志提供了获取最新写入的消息的功能,以允许客户端从“立即”开始订阅。如果使用者未能在 SLA 指定的天数内使用其数据,这也很有用。在这种情况下,当客户端尝试使用不存在的偏移量时,会为其提供 OutOfRangeException,并且可以根据用例自行重置或失败。

以下是发送给使用者的结果的格式。

MessageSetSend (fetch result)

total length     : 4 bytes
error code       : 2 bytes
message 1        : x bytes
...
message n        : x bytes
MultiMessageSetSend (multiFetch result)

total length       : 4 bytes
error code         : 2 bytes
messageSetSend 1
...
messageSetSend n

删除

数据一次删除一个日志段。日志管理器应用两个指标来识别 符合删除条件:时间和大小。对于基于时间的策略,将考虑记录时间戳,其中 段文件中的最大时间戳(记录顺序不相关)定义 整个细分市场。默认情况下,基于大小的保留处于禁用状态。启用后,日志管理器会不断删除 最旧的段文件,直到分区的总体大小再次在配置的限制内。如果两者兼而有之 同时启用策略,由于任一策略而符合删除条件的分段将是 删除。为了避免锁定读取,同时仍然允许修改段列表的删除,我们使用写入时复制 样式段列表实现,提供一致的视图以允许二叉搜索继续 删除过程中日志段的不可变静态快照视图。

保证

日志提供了一个配置参数 M,该参数控制在强制刷新到磁盘之前写入的最大消息数。启动时,将运行日志恢复过程,该过程循环访问最新日志段中的所有消息,并验证每个消息条目是否有效。如果消息条目的大小和偏移量之和小于文件的长度,并且消息有效负载的 CRC32 与消息一起存储的 CRC 匹配,则消息条目有效。如果检测到损坏,日志将被截断到最后一个有效偏移量。

请注意,必须处理两种类型的损坏:截断(由于崩溃而丢失未写入的块)和损坏(将无意义的块添加到文件中)。这样做的原因是,通常操作系统不保证文件索引节点和实际块数据之间的写入顺序,因此除了丢失写入数据之外,如果索引节点更新为新大小,文件可能会获得无意义的数据,但在写入包含该数据的块之前发生崩溃。CRC 检测到这种极端情况,并防止它损坏日志(当然,未写入的消息会丢失)。

5.5 分布

消费者偏移跟踪

Kafka 消费者跟踪它在每个分区中消费到的最大偏移量,并能够提交偏移量,以便在重新启动时从这些偏移量恢复。Kafka 提供了一个选项:将给定消费者组的所有偏移量存储在一个指定的代理(针对该组)中,这个代理称为组协调器。也就是说,该消费者组中的任何消费者实例都应将其偏移量提交和提取请求发送到该组协调器(代理)。消费者组根据组名称被分配给协调器。消费者可以通过向任意 Kafka 代理发出 FindCoordinatorRequest 并读取包含协调器详细信息的 FindCoordinatorResponse 来查找其协调器。然后,消费者就可以向协调器代理提交或提取偏移量。如果协调器发生移动,消费者将需要重新发现协调器。偏移量提交可以由消费者实例自动或手动完成。

当组协调器收到 OffsetCommitRequest 时,它会将该请求追加到名为 __consumer_offsets 的特殊压缩 Kafka 主题。 只有在偏移量主题的所有副本收到偏移量后,代理才会向使用者发送成功的偏移量提交响应。 如果偏移量无法在可配置的超时内复制,则偏移提交将失败,使用者可以在回退后重试提交。 代理会定期压缩偏移量主题,因为它只需要维护每个分区的最新偏移提交。 协调器还将偏移量缓存在内存表中,以便快速提供偏移量提取。

当协调器收到偏移量提取请求时,它只是从偏移量缓存中返回上次提交的偏移量。 如果协调器刚刚启动,或者它刚刚成为一组新的消费者组的协调器(通过成为偏移量主题分区的领导者), 它可能需要将偏移量主题分区加载到缓存中。在这种情况下,偏移量提取将失败,并显示 CoordinatorLoadInProgressException 和使用者可以在退出后重试 OffsetFetchRequest。
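
作为示意(组名、主题名为假设值),消费者可以关闭自动提交,在处理完一批消息后再手动 commitSync();偏移量随后会被追加到 __consumer_offsets 主题并由组协调器缓存:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);   // 手动控制偏移量提交
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("billing-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);            // 先处理……
                }
                consumer.commitSync();          // ……再提交,对应“至少一次”语义
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.key() + " -> " + record.value());
    }
}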

ZooKeeper 目录

下面给出了用于在消费者和代理之间进行协调的 ZooKeeper 结构和算法。

表示法

当路径中的元素表示为 [xyz] 时,意味着 xyz 的值不是固定的,实际上 xyz 的每个可能取值都对应一个 ZooKeeper znode。例如,/topics/[topic] 表示一个名为 /topics 的目录,其中包含每个主题名称对应的子目录。也会给出数字范围,例如 [0...5],用于表示子目录 0、1、2、3、4。箭头 -> 用于表示 znode 的内容。例如,/hello -> world 表示包含值“world”的 znode /hello。

代理节点注册表

/brokers/ids/[0...N] --> {"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...} (ephemeral node)

这是所有当前代理节点的列表,每个代理节点都提供一个唯一的逻辑代理标识,用于向使用者标识它(必须作为其配置的一部分提供)。启动时,代理节点通过在 /brokers/ids 下创建具有逻辑代理标识的 znode 来注册自身。逻辑代理标识的目的是允许将代理移动到不同的物理机,而不会影响使用者。尝试注册已在使用的代理标识(例如,因为两台服务器配置了相同的代理标识)会导致错误。

由于代理使用临时 znode 在 ZooKeeper 中注册自身,因此此注册是动态的,如果代理关闭或死亡(从而通知消费者它不再可用),此注册将消失。

代理主题注册表

/brokers/topics/[topic]/partitions/[0...N]/state --> {"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]} (ephemeral node)

每个代理在其维护的主题下注册自己,并存储该主题的分区数。

群集标识

集群 ID 是分配给 Kafka 集群的唯一且不可变的标识符。集群 ID 最多可以包含 22 个字符,允许的字符由正则表达式 [a-zA-Z0-9_\-]+ 定义,该表达式对应于 URL 安全的 Base64 变体使用的字符,没有填充。从概念上讲,它是在首次启动群集时自动生成的。

在实现方面,它是在首次成功启动 0.10.1 或更高版本的代理时生成的。代理在启动期间尝试从 /cluster/id znode 获取集群 ID。如果该 znode 不存在,代理将生成一个新的集群 ID,并使用此集群 ID 创建该 znode。

代理节点注册

代理节点基本上是独立的,因此它们只发布有关它们拥有的信息。当代理加入时,它会在代理节点注册表目录下注册自身,并写入有关其主机名和端口的信息。代理还会在代理主题注册表中注册现有主题及其逻辑分区的列表。在代理上创建新主题时,会动态注册这些主题。

6. 运营

以下是一些基于LinkedIn使用情况和经验将 Kafka 实际作为生产系统运行的信息。请向我们发送您知道的任何其他提示。

6.1 基本卡夫卡操作

本节将回顾您将在 Kafka 集群上最常执行的操作。本节中介绍的所有工具都可以在 Kafka 发行版的 bin/ 目录下找到,如果在不带参数的情况下运行,每个工具都会打印所有可能的命令行选项的详细信息。

添加和删除主题

您可以选择手动添加主题,也可以在首次将数据发布到不存在的主题时自动创建主题。如果主题是自动创建的,则可能需要调整用于自动创建主题的默认主题配置

使用主题工具添加和修改主题:

  > bin/kafka-topics.sh --bootstrap-server broker_host:port --create --topic my_topic_name         --partitions 20 --replication-factor 3 --config x=y
复制因子控制每条写入的消息将被复制到多少台服务器。如果复制因子为 3,则最多可以有 2 台服务器发生故障而不至于无法访问数据。建议使用 2 或 3 的复制因子,以便可以在不中断数据消费的情况下透明地重启机器。

分区计数控制主题将被分片到的日志数量。分区计数有几个影响。首先,每个分区必须完全适合单个服务器。因此,如果您有 20 个分区,则完整的数据集(以及读取和写入负载)将由不超过 20 台服务器(不包括副本)处理。最后,分区计数会影响使用者的最大并行度。概念部分对此进行了更详细的讨论。

每个分片分区日志都放置在 Kafka 日志目录下自己的文件夹中。此类文件夹的名称由主题名称(附加短划线 (-) 和分区 ID 组成。由于典型的文件夹名称不能超过 255 个字符,因此主题名称的长度会有限制。我们假设分区数永远不会超过 100,000。因此,主题名称不能超过 249 个字符。这会在文件夹名称中为短划线和可能 5 位数长的分区 ID 留出足够的空间。

在命令行上添加的配置会覆盖服务器的默认设置,例如应保留数据的时间长度。此处记录了每主题的完整配置集。

修改主题

您可以使用同一主题工具更改主题的配置或分区。

要添加分区,您可以这样做

  > bin/kafka-topics.sh --bootstrap-server broker_host:port --alter --topic my_topic_name         --partitions 40
请注意,分区的一个用例是对数据进行语义分区,而添加分区不会更改现有数据的分区方式,因此如果消费者依赖该分区方式,这可能会对其造成干扰。也就是说,如果数据按 hash(key) % number_of_partitions 进行分区,那么添加分区可能会打乱这种分区方式,但 Kafka 不会尝试以任何方式自动重新分配数据。

要添加配置:

  > bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --add-config x=y
要删除配置:
  > bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --delete-config x
最后,删除一个主题:
  > bin/kafka-topics.sh --bootstrap-server broker_host:port --delete --topic my_topic_name

Kafka 目前不支持减少主题的分区数。

在此处找到有关更改主题复制因子的说明。

正常关机

Kafka 集群将自动检测任何代理关闭或故障,并为该机器上的分区选择新的领导者。无论服务器出现故障还是出于维护或配置更改而故意关闭,都会发生这种情况。对于后一种情况,Kafka 支持一种更优雅的机制来停止服务器,而不仅仅是杀死它。 当服务器正常停止时,它将利用两个优化:
  1. 它会将其所有日志同步到磁盘,以避免在重新启动时需要进行任何日志恢复(即验证日志尾部所有消息的校验和)。日志恢复需要时间,因此这会加快有意重新启动的速度。
  2. 它会在关闭之前将服务器作为其领导者的任何分区迁移到其他副本。这将使领导转移更快,并将每个分区不可用的时间减少到几毫秒。
每当服务器停止时,同步日志将自动发生,而不是硬终止,但受控领导迁移需要使用特殊设置: 请注意,仅当代理上托管的所有分区都具有副本(即复制因子大于 1 这些副本中至少有一个是活的)。这通常是您想要的,因为关闭最后一个副本会使该主题分区不可用。
      controlled.shutdown.enable=true

平衡领导力

每当代理停止或崩溃时,该代理分区的领导权都会转移到其他副本。当代理重新启动时,它将只是其所有分区的追随者,这意味着它不会用于客户端读取和写入。

为了避免这种不平衡,Kafka 有一个首选副本的概念。如果分区的副本列表为 1,5,9,则节点 1 优先作为节点 5 或 9 的领导者,因为它在副本列表中位于前面。默认情况下,Kafka 集群将尝试将领导恢复到首选副本。此行为配置如下:

      auto.leader.rebalance.enable=true
您也可以将其设置为 false,但随后需要通过运行以下命令手动恢复对已恢复复制副本的领导:
  > bin/kafka-leader-election.sh --bootstrap-server broker_host:port --election-type preferred --all-topic-partitions

跨机架平衡副本

机架感知功能将同一分区的副本分布在不同的机架上。这扩展了 Kafka 为代理故障提供的保证,以涵盖机架故障,从而限制了机架上所有代理同时发生故障时数据丢失的风险。该功能还可以应用于其他代理分组,例如 EC2 中的可用区。 您可以通过向代理配置添加属性来指定代理属于特定机架:创建修改主题或重新分发副本时,将遵守机架约束,确保副本跨越尽可能多的机架(分区将跨越最小(#racks,复制因子)不同的机架)。 用于将副本分配给代理的算法可确保每个代理的领导者数量保持不变,无论代理如何在机架上分布。这确保了平衡的吞吐量。 但是,如果为机架分配了不同数量的代理,则副本的分配将不均匀。具有较少代理的机架将获得更多副本,这意味着它们将使用更多存储并将更多资源投入到复制中。因此,明智的做法是在每个机架上配置相同数量的代理。
  broker.rack=my-rack-id

在群集和异地复制之间镜像数据

Kafka 管理员可以定义跨越各个 Kafka 集群、数据中心或地理区域边界的数据流。有关详细信息,请参阅异地复制部分。

检查消费者地位

有时,查看消费者的位置很有用。我们有一个工具,可以显示消费者组中所有消费者的位置,以及他们落后于日志末尾的距离。若要在名为 my-group 的使用者组上运行此工具,请使用名为 my-topic 的主题,如下所示:
  > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group

  TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
  my-topic                       0          2               4               2          consumer-1-029af89c-873c-4751-a720-cefd41a669d6   /127.0.0.1                     consumer-1
  my-topic                       1          2               3               1          consumer-1-029af89c-873c-4751-a720-cefd41a669d6   /127.0.0.1                     consumer-1
  my-topic                       2          2               3               1          consumer-2-42c1abd4-e3b2-425d-a8bb-e1ea49b29bb2   /127.0.0.1                     consumer-2

管理使用者组

使用ConsumerGroupCommand工具,我们可以列出,描述或删除消费者组。可以手动删除使用者组,也可以在该组的上次提交的偏移量到期时自动删除该组。仅当组没有任何活动成员时,手动删除才有效。 例如,要列出所有主题中的所有使用者组: 要查看偏移量,如前所述,我们像这样“描述”消费者组: 还有许多其他“描述”选项可用于提供有关使用者组的更多详细信息:
  > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

  test-consumer-group
  > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group

  TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                    HOST            CLIENT-ID
  topic3          0          241019          395308          154289          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
  topic2          1          520678          803288          282610          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
  topic3          1          241018          398817          157799          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
  topic1          0          854144          855809          1665            consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1
  topic2          0          460537          803290          342753          consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1
  topic3          2          243655          398812          155157          consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4
  • --members:此选项提供使用者组中所有活动成员的列表。
          > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members
    
          CONSUMER-ID                                    HOST            CLIENT-ID       #PARTITIONS
          consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1       2
          consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4       1
          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2       3
          consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee /127.0.0.1      consumer3       0
  • --members --verbose:除了上面“--members”选项报告的信息之外,此选项还提供分配给每个成员的分区。
          > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members --verbose
    
          CONSUMER-ID                                    HOST            CLIENT-ID       #PARTITIONS     ASSIGNMENT
          consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1       2               topic1(0), topic2(0)
          consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4       1               topic3(2)
          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2       3               topic2(1), topic3(0,1)
          consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee /127.0.0.1      consumer3       0               -
  • --offsets:这是默认的描述选项,并提供与“--describe”选项相同的输出。
  • --state:此选项提供有用的组级别信息。
          > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --state
    
          COORDINATOR (ID)          ASSIGNMENT-STRATEGY       STATE                #MEMBERS
          localhost:9092 (0)        range                     Stable               4
要手动删除一个或多个消费者组,可以使用“--delete”选项:
  > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-group --group my-other-group

  Deletion of requested consumer groups ('my-group', 'my-other-group') was successful.

要重置消费者组的偏移量,可以使用“--reset-offsets”选项。此选项一次只支持一个消费者组。它需要定义以下范围之一:--all-topics 或 --topic。除非使用“--from-file”方案,否则必须选择一个范围。此外,请首先确保消费者实例处于非活动状态。有关更多详细信息,请参阅 KIP-122。

它有 3 个执行选项:

  • (默认值):显示将要重置的偏移量。
  • --execute:执行 --reset-offsets 过程。
  • --export:将结果导出为 CSV 格式。

--reset-offsets还有以下方案可供选择(必须至少选择一个方案):

  • --to-datetime <字符串:日期时间>:将偏移量重置为指定日期时间处的偏移量。格式:'YYYY-MM-DDTHH:mm:SS.sss'
  • --to-earliest:将偏移量重置为最早偏移量。
  • --to-latest:将偏移量重置为最新偏移量。
  • --shift-by <长整型:偏移数 n>:将偏移量相对当前偏移量移动“n”,其中“n”可以是正数或负数。
  • --from-file:将偏移量重置为 CSV 文件中定义的值。
  • --to-current:将偏移量重置为当前偏移量。
  • --by-duration <字符串:持续时间>:将偏移量重置为当前时间戳减去指定持续时间处的偏移量。格式:'PnDTnHnMnS'
  • --to-offset:将偏移量重置为指定的偏移量。
请注意,超出范围的偏移量将被调整为可用的末端偏移量。例如,如果末端偏移量为 10,而请求将偏移量移到 15,则实际上会选择 10 这个偏移量。

例如,要将使用者组的偏移量重置为最新偏移量,请执行以下操作:

  > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group consumergroup1 --topic topic1 --to-latest

  TOPIC                          PARTITION  NEW-OFFSET
  topic1                         0          0

如果您使用的是旧的高级消费者,并将组元数据存储在 ZooKeeper 中(即 offsets.storage=zookeeper),请传递 --zookeeper 而不是 --bootstrap-server:

  > bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list

扩展集群

将服务器添加到 Kafka 集群非常简单,只需为它们分配一个唯一的代理 ID 并在新服务器上启动 Kafka。但是,这些新服务器不会自动分配任何数据分区,因此除非将分区移动到它们,否则在创建新主题之前,它们不会执行任何工作。因此,通常当您将计算机添加到群集时,您需要将一些现有数据迁移到这些计算机。

迁移数据的过程是手动启动的,但完全自动化。在幕后发生的事情是,Kafka 将添加新服务器作为它正在迁移的分区的追随者,并允许它完全复制该分区中的现有数据。当新服务器完全复制了此分区的内容并加入同步副本时,其中一个现有副本将删除其分区的数据。

分区重新分配工具可用于跨代理移动分区。理想的分区分布将确保所有代理的数据负载和分区大小均匀。分区重新分配工具无法自动研究 Kafka 集群中的数据分布并移动分区以实现均匀的负载分布。因此,管理员必须弄清楚应该移动哪些主题或分区。

分区重新分配工具可以在 3 种互斥模式下运行:

  • --generate:在此模式下,给定主题列表和代理列表,该工具将生成候选重新分配,以将指定主题的所有分区移动到新代理。此选项仅提供一种在给定主题和目标代理列表的情况下生成分区重新分配计划的便捷方法。
  • --execute:在此模式下,该工具根据用户提供的重新分配计划启动分区的重新分配。(使用 --reassignment-json-file 选项)。这可以是由管理员手动制定的自定义重新分配计划,也可以是使用 --generate 选项提供的
  • --verify:在此模式下,该工具验证上次 --execute 期间列出的所有分区的重新分配状态。状态可以是“成功完成”、“失败”或“正在进行”
自动将数据迁移到新计算机
分区重新分配工具可用于将某些主题从当前代理集移动到新添加的代理。这在扩展现有集群时通常很有用,因为将整个主题移动到新的代理集比一次移动一个分区更容易。用于执行此操作时,用户应提供应移动到新代理集的主题列表和新代理的目标列表。然后,该工具将给定主题列表的所有分区均匀分布到新的代理集上。在此移动过程中,主题的复制因子保持不变。实际上,输入主题列表的所有分区的副本都从旧的代理集移动到新添加的代理集。

例如,以下示例将主题 foo1,foo2 的所有分区移动到新的代理集 5,6。在此步骤结束时,主题 foo1 和 foo2 的所有分区将存在于代理 5,6 上。

由于该工具接受输入的主题列表作为 json 文件,因此您首先需要确定要移动的主题并创建 json 文件,如下所示:

  > cat topics-to-move.json
  {"topics": [{"topic": "foo1"},
              {"topic": "foo2"}],
  "version":1
  }
json 文件准备就绪后,使用分区重新分配工具生成候选分配:
  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
  Current partition replica assignment

  {"version":1,
  "partitions":[{"topic":"foo1","partition":0,"replicas":[2,1]},
                {"topic":"foo1","partition":1,"replicas":[1,3]},
                {"topic":"foo1","partition":2,"replicas":[3,4]},
                {"topic":"foo2","partition":0,"replicas":[4,2]},
                {"topic":"foo2","partition":1,"replicas":[2,1]},
                {"topic":"foo2","partition":2,"replicas":[1,3]}]
  }

  Proposed partition reassignment configuration

  {"version":1,
  "partitions":[{"topic":"foo1","partition":0,"replicas":[6,5]},
                {"topic":"foo1","partition":1,"replicas":[5,6]},
                {"topic":"foo1","partition":2,"replicas":[6,5]},
                {"topic":"foo2","partition":0,"replicas":[5,6]},
                {"topic":"foo2","partition":1,"replicas":[6,5]},
                {"topic":"foo2","partition":2,"replicas":[5,6]}]
  }

该工具生成一个候选赋值,该赋值会将主题 foo1,foo2 中的所有分区移动到代理 5,6。但是请注意,此时分区移动尚未开始,它只是告诉您当前分配和建议的新分配。应保存当前分配,以防您要回滚到它。新分配应保存在 json 文件(例如 expand-cluster-reassignment .json)中,以便使用 --execute 选项输入到工具中,如下所示:

  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --execute
  Current partition replica assignment

  {"version":1,
  "partitions":[{"topic":"foo1","partition":0,"replicas":[2,1]},
                {"topic":"foo1","partition":1,"replicas":[1,3]},
                {"topic":"foo1","partition":2,"replicas":[3,4]},
                {"topic":"foo2","partition":0,"replicas":[4,2]},
                {"topic":"foo2","partition":1,"replicas":[2,1]},
                {"topic":"foo2","partition":2,"replicas":[1,3]}]
  }

  Save this to use as the --reassignment-json-file option during rollback
  Successfully started partition reassignments for foo1-0,foo1-1,foo1-2,foo2-0,foo2-1,foo2-2
  

最后,--verify 选项可以与该工具一起使用,以检查分区重新分配的状态。请注意,相同的 expand-cluster-reassignment.json(与 --execute 选项一起使用)应与 --verify 选项一起使用:

  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --verify
  Status of partition reassignment:
  Reassignment of partition [foo1,0] is completed
  Reassignment of partition [foo1,1] is still in progress
  Reassignment of partition [foo1,2] is still in progress
  Reassignment of partition [foo2,0] is completed
  Reassignment of partition [foo2,1] is completed
  Reassignment of partition [foo2,2] is completed
自定义分区分配和迁移
分区重新分配工具还可用于有选择地将分区的副本移动到一组特定的代理。以这种方式使用时,假设用户知道重新分配计划,并且不需要该工具生成候选重新分配,从而有效地跳过 --generate 步骤并直接移动到 --execute 步骤

例如,以下示例将主题 foo0 的分区 1 移动到代理 5,6,将主题 foo1 的分区 2 移动到代理 2,3:

第一步是在 json 文件中手动创建自定义重新分配计划:

  > cat custom-reassignment.json
  {"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
然后,使用带有 --execute 选项的 json 文件启动重新分配过程:
  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --execute
  Current partition replica assignment

  {"version":1,
  "partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]},
                {"topic":"foo2","partition":1,"replicas":[3,4]}]
  }

  Save this to use as the --reassignment-json-file option during rollback
  Successfully started partition reassignments for foo1-0,foo2-1
  

--verify 选项可以与该工具一起使用,以检查分区重新分配的状态。请注意,相同的 custom-reassignment.json(与 --execute 选项一起使用)应与 --verify 选项一起使用:

  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --verify
  Status of partition reassignment:
  Reassignment of partition [foo1,0] is completed
  Reassignment of partition [foo2,1] is completed

停用代理

分区重新分配工具尚不能为停用代理自动生成重新分配计划。因此,管理员必须提出一个重新分配计划,将要停用的代理上托管的所有分区的副本移动到其余代理。这可能相对繁琐,因为重新分配需要确保所有副本不会从已停用的代理移动到仅一个其他代理。为了使此过程毫不费力,我们计划在未来为退役经纪人添加工具支持。

增加复制因子

增加现有分区的复制因子很容易。只需在自定义重新分配 json 文件中指定额外的副本,并将其与 --execute 选项一起使用即可增加指定分区的复制因子。

例如,以下示例将主题 foo 的分区 0 的复制因子从 1 增加到 3。在增加复制因子之前,代理 5 上存在分区的唯一副本。作为增加复制因子的一部分,我们将在代理 6 和 7 上添加更多副本。

第一步是在 json 文件中手动创建自定义重新分配计划:

  > cat increase-replication-factor.json
  {"version":1,
  "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
然后,使用带有 --execute 选项的 json 文件启动重新分配过程:
  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute
  Current partition replica assignment

  {"version":1,
  "partitions":[{"topic":"foo","partition":0,"replicas":[5]}]}

  Save this to use as the --reassignment-json-file option during rollback
  Successfully started partition reassignment for foo-0

--verify 选项可以与该工具一起使用,以检查分区重新分配的状态。请注意,相同的增加复制因子.json(与 --execute 选项一起使用)应与 --verify 选项一起使用:

  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --verify
  Status of partition reassignment:
  Reassignment of partition [foo,0] is completed
您还可以使用 kafka-topics 工具验证复制因子的增加:
  > bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic foo --describe
  Topic:foo	PartitionCount:1	ReplicationFactor:3	Configs:
    Topic: foo	Partition: 0	Leader: 5	Replicas: 5,6,7	Isr: 5,6,7

在数据迁移期间限制带宽使用

Kafka 允许您对复制流量应用限流,对用于在机器之间移动副本的带宽设置上限。这在重新平衡集群、引导新代理或添加、删除代理时非常有用,因为它限制了这些数据密集型操作对用户的影响。可以通过两个接口来启用限流。最简单、最安全的方法是在调用 kafka-reassign-partitions.sh 时应用限流,但也可以使用 kafka-configs.sh 直接查看和修改限流值。例如,如果您使用以下命令执行重新平衡,它将以不超过 50MB/s 的速度移动分区。执行此脚本时,您将看到限流生效:
$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000
  The inter-broker throttle limit was set to 50000000 B/s
  Successfully started partition reassignment for foo1-0

如果您希望在重新平衡期间更改限流值,例如提高吞吐量以使其更快完成,可以通过重新运行带有 --additional 选项的 execute 命令并传入相同的 reassignment-json-file 来实现:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --additional --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
  The inter-broker throttle limit was set to 700000000 B/s

重新平衡完成后,管理员可以使用 --verify 选项检查重新平衡的状态。如果重新平衡已完成,限流将通过 --verify 命令被移除。重要的是,重新平衡完成后,管理员要通过运行带 --verify 选项的命令及时移除限流;否则,可能会导致常规的复制流量受到限流。

当执行 --verify 选项并且重新分配已完成时,脚本将确认限流已被移除:

  > bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --verify --reassignment-json-file bigger-cluster.json
  Status of partition reassignment:
  Reassignment of partition [my-topic,1] is completed
  Reassignment of partition [my-topic,0] is completed

  Clearing broker-level throttles on brokers 1,2,3
  Clearing topic-level throttles on topic my-topic

管理员还可以使用 kafka-configs.sh 验证已分配的配置。有两对配置用于管理限流过程。第一对是限流值本身,这是在代理级别使用动态属性配置的:

    leader.replication.throttled.rate
    follower.replication.throttled.rate

然后是受限制副本的枚举集的配置对:

    leader.replication.throttled.replicas
    follower.replication.throttled.replicas

这是按主题配置的。

所有四个配置值都由 kafka-reassign-partitions.sh 自动分配(如下所述)。

要查看限制配置,请执行以下操作:

  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers
  Configs for brokers '2' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
  Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000

这显示了应用于复制协议的领导者端和从属端的限制。默认情况下,双方 分配相同的受限制吞吐量值。

要查看受限制的副本列表,请执行以下操作:

  > bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type topics
  Configs for topic 'my-topic' are leader.replication.throttled.replicas=1:102,0:101,
      follower.replication.throttled.replicas=1:101,0:102

在这里,我们看到领导者限流应用于代理 102 上的分区 1 和代理 101 上的分区 0。同样,追随者限流应用于代理 101 上的分区 1 和代理 102 上的分区 0。

默认情况下,kafka-reassign-partitions.sh 会将领导者限流应用于重新平衡之前已存在的所有副本,其中任何一个都可能是领导者;并将追随者限流应用于所有移动目的地。因此,如果某个分区的副本位于代理 101、102 上,并被重新分配到 102、103,则该分区的领导者限流将应用于 101、102,而追随者限流将仅应用于 103。

如果需要,您还可以使用 kafka-configs.sh 的 --alter 开关手动更改限流配置。

安全使用受限制的复制

使用限流复制时应格外小心。特别是:

(1) 移除限流:

重新分配完成后,应及时移除限流(通过运行 kafka-reassign-partitions.sh --verify)。

(2) 确保进度:

如果限流值与传入的写入速率相比设置得太低,复制可能无法取得进展。这种情况发生在:

max(BytesInPerSec) > throttle

其中,BytesInPerSec 是监控生产者对每个代理的写入吞吐量的指标。

管理员可以使用以下指标在重新平衡期间监控复制是否取得进展:

kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)

在复制期间,滞后应不断减少。如果指标没有减少,管理员应 增加 如上所述的限制吞吐量。

设置配额

配额覆盖和默认值可以在(用户、客户端 ID)、用户或客户端 ID 级别配置,如此所述。 默认情况下,客户端会收到无限制的配额。 可以为每个(用户、客户端 ID)、用户或客户端 ID 组设置自定义配额。

为 (用户 = 用户 1, 客户端 id=客户端 A) 配置自定义配额:

  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
  Updated config for entity: user-principal 'user1', client-id 'clientA'.
为用户=用户 1 配置自定义配额:为客户端 id=clientA 配置自定义配额:可以通过指定 --entity-default 选项而不是 --entity-name 为每个(用户、客户端 ID)、用户或客户端 ID 组设置默认配额。
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
  Updated config for entity: user-principal 'user1'.
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
  Updated config for entity: client-id 'clientA'.

为用户 = 用户 A 配置默认客户端 ID 配额:

  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
  Updated config for entity: user-principal 'user1', default client-id.
为用户配置默认配额:为客户端 ID 配置默认配额:下面介绍如何描述给定(用户、客户端 ID)的配额: 描述给定用户的配额: 描述给定客户端 ID 的配额:如果未指定实体名称,则描述指定类型的所有实体。例如,描述所有用户:同样适用于(用户、客户端):
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
  Updated config for entity: default user-principal.
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
  Updated config for entity: default client-id.
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
  Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1
  Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type clients --entity-name clientA
  Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users
  Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
  Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
  > bin/kafka-configs.sh  --bootstrap-server localhost:9092 --describe --entity-type users --entity-type clients
  Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
  Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200

6.2 数据中心

某些部署需要管理跨多个数据中心的数据管道。为此,我们建议的方法是在每个数据中心部署一个本地 Kafka 群集,每个数据中心中的应用程序实例仅与其本地群集交互,并在群集之间镜像数据(有关如何执行此操作,请参阅异地复制文档)。

此部署模式允许数据中心充当独立的实体,并允许我们集中管理和调整数据中心间复制。这允许每个设施独立运行,即使数据中心间链接不可用:发生这种情况时,镜像会落后,直到链接恢复,此时它会赶上。

对于需要所有数据的全局视图的应用程序,可以使用镜像来提供群集,这些群集具有从所有数据中心中的本地群集镜像的聚合数据。这些聚合集群用于需要完整数据集的应用程序进行读取。

这不是唯一可能的部署模式。可以通过 WAN 读取或写入远程 Kafka 集群,但显然这将增加获取集群所需的任何延迟。

Kafka 天然地在生产者和消费者中对数据进行批处理,因此即使在高延迟连接上也可以实现高吞吐量。为此,可能需要使用 socket.send.buffer.bytes 和 socket.receive.buffer.bytes 配置来增加生产者、消费者和代理的 TCP 套接字缓冲区大小。此处记录了设置这些值的适当方法。

通常不建议通过高延迟链路运行跨多个数据中心的单个 Kafka 集群。这会给 Kafka 写入和 ZooKeeper 写入带来非常高的复制延迟,而且如果位置之间的网络不可用,Kafka 和 ZooKeeper 都无法在所有位置保持可用。

6.3 异地复制(跨群集数据镜像)

异地复制概述

Kafka 管理员可以定义跨越各个 Kafka 集群、数据中心或地理区域边界的数据流。组织、技术或法律要求通常需要此类事件流设置。常见方案包括:

  • 异地复制
  • 灾难恢复
  • 将边缘集群馈送到中央聚合集群中
  • 群集的物理隔离(例如生产与测试)
  • 云迁移或混合云部署
  • 法律和合规要求

管理员可以使用 Kafka 的 MirrorMaker(版本 2)设置此类群集间数据流,该工具以流式方式在不同 Kafka 环境之间复制数据。MirrorMaker建立在Kafka Connect框架之上,支持以下功能:

  • 复制主题(数据和配置)
  • 复制使用者组(包括偏移量)以在群集之间迁移应用程序
  • 复制 ACL
  • 保留分区
  • 自动检测新主题和分区
  • 提供广泛的指标,例如跨多个数据中心/集群的端到端复制延迟
  • 容错和水平可扩展的操作

注意:使用 MirrorMaker 进行异地复制可跨 Kafka 群集复制数据。这种集群间复制不同于 Kafka 的集群内复制,后者在同一 Kafka 集群内复制数据。

什么是复制流

使用 MirrorMaker,Kafka 管理员可以将主题、主题配置、使用者组及其偏移量和 ACL 从一个或多个源 Kafka 集群复制到一个或多个目标 Kafka 集群,即跨集群环境。简而言之,MirrorMaker 使用连接器从源群集使用并生成到目标群集。

这些从源集群到目标集群的方向性流称为复制流。它们在 MirrorMaker 配置文件中使用 {source_cluster}->{target_cluster} 格式定义,如下所述。管理员可以基于这些流创建复杂的复制拓扑。

下面是一些示例模式:

  • 主动/主动高可用性部署:A->B, B->A
  • 主动/被动或主动/备用高可用性部署:A->B
  • 聚合(例如,从多个集群到一个集群):A->K, B->K, C->K
  • 扇出(例如,从一个集群到多个集群):K->A, K->B, K->C
  • 转发:A->B, B->C, C->D

默认情况会复制所有主题和使用者组。但是,可以单独配置每个复制流。例如,您可以定义仅将特定主题或使用者组从源集群复制到目标集群。

下面是第一个示例,演示如何配置从 primary 集群到 secondary 集群的数据复制(主动/被动设置):

# Basic settings
clusters = primary, secondary
primary.bootstrap.servers = broker3-primary:9092
secondary.bootstrap.servers = broker5-secondary:9092

# Define replication flows
primary->secondary.enabled = true
primary->secondary.topics = foobar-topic, quux-.*

配置异地复制

以下各节介绍如何配置和运行专用镜像制作群集。如果要在现有的 Kafka Connect 群集或其他受支持的部署设置中运行 MirrorMaker,请参阅 KIP-382: MirrorMaker 2.0,并注意配置设置的名称可能因部署模式而异。

除了以下各节中介绍的内容外,有关配置设置的更多示例和信息,请访问:

配置文件语法

MirrorMaker 配置文件通常命名为 connect-mirror-maker.properties。您可以在此文件中配置各种组件:

  • 镜像制作设置:全局设置,包括群集定义(别名),以及每个复制流的自定义设置
  • 卡夫卡连接和连接器设置
  • Kafka 生产者、使用者和管理员客户端设置

示例:定义镜像制作器设置(稍后将更详细地解释)。

# Global settings
clusters = us-west, us-east   # defines cluster aliases
us-west.bootstrap.servers = broker3-west:9092
us-east.bootstrap.servers = broker5-east:9092

topics = .*   # all topics to be replicated by default

# Specific replication flow settings (here: flow from us-west to us-east)
us-west->us-east.enabled = true
us-west->us-east.topics = foo.*, bar.*  # override the default above

MirrorMaker基于Kafka Connect框架。任何 Kafka Connect、源连接器和接收器连接器设置(如有关 Kafka Connect 的文档章节中所述)都可以直接在 MirrorMaker 配置中使用,而无需更改配置设置的名称或为其添加前缀。

示例:定义 MirrorMaker 要使用的自定义 Kafka Connect 设置。

# Setting Kafka Connect defaults for MirrorMaker
tasks.max = 5

大多数默认的 Kafka Connect 设置对 MirrorMaker 来说开箱即用,tasks.max 除外。为了在多个 MirrorMaker 进程之间均匀分配工作负载,建议根据可用的硬件资源和要复制的主题分区总数,将 tasks.max 至少设置为 2(最好更高)。

您可以进一步按源或目标集群自定义 MirrorMaker 的 Kafka Connect 设置(更准确地说,您可以“按连接器”指定 Kafka Connect 工作进程级别的配置设置)。请在 MirrorMaker 配置文件中使用 {cluster}.{config_name} 的格式。

示例:为 us-west 集群定义自定义连接器设置。

# us-west custom settings
us-west.offset.storage.topic = my-mirrormaker-offsets

MirrorMaker 在内部使用 Kafka 生产者、消费者和管理员客户端。通常需要这些客户端的自定义设置。若要覆盖默认值,请在镜像制作配置文件中使用以下格式:

  • {source}.consumer.{consumer_config_name}
  • {target}.producer.{producer_config_name}
  • {source_or_target}.admin.{admin_config_name}

示例:定义自定义生产者、使用者、管理员客户端设置。

# us-west cluster (from which to consume)
us-west.consumer.isolation.level = read_committed
us-west.admin.bootstrap.servers = broker57-primary:9092

# us-east cluster (to which to produce)
us-east.producer.compression.type = gzip
us-east.producer.buffer.memory = 32768
us-east.admin.bootstrap.servers = broker8-secondary:9092
创建和启用复制流

若要定义复制流,必须先在镜像程序配置文件中定义相应的源和目标 Kafka 群集。

  • clusters(必需):以逗号分隔的 Kafka 集群“别名”列表
  • {clusterAlias}.bootstrap.servers(必需):特定集群的连接信息;逗号分隔的“引导”Kafka 代理列表

示例:定义两个集群别名 primary 和 secondary,包括它们的连接信息。

clusters = primary, secondary
primary.bootstrap.servers = broker10-primary:9092,broker-11-primary:9092
secondary.bootstrap.servers = broker5-secondary:9092,broker6-secondary:9092

其次,您必须根据需要通过 {source}->{target}.enabled = true 显式启用单个复制流。请记住,流是有方向的:如果需要双向复制,则必须启用两个方向的流。

# Enable replication from primary to secondary
primary->secondary.enabled = true

默认情况下,复制流会将除少数特殊主题和消费者组之外的所有主题和消费者组从源集群复制到目标集群,并自动检测任何新创建的主题和组。目标集群中复制主题的名称将以源集群的名称作为前缀(请参阅下面的部分)。例如,源集群 us-west 中的主题 foo 将被复制到目标集群 us-east 中名为 us-west.foo 的主题。

后续部分将介绍如何根据需要自定义此基本设置。

配置复制流

复制流的配置是顶级默认设置(例如 topics)与特定于流的设置(如果有,例如 us-west->us-east.topics)的组合。要更改顶级默认值,请将相应的顶级设置添加到 MirrorMaker 配置文件中。要仅覆盖特定复制流的默认值,请使用 {source}->{target}.{config.name} 的语法格式。

最重要的设置是:

  • topics:主题列表或定义要复制源集群中哪些主题的正则表达式(默认值:topics = .*)
  • topics.exclude:主题列表或正则表达式,用于随后排除与设置匹配的主题(默认:topicstopics.exclude = .*[\-\.]internal, .*\.replica, __.*)
  • groups:定义要复制源集群中哪些使用者组的主题或正则表达式列表(默认:groups = .*)
  • groups.exclude:主题列表或正则表达式,用于随后排除与设置匹配的使用者组(默认值:groupsgroups.exclude = console-consumer-.*, connect-.*, __.*)
  • {source}->{target}.enable:设置为 以启用复制流(默认值:truefalse)

例:

# Custom top-level defaults that apply to all replication flows
topics = .*
groups = consumer-group1, consumer-group2

# Don't forget to enable a flow!
us-west->us-east.enabled = true

# Custom settings for specific replication flows
us-west->us-east.topics = foo.*
us-west->us-east.groups = bar.*
us-west->us-east.emit.heartbeats = false

还支持其他配置设置,下面列出了其中一些。在大多数情况下,您可以将这些设置保留为默认值。有关更多详细信息,请参阅 MirrorMakerConfigMirrorConnectorConfig

  • refresh.topics.enabled:是否定期检查源集群中的新主题(默认:true)
  • refresh.topics.interval.seconds:检查源集群中新主题的频率;低于默认值的值可能会导致性能下降(默认值:600,每十分钟一次)
  • refresh.groups.enabled:是否定期检查源集群中的新消费组(默认:true)
  • refresh.groups.interval.seconds:检查源集群中新消费组的频率;低于默认值的值可能会导致性能下降(默认值:600,每十分钟一次)
  • sync.topic.configs.enabled:是否从源集群复制主题配置(默认:true)
  • sync.topic.acls.enabled:是否从源集群同步 ACL(默认值:true)
  • emit.heartbeats.enabled:是否定期发出检测信号(默认值:true)
  • emit.heartbeats.interval.seconds:发出检测信号的频率(默认值:1,每 1 秒一次)
  • heartbeats.topic.replication.factor:镜像制造商内部检测信号主题的复制因子(默认值:3)
  • emit.checkpoints.enabled:是否定期发出镜像制造商的使用者偏移量(默认值:true)
  • emit.checkpoints.interval.seconds:发出检查点的频率(默认值:60,每分钟)
  • checkpoints.topic.replication.factor:镜像制造商内部检查点主题的复制因子(默认值:3)
  • sync.group.offsets.enabled:只要目标集群上没有该组的活动消费者连接,是否定期将复制的消费者组(在源集群中)的已转换偏移量写入目标集群的 __consumer_offsets 主题(默认值:false)
  • sync.group.offsets.interval.seconds:使用者组偏移的同步频率(默认值:60,每分钟)
  • offset-syncs.topic.replication.factor:镜像制作内部偏移同步主题的复制因子(默认值:3)
保护复制流

MirrorMaker 支持与 Kafka Connect 相同的安全设置,因此请参阅链接部分以获取更多信息。

示例:加密 MirrorMaker 与 us-east 群集之间的通信。

us-east.security.protocol=SSL
us-east.ssl.truststore.location=/path/to/truststore.jks
us-east.ssl.truststore.password=my-secret-password
us-east.ssl.keystore.location=/path/to/keystore.jks
us-east.ssl.keystore.password=my-secret-password
us-east.ssl.key.password=my-secret-password
目标集群中复制主题的自定义命名

目标集群中的复制主题(有时称为远程主题)将根据复制策略重命名。MirrorMaker 使用此策略来确保来自不同群集的事件(也称为记录、消息)不会写入同一主题分区。默认情况下,根据默认复制策略,目标集群中复制的主题的名称格式为:{source}.{source_topic_name}

us-west         us-east
=========       =================
                bar-topic
foo-topic  -->  us-west.foo-topic

您可以使用 replication.policy.separator 设置自定义分隔符(默认值:.):

# Defining a custom separator
us-west->us-east.replication.policy.separator = _

如果需要进一步控制复制主题的命名方式,可以实现自定义的 ReplicationPolicy,并在 MirrorMaker 配置中覆盖 replication.policy.class(默认值为 DefaultReplicationPolicy)。
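例如,假设已实现并打包了一个自定义策略类(类名 com.example.MyReplicationPolicy 仅为假设),可以在 MirrorMaker 配置中这样启用它:

# 需要将包含该类的 jar 放入 MirrorMaker 的类路径
replication.policy.class = com.example.MyReplicationPolicy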

防止配置冲突

MirrorMaker 进程通过其目标 Kafka 集群共享配置。当针对同一目标群集运行的镜像制作进程之间的配置不同时,此行为可能会导致冲突。

例如,以下两个镜像制作过程将是有问题的:

# Configuration of process 1
A->B.enabled = true
A->B.topics = foo

# Configuration of process 2
A->B.enabled = true
A->B.topics = bar

在这种情况下,两个进程将通过集群 B 共享配置,这会导致冲突。根据两个进程中哪一个被选举为“领导者”,结果将是主题 foo 或主题 bar 被复制,但不会同时复制两者。

因此,在到同一目标群集的复制流中保持镜像管理器配置的一致性非常重要。例如,可以通过自动化工具或为整个组织使用单个共享的 MirrorMaker 配置文件来实现这一点。

最佳实践:从远程消费,从生产到本地消费

为了最大程度地减少延迟(“生产者滞后”,producer lag),建议将 MirrorMaker 进程放置在尽可能靠近其目标群集的位置,即它向其生成数据的群集。这是因为 Kafka 生产者通常比 Kafka 消费者更难应对不可靠或高延迟的网络连接。

First DC          Second DC
==========        =========================
primary --------- MirrorMaker --> secondary
(remote)                           (local)

要运行这种“从远程消费,向本地生产”的设置,请在目标群集附近(最好在与目标群集相同的位置)运行 MirrorMaker 进程,并在 --clusters 命令行参数(以空白分隔的群集别名列表)中显式设置这些“本地”群集:

# Run in secondary's data center, reading from the remote `primary` cluster
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters secondary
--clusters secondary 会告知 MirrorMaker 进程给定的群集就在附近,并阻止它向位于其他远程位置的群集复制数据或发送配置。
示例:主动/被动高可用性部署

以下示例显示了将主题从主环境复制到辅助 Kafka 环境的基本设置,但不是从辅助环境复制回主环境。请注意,大多数生产设置都需要进一步配置,例如安全设置。

# Unidirectional flow (one-way) from primary to secondary cluster
primary.bootstrap.servers = broker1-primary:9092
secondary.bootstrap.servers = broker2-secondary:9092

primary->secondary.enabled = true
secondary->primary.enabled = false

primary->secondary.topics = foo.*  # only replicate some topics
示例:主动/主动高可用性部署

以下示例显示了以两种方式在两个集群之间复制主题的基本设置。请注意,大多数生产设置都需要进一步配置,例如安全设置。

# Bidirectional flow (two-way) between us-west and us-east clusters
clusters = us-west, us-east
us-west.bootstrap.servers = broker1-west:9092,broker2-west:9092
us-east.bootstrap.servers = broker3-east:9092,broker4-east:9092

us-west->us-east.enabled = true
us-east->us-west.enabled = true

有关防止复制“循环”(即主题最初从 A 复制到 B,然后复制出的主题又从 B 复制回 A,依此类推)的说明:只要在同一 MirrorMaker 配置文件中定义上述两条流,就不需要显式添加 topics.exclude 设置来防止两个群集之间的复制循环。

示例:多群集异地复制

让我们将前面各节中的所有信息放在一个更大的示例中。假设有三个数据中心(西、东、北),每个数据中心有两个 Kafka 集群(例如 west-1、west-2)。本节中的示例演示如何 (1) 为每个数据中心内的主动/主动复制配置 MirrorMaker,以及 (2) 为跨数据中心复制 (XDCR) 配置 MirrorMaker。

首先,在配置中定义源集群和目标集群及其复制流:

# Basic settings
clusters: west-1, west-2, east-1, east-2, north-1, north-2
west-1.bootstrap.servers = ...
west-2.bootstrap.servers = ...
east-1.bootstrap.servers = ...
east-2.bootstrap.servers = ...
north-1.bootstrap.servers = ...
north-2.bootstrap.servers = ...

# Replication flows for Active/Active in West DC
west-1->west-2.enabled = true
west-2->west-1.enabled = true

# Replication flows for Active/Active in East DC
east-1->east-2.enabled = true
east-2->east-1.enabled = true

# Replication flows for Active/Active in North DC
north-1->north-2.enabled = true
north-2->north-1.enabled = true

# Replication flows for XDCR via west-1, east-1, north-1
west-1->east-1.enabled  = true
west-1->north-1.enabled = true
east-1->west-1.enabled  = true
east-1->north-1.enabled = true
north-1->west-1.enabled = true
north-1->east-1.enabled = true

然后,在每个数据中心中,启动一个或多个镜像制作器,如下所示:

# In West DC:
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters west-1 west-2

# In East DC:
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters east-1 east-2

# In North DC:
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters north-1 north-2

使用此配置,生成到任何集群的记录都将在数据中心内复制,并复制到其他数据中心。通过提供 --clusters 参数,我们确保每个 MirrorMaker 进程仅向附近的群集生成数据。

注意:从技术上讲,此处并不需要 --clusters 参数。没有它,MirrorMaker 也能正常工作。但是,吞吐量可能会受到数据中心之间“生产者滞后”的影响,并且可能会产生不必要的数据传输成本。

启动异地复制

您可以根据需要运行任意少量或大量的 MirrorMaker 进程(想想:节点、服务器)。由于 MirrorMaker 基于 Kafka Connect,因此配置为复制相同 Kafka 群集的 MirrorMaker 进程在分布式设置中运行:它们将找到彼此、共享配置(请参阅下面的部分)、对其工作进行负载平衡等。例如,如果要提高复制流的吞吐量,一个选项是并行运行更多的 MirrorMaker 进程。

要启动镜像制作进程,请运行以下命令:

$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties

启动后,可能需要几分钟时间,MirrorMaker 进程才会首次开始复制数据。

(可选)如前所述,您可以设置 --clusters 参数,以确保 MirrorMaker 进程仅向附近的群集生成数据。

# Note: The cluster alias us-west must be defined in the configuration file
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters us-west

测试使用者组复制时的注意事项:默认情况下,MirrorMaker 不会复制由 kafka-console-consumer.sh 工具创建的使用者组,而您可能会使用该工具在命令行上测试 MirrorMaker 设置。如果也想复制这些使用者组,请相应地设置 groups.exclude 配置(默认值:groups.exclude = console-consumer-.*, connect-.*, __.*)。请记住在完成测试后再次恢复该配置。
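例如,下面是一个仅用于测试的示意性覆盖(将 console-consumer-.* 从排除列表中移除;完成测试后请恢复默认值):

# 测试期间临时允许复制 console consumer 创建的使用者组
groups.exclude = connect-.*, __.*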

停止异地复制

您可以通过使用以下命令发送 SIGTERM 信号来停止正在运行的镜像制作进程:

$ kill <MirrorMaker pid>

应用配置更改

若要使配置更改生效,必须重新启动镜像制作程序进程。

监视异地复制

建议监视 MirrorMaker 进程,以确保所有定义的复制流都已启动并正常运行。MirrorMaker 建立在 Connect 框架之上,继承了 Connect 的所有指标,例如 source-record-poll-rate。此外,MirrorMaker 还会在 kafka.connect.mirror 指标组下生成自己的指标。指标使用以下属性进行标记:

  • source:源集群的别名(例如,primary)
  • target:目标集群的别名(例如,secondary)
  • topic:目标集群上的复制主题
  • partition:正在复制的分区

将按每个复制主题跟踪指标,可以从主题名称推断源集群。例如,通过 primary->secondary 复制 topic1 将产生如下指标:

  • target=secondary
  • topic=primary.topic1
  • partition=1

将发出以下指标:

# MBean: kafka.connect.mirror:type=MirrorSourceConnector,target=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)

record-count            # number of records replicated source -> target
record-age-ms           # age of records when they are replicated
record-age-ms-min
record-age-ms-max
record-age-ms-avg
replication-latency-ms  # time it takes records to propagate source->target
replication-latency-ms-min
replication-latency-ms-max
replication-latency-ms-avg
byte-rate               # average number of bytes/sec in replicated records

# MBean: kafka.connect.mirror:type=MirrorCheckpointConnector,source=([-.\w]+),target=([-.\w]+)

checkpoint-latency-ms   # time it takes to replicate consumer offsets
checkpoint-latency-ms-min
checkpoint-latency-ms-max
checkpoint-latency-ms-avg

这些指标不区分创建时间戳和日志追加时间戳。

6.4 多租户

多租户概述

作为一个高度可扩展的事件流平台,Kafka 被许多用户用作他们的中枢神经系统,实时连接来自不同团队和业务线的各种不同系统和应用程序。这种多租户群集环境需要适当的控制和管理,以确保这些不同需求的和平共存。本节重点介绍设置此类共享环境的功能和最佳实践,这些功能和最佳实践应有助于您运行符合 SLA/OLA 的集群,并最大限度地减少由“嘈杂邻居”造成的潜在附带损害。

多租户是一个多方面的主题,包括但不限于:

  • 为租户创建用户空间(有时称为命名空间)
  • 使用数据保留策略等配置主题
  • 使用加密、身份验证和授权保护主题和集群
  • 使用配额和速率限制隔离租户
  • 监控和计量
  • 群集间数据共享(参见异地复制)

使用主题命名为租户创建用户空间(命名空间)

操作多租户群集的 Kafka 管理员通常需要为每个租户定义用户空间。就本节而言,“用户空间”是主题的集合,这些主题在单个实体或用户的管理下组合在一起。

在 Kafka 中,数据的主要单位是主题。用户可以创建和命名每个主题,也可以删除主题,但不能直接重命名主题。相反,若要重命名主题,用户必须创建一个新主题,将消息从原始主题移动到新主题,然后删除原始主题。考虑到这一点,建议根据分层主题命名结构定义逻辑空间。然后,可以将此设置与安全功能(如前缀 ACL)结合使用,以隔离不同的空间和租户,同时最大程度地减少保护群集中数据的管理开销。
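例如,可以按上述分层命名结构创建主题(以下命令中的分区数和复制因子仅为示例值):

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
      --topic acme.infosec.telemetry.logins \
      --partitions 3 --replication-factor 3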

这些逻辑用户空间可以以不同的方式分组,具体选择取决于您的组织更喜欢如何使用 Kafka 集群。最常见的分组如下。

按团队或单位部门:在这里,团队是主要的聚合器。在团队是 Kafka 基础结构的主要用户的组织中,这可能是最佳分组。

主题命名结构示例:

  • <organization>.<team>.<dataset>.<event-name>
    (例如,“acme.infosec.telemetry.logins”)

按项目或产品:在这里,一个团队管理多个项目。对于每个项目,它们的凭据将不同,因此所有控件和设置将始终与项目相关。

主题命名结构示例:

  • <project>.<product>.<event-name>
    (例如,“mobility.payments.suspicious”)

某些信息通常不应放在主题名称中,例如可能随时间变化的信息(例如,目标使用者的名称)或其他地方可用的技术详细信息或元数据(例如,主题的分区计数和其他配置设置)。

要强制实施主题命名结构,可以使用以下几个选项:

  • 使用前缀 ACL(参见 KIP-290)强制使用主题名称的通用前缀。例如,团队 A 可能只被允许创建名称以 payments.teamA. 开头的主题(参见此列表之后的示例)。
  • 定义自定义 CreateTopicPolicy(参见 KIP-108 和设置 create.topic.policy.class.name)以强制实施严格的命名模式。这些策略提供了最大的灵活性,可以涵盖复杂的模式和规则,以满足组织的需求。
  • 通过使用 ACL 拒绝普通用户的主题创建来禁用主题创建,然后依靠外部进程代表用户创建主题(例如,脚本或您喜欢的自动化工具包)。
  • 通过设置代理配置 auto.create.topics.enable=false 来禁用 Kafka 按需自动创建主题的功能也可能很有用。请注意,您不应仅依赖此选项。
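下面是一个示意性示例,展示如何用前缀 ACL 只允许团队 A(此处以假设的主体 User:teamA 表示)创建名称以 payments.teamA. 开头的主题:

$ bin/kafka-acls.sh --bootstrap-server localhost:9092 \
      --add --allow-principal User:teamA \
      --operation Create \
      --resource-pattern-type prefixed --topic payments.teamA.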

配置主题:数据保留等

Kafka 的配置非常灵活,粒度也很细:它支持大量按主题的配置设置,以帮助管理员运营多租户集群。例如,管理员通常需要定义数据保留策略,通过 retention.bytes(大小)和 retention.ms(时间)等设置来控制数据在主题中存储的数量和/或时长。这限制了群集内的存储消耗,并有助于遵守 GDPR 等法律要求。
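例如,可以按主题设置数据保留策略(以下取值仅为示例,约相当于 7 天 / 1 GiB):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
      --entity-type topics --entity-name acme.infosec.telemetry.logins \
      --add-config retention.ms=604800000,retention.bytes=1073741824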

保护集群和主题:身份验证、授权、加密

由于该文档有一章专门介绍适用于任何 Kafka 部署的安全性,因此本节重点介绍多租户环境的其他注意事项。

Kafka 的安全设置分为三个主要类别,类似于管理员保护其他客户端-服务器数据系统(如关系数据库和传统邮件系统)的方式。

  1. 对 Kafka 代理和 Kafka 客户端之间、代理之间、代理和 ZooKeeper 节点之间以及代理和其他可选工具之间传输的数据进行加密
  2. 验证从 Kafka 客户端和应用程序到 Kafka 代理的连接,以及从 Kafka 代理到 ZooKeeper 节点的连接。
  3. 对客户端操作的授权,例如创建、删除、更改主题配置;向主题写入事件或从主题读取事件;创建和删除 ACL。管理员还可以定义自定义策略以实施其他限制,例如 CreateTopicPolicy 和 AlterConfigPolicy(请参阅 KIP-108 以及设置 create.topic.policy.class.name 和 alter.config.policy.class.name)。

在保护多租户 Kafka 环境时,最常见的管理任务是第三类(授权),即管理授予或拒绝用户/客户端对某些主题的访问权限,从而控制对集群中用户所存储数据的访问。此任务主要通过设置访问控制列表 (ACL) 来执行。在这里,多租户环境的管理员尤其受益于如上一节所述采用分层主题命名结构,因为他们可以通过前缀 ACL(--resource-pattern-type prefixed)方便地控制对主题的访问。这大大减少了在多租户环境中保护主题的管理开销:管理员可以在更高的开发人员便利性(更宽松的权限,使用更少、更宽泛的 ACL)与更严格的安全性(更严格的权限,使用更多、更窄的 ACL)之间进行权衡。

在以下示例中,用户 Alice(ACME 公司 InfoSec 团队的新成员)被授予对名称以“acme.infosec.”开头的所有主题的写入权限,例如“acme.infosec.telemetry.logins”和“acme.infosec.syslogs.events”。

# Grant permissions to user Alice
$ bin/kafka-acls.sh \
      --bootstrap-server broker1:9092 \
      --add --allow-principal User:Alice \
      --producer \
      --resource-pattern-type prefixed --topic acme.infosec.

同样,您可以使用此方法隔离同一共享群集上的不同客户。

隔离租户:配额、速率限制、限制

多租户群集通常应配置配额,以防止用户(租户)占用过多的群集资源,例如当他们尝试写入或读取非常大量的数据,或以过高的速率创建对代理的请求时。这可能会导致网络饱和、独占代理资源并影响其他客户端 — 所有这些都希望在共享环境中避免。

客户端配额:Kafka 支持不同类型的(按用户主体的)客户端配额。由于无论客户端写入或读取哪些主题,客户端的配额都适用,因此它们是在多租户群集中分配资源的方便而有效的工具。例如,请求速率配额通过限制代理在该用户的请求处理路径上可花费的时间来帮助限制用户对代理 CPU 使用率的影响,超过限制后将开始限流。在许多情况下,在多租户群集中使用请求速率配额隔离用户比设置传入/传出网络带宽配额的效果更好,因为用于处理请求的代理 CPU 使用率过高会降低代理可以提供的有效带宽。此外,管理员还可以定义主题操作(例如创建、删除和更改)的配额,以防止 Kafka 集群被高并发的主题操作淹没(请参阅 KIP-599 和配额类型 controller_mutation_rate)。
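例如,可以按用户主体设置客户端配额(以下取值仅为示例):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
      --entity-type users --entity-name Alice \
      --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152,request_percentage=50'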

服务器配额:Kafka 还支持不同类型的代理端配额。例如,管理员可以设置代理接受新连接的速率限制、设置每个代理的最大连接数或设置允许来自特定 IP 地址的最大连接数。

有关详细信息,请参阅配额概述以及如何设置配额

监控和计量

监视是一个更广泛的主题,在文档的其他部分进行了介绍。任何 Kafka 环境(尤其是多租户环境)的管理员都应根据这些说明设置监视。Kafka 支持多种指标,例如身份验证尝试失败率、请求延迟、使用者滞后、使用者组总数、上一节中所述配额指标等。

例如,可以将监视配置为跟踪主题分区的大小(使用 JMX 指标 kafka.log.Log.Size.<TOPIC-NAME>),从而跟踪主题中存储数据的总大小。然后,可以在共享群集上的租户接近使用过多存储空间时触发警报。

多租户和异地复制

Kafka 允许您在不同的集群之间共享数据,这些集群可能位于不同的地理区域、数据中心等。除了灾难恢复等用例外,当多租户设置需要群集间数据共享时,此功能也很有用。有关详细信息,请参阅异地复制(跨群集数据镜像)部分。

进一步考虑

数据合同:您可能需要使用事件架构,在集群中数据的生产者和使用者之间定义数据合同。这可确保写入 Kafka 的事件始终可以被正确地再次读取,并防止写入格式错误或损坏的事件。实现此目的的最佳方法是在群集旁边部署所谓的架构注册表。(Kafka 不包含架构注册表,但有可用的第三方实现。)架构注册表管理事件架构并将架构映射到主题,以便生产者知道哪些主题接受哪些类型的事件(架构),并且使用者知道如何读取和解析主题中的事件。某些注册表实现提供了更多功能,例如架构演化、存储所有架构的历史记录以及架构兼容性设置。

6.5 卡夫卡配置

重要的客户端配置

最重要的生产者配置是:
  • acks
  • 压缩(compression)
  • 批量大小(batch size)
最重要的使用者配置是提取大小(fetch size)。参见此列表之后的示例片段。
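下面是一个最小的示意性客户端配置片段(具体取值取决于用例,仅作说明):

# 生产者
acks=all
compression.type=lz4
batch.size=65536

# 使用者
fetch.min.bytes=1
max.partition.fetch.bytes=1048576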

所有配置都记录在配置部分中。

生产服务器配置

下面是一个生产服务器配置示例(我们的客户端配置则在不同的用例之间差异很大):
  # ZooKeeper
  zookeeper.connect=[list of ZooKeeper servers]

  # Log configuration
  num.partitions=8
  default.replication.factor=3
  log.dir=[List of directories. Kafka should have its own dedicated disk(s) or SSD(s).]

  # Other configurations
  broker.id=[An integer. Start with 0 and increment by 1 for each new broker.]
  listeners=[list of listeners]
  auto.create.topics.enable=false
  min.insync.replicas=2
  queued.max.requests=[number of concurrent requests]

6.6 Java版本

支持 Java 8、Java 11 和 Java 17。请注意,Java 8 支持自 Apache Kafka 3.0 起已被弃用,并将在 Apache Kafka 4.0 中删除。如果启用了 TLS,Java 11 及更高版本的性能会明显更好,因此强烈建议使用它们(它们还包括许多其他性能改进:G1GC、CRC32C、紧凑字符串、线程本地握手等)。从安全角度来看,我们建议使用最新发布的补丁版本,因为较旧的免费版本已披露安全漏洞。

使用基于 OpenJDK 的 Java 实现(包括 Oracle JDK)运行 Kafka 的典型参数是:
  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
  -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
  -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent

作为参考,以下是 LinkedIn 最繁忙的集群之一(高峰期)使用上述 Java 参数时的统计信息:
  • 60 个代理
  • 50k 个分区(复制因子 2)
  • 800k 条消息/秒
  • 300 MB/秒入站,1 GB/秒以上出站

该集群中所有代理的 90% GC 暂停时间约为 21 毫秒,每秒少于 1 次年轻代 GC。

6.7 硬件和操作系统

我们使用具有 24GB 内存的双四核英特尔至强机器。

您需要足够的内存来缓冲活动的读取器和写入器。您可以假设希望缓冲 30 秒的数据,并将内存需求按 write_throughput*30 计算,从而对内存需求进行粗略估计。例如,若写入吞吐量为 50 MB/秒,则大约需要 1.5 GB 的页面缓存用于缓冲。

磁盘吞吐量很重要。我们有 8x7200 rpm SATA 驱动器。通常,磁盘吞吐量是性能瓶颈,磁盘越多越好。根据您配置刷新行为的方式,您可能会也可能不会从更昂贵的磁盘中受益(如果您经常强制刷新,那么更高的 RPM SAS 驱动器可能会更好)。

操作系统

Kafka应该可以在任何Unix系统上运行良好,并且已经在Linux和Solaris上进行了测试。

我们已经看到了在Windows上运行的一些问题,Windows目前不是一个受支持的平台,尽管我们很乐意改变这一点。

通常不需要太多操作系统级调优,但有三种可能重要的操作系统级配置(此列表之后给出一个检查与调整这些设置的示例):

  • 文件描述符限制:Kafka 对日志段和开放连接使用文件描述符。如果代理托管多个分区,请考虑除了代理建立的连接数外,代理至少需要 (number_of_partitions)*(partition_size/segment_size) 来跟踪所有日志段。我们建议代理进程至少允许 100000 个文件描述符作为起点。注意:mmap() 函数添加了对与文件描述符 fildes 关联的文件的额外引用,该文件描述符上的后续 close() 不会删除该文件描述符。当不再有到文件的映射时,将删除此引用。
  • 最大套接字缓冲区大小:可以增加,以实现数据中心之间的高性能数据传输,如此处所述
  • 进程可以具有的最大内存映射区域数(也称为 vm.max_map_count)。请参阅 Linux 内核文档。在考虑代理可能拥有的最大分区数时,应密切关注此操作系统级属性。默认情况下,在许多 Linux 系统上,vm.max_map_count 的值约为 65535。每个分区分配的每个日志段都需要一对索引/时间索引文件,每个文件占用 1 个映射区域。换句话说,每个日志段使用 2 个地图区域。因此,每个分区至少需要 2 个映射区域,只要它托管单个日志段即可。也就是说,在代理上创建 50000 个分区将导致分配 100000 个映射区域,并可能导致代理崩溃,并在具有默认 vm.max_map_count 的系统上出现 OutOfMemoryError (映射失败)。请记住,每个分区的日志段数因段大小、负载强度、保留策略而异,并且通常往往不止一个。
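一个示意性的检查与调整示例(具体数值需按分区数和日志段数量估算):

# 查看/提高进程的文件描述符限制
$ ulimit -n
$ ulimit -n 100000

# 查看/提高最大内存映射区域数
$ sysctl vm.max_map_count
$ sudo sysctl -w vm.max_map_count=262144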

磁盘和文件系统

我们建议使用多个驱动器来获得良好的吞吐量,并且不要与应用程序日志或其他操作系统活动共享用于 Kafka 数据的相同驱动器,以确保良好的延迟。您可以将这些驱动器一起 RAID 到单个卷中,也可以格式化并将每个驱动器挂载为自己的目录。由于 Kafka 具有复制功能,因此 RAID 提供的冗余也可以在应用程序级别提供。这种选择有几个权衡。

如果配置了多个数据目录,分区将以轮询方式分配到各数据目录,每个分区完全位于其中一个数据目录中。如果分区之间的数据不均衡,可能会导致磁盘之间的负载不平衡。

RAID 在平衡磁盘之间的负载方面可能会做得更好(尽管似乎并不总是如此),因为它在较低级别平衡负载。RAID 的主要缺点是它通常会对写入吞吐量造成很大的性能影响,并减少可用磁盘空间。

RAID 的另一个潜在好处是能够容忍磁盘故障。但是,我们的经验是,重建 RAID 阵列非常占用 I/O 资源,以至于它实际上禁用了服务器,因此这并没有提供太多真正的可用性改进。

应用程序与操作系统刷新管理

Kafka 始终立即将所有数据写入文件系统,并支持配置刷新策略的功能,该策略控制何时使用刷新将数据强制从操作系统缓存中强制流出并放到磁盘上。可以控制此刷新策略,以便在一段时间后或写入一定数量的消息后强制数据到磁盘。此配置中有多种选择。

Kafka 最终必须调用 fsync 才能确定数据已被刷新。从崩溃中恢复时,对于任何不确定已执行 fsync 的日志段,Kafka 将通过检查每条消息的 CRC 来校验其完整性,并重建随附的偏移索引文件,这是启动时恢复过程的一部分。

请注意,Kafka 中的持久性不需要将数据同步到磁盘,因为故障节点将始终从其副本中恢复。

我们建议使用完全禁用应用程序 fsync 的默认刷新设置。这意味着依赖操作系统的后台刷新和 Kafka 自己的后台刷新。这为大多数用途提供了最好的结果:无需调整旋钮、出色的吞吐量和延迟,以及完整的恢复保证。我们通常认为复制提供的保证比同步到本地磁盘更强,但偏执的用户可能仍然更喜欢两者兼有,并且应用程序级别的 fsync 策略仍然受支持。
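对于确实希望启用应用级 fsync 策略的部署,可以参考如下示意性的代理设置(取值仅为示例;默认情况下这两项实际上不会强制刷新):

# 每写入 10000 条消息或每隔 1000 毫秒强制刷新一次
log.flush.interval.messages=10000
log.flush.interval.ms=1000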

使用应用程序级刷新设置的缺点是,它的磁盘使用模式效率较低(它使操作系统重新排序写入的余地较小),并且可能会引入延迟,因为大多数 Linux 文件系统中的 fsync 会阻止对文件的写入,而后台刷新会执行更精细的页面级锁定。

一般来说,你不需要对文件系统进行任何低级的调优,但在接下来的几节中,我们将讨论其中的一些,以防它有用。

了解 Linux 操作系统刷新行为

在 Linux 中,写入文件系统的数据保留在页面缓存中,直到必须将其写出到磁盘(由于应用程序级 fsync 或操作系统自己的刷新策略)。数据的刷新是由一组称为pdflush的后台线程完成的(或者在2.6.32之后的内核中“刷新线程”)。

pdflush 有一个可配置的策略,用于控制缓存中可以保留多少脏数据,以及多长时间后必须将其写回磁盘。此处介绍了此策略。当 pdflush 无法跟上数据写入速度时,最终会导致写入进程被阻塞,使写入产生额外延迟,从而减缓数据的积累。

您可以通过执行以下操作查看操作系统内存使用情况的当前状态

 > cat /proc/meminfo 
上面的链接中描述了这些值的含义。

与进程内缓存相比,使用页面缓存在存储将写出到磁盘的数据方面有几个优点:

  • I/O 调度程序会将连续的小写入批处理为较大的物理写入,从而提高吞吐量。
  • I/O 调度程序将尝试对写入重新排序,以最大程度地减少磁盘头的移动,从而提高吞吐量。
  • 它会自动使用计算机上的所有可用内存

文件系统选择

Kafka 使用磁盘上的常规文件,因此它对特定文件系统没有硬依赖性。但是,使用最多的两个文件系统是EXT4和XFS。从历史上看,EXT4 的使用量更多,但最近对 XFS 文件系统的改进表明,它对 Kafka 的工作负载具有更好的性能特征,而稳定性没有受到影响。

比较测试是在具有大量消息负载的集群上执行的,使用了各种文件系统创建和挂载选项。在 Kafka 中,受监控的主要指标是“请求本地时间”,表示追加操作所花费的时间。XFS 带来了更好的本地时间(160 毫秒,而最佳 EXT4 配置为 250 毫秒以上)以及更低的平均等待时间。XFS 的磁盘性能波动也更小。

一般文件系统说明
对于用于数据目录的任何文件系统,在 Linux 系统上,建议在挂载时使用以下选项:
  • noatime:此选项在读取文件时禁用对文件 atime(上次访问时间)属性的更新。这可以消除大量的文件系统写入,尤其是在引导使用者的情况下。Kafka 完全不依赖 atime 属性,因此禁用它是安全的。此要点之后给出一个示意性的挂载条目。
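一个示意性的 /etc/fstab 挂载条目(设备名与挂载点均为假设值):

/dev/sdb1   /var/lib/kafka-logs   xfs   defaults,noatime   0 0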
XFS 注释
XFS 文件系统具有大量的自动调整功能,因此无论是在文件系统创建时还是在挂载时,都不需要对默认设置进行任何更改。唯一值得考虑的调整参数是:
  • largeio:这会影响统计信息调用报告的首选 I/O 大小。虽然这可以允许在较大的磁盘写入上实现更高的性能,但实际上它对性能的影响很小或没有影响。
  • nobarrier:对于具有电池备份缓存的基础设备,此选项可以通过禁用定期写入刷新来提高性能。但是,如果底层设备运行良好,它将向文件系统报告它不需要刷新,并且此选项将不起作用。
EXT4 注释
EXT4 是 Kafka 数据目录的可用文件系统选择,但是要从中获取最大性能,需要调整几个挂载选项。此外,这些选项在故障情况下通常是不安全的,并且会导致更多的数据丢失和损坏。对于单个代理故障,这不是什么大问题,因为可以擦除磁盘并从集群重建副本。在多次故障情况下,例如断电,这可能意味着底层文件系统(因此数据)损坏,不易恢复。可以调整以下选项:
  • data=writeback:ext4 默认为 data=ordered,这为某些写入设置了强顺序。Kafka 不需要这种排序,因为它对所有未刷新的日志执行非常偏执的数据恢复。此设置消除了排序约束,似乎可以显著减少延迟。
  • 禁用日记功能:日记功能是一种权衡:它在服务器崩溃后使重新启动速度更快,但它引入了大量额外的锁定,这增加了写入性能的差异。那些不关心重新启动时间并希望减少写入延迟峰值的主要来源的人可以完全关闭日记功能。
  • commit=num_secs:这调整了 ext4 提交到其元数据日志的频率。将此值设置为较低的值可减少崩溃期间未刷新数据的丢失。将此值设置为更高的值将提高吞吐量。
  • nobh:此设置控制使用 data=writeback 模式时的额外排序保证。这对于 Kafka 应该是安全的,因为我们不依赖写入顺序;它可以提高吞吐量并改善延迟。
  • delalloc:延迟分配意味着文件系统在物理写入发生之前避免分配任何块。这允许 ext4 分配较大的扩展数据块而不是较小的页面,并有助于确保数据按顺序写入。此功能非常适合吞吐量。它似乎确实涉及文件系统中的一些锁定,这增加了一些延迟差异。

更换 KRaft 控制器磁盘

当 Kafka 配置为使用 KRaft 时,控制器会将群集元数据存储在 metadata.log.dir 指定的目录中;如果未设置 metadata.log.dir,则存储在第一个日志目录中。有关详细信息,请参阅 metadata.log.dir 的文档。

如果集群元数据目录中的数据由于硬件故障或需要更换硬件而丢失,则在预配新的控制器节点时应小心。在大多数控制器都拥有所有已提交的数据之前,不应格式化并启动新的控制器节点。要确定大多数控制器是否拥有已提交的数据,请运行 kafka-metadata-quorum.sh 工具来描述复制状态:

 > bin/kafka-metadata-quorum.sh --bootstrap-server broker_host:port describe --replication
 NodeId  LogEndOffset    Lag     LastFetchTimestamp      LastCaughtUpTimestamp   Status
 1       25806           0       1662500992757           1662500992757           Leader
 ...     ...             ...     ...                     ...                     ...
  

检查并等待,直到大多数控制器的 Lag 都很小。如果领导者的结束偏移量没有增加,可以等到多数控制器的滞后为 0;否则,可以选取最新的领导者结束偏移量并等待所有副本都到达该偏移量。再检查并等待,直到大多数控制器的 LastFetchTimestamp 和 LastCaughtUpTimestamp 彼此接近。此时,格式化控制器的元数据日志目录会更安全。这可以通过运行 kafka-storage.sh 命令来完成。

 > bin/kafka-storage.sh format --cluster-id uuid --config server_properties

上述 bin/kafka-storage.sh format 命令可能会失败,并显示类似 Log directory ... is already formatted 的消息。当使用组合模式并且仅丢失元数据日志目录而未丢失其他目录时,可能会发生这种情况。在这种情况下,且仅在这种情况下,才可以使用 --ignore-formatted 选项运行 kafka-storage.sh format 命令。

格式化日志目录后启动 KRaft 控制器。

 > /bin/kafka-server-start.sh server_properties

6.8 监控

Kafka 使用 Yammer Metrics 在服务器中报告指标。Java 客户端使用 Kafka Metrics,这是一个内置的指标注册表,可最大限度地减少拉入客户端应用程序的传递依赖项。两者都通过 JMX 公开指标,并且可以配置为使用可插拔的统计信息报告器报告统计信息,以连接到您的监控系统。

所有 Kafka 速率指标都有一个带 -total 后缀的相应累积计数指标。例如,records-consumed-rate 有一个名为 records-consumed-total 的相应指标。

查看可用指标的最简单方法是启动 jconsole 并将其指向正在运行的 kafka 客户端或服务器;这将允许使用 JMX 浏览所有指标。

使用 JMX 进行远程监视的安全注意事项

默认情况下,Apache Kafka 禁用远程 JMX。您可以通过为使用 CLI 启动的进程设置环境变量 JMX_PORT,或以编程方式设置标准 Java 系统属性,来启用远程 JMX 监视。在生产场景中启用远程 JMX 时,必须启用安全性,以确保未经授权的用户无法监视或控制您的代理、应用程序以及运行它们的平台。请注意,Kafka 中对 JMX 的身份验证默认处于禁用状态;对于生产部署,必须通过为使用 CLI 启动的进程设置环境变量 KAFKA_JMX_OPTS 或设置适当的 Java 系统属性来覆盖安全配置。有关保护 JMX 的详细信息,请参阅《使用 JMX 技术进行监视和管理》。
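下面是一个示意性的启动示例,演示如何通过 JMX_PORT 和 KAFKA_JMX_OPTS 启用带身份验证的远程 JMX(端口与文件路径均为假设值;生产环境还应按上文所述的 JMX 文档配置 SSL):

$ JMX_PORT=9999 \
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
    -Dcom.sun.management.jmxremote.authenticate=true \
    -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password \
    -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote.access" \
  bin/kafka-server-start.sh config/server.properties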

我们对以下指标进行绘图和警报:

描述 MBean 名称 正常值
消息传入速率 kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=([-.\w]+) 每个主题的传入消息速率。省略“topic=(...)”将产生全主题速率。
来自客户端的字节速率 kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=([-.\w]+) 每个主题的字节输入(来自客户端)速率。省略“topic=(...)”将产生全主题速率。
来自其他经纪商的字节率 kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec,topic=([-.\w]+) 每个主题的字节输入(来自其他代理)速率。省略“topic=(...)”将产生全主题速率。
来自代理的控制器请求速率 kafka.controller:type=ControllerChannelManager,name=RequestRateAndQueueTimeMs,brokerId=([0-9]+) ControllerChannelManager 从给定代理的队列中取出请求的速率(每秒请求数),以及请求在被取出之前在该队列中停留的时间。
控制器事件队列大小 kafka.controller:type=ControllerEventManager,name=EventQueueSize 控制器事件管理器队列的大小。
控制器事件队列时间 kafka.controller:type=ControllerEventManager,name=EventQueueTimeMs 任何事件(空闲事件除外)在控制器事件管理器中等待的时间 处理前排队
请求速率 kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower},version=([0-9]+)
错误率 kafka.network:type=RequestMetrics,name=ErrorsPerSec,request=([-.\w]+),error=([-.\w]+) 按请求类型、按错误代码计算的响应中的错误数。如果响应包含 多个错误,全部计算在内。错误 = 无 表示响应成功。
生产请求率 kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=([-.\w]+) 生成每个主题的请求速率。省略“topic=(...)”将产生全主题速率。
抓取请求速率 kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec,topic=([-.\w]+) 每个主题的获取请求(来自客户端或关注者)速率。省略“topic=(...)”将产生全主题速率。
失败的生产请求率 kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec,topic=([-.\w]+) 失败 每个主题的生成请求率。省略“topic=(...)”将产生全主题速率。
失败的抓取请求速率 kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec,topic=([-.\w]+) 每个主题的失败提取请求(来自客户端或关注者)速率。省略“topic=(...)”将产生全主题速率。
请求大小(以字节为单位) kafka.network:type=RequestMetrics,name=RequestBytes,request=([-.\w]+) 每种请求类型的请求大小。
临时内存大小(以字节为单位) kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request={Produce|Fetch} 用于消息格式转换和解压缩的临时内存。
Message conversion time kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} Time in milliseconds spent on message format conversions.
Message conversion rate kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec,topic=([-.\w]+) Message format conversion rate, for Produce or Fetch requests, per topic. Omitting 'topic=(...)' will yield the all-topic rate.
Request Queue Size kafka.network:type=RequestChannel,name=RequestQueueSize Size of the request queue.
Byte out rate to clients kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=([-.\w]+) Byte out (to the clients) rate per topic. Omitting 'topic=(...)' will yield the all-topic rate.
Byte out rate to other brokers kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec,topic=([-.\w]+) Byte out (to the other brokers) rate per topic. Omitting 'topic=(...)' will yield the all-topic rate.
Rejected byte rate kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic=([-.\w]+) Rejected byte rate per topic, due to the record batch size being greater than max.message.bytes configuration. Omitting 'topic=(...)' will yield the all-topic rate.
Message validation failure rate due to no key specified for compacted topic kafka.server:type=BrokerTopicMetrics,name=NoKeyCompactedTopicRecordsPerSec 0
Message validation failure rate due to invalid magic number kafka.server:type=BrokerTopicMetrics,name=InvalidMagicNumberRecordsPerSec 0
Message validation failure rate due to incorrect crc checksum kafka.server:type=BrokerTopicMetrics,name=InvalidMessageCrcRecordsPerSec 0
Message validation failure rate due to non-continuous offset or sequence number in batch kafka.server:type=BrokerTopicMetrics,name=InvalidOffsetOrSequenceRecordsPerSec 0
Log flush rate and time kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs
# of offline log directories kafka.log:type=LogManager,name=OfflineLogDirectoryCount 0
Leader election rate kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs non-zero when there are broker failures
Unclean leader election rate kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec 0
Is controller active on broker kafka.controller:type=KafkaController,name=ActiveControllerCount only one broker in the cluster should have 1
Pending topic deletes kafka.controller:type=KafkaController,name=TopicsToDeleteCount
Pending replica deletes kafka.controller:type=KafkaController,name=ReplicasToDeleteCount
Ineligible pending topic deletes kafka.controller:type=KafkaController,name=TopicsIneligibleToDeleteCount
Ineligible pending replica deletes kafka.controller:type=KafkaController,name=ReplicasIneligibleToDeleteCount
# of under replicated partitions (|ISR| < |all replicas|) kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions 0
# of under minIsr partitions (|ISR| < min.insync.replicas) kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount 0
# of at minIsr partitions (|ISR| = min.insync.replicas) kafka.server:type=ReplicaManager,name=AtMinIsrPartitionCount 0
Producer Id counts kafka.server:type=ReplicaManager,name=ProducerIdCount Count of all producer ids created by transactional and idempotent producers in each replica on the broker
Partition counts kafka.server:type=ReplicaManager,name=PartitionCount mostly even across brokers
Offline Replica counts kafka.server:type=ReplicaManager,name=OfflineReplicaCount 0
Leader replica counts kafka.server:type=ReplicaManager,name=LeaderCount mostly even across brokers
ISR shrink rate kafka.server:type=ReplicaManager,name=IsrShrinksPerSec If a broker goes down, ISR for some of the partitions will shrink. When that broker is up again, ISR will be expanded once the replicas are fully caught up. Other than that, the expected value for both ISR shrink rate and expansion rate is 0.
ISR expansion rate kafka.server:type=ReplicaManager,name=IsrExpandsPerSec See above
Failed ISR update rate kafka.server:type=ReplicaManager,name=FailedIsrUpdatesPerSec 0
Max lag in messages btw follower and leader replicas kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica lag should be proportional to the maximum batch size of a produce request.
Lag in messages per follower replica kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+) lag should be proportional to the maximum batch size of a produce request.
Requests waiting in the producer purgatory kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce non-zero if ack=-1 is used
Requests waiting in the fetch purgatory kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch size depends on fetch.wait.max.ms in the consumer
Request total time kafka.network:type=RequestMetrics,name=TotalTimeMs,request={Produce|FetchConsumer|FetchFollower} broken into queue, local, remote and response send time
Time the request waits in the request queue kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request={Produce|FetchConsumer|FetchFollower}
Time the request is processed at the leader kafka.network:type=RequestMetrics,name=LocalTimeMs,request={Produce|FetchConsumer|FetchFollower}
Time the request waits for the follower kafka.network:type=RequestMetrics,name=RemoteTimeMs,request={Produce|FetchConsumer|FetchFollower} non-zero for produce requests when ack=-1
Time the request waits in the response queue kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request={Produce|FetchConsumer|FetchFollower}
Time to send the response kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request={Produce|FetchConsumer|FetchFollower}
Number of messages the consumer lags behind the producer by. Published by the consumer, not broker. kafka.consumer:type=consumer-fetch-manager-metrics,client-id={client-id} Attribute: records-lag-max
The average fraction of time the network processors are idle kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent between 0 and 1, ideally > 0.3
The number of connections disconnected on a processor due to a client not re-authenticating and then using the connection beyond its expiration time for anything other than re-authentication kafka.server:type=socket-server-metrics,listener=[SASL_PLAINTEXT|SASL_SSL],networkProcessor=<#>,name=expired-connections-killed-count ideally 0 when re-authentication is enabled, implying there are no longer any older, pre-2.2.0 clients connecting to this (listener, processor) combination
The total number of connections disconnected, across all processors, due to a client not re-authenticating and then using the connection beyond its expiration time for anything other than re-authentication kafka.network:type=SocketServer,name=ExpiredConnectionsKilledCount ideally 0 when re-authentication is enabled, implying there are no longer any older, pre-2.2.0 clients connecting to this broker
The average fraction of time the request handler threads are idle kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent between 0 and 1, ideally > 0.3
Bandwidth quota metrics per (user, client-id), user or client-id kafka.server:type={Produce|Fetch},user=([-.\w]+),client-id=([-.\w]+) Two attributes. throttle-time indicates the amount of time in ms the client was throttled. Ideally = 0. byte-rate indicates the data produce/consume rate of the client in bytes/sec. For (user, client-id) quotas, both user and client-id are specified. If per-client-id quota is applied to the client, user is not specified. If per-user quota is applied, client-id is not specified.
Request quota metrics per (user, client-id), user or client-id kafka.server:type=Request,user=([-.\w]+),client-id=([-.\w]+) Two attributes. throttle-time indicates the amount of time in ms the client was throttled. Ideally = 0. request-time indicates the percentage of time spent in broker network and I/O threads to process requests from client group. For (user, client-id) quotas, both user and client-id are specified. If per-client-id quota is applied to the client, user is not specified. If per-user quota is applied, client-id is not specified.
Requests exempt from throttling kafka.server:type=Request exempt-throttle-time indicates the percentage of time spent in broker network and I/O threads to process requests that are exempt from throttling.
ZooKeeper client request latency kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs Latency in millseconds for ZooKeeper requests from broker.
ZooKeeper connection status kafka.server:type=SessionExpireListener,name=SessionState Connection status of broker's ZooKeeper session which may be one of Disconnected|SyncConnected|AuthFailed|ConnectedReadOnly|SaslAuthenticated|Expired.
Max time to load group metadata kafka.server:type=group-coordinator-metrics,name=partition-load-time-max maximum time, in milliseconds, it took to load offsets and group metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Avg time to load group metadata kafka.server:type=group-coordinator-metrics,name=partition-load-time-avg average time, in milliseconds, it took to load offsets and group metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Max time to load transaction metadata kafka.server:type=transaction-coordinator-metrics,name=partition-load-time-max maximum time, in milliseconds, it took to load transaction metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Avg time to load transaction metadata kafka.server:type=transaction-coordinator-metrics,name=partition-load-time-avg average time, in milliseconds, it took to load transaction metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Consumer Group Offset Count kafka.server:type=GroupMetadataManager,name=NumOffsets Total number of committed offsets for Consumer Groups
Consumer Group Count kafka.server:type=GroupMetadataManager,name=NumGroups Total number of Consumer Groups
Consumer Group Count, per State kafka.server:type=GroupMetadataManager,name=NumGroups[PreparingRebalance,CompletingRebalance,Empty,Stable,Dead] The number of Consumer Groups in each state: PreparingRebalance, CompletingRebalance, Empty, Stable, Dead
Number of reassigning partitions kafka.server:type=ReplicaManager,name=ReassigningPartitions The number of reassigning leader partitions on a broker.
Outgoing byte rate of reassignment traffic kafka.server:type=BrokerTopicMetrics,name=ReassignmentBytesOutPerSec 0; non-zero when a partition reassignment is in progress.
Incoming byte rate of reassignment traffic kafka.server:type=BrokerTopicMetrics,name=ReassignmentBytesInPerSec 0; non-zero when a partition reassignment is in progress.
Size of a partition on disk (in bytes) kafka.log:type=Log,name=Size,topic=([-.\w]+),partition=([0-9]+) The size of a partition on disk, measured in bytes.
Number of log segments in a partition kafka.log:type=Log,name=NumLogSegments,topic=([-.\w]+),partition=([0-9]+) The number of log segments in a partition.
分区中的第一个偏移量 kafka.log:type=Log,name=LogStartOffset,topic=([-.\w]+),partition=([0-9]+) 分区中的第一个偏移量。
分区中的最后一个偏移量 kafka.log:type=Log,name=LogEndOffset,topic=([-.\w]+),partition=([0-9]+) 分区中的最后一个偏移量。

KRaft 监控指标

允许监视 KRaft 仲裁和元数据日志的指标集。
请注意,某些公开的指标取决于节点的角色(由 process.roles 定义)。
KRaft 仲裁监控指标
这些指标在 KRaft 集群中的控制器和代理上报告
衡量指标/属性名称 描述 MBean 名称
当前状态 此成员的当前状态;可能的值包括领导者、候选人、投票、关注者、未附加、观察者。 kafka.server:type=raft-metrics,name=current-state
现任领导人 当前仲裁负责人的 ID;-1 表示未知。 kafka.server:type=raft-metrics,name=current-leader
当前投票 当前投票领导人的 ID;-1 表示不投票给任何人。 kafka.server:type=raft-metrics,name=current-vote
当前时代 当前仲裁纪元。 kafka.server:type=raft-metrics,name=current-epoch
高水位线 此成员上保持的高水位线;如果未知,则为 -1。 kafka.server:type=raft-metrics,name=high-watermark
对数结束偏移 当前筏日志结束偏移。 kafka.server:type=raft-metrics,name=log-end-offset
未知选民连接数 未缓存其连接信息的未知选民数。此指标的此值始终为 0。 kafka.server:type=raft-metrics,name=number-unknown-voter-connections
平均提交延迟 在 raft 日志中提交条目的平均时间(以毫秒为单位)。 kafka.server:type=raft-metrics,name=commit-latency-avg
最大提交延迟 在筏日志中提交条目的最长时间(以毫秒为单位)。 kafka.server:type=raft-metrics,name=commit-latency-max
平均选举延迟 选举新领导者所花费的平均时间(以毫秒为单位)。 kafka.server:type=raft-metrics,name=election-latency-avg
最大选举延迟 选举新领导者所花费的最长时间(以毫秒为单位)。 kafka.server:type=raft-metrics,name=election-latency-max
获取记录率 从筏仲裁的领导者获取的平均记录数。 kafka.server:type=raft-metrics,name=fetch-records-rate
追加记录率 筏仲裁的领导者每秒附加的平均记录数。 kafka.server:type=raft-metrics,name=append-records-rate
平均轮询空闲比率 客户端的 poll() 空闲的平均时间分数,而不是等待用户代码处理记录。 kafka.server:type=raft-metrics,name=poll-idle-ratio-avg
KRaft 控制器监控指标
衡量指标/属性名称 描述 MBean 名称
活动控制器计数 此节点上的活动控制器数。有效值为“0”或“1”。 kafka.controller:type=KafkaController,name=ActiveControllerCount
事件队列时间毫秒 请求在控制器事件队列中等待所花费的时间(以毫秒为单位)的直方图。 kafka.controller:type=ControllerEventManager,name=EventQueueTimeMs
事件队列处理时间 MS 在控制器事件队列中处理请求所花费的时间(以毫秒为单位)的直方图。 kafka.controller:type=ControllerEventManager,name=EventQueueProcessingTimeMs
受围栏的代理计数 此控制器观察到的受围栏代理的数量。 kafka.controller:type=KafkaController,name=FencedBrokerCount
活动代理计数 此控制器观察到的活动代理数量。 kafka.controller:type=KafkaController,name=ActiveBrokerCount
全局主题计数 此控制器观察到的全局主题的数量。 kafka.controller:type=KafkaController,name=GlobalTopicCount
全局分区计数 此控制器观察到的全局分区数。 kafka.controller:type=KafkaController,name=GlobalPartitionCount
脱机分区计数 此控制器观察到的脱机主题分区(非内部)数。 kafka.controller:type=KafkaController,name=OfflinePartitionCount
首选副本不平衡计数 领导者不是首选领导者的主题分区计数。 kafka.controller:type=KafkaController,name=PreferredReplicaImbalanceCount
元数据错误计数 此控制器节点在元数据日志处理期间遇到错误的次数。 kafka.controller:type=KafkaController,name=MetadataErrorCount
Last Applied Record Offset The offset of the last record from the cluster metadata partition that was applied by the Controller. kafka.controller:type=KafkaController,name=LastAppliedRecordOffset
Last Committed Record Offset The offset of the last record committed to this Controller. kafka.controller:type=KafkaController,name=LastCommittedRecordOffset
Last Applied Record Timestamp The timestamp of the last record from the cluster metadata partition that was applied by the Controller. kafka.controller:type=KafkaController,name=LastAppliedRecordTimestamp
Last Applied Record Lag Ms The difference between now and the timestamp of the last record from the cluster metadata partition that was applied by the controller. For active Controllers the value of this lag is always zero. kafka.controller:type=KafkaController,name=LastAppliedRecordLagMs
KRaft Broker Monitoring Metrics
Metric/Attribute name Description Mbean name
Last Applied Record Offset The offset of the last record from the cluster metadata partition that was applied by the broker kafka.server:type=broker-metadata-metrics,name=last-applied-record-offset
Last Applied Record Timestamp The timestamp of the last record from the cluster metadata partition that was applied by the broker. kafka.server:type=broker-metadata-metrics,name=last-applied-record-timestamp
Last Applied Record Lag Ms The difference between now and the timestamp of the last record from the cluster metadata partition that was applied by the broker kafka.server:type=broker-metadata-metrics,name=last-applied-record-lag-ms
Metadata Load Error Count The number of errors encountered by the BrokerMetadataListener while loading the metadata log and generating a new MetadataDelta based on it. kafka.server:type=broker-metadata-metrics,name=metadata-load-error-count
Metadata Apply Error Count The number of errors encountered by the BrokerMetadataPublisher while applying a new MetadataImage based on the latest MetadataDelta. kafka.server:type=broker-metadata-metrics,name=metadata-apply-error-count

生产者/使用者/连接/流的常见监控指标

以下指标在生产者/使用者/连接器/流实例上可用。有关具体指标,请参阅后面的各小节。
衡量指标/属性名称 描述 MBean 名称
connection-close-rate 窗口中每秒关闭的连接数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-close-total 窗口中关闭的连接总数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-creation-rate 窗口中每秒建立的新连接数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-creation-total 窗口中建立的新连接总数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
network-io-rate 每秒对所有连接执行的平均网络操作(读取或写入)数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
network-io-total 所有连接上的网络操作(读取或写入)总数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
outgoing-byte-rate 每秒发送到所有服务器的平均传出字节数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
outgoing-byte-total 发送到所有服务器的传出字节总数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-rate 每秒发送的平均请求数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-total 发送的请求总数。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-size-avg 窗口中所有请求的平均大小。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-size-max 窗口中发送的任何请求的最大大小。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
incoming-byte-rate 从所有套接字读取的字节数/秒。 kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
incoming-byte-total Total bytes read off all sockets. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
response-rate Responses received per second. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
response-total Total responses received. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
select-rate Number of times the I/O layer checked for new I/O to perform per second. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
select-total Total number of times the I/O layer checked for new I/O to perform. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-wait-time-ns-total The total time the I/O thread spent waiting in nanoseconds. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-waittime-total *Deprecated* The total time the I/O thread spent waiting in nanoseconds. Replacement is .io-wait-time-ns-total kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-wait-ratio The fraction of time the I/O thread spent waiting. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-time-ns-avg The average length of time for I/O per select call in nanoseconds. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-time-ns-total The total time the I/O thread spent doing I/O in nanoseconds. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
iotime-total *Deprecated* The total time the I/O thread spent doing I/O in nanoseconds. Replacement is .io-time-ns-total kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-ratio The fraction of time the I/O thread spent doing I/O. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-count The current number of active connections. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-authentication-rate Connections per second that were successfully authenticated using SASL or SSL. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-authentication-total Total connections that were successfully authenticated using SASL or SSL. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-authentication-rate Connections per second that failed authentication. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-authentication-total Total connections that failed authentication. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-reauthentication-rate Connections per second that were successfully re-authenticated using SASL. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-reauthentication-total Total connections that were successfully re-authenticated using SASL. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
reauthentication-latency-max The maximum latency in ms observed due to re-authentication. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
reauthentication-latency-avg The average latency in ms observed due to re-authentication. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-reauthentication-rate Connections per second that failed re-authentication. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-reauthentication-total Total connections that failed re-authentication. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-authentication-no-reauth-total Total connections that were successfully authenticated by older, pre-2.2.0 SASL clients that do not support re-authentication. May only be non-zero. kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)

生产者/使用者/连接/流的常见每个代理指标

以下指标在生产者/使用者/连接器/流实例上可用。有关具体指标,请参阅后面的各小节。
衡量指标/属性名称 描述 MBean 名称
outgoing-byte-rate 每秒为某节点发送的平均传出字节数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
outgoing-byte-total 为某节点发送的传出字节总数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-rate 每秒为某节点发送的平均请求数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-total 为某节点发送的请求总数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-size-avg 窗口中为某节点发送的所有请求的平均大小。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-size-max 窗口中为某节点发送的任何请求的最大大小。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
incoming-byte-rate 从某节点每秒接收的平均字节数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
incoming-byte-total 从某节点接收的总字节数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-latency-avg 某节点的平均请求延迟(毫秒)。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-latency-max 某节点的最大请求延迟(毫秒)。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
response-rate 从某节点每秒收到的响应数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
response-total 从某节点收到的响应总数。 kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)

生产者监控

以下指标在生产者实例上可用。
衡量指标/属性名称 描述 MBean 名称
等待线程 等待缓冲区内存将其记录排队时阻塞的用户线程数。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
缓冲区总字节数 客户端可以使用的最大缓冲区内存量(无论当前是否使用)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
缓冲区可用字节数 未使用的缓冲区内存总量(未分配或在可用列表中)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
缓冲池等待时间 追加程序等待空间分配的时间分数。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
bufferpool-wait-time-total *已弃用* 追加程序等待空间分配的总时间(以纳秒为单位)。替代指标为 bufferpool-wait-time-ns-total。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
缓冲池等待时间 ns-total 追加程序等待空间分配的总时间(以纳秒为单位)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
刷新时间-ns-总计 生产者在 Producer.flush 中花费的总时间(以纳秒为单位)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-init-time-ns-total 生产者初始化事务所花费的总时间(以纳秒为单位)(对于 EOS)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-begin-time-ns-total 生产者在beginTransaction中花费的总时间(以纳秒为单位)(对于EOS)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-send-offsets-time-ns-total 生产者向交易发送偏移量所花费的总时间(以纳秒为单位)(对于 EOS)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-commit-time-ns-total 生产者提交事务所花费的总时间(以纳秒为单位)(对于 EOS)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-abort-time-ns-total 生产者中止事务所花费的总时间(以纳秒为单位)(对于 EOS)。 kafka.producer:type=producer-metrics,client-id=([-.\w]+)
生产者 Sender 指标
kafka.producer:type=producer-metrics,client-id=“{client-id}”
属性名称 描述
批量大小平均每个分区每个请求发送的平均字节数。
最大批量大小每个分区每个请求发送的最大字节数。
批量拆分率每秒平均批量拆分数
批量拆分总计批量拆分的总数
平均压缩率记录批次的平均压缩率,定义为压缩批大小与未压缩大小的平均比率。
元数据时代正在使用的当前生产者元数据的期限(以秒为单位)。
产生-节流-时间-平均请求被代理限制的平均时间(毫秒)
产生-油门-时间-最大值代理限制请求的最长时间(毫秒)
记录错误率导致错误的每秒平均记录发送数
记录错误总计导致错误的记录发送总数
记录队列时间平均在发送缓冲区中花费的平均时间(以毫秒为单位)记录批处理。
记录队列时间最大值在发送缓冲区中花费的最长时间(以毫秒为单位)记录批处理。
记录重试率每秒重试记录发送的平均次数
记录重试总数重试记录发送的总数
创纪录的发送速率每秒发送的平均记录数。
记录发送总数发送的记录总数。
记录大小平均平均记录大小
记录大小最大值最大记录大小
每个请求的平均记录数每个请求的平均记录数。
请求延迟平均平均请求延迟(毫秒)
请求延迟最大值最大请求延迟(毫秒)
正在进行的请求等待响应的当前正在进行的请求数。
kafka.producer:type=producer-topic-metrics,client-id=“{client-id}”,topic=“{topic}”
属性名称 描述
字节率主题每秒发送的平均字节数。
字节总数为主题发送的总字节数。
压缩率主题的记录批次的平均压缩率,定义为压缩批大小与未压缩大小的平均比率。
记录错误率导致主题出错的平均每秒记录发送数
记录错误总计导致主题错误的记录发送总数
记录重试率为主题发送的平均每秒重试记录数
记录重试总数主题的重试记录发送总数
创纪录的发送速率主题每秒发送的平均记录数。
记录发送总数为主题发送的记录总数。

消费者监控

以下指标在使用者实例上可用。
衡量指标/属性名称 描述 MBean 名称
轮询间隔时间平均 调用 poll() 之间的平均延迟。 kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
轮询间隔时间最大值 调用 poll() 之间的最大延迟。 kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
最后轮询秒前 自上次 poll() 调用以来的秒数。 kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
轮询空闲比率平均 使用者的 poll() 空闲的平均时间分数,而不是等待用户代码处理记录。 kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
提交时间-ns-总计 使用者花费的总时间(以纳秒为单位)。 kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
提交同步时间 ns-总计 使用者提交偏移量所花费的总时间(以纳秒为单位)(对于 AOS)。 kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
消费者组指标
衡量指标/属性名称 描述 MBean 名称
提交延迟平均 提交请求所花费的平均时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
提交延迟最大值 提交请求所用的最长时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
提交率 每秒提交调用数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
提交总计 提交调用总数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
分配的分区 当前分配给此使用者的分区数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
心跳-响应-最大时间-最大值 接收检测信号请求响应所用的最长时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
心跳率 每秒平均检测信号数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
心跳总数 检测信号总数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
加入时间平均 组重新加入所需的平均时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
最大加入时间 群组重新加入所需的最长时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
加入率 每秒的组联接数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
联接合计 加入组的总数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
同步时间平均 群组同步所用的平均时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
最大同步时间 群组同步所用的最长时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
同步速率 每秒组同步数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
同步总计 群组同步总数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
重新平衡延迟平均 组重新平衡所需的平均时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
重新平衡延迟最大值 组重新平衡所需的最长时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
重新平衡延迟总计 到目前为止,组重新平衡所花费的总时间 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
再平衡-总计 参与的组再平衡总数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
每小时再平衡率 每小时参与的组再平衡次数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
失败-重新平衡-总计 失败的组重新平衡总数 kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
failed-rebalance-rate-per-hour The number of failed group rebalance event per hour kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
last-rebalance-seconds-ago The number of seconds since the last rebalance event kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
last-heartbeat-seconds-ago The number of seconds since the last controller heartbeat kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-revoked-latency-avg The average time taken by the on-partitions-revoked rebalance listener callback kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-revoked-latency-max The max time taken by the on-partitions-revoked rebalance listener callback kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-assigned-latency-avg The average time taken by the on-partitions-assigned rebalance listener callback kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-assigned-latency-max The max time taken by the on-partitions-assigned rebalance listener callback kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-lost-latency-avg The average time taken by the on-partitions-lost rebalance listener callback kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-lost-latency-max The max time taken by the on-partitions-lost rebalance listener callback kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
Consumer Fetch Metrics
kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
Attribute name Description
bytes-consumed-rateThe average number of bytes consumed per second
bytes-consumed-totalThe total number of bytes consumed
fetch-latency-avgThe average time taken for a fetch request.
fetch-latency-maxThe max time taken for any fetch request.
fetch-rateThe number of fetch requests per second.
fetch-size-avgThe average number of bytes fetched per request
fetch-size-maxThe maximum number of bytes fetched per request
fetch-throttle-time-avgThe average throttle time in ms
fetch-throttle-time-maxThe maximum throttle time in ms
fetch-totalThe total number of fetch requests.
records-consumed-rateThe average number of records consumed per second
records-consumed-totalThe total number of records consumed
records-lag-maxThe maximum lag in terms of number of records for any partition in this window. NOTE: This is based on current offset and not committed offset
records-lead-minThe minimum lead in terms of number of records for any partition in this window
records-per-request-avgThe average number of records in each request
kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
Attribute name Description
bytes-consumed-rateThe average number of bytes consumed per second for a topic
bytes-consumed-totalThe total number of bytes consumed for a topic
fetch-size-avgThe average number of bytes fetched per request for a topic
fetch-size-maxThe maximum number of bytes fetched per request for a topic
records-consumed-rateThe average number of records consumed per second for a topic
records-consumed-totalThe total number of records consumed for a topic
records-per-request-avgThe average number of records in each request for a topic
kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
Attribute name Description
preferred-read-replicaThe current read replica for the partition, or -1 if reading from leader
records-lagThe latest lag of the partition
records-lag-avgThe average lag of the partition
records-lag-maxThe max lag of the partition
records-leadThe latest lead of the partition
records-lead-avgThe average lead of the partition
records-lead-minThe minimum lead of the partition

连接监控

Connect 工作进程包含所有生产者和使用者指标,以及特定于 Connect 的指标。工作进程本身具有许多指标,而每个连接器和任务还有额外的指标。
kafka.connect:type=connect-worker-metrics
属性名称 描述
连接器数量在此辅助角色中运行的连接器数。
连接器-启动尝试-总计此辅助角色尝试的连接器启动总数。
连接器启动失败百分比此辅助角色的连接器启动失败的平均百分比。
连接器-启动-失败-总计失败的连接器启动总数。
连接器启动成功百分比此辅助角色的连接器启动成功的平均百分比。
连接器-启动-成功-总计成功的连接器启动总数。
任务计数在此辅助角色中运行的任务数。
任务启动尝试总数此辅助角色尝试的任务启动总数。
任务启动失败百分比此工作人员的任务开始失败的平均百分比。
任务启动失败总计失败的任务启动总数。
任务启动成功百分比此工作人员任务开始成功的平均百分比。
任务启动成功总计成功启动的任务总数。
kafka.connect:type=connect-worker-metrics,connector=“{connector}”
属性名称 描述
连接器销毁任务计数辅助角色上连接器的已销毁任务数。
连接器失败任务计数辅助角色上连接器的失败任务数。
连接器暂停任务计数辅助角色上连接器的暂停任务数。
连接器重新启动任务计数辅助角色上连接器的重新启动任务数。
连接器运行任务计数辅助角色上连接器的运行任务数。
连接器总任务计数辅助角色上连接器的任务数。
连接器-未分配的任务计数辅助角色上连接器的未分配任务数。
kafka.connect:type=connect-worker-rebalance-metrics
属性名称 描述
已完成-重新平衡-总计此工作人员完成的再平衡总数。
连接协议此群集使用的连接协议
时代此工作人员的纪元或代号。
领导者姓名组长的名称。
重新平衡-平均时间-毫秒此工作线程重新平衡所花费的平均时间(以毫秒为单位)。
重新平衡-最大时间-毫秒此工作线程重新平衡所花费的最长时间(以毫秒为单位)。
平衡此工作人员当前是否正在重新平衡。
自上次重新平衡以来的时间-MS自此工作线程完成最近一次重新平衡以来的时间(以毫秒为单位)。
kafka.connect:type=connector-metrics,connector=“{connector}”
属性名称 描述
连接器类连接器类的名称。
连接器类型连接器的类型。“源”或“汇”之一。
连接器版本连接器报告的连接器类的版本。
地位连接器的状态。“未分配”、“正在运行”、“已暂停”、“失败”或“正在重新启动”之一。
kafka.connect:type=connector-task-metrics,connector=“{connector}”,task=“{task}”
属性名称 描述
批量大小平均到目前为止,任务已处理的批次中的平均记录数。
最大批量大小到目前为止,任务已处理的最大批次中的记录数。
offset-commit-avg-time-ms此任务提交偏移量所用的平均时间(以毫秒为单位)。
偏移-提交-失败-百分比此任务的偏移提交尝试失败的平均百分比。
offset-commit-max-time-ms此任务提交偏移所用的最长时间(以毫秒为单位)。
偏移-提交-成功-百分比此任务的偏移提交尝试的平均百分比。
暂停比率此任务在暂停状态下花费的时间分数。
运行比率此任务在运行状态下花费的时间分数。
地位连接器任务的状态。“未分配”、“正在运行”、“已暂停”、“失败”或“正在重新启动”之一。
kafka.connect:type=sink-task-metrics,connector=“{connector}”,task=“{task}”
属性名称 描述
偏移提交完成率成功完成的每秒平均偏移提交完成数。
偏移-提交-完成-总计成功完成的偏移提交完成总数。
offset-commit-seq-no偏移提交的当前序列号。
偏移-提交-跳过率每秒收到太晚且跳过/忽略的偏移提交完成的平均次数。
偏移-提交-跳过-总计接收太晚且跳过/忽略的偏移提交完成总数。
分区计数分配给此任务的主题分区数,属于此辅助角色中的命名接收器连接器。
放置-批处理-平均时间-ms此任务放置一批接收器记录所花费的平均时间。
放置批处理最大时间毫秒此任务放置一批接收器记录所花费的最长时间。
接收器记录活动计数已从 Kafka 读取但尚未由接收器任务完全提交/刷新/确认的记录数。
接收器-记录-活动-计数-平均已从 Kafka 读取但尚未完全提交/刷新/由接收器任务确认的平均记录数。
接收器-记录-活动计数-最大值已从 Kafka 读取但接收器任务尚未完全提交/刷新/确认的最大记录数。
汇记录滞后最大值接收器任务在任何主题分区中落后于使用者位置的记录数方面的最大滞后。
接收记录读取速率从 Kafka 读取的此任务的平均每秒记录数属于此辅助角色中的命名接收器连接器。这是在应用转换之前。
接收器-记录-读取-总计自上次重新启动任务以来,此任务从 Kafka 读取的记录总数属于此辅助角色中的命名接收器连接器。
接收记录发送速率转换输出的平均每秒记录数,并发送/放置到此任务,属于此辅助角色中的命名接收器连接器。这是在应用转换之后,并排除转换筛选出的任何记录。
汇-记录-发送-总计自上次重新启动任务以来,从转换输出并发送/放置到此任务的记录总数,属于此辅助角色中的命名接收器连接器。
kafka.connect:type=source-task-metrics,connector=“{connector}”,task=“{task}”
属性名称 描述
轮询批处理平均时间毫秒此任务轮询一批源记录所花费的平均时间(以毫秒为单位)。
轮询批处理最大时间毫秒此任务轮询一批源记录所用的最长时间(以毫秒为单位)。
源-记录-活动-计数此任务已生成但尚未完全写入 Kafka 的记录数。
源-记录-活动-计数-平均此任务已生成但尚未完全写入 Kafka 的平均记录数。
源-记录-活动-计数-最大值此任务已生成但尚未完全写入 Kafka 的最大记录数。
源-记录-轮询率此任务每秒生成/轮询(转换前)的平均记录数,属于此辅助角色中的命名源连接器。
源-记录-轮询-总计此任务生成/轮询(转换前)的记录总数,属于此辅助角色中的指定源连接器。
源-记录-写入速率自上次重新启动任务以来,每秒写入此任务的记录的平均每秒记录数属于此工作线程中的指定源连接器。这是在应用转换之后,并排除由转换筛选出的任何记录。
源-记录-写入-总计自上次重新启动任务以来,为此任务写入 Kafka 的记录数,属于此工作线程中的命名源连接器。这是在应用转换之后,并排除由转换筛选出的任何记录。
事务大小平均到目前为止,任务已提交的事务中的平均记录数。
事务大小最大值到目前为止任务已提交的最大事务中的记录数。
事务大小最小值到目前为止任务提交的最小事务中的记录数。
kafka.connect:type=task-error-metrics,connector=“{connector}”,task=“{task}”
属性名称 描述
死信队列生成失败写入死信队列的失败次数。
死信队列生产请求尝试写入死信队列的次数。
上次错误时间戳此任务上次遇到错误时的纪元时间戳。
记录的总错误数记录的错误数。
总记录错误此任务中的记录处理错误数。
总记录失败数此任务中的记录处理失败次数。
跳过的总记录数由于错误而跳过的记录数。
总重试次数重试的操作数。

流监控

Kafka Streams 实例包含所有生产者和使用者指标,以及特定于流的其他指标。指标有三个记录级别:info、debug 和 trace。

请注意,这些指标具有 4 层层次结构。在顶层,每个启动的 Kafka Streams 客户端都有客户端级指标。每个客户端都有流线程,流线程有自己的指标。每个流线程都有任务,任务有自己的指标。每个任务有多个处理器节点,处理器节点有自己的指标。每个任务还有多个状态存储和记录缓存,它们也都有自己的指标。

使用以下配置选项指定您希望收集哪些指标:
metrics.recording.level="info"
客户端指标
以下所有指标的记录级别均为:info
衡量指标/属性名称 描述 MBean 名称
版本 Kafka Streams 客户端的版本。 kafka.streams:type=stream-metrics,client-id=([-.\w]+)
提交标识 Kafka Streams 客户端的版本控制提交 ID。 kafka.streams:type=stream-metrics,client-id=([-.\w]+)
应用程序标识 Kafka Streams 客户端的应用程序 ID。 kafka.streams:type=stream-metrics,client-id=([-.\w]+)
拓扑描述 在 Kafka Streams 客户端中执行的拓扑的描述。 kafka.streams:type=stream-metrics,client-id=([-.\w]+)
Kafka Streams 客户端的状态。 kafka.streams:type=stream-metrics,client-id=([-.\w]+)
失败的流线程 自 Kafka 流客户端启动以来失败的流线程数。 kafka.streams:type=stream-metrics,client-id=([-.\w]+)
线程指标
以下所有指标的记录级别均为:info
衡量指标/属性名称 描述 MBean 名称
提交延迟平均 此线程的所有正在运行的任务中提交的平均执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
提交延迟最大值 此线程的所有正在运行的任务中用于提交的最大执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
轮询延迟平均 使用者轮询的平均执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
轮询延迟最大值 使用者轮询的最大执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
进程延迟平均 用于处理的平均执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
进程延迟最大值 用于处理的最大执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
标点符号延迟平均 标点符号的平均执行时间(以毫秒为单位)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
标点符号延迟最大值 标点符号的最大执行时间(毫秒)。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
提交率 每秒的平均提交数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
提交总计 提交调用的总数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
投票率 每秒消费者轮询呼叫的平均次数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
投票总数 消费者投票呼叫的总数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
处理速率 每秒处理的平均记录数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
进程总计 已处理记录的总数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
标点率 每秒的平均标点符号调用数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
标点符号-总计 标点符号呼叫的总数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
任务创建率 每秒创建的平均任务数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
任务创建总计 创建的任务总数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
任务关闭率 每秒关闭的平均任务数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
任务已关闭总计 已关闭任务的总数。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
阻塞时间-ns-总计 线程在 kafka 上阻塞的总时间。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
线程启动时间 线程启动的时间。 kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
Task Metrics
All of the following metrics have a recording level of debug, except for the dropped-records-* metrics, which have a recording level of info:
Metric/Attribute Name Description MBean Name
进程延迟平均 用于处理的平均执行时间(以 ns 为单位)。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
进程延迟最大值 用于处理的最大执行时间(以 ns 为单位)。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
处理速率 此任务的所有源处理器节点每秒处理的平均记录数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
进程总计 此任务的所有源处理器节点上处理的记录总数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
提交延迟平均 提交的平均执行时间(以 ns 为单位)。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
提交延迟最大值 提交的最大执行时间(以 ns 为单位)。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
提交率 每秒的平均提交调用数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
提交总计 提交调用的总数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
记录迟到平均 观察到的平均记录延迟(流时间 - 记录时间戳)。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
记录延迟最大值 观察到的最大记录延迟(流时间 - 记录时间戳)。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
强制处理速率 每秒强制处理的平均次数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
强制处理总计 强制处理的总数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
下降记录率 此任务中丢弃的平均记录数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
丢弃的记录总数 此任务中删除的记录总数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
主动进程比率 在所有分配的活动任务中,流线程处理此任务所花费的时间分数。 kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
Processor Node Metrics
The following metrics are only available on certain types of nodes: the process-* metrics are only available for source processor nodes, the suppression-emit-* metrics are only available for suppression operation nodes, and the record-e2e-latency-* metrics are only available for source processor nodes and terminal nodes (nodes without successor nodes). All of the metrics have a recording level of debug, except for the record-e2e-latency-* metrics, which have a recording level of info:
Metric/Attribute Name Description MBean Name
消耗的总字节数 源处理器节点消耗的总字节数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
产生的字节总数 接收器处理器节点生成的字节总数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
处理速率 源处理器节点每秒处理的平均记录数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
进程总计 源处理器节点每秒处理的记录总数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
抑制发射速率 从抑制操作节点下游发出的记录的速率。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
抑制-发射-总计 从抑制操作节点下游发出的记录总数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
记录-e2e-延迟-平均 记录的平均端到端延迟,通过将记录时间戳与节点完全处理记录的系统时间进行比较来衡量。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
记录-e2e-延迟-最大值 记录的最大端到端延迟,通过将记录时间戳与节点完全处理记录的系统时间进行比较来衡量。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
记录 e2e-延迟分钟 记录的最小端到端延迟,通过将记录时间戳与节点完全处理记录的系统时间进行比较来衡量。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
记录-消耗-总计 源处理器节点使用的记录总数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
记录产生的总数 接收器处理器节点生成的记录总数。 kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
State Store Metrics
All of the following metrics have a recording level of debug, except for the record-e2e-latency-* metrics, which have a recording level of trace. Note that the store-scope value is specified in StoreSupplier#metricsScope() for user's customized state stores; for built-in state stores, currently we have:
  • in-memory-state
  • in-memory-lru-state
  • in-memory-window-state
  • in-memory-suppression (for suppression buffers)
  • rocksdb-state (for RocksDB backed key-value stores)
  • rocksdb-window-state (for RocksDB backed window stores)
  • rocksdb-session-state (for RocksDB backed session stores)
The metrics suppression-buffer-size-avg, suppression-buffer-size-max, suppression-buffer-count-avg, and suppression-buffer-count-max are only available for suppression buffers. All other metrics are not available for suppression buffers.
Metric/Attribute Name Description MBean Name
放置延迟平均 平均放置执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
放置延迟最大值 最长放置执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
放置如果不存在延迟平均 平均不存在的放置执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
放置如果不存在延迟最大值 最大放置(如果不存在)执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
获取延迟平均 平均获取执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
获取最大延迟 最长获取执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
删除延迟平均 平均删除执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
删除延迟最大值 最长删除执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
放置所有延迟平均 平均全部放置执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
放置所有延迟最大值 最大全部放置执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
全延迟平均值 所有操作执行的平均时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
所有延迟最大值 所有操作执行的最长时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
平均范围延迟 平均范围执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
最大范围延迟 最大范围执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
平均刷新延迟 平均刷新执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
最大刷新延迟 最大刷新执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
恢复延迟平均 平均还原执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
恢复延迟最大值 最长还原执行时间(以 ns 为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-rate The average put rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-if-absent-rate The average put-if-absent rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
get-rate The average get rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
delete-rate The average delete rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-all-rate The average put-all rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
all-rate The average all operation rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
range-rate The average range rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
flush-rate The average flush rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
restore-rate The average restore rate for this store. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
suppression-buffer-size-avg The average total size, in bytes, of the buffered data over the sampling window. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
suppression-buffer-size-max The maximum total size, in bytes, of the buffered data over the sampling window. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
suppression-buffer-count-avg The average number of records buffered over the sampling window. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
suppression-buffer-count-max The maximum number of records buffered over the sampling window. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
record-e2e-latency-avg The average end-to-end latency of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
record-e2e-latency-max The maximum end-to-end latency of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
record-e2e-latency-min The minimum end-to-end latency of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node. kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
RocksDB Metrics
RocksDB metrics are grouped into statistics-based metrics and properties-based metrics. The former are recorded from statistics that a RocksDB state store collects, whereas the latter are recorded from properties that RocksDB exposes. Statistics collected by RocksDB provide cumulative measurements over time, e.g., bytes written to the state store. Properties exposed by RocksDB provide current measurements, e.g., the amount of memory currently used. Note that the store-scope for built-in RocksDB state stores is currently the following:
  • rocksdb-state (for RocksDB backed key-value stores)
  • rocksdb-window-state (for RocksDB backed window stores)
  • rocksdb-session-state (for RocksDB backed session stores)
RocksDB Statistics-based Metrics: All of the following statistics-based metrics have a recording level of debug because collecting statistics in RocksDB may have an impact on performance. Statistics-based metrics are collected every minute from the RocksDB state stores. If a state store consists of multiple RocksDB instances, as is the case for WindowStores and SessionStores, each metric reports an aggregation over the RocksDB instances of the state store.
Metric/Attribute Name Description MBean Name
字节写入速率 每秒写入 RocksDB 状态存储的平均字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
写入的字节总数 写入 RocksDB 状态存储的总字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
字节读取速率 每秒从 RocksDB 状态存储读取的平均字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
字节读取总数 从 RocksDB 状态存储读取的总字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表字节刷新速率 每秒从内存表刷新到磁盘的平均字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表-字节-刷新-总计 从内存表刷新到磁盘的总字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表命中率 可忆性命中数相对于所有查找与可记忆量之间的比率。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表刷新时间平均 内存表刷新到光盘的平均持续时间(以毫秒为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表-刷新时间-最小值 内存表刷新到光盘的最短持续时间(以毫秒为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表刷新时间最大值 内存表刷新到光盘的最长持续时间(以毫秒为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
块缓存数据命中率 数据块的块缓存命中数相对于数据块的所有查找与块缓存的比率。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
块缓存索引命中率 索引块的块缓存命中数相对于索引块的所有查找与块缓存的比率。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
块缓存过滤器命中率 筛选器块的块缓存命中数相对于筛选器块的所有查找与块缓存的比率。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
写入停顿持续时间平均 写入的平均持续时间以毫秒为单位停滞。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
写入停顿持续时间总计 写入的总持续时间以毫秒为单位停止。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
字节读取压缩速率 压缩期间每秒读取的平均字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
字节写入压缩率 压缩期间每秒写入的平均字节数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
压缩时间平均 圆盘压缩的平均持续时间(毫秒)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
压实时间最小 圆盘压缩的最短持续时间(以毫秒为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
压实时间最长 光盘压缩的最长持续时间(毫秒)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
编号打开文件 当前打开的文件数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
数字-文件-错误-总计 发生的文件错误总数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
RocksDB Properties-based Metrics: All of the following properties-based metrics have a recording level of info and are recorded when the metrics are accessed. If a state store consists of multiple RocksDB instances, as is the case for WindowStores and SessionStores, each metric reports the sum over all the RocksDB instances of the state store, except for the block cache metrics block-cache-*. The block cache metrics report the recorded value of only one instance if each instance uses its own block cache, and they report the recorded value of only one instance if a single block cache is shared among all instances.
Metric/Attribute Name Description MBean Name
num-immutable-table 尚未刷新的不可变内存表的数量。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
当前大小-活动-内存表 活动内存表的近似大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
所有内存表的大小 活动和未刷新的不可变内存表的近似大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
大小全内存表 活动、未刷新的不可变和固定的不可变内存表的大致大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-entryries-active-mem-table 活动内存表中的条目数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-entryries-imm-mem-tables 未刷新的不可变内存表中的条目数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-deletes-active-mem-table 活动内存表中的删除条目数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-deletes-imm-mem-tables 未刷新的不可变内存表中的删除条目数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
内存表刷新挂起 如果内存表刷新挂起,则此指标报告 1,否则报告 0。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
运行刷新数 当前正在运行的刷新次数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
压缩挂起 如果至少有一个压缩挂起,则此指标报告 1,否则报告 0。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
运行压缩数 当前正在运行的压缩数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
估计挂起压缩字节数 压缩需要在磁盘上重写以使所有级别降至以下的估计总字节数 目标大小(仅对水平压实有效)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
总文件大小 所有 SST 文件的总大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
实时 SST 文件大小 属于最新 LSM 树的所有 SST 文件的总大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
实时版本数 LSM 树的实时版本数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
块缓存容量 块缓存的容量(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
块缓存使用情况 驻留在块缓存中的条目的内存大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
块缓存固定用法 固定在块缓存中的条目的内存大小(以字节为单位)。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
估计键数 活动和未刷新的不可变内存表和存储中的估计密钥数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
估计表读取器内存 用于读取 SST 表的估计内存(以字节为单位),不包括块缓存中使用的内存。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
后台错误 后台错误的总数。 kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
Record Cache Metrics
All of the following metrics have a recording level of debug:
Metric/Attribute Name Description MBean Name
hit-ratio-avg The average cache hit ratio, defined as the ratio of cache read hits over the total cache read requests. kafka.streams:type=stream-record-cache-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),record-cache-id=([-.\w]+)
hit-ratio-min The minimum cache hit ratio. kafka.streams:type=stream-record-cache-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),record-cache-id=([-.\w]+)
hit-ratio-max The maximum cache hit ratio. kafka.streams:type=stream-record-cache-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),record-cache-id=([-.\w]+)

Others

We recommend monitoring GC time and other stats and various server stats such as CPU utilization, I/O service time, etc. On the client side, we recommend monitoring the message/byte rate (global and per topic) and the request rate/size/time, and on the consumer side, the max lag in messages among all partitions and the minimum fetch request rate. For a consumer to keep up, the max lag needs to be less than a threshold and the minimum fetch rate needs to be larger than 0.
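As a quick sketch of the consumer-side check, per-partition lag for a consumer group can be inspected with the consumer groups tool (the group name and bootstrap address below are placeholders):
  > bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
The LAG column in the output is the difference between the log end offset and the committed offset for each partition.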

6.9 ZooKeeper

Stable Version

The current stable branch is 3.5. Kafka is regularly updated to include the latest release in the 3.5 series.

ZooKeeper Deprecation

With the release of Apache Kafka 3.5, ZooKeeper is now marked as deprecated. ZooKeeper is planned to be removed in the next major release of Apache Kafka (version 4.0), which is scheduled to happen no earlier than April 2024. During the deprecation phase, ZooKeeper is still supported for metadata management of Kafka clusters, but it is not recommended for new deployments. There is a small subset of features that remain to be implemented in KRaft; see Current Missing Features for more information.

Migration

Migration of an existing ZooKeeper-based Kafka cluster to KRaft is currently in Preview, and we expect it to be ready for production use in version 3.6. Users are recommended to begin planning for the migration to KRaft and to begin testing in order to provide feedback. Refer to ZooKeeper to KRaft Migration for details on how to perform a live migration from ZooKeeper to KRaft and the current limitations.

3.x and ZooKeeper Support

The final 3.x minor release that supports ZooKeeper mode will receive critical bug fixes and security fixes for 12 months after its release.

ZooKeeper and KRaft Timeline

For details and updates on the tentative timelines for ZooKeeper removal and planned KRaft feature releases, refer to KIP-833.

Operationalizing ZooKeeper

Operationally, we do the following for a healthy ZooKeeper installation:
  • Redundancy in the physical/hardware/network layout: try not to put them all in the same rack, use decent (but don't go nuts) hardware, and try to keep redundant power and network paths, etc. A typical ZooKeeper ensemble has 5 or 7 servers, which tolerates 2 and 3 servers down, respectively. If you have a small deployment, then using 3 servers is acceptable, but keep in mind that you will only be able to tolerate 1 server down in this case.
  • I/O segregation: if you do a lot of write-type traffic you will almost definitely want the transaction logs on a dedicated disk group. Writes to the transaction log are synchronous (but batched for performance), and consequently concurrent writes can significantly affect performance. ZooKeeper snapshots can be one such source of concurrent writes, and ideally they should be written on a disk group separate from the transaction log. Snapshots are written to disk asynchronously, so it is typically fine to share them with the operating system and message log files. You can configure a server to use a separate disk group with the dataLogDir parameter (a sample configuration is sketched at the end of this section).
  • Application segregation: unless you really understand the application patterns of the other applications that you want to install on the same machine, it can be a good idea to run ZooKeeper in isolation (though this can be a balancing act with the capabilities of the hardware).
  • Use care with virtualization: it can work, depending on your cluster layout, read/write patterns and SLAs, but the tiny overheads introduced by the virtualization layer can add up and throw off ZooKeeper, as it can be very time sensitive.
  • ZooKeeper configuration: it's Java, so make sure you give it "enough" heap space (we usually run them with 3-5G, but that is mostly driven by the size of our data sets). Unfortunately we don't have a good formula for it, but keep in mind that allowing for more ZooKeeper state means that snapshots can become large, and large snapshots affect recovery time. In fact, if the snapshots become too large (a few gigabytes), then you may need to increase the initLimit parameter to give the servers enough time to recover and join the ensemble.
  • Monitoring: both JMX and the 4-letter-word (4lw) commands are very useful; they do overlap in some cases (and in those cases we prefer the 4-letter commands, as they seem more predictable, or at the very least they work better with the LI monitoring infrastructure).
  • Don't overbuild the cluster: large clusters, especially in a write-heavy usage pattern, mean a lot of intra-cluster communication (quorums on the writes and subsequent cluster member updates), but don't underbuild it either (and risk swamping the cluster). Having more servers adds to your read capacity.
Overall, we try to keep the ZooKeeper system as small as will handle the load (plus standard growth capacity planning) and as simple as possible. We try not to do anything fancy with the configuration or application layout compared to the official release, and we keep it as self-contained as possible. For these reasons, we tend to skip the OS packaged versions, since they have a tendency to try to put things in the OS standard hierarchy, which can be "messy", for want of a better way to word it.
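As a minimal sketch of the I/O segregation point above, a ZooKeeper configuration might separate snapshots and the transaction log like this (the paths and values are placeholders, not recommendations):
# zookeeper.properties (illustrative values only)
dataDir=/disks/a/zookeeper          # snapshots; written asynchronously, can share a disk
dataLogDir=/disks/b/zookeeper-txlog # transaction log on its own disk group
tickTime=2000
initLimit=10                        # consider increasing if snapshots grow to gigabytes
syncLimit=5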

6.10 KRaft

Configuration

Process Roles

In KRaft mode, each Kafka server can be configured as a controller, a broker, or both using the process.roles property. This property can have the following values:

  • If process.roles is set to broker, the server acts as a broker.
  • If process.roles is set to controller, the server acts as a controller.
  • If process.roles is set to broker,controller, the server acts as both a broker and a controller.
  • If process.roles is not set at all, the server is assumed to be in ZooKeeper mode.

Kafka servers that act as both brokers and controllers are referred to as "combined" servers. Combined servers are simpler to operate for small use cases like a development environment. The key disadvantage is that the controller will be less isolated from the rest of the system. For example, it is not possible to roll or scale the controllers separately from the brokers in combined mode. Combined mode is not recommended in critical deployment environments.

Controllers

In KRaft mode, specific Kafka servers are selected to be controllers (unlike the ZooKeeper-based mode, where any server can become the Controller). The servers selected to be controllers will participate in the metadata quorum. Each controller is either the active controller or a hot standby for the current active controller.

A Kafka admin will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. A majority of the controllers must be alive in order to maintain availability. With 3 controllers, the cluster can tolerate 1 controller failure; with 5 controllers, the cluster can tolerate 2 controller failures.

All of the servers in a Kafka cluster discover the quorum voters using the controller.quorum.voters property. This identifies the quorum controller servers that should be used. All the controllers must be enumerated. Each controller is identified with its id, host and port information. For example:

controller.quorum.voters=id1@host1:port1,id2@host2:port2,id3@host3:port3

If a Kafka cluster has 3 controllers named controller1, controller2 and controller3, then controller1 might have the following configuration:


process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093

Every broker and controller must set the controller.quorum.voters property. The node ID supplied in the controller.quorum.voters property must match the corresponding ID on the controller servers. For example, on controller1, node.id must be set to 1, and so forth. Each node ID must be unique across all the servers in a particular cluster. No two servers can have the same node ID regardless of their process.roles values.

Storage Tool

The kafka-storage.sh random-uuid command can be used to generate a cluster ID for your new cluster. This cluster ID must be used when formatting each server in the cluster with the kafka-storage.sh format command.

This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically and generate a new cluster ID automatically. One reason for the change is that auto-formatting can sometimes obscure an error condition. This is particularly important for the metadata log maintained by the controller and broker servers. If a majority of the controllers were able to start with an empty log directory, a leader might be elected with missing committed data.
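As a brief sketch (the properties file path is a placeholder; substitute your own configuration), generating a cluster ID and formatting a server's storage might look like this:
  > bin/kafka-storage.sh random-uuid
  > bin/kafka-storage.sh format -t <cluster-id-from-previous-command> -c config/kraft/server.properties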

Debugging

Metadata Quorum Tool

The kafka-metadata-quorum tool can be used to describe the runtime state of the cluster metadata partition. For example, the following command displays a summary of the metadata quorum:

  > bin/kafka-metadata-quorum.sh --bootstrap-server  broker_host:port describe --status
ClusterId:              fMCL8kv1SWm87L_Md-I2hg
LeaderId:               3002
LeaderEpoch:            2
HighWatermark:          10
MaxFollowerLag:         0
MaxFollowerLagTimeMs:   -1
CurrentVoters:          [3000,3001,3002]
CurrentObservers:       [0,1,2]
Dump Log Tool

The kafka-dump-log tool can be used to debug the log segments and snapshots for the cluster metadata directory. The tool will scan the provided files and decode the metadata records. For example, this command decodes and prints the records in the first log segment:

  > bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000000.log

This command decodes and prints the records in a cluster metadata snapshot:

  > bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000100-0000000001.checkpoint
Metadata Shell

The kafka-metadata-shell tool can be used to interactively inspect the state of the cluster metadata partition:


  > bin/kafka-metadata-shell.sh  --snapshot metadata_log_dir/__cluster_metadata-0/00000000000000000000.log
>> ls /
brokers  local  metadataQuorum  topicIds  topics
>> ls /topics
foo
>> cat /topics/foo/0/data
{
  "partitionId" : 0,
  "topicId" : "5zoAlv-xEh9xRANKXt1Lbg",
  "replicas" : [ 1 ],
  "isr" : [ 1 ],
  "removingReplicas" : null,
  "addingReplicas" : null,
  "leader" : 1,
  "leaderEpoch" : 0,
  "partitionEpoch" : 0
}
>> exit
  

Deployment Considerations

  • Each Kafka server's process.roles should be set to either broker or controller but not both. Combined mode can be used in development environments, but it should be avoided in critical deployment environments.
  • For redundancy, a Kafka cluster should use 3 controllers. More than 3 controllers is not recommended in critical environments. In the rare case of a partial network failure it is possible for the cluster metadata quorum to become unavailable. This limitation will be addressed in a future release of Kafka.
  • The Kafka controllers store all of the metadata for the cluster in memory and on disk. We believe that for a typical Kafka cluster 5GB of main memory and 5GB of disk space on the metadata log directory is sufficient.

Missing Features

The following features are not fully implemented in KRaft mode:

  • Supporting JBOD configurations with multiple storage directories
  • Modifying certain dynamic configurations on the standalone KRaft controller
  • Delegation tokens

ZooKeeper to KRaft Migration

Migration of a ZooKeeper-based cluster to KRaft is considered an Early Access feature and is not recommended for production clusters.

The following features are not yet supported for ZK to KRaft migrations:

Please report issues with ZooKeeper to KRaft migration using the project JIRA and the "kraft" component.

Terminology

We use the term "migration" here to refer to the process of changing a Kafka cluster's metadata system from ZooKeeper to KRaft and migrating the existing metadata. An "upgrade" refers to installing a newer version of Kafka. It is not recommended to upgrade the software at the same time as performing a metadata migration.

We also use the term "ZK mode" to refer to Kafka brokers which are using ZooKeeper as their metadata system. "KRaft mode" refers to Kafka brokers which are using a KRaft controller quorum as their metadata system.

Preparing for Migration

Before beginning the migration, the Kafka brokers must be upgraded to software version 3.5.0 and have the "inter.broker.protocol.version" configuration set to "3.5". See Upgrading to 3.5.0 for upgrade instructions.

It is recommended to enable TRACE level logging for the migration components while the migration is active. This can be done by adding the following log4j configuration to each KRaft controller's "log4j.properties" file.

log4j.logger.org.apache.kafka.metadata.migration=TRACE

Some additional DEBUG logging on the KRaft controllers and the ZK brokers is generally useful during the migration.
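For example, one might raise the KRaft controller's logger to DEBUG in the same log4j.properties file (the logger name below targets the org.apache.kafka.controller package and is given here only as an illustration; adjust to the components you want to trace):
log4j.logger.org.apache.kafka.controller=DEBUG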

Provisioning the KRaft Controller Quorum

Two things are needed before the migration can begin. First, the brokers must be configured to support the migration and second, a KRaft controller quorum must be deployed. The KRaft controllers should be provisioned with the same cluster ID as the existing Kafka cluster. This can be found by examining one of the "meta.properties" files in the data directories of the brokers, or by running the following command.

./bin/zookeeper-shell.sh localhost:2181 get /cluster/id
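Alternatively, as a sketch of the meta.properties approach mentioned above (the log directory path is a placeholder for your broker's configured log.dirs):
grep cluster.id /tmp/kafka-logs/meta.properties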

The KRaft controller quorum should also be provisioned with the latest metadata.version of "3.4". For further instructions on KRaft deployment, please refer to the documentation above.

In addition to the standard KRaft configuration, the KRaft controllers will need to enable support for the migration as well as provide the ZooKeeper connection configuration.

Here is a sample configuration for a KRaft controller that is ready for migration:

# Sample KRaft cluster controller.properties listening on 9093
process.roles=controller
node.id=3000
controller.quorum.voters=3000@localhost:9093
controller.listener.names=CONTROLLER
listeners=CONTROLLER://:9093

# Enable the migration
zookeeper.metadata.migration.enable=true

# ZooKeeper client configuration
zookeeper.connect=localhost:2181

# Other configs ...

Note: The KRaft cluster node.id values must be different from any existing ZK broker broker.id. In KRaft mode, the brokers and controllers share the same node ID namespace.

Enabling the Migration on the Brokers

Once the KRaft controller quorum has been started, the brokers will need to be reconfigured and restarted. Brokers may be restarted in a rolling fashion to avoid impacting cluster availability. Each broker requires the following configuration to communicate with the KRaft controllers and to enable the migration.

Here is a sample configuration for a broker that is ready for migration:

# Sample ZK broker server.properties listening on 9092
broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT

# Set the IBP
inter.broker.protocol.version=3.5

# Enable the migration
zookeeper.metadata.migration.enable=true

# ZooKeeper client configuration
zookeeper.connect=localhost:2181

# KRaft controller quorum configuration
controller.quorum.voters=3000@localhost:9093
controller.listener.names=CONTROLLER

Note: Once the final ZK broker has been restarted with the necessary configuration, the migration will automatically begin. When the migration is complete, an INFO level log can be observed on the active controller:

Completed migration of metadata from Zookeeper to KRaft

Migrating Brokers to KRaft

Once the KRaft controller completes the metadata migration, the brokers will still be running in ZK mode. While the KRaft controller is in migration mode, it will continue sending controller RPCs to the ZK mode brokers. This includes RPCs like UpdateMetadata and LeaderAndIsr.

To migrate the brokers to KRaft, they simply need to be reconfigured as KRaft brokers and restarted. Using the broker configuration above as an example, we replace broker.id with node.id and add process.roles=broker. It is important that the broker maintains the same broker/node ID when it is restarted. The ZooKeeper configurations should be removed at this point.

# Sample KRaft broker server.properties listening on 9092
process.roles=broker
node.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT

# Don't set the IBP, KRaft uses "metadata.version" feature flag
# inter.broker.protocol.version=3.5

# Remove the migration enabled flag
# zookeeper.metadata.migration.enable=true

# Remove ZooKeeper client configuration
# zookeeper.connect=localhost:2181

# Keep the KRaft controller quorum configuration
controller.quorum.voters=3000@localhost:9093
controller.listener.names=CONTROLLER

Each broker is restarted with a KRaft configuration until the entire cluster is running in KRaft mode.

Finalizing the Migration

Once all brokers have been restarted in KRaft mode, the last step to finalize the migration is to take the KRaft controllers out of migration mode. This is done by removing the "zookeeper.metadata.migration.enable" property from each of their configs and restarting them one at a time.

# Sample KRaft cluster controller.properties listening on 9093
process.roles=controller
node.id=3000
controller.quorum.voters=3000@localhost:9093
controller.listener.names=CONTROLLER
listeners=CONTROLLER://:9093

# Disable the migration
# zookeeper.metadata.migration.enable=true

# Remove ZooKeeper client configuration
# zookeeper.connect=localhost:2181

# Other configs ...

7. Security

7.1 Security Overview

In release 0.9.0.0, the Kafka community added a number of features that, used either separately or together, increase security in a Kafka cluster. The following security measures are currently supported:
  1. Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL. Kafka supports the following SASL mechanisms:
    • SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0
    • SASL/PLAIN - starting at version 0.10.0.0
    • SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0
    • SASL/OAUTHBEARER - starting at version 2.0
  2. Authentication of connections from brokers to ZooKeeper
  3. Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL (note that there is a performance degradation when SSL is enabled, the magnitude of which depends on the CPU type and the JVM implementation)
  4. Authorization of read / write operations by clients
  5. Authorization is pluggable and integration with external authorization services is supported
It is worth noting that security is optional - non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients. The guides below explain how to configure and use the security features in both clients and brokers.

7.2 Listener Configuration

In order to secure a Kafka cluster, it is necessary to secure the channels that are used to communicate with the servers. Each server must define the set of listeners that are used to receive requests from clients as well as other servers. Each listener may be configured to authenticate clients using various mechanisms and to ensure that traffic between the server and the client is encrypted. This section provides a primer for the configuration of listeners.

Kafka servers support listening for connections on multiple ports. This is configured through the listeners property in the server configuration, which accepts a comma-separated list of the listeners to enable. At least one listener must be defined on each server. The format of each listener defined in listeners is given below:

{LISTENER_NAME}://{hostname}:{port}

The LISTENER_NAME is usually a descriptive name which defines the purpose of the listener. For example, many configurations use a separate listener for client traffic, so they might refer to the corresponding listener as CLIENT in the configuration:

listeners=CLIENT://localhost:9092

The security protocol of each listener is defined in a separate configuration: listener.security.protocol.map. The value is a comma-separated list of each listener mapped to its security protocol. For example, the following configuration specifies that the CLIENT listener will use SSL while the BROKER listener will use plaintext.

listener.security.protocol.map=CLIENT:SSL,BROKER:PLAINTEXT

Possible options for the security protocol are given below:

  1. PLAINTEXT
  2. SSL
  3. SASL_PLAINTEXT
  4. SASL_SSL

The PLAINTEXT protocol provides no security and does not require any additional configuration. In the following sections, this document covers how to configure the remaining protocols.

If each required listener uses a separate security protocol, it is also possible to use the security protocol name as the listener name in listeners. Using the example above, we could skip the definition of the CLIENT and BROKER listeners using the following definition:

listeners=SSL://localhost:9092,PLAINTEXT://localhost:9093

However, we recommend users provide explicit names for the listeners since it makes the intended usage of each listener clearer.

Among the listeners in this list, it is possible to declare the listener to be used for inter-broker communication by setting the inter.broker.listener.name configuration to the name of the listener. The primary purpose of the inter-broker listener is partition replication. If not defined, then the inter-broker listener is determined by the security protocol defined by security.inter.broker.protocol, which defaults to PLAINTEXT.

For legacy clusters which rely on ZooKeeper to store cluster metadata, it is possible to declare a separate listener to be used for metadata propagation from the active controller to the brokers. This is defined by control.plane.listener.name. The active controller will use this listener when it needs to push metadata updates to the brokers in the cluster. The benefit of using a control plane listener is that it uses a separate processing thread, which makes it less likely for application traffic to impede timely propagation of metadata changes (such as partition leader and ISR updates). Note that the default value is null, which means that the controller will use the same listener defined by the inter-broker listener.

In a KRaft cluster, a broker is any server which has the broker role enabled in process.roles and a controller is any server which has the controller role enabled. Listener configuration depends on the role. The listener defined by inter.broker.listener.name is used exclusively for requests between brokers. Controllers, on the other hand, must use a separate listener which is defined by the controller.listener.names configuration. This cannot be set to the same value as the inter-broker listener.

Controllers receive requests both from other controllers and from brokers. For this reason, even if a server does not have the controller role enabled (i.e. it is just a broker), it must still define the controller listener along with any security properties that are needed to configure it. For example, we might use the following configuration on a standalone broker:

process.roles=broker
listeners=BROKER://localhost:9092
inter.broker.listener.name=BROKER
controller.quorum.voters=0@localhost:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL

The controller listener is still configured in this example to use the SASL_SSL security protocol, but it is not included in listeners since the broker does not expose the controller listener itself. The port that will be used in this case comes from the controller.quorum.voters configuration, which defines the complete list of controllers.

For KRaft servers which have both the broker and controller role enabled, the configuration is similar. The only difference is that the controller listener must be included in listeners:

process.roles=broker,controller
listeners=BROKER://localhost:9092,CONTROLLER://localhost:9093
inter.broker.listener.name=BROKER
controller.quorum.voters=0@localhost:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL

It is a requirement for the port defined in controller.quorum.voters to exactly match one of the exposed controller listeners. For example, here the CONTROLLER listener is bound to port 9093. The connection string defined by controller.quorum.voters must then also use port 9093, as it does here.

The controller will accept requests on all listeners defined by controller.listener.names. Typically there would be just one controller listener, but it is possible to have more. For example, this provides a way to change the active listener from one port or security protocol to another through a roll of the cluster (one roll to expose the new listener, and one roll to remove the old listener). When multiple controller listeners are defined, the first one in the list will be used for outbound requests.

It is conventional in Kafka to use a separate listener for clients. This allows the inter-cluster listeners to be isolated at the network level. In the case of the controller listener in KRaft, the listener should be isolated since clients do not work with it anyway. Clients are expected to connect to any other listener configured on a broker. Any requests that are bound for the controller will be forwarded as described below

In the following section, this document covers how to enable SSL on a listener for encryption as well as authentication. The subsequent section will then cover additional authentication mechanisms using SASL.

7.3 Encryption and Authentication using SSL

Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed. The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates and configure Kafka to use these.
  1. Generate SSL key and certificate for each Kafka broker

    The first step of deploying one or more brokers with SSL support is to generate a public/private keypair for every server. Since Kafka expects all keys and certificates to be stored in keystores we will use Java's keytool command for this task. The tool supports two different keystore formats, the Java specific jks format which has been deprecated by now, as well as PKCS12. PKCS12 is the default format as of Java version 9, to ensure this format is being used regardless of the Java version in use all following commands explicitly specify the PKCS12 format. You need to specify two parameters in the above command:
    > keytool -keystore {keystorefile} -alias localhost -validity {validity} -genkey -keyalg RSA -storetype pkcs12
    1. keystorefile: the keystore file that stores the keys (and later the certificate) for this broker. The keystore file contains the private and public keys of this broker, therefore it needs to be kept safe. Ideally this step is run on the Kafka broker that the key will be used on, as this key should never be transmitted/leave the server that it is intended for.
    2. validity: the valid time of the key in days. Please note that this differs from the validity period for the certificate, which will be determined in Signing the certificate. You can use the same key to request multiple certificates: if your key has a validity of 10 years, but your CA will only sign certificates that are valid for one year, you can use the same key with 10 certificates over time.

    To obtain a certificate that can be used with the private key that was just created a certificate signing request needs to be created. This signing request, when signed by a trusted CA results in the actual certificate which can then be installed in the keystore and used for authentication purposes.
    To generate certificate signing requests run the following command for all server keystores created so far. This command assumes that you want to add hostname information to the certificate; if this is not the case, you can omit the extension parameter -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}. Please see below for more information on this.
    > keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -destkeystoretype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
    Host Name Verification
    Host name verification, when enabled, is the process of checking attributes from the certificate that is presented by the server you are connecting to against the actual hostname or ip address of that server to ensure that you are indeed connecting to the correct server.
    The main reason for this check is to prevent man-in-the-middle attacks. For Kafka, this check has been disabled by default for a long time, but as of Kafka 2.0.0 host name verification of servers is enabled by default for client connections as well as inter-broker connections.
    Server host name verification may be disabled by setting ssl.endpoint.identification.algorithm to an empty string.
    For dynamically configured broker listeners, hostname verification may be disabled using kafka-configs.sh:
    > bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="

    Note:

    Normally there is no good reason to disable hostname verification apart from being the quickest way to "just get it to work" followed by the promise to "fix it later when there is more time"!
    Getting hostname verification right is not that hard when done at the right time, but gets much harder once the cluster is up and running - do yourself a favor and do it now!

    If host name verification is enabled, clients will verify the server's fully qualified domain name (FQDN) or ip address against one of the following two fields:

    1. Common Name (CN)
    2. Subject Alternative Name (SAN)

    While Kafka checks both fields, usage of the common name field for hostname verification has been deprecated since 2000 and should be avoided if possible. In addition the SAN field is much more flexible, allowing for multiple DNS and IP entries to be declared in a certificate.
    Another advantage is that if the SAN field is used for hostname verification the common name can be set to a more meaningful value for authorization purposes. Since we need the SAN field to be contained in the signed certificate, it will be specified when generating the signing request. It can also be specified when generating the keypair, but this will not automatically be copied into the signing request.
    To add a SAN field append the following argument to the keytool command:
    -ext SAN=DNS:{FQDN},IP:{IPADDRESS}
    > keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -destkeystoretype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
  2. Creating your own CA

    After this step each machine in the cluster has a public/private key pair which can already be used to encrypt traffic and a certificate signing request, which is the basis for creating a certificate. To add authentication capabilities this signing request needs to be signed by a trusted authority, which will be created in this step.

    A certificate authority (CA) is responsible for signing certificates. CAs works likes a government that issues passports - the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have a strong assurance that they are connecting to the authentic machines.

    For this guide we will be our own Certificate Authority. When setting up a production cluster in a corporate environment these certificates would usually be signed by a corporate CA that is trusted throughout the company. Please see Common Pitfalls in Production for some things to consider for this case.

    Due to a bug in OpenSSL, the x509 module will not copy requested extension fields from CSRs into the final certificate. Since we want the SAN extension to be present in our certificate to enable hostname verification, we'll use the ca module instead. This requires some additional configuration to be in place before we generate our CA keypair.
    Save the following listing into a file called openssl-ca.cnf and adjust the values for validity and common attributes as necessary.

    HOME            = .
    RANDFILE        = $ENV::HOME/.rnd
    
    ####################################################################
    [ ca ]
    default_ca    = CA_default      # The default ca section
    
    [ CA_default ]
    
    base_dir      = .
    certificate   = $base_dir/cacert.pem   # The CA certifcate
    private_key   = $base_dir/cakey.pem    # The CA private key
    new_certs_dir = $base_dir              # Location for new certs after signing
    database      = $base_dir/index.txt    # Database index file
    serial        = $base_dir/serial.txt   # The current serial number
    
    default_days     = 1000         # How long to certify for
    default_crl_days = 30           # How long before next CRL
    default_md       = sha256       # Use public key default MD
    preserve         = no           # Keep passed DN ordering
    
    x509_extensions = ca_extensions # The extensions to add to the cert
    
    email_in_dn     = no            # Don't concat the email in the DN
    copy_extensions = copy          # Required to copy SANs from CSR to cert
    
    ####################################################################
    [ req ]
    default_bits       = 4096
    default_keyfile    = cakey.pem
    distinguished_name = ca_distinguished_name
    x509_extensions    = ca_extensions
    string_mask        = utf8only
    
    ####################################################################
    [ ca_distinguished_name ]
    countryName         = Country Name (2 letter code)
    countryName_default = DE
    
    stateOrProvinceName         = State or Province Name (full name)
    stateOrProvinceName_default = Test Province
    
    localityName                = Locality Name (eg, city)
    localityName_default        = Test Town
    
    organizationName            = Organization Name (eg, company)
    organizationName_default    = Test Company
    
    organizationalUnitName         = Organizational Unit (eg, division)
    organizationalUnitName_default = Test Unit
    
    commonName         = Common Name (e.g. server FQDN or YOUR name)
    commonName_default = Test Name
    
    emailAddress         = Email Address
    emailAddress_default = test@test.com
    
    ####################################################################
    [ ca_extensions ]
    
    subjectKeyIdentifier   = hash
    authorityKeyIdentifier = keyid:always, issuer
    basicConstraints       = critical, CA:true
    keyUsage               = keyCertSign, cRLSign
    
    ####################################################################
    [ signing_policy ]
    countryName            = optional
    stateOrProvinceName    = optional
    localityName           = optional
    organizationName       = optional
    organizationalUnitName = optional
    commonName             = supplied
    emailAddress           = optional
    
    ####################################################################
    [ signing_req ]
    subjectKeyIdentifier   = hash
    authorityKeyIdentifier = keyid,issuer
    basicConstraints       = CA:FALSE
    keyUsage               = digitalSignature, keyEncipherment
    Then create a database and serial number file, these will be used to keep track of which certificates were signed with this CA. Both of these are simply text files that reside in the same directory as your CA keys. With these steps done you are now ready to generate your CA that will be used to sign certificates later. The CA is simply a public/private key pair and certificate that is signed by itself, and is only intended to sign other certificates.
    This keypair should be kept very safe, if someone gains access to it, they can create and sign certificates that will be trusted by your infrastructure, which means they will be able to impersonate anybody when connecting to any service that trusts this CA.
    The next step is to add the generated CA to the **clients' truststore** so that the clients can trust this CA: Note: If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the Kafka brokers config then you must provide a truststore for the Kafka brokers as well and it should have all the CA certificates that clients' keys were signed by. In contrast to the keystore in step 1 that stores each machine's own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that certificate. As the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.
    > echo 01 > serial.txt
    > touch index.txt
    > openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM
    > keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
    > keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
  3. Signing the certificate

    Then sign it with the CA: Finally, you need to import both the certificate of the CA and the signed certificate into the keystore: The definitions of the parameters are the following:
    > openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out {server certificate} -infiles {certificate signing request}
    > keytool -keystore {keystore} -alias CARoot -import -file {CA certificate}
    > keytool -keystore {keystore} -alias localhost -import -file cert-signed
    1. keystore: the location of the keystore
    2. CA certificate: the certificate of the CA
    3. certificate signing request: the csr created with the server key
    4. server certificate: the file to write the signed certificate of the server to
    This will leave you with one truststore called truststore.jks - this can be the same for all clients and brokers and does not contain any sensitive information, so there is no need to secure this.
    Additionally you will have one server.keystore.jks file per node which contains that nodes keys, certificate and your CAs certificate, please refer to Configuring Kafka Brokers and Configuring Kafka Clients for information on how to use these files.

    For some tooling assistance on this topic, please check out the easyRSA project which has extensive scripting in place to help with these steps.

    SSL key and certificates in PEM format
    From 2.7.0 onwards, SSL key and trust stores can be configured for Kafka brokers and clients directly in the configuration in PEM format. This avoids the need to store separate files on the file system and benefits from password protection features of Kafka configuration. PEM may also be used as the store type for file-based key and trust stores in addition to JKS and PKCS12. To configure a PEM key store directly in the broker or client configuration, the private key in PEM format should be provided in ssl.keystore.key and the certificate chain in PEM format should be provided in ssl.keystore.certificate.chain. To configure a trust store, trust certificates, e.g. the public certificate of the CA, should be provided in ssl.truststore.certificates. Since PEM is typically stored as multi-line base-64 strings, the configuration value can be included in Kafka configuration as multi-line strings with lines terminating in backslash ('\') for line continuation.

    Store password configs ssl.keystore.password and ssl.truststore.password are not used for PEM. If the private key is encrypted using a password, the key password must be provided in ssl.key.password. Private keys may be provided in unencrypted form without a password. In production deployments, configs should be encrypted or externalized using the password protection feature in Kafka in this case. Note that the default SSL engine factory has limited capabilities for decryption of encrypted private keys when external tools like OpenSSL are used for encryption. Third party libraries like BouncyCastle may be integrated with a custom SslEngineFactory to support a wider range of encrypted private keys.
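    As a minimal sketch of a PEM-based client configuration (the key and certificate bodies below are placeholders, and the line-continuation formatting follows the description above):
    security.protocol=SSL
    ssl.truststore.type=PEM
    ssl.truststore.certificates=-----BEGIN CERTIFICATE----- \
      <base64-encoded CA certificate> \
      -----END CERTIFICATE-----
    ssl.keystore.type=PEM
    ssl.keystore.key=-----BEGIN PRIVATE KEY----- \
      <base64-encoded private key> \
      -----END PRIVATE KEY-----
    ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- \
      <base64-encoded client certificate> \
      -----END CERTIFICATE-----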

  4. Common Pitfalls in Production

    The above paragraphs show the process to create your own CA and use it to sign certificates for your cluster. While very useful for sandbox, dev, test, and similar systems, this is usually not the correct process to create certificates for a production cluster in a corporate environment. Enterprises will normally operate their own CA and users can send in CSRs to be signed with this CA, which has the benefit of users not being responsible to keep the CA secure as well as a central authority that everybody can trust. However it also takes away a lot of control over the process of signing certificates from the user. Quite often the persons operating corporate CAs will apply tight restrictions on certificates that can cause issues when trying to use these certificates with Kafka.
    1. Extended Key Usage
      Certificates may contain an extension field that controls the purpose for which the certificate can be used. If this field is empty, there are no restrictions on the usage, but if any usage is specified in here, valid SSL implementations have to enforce these usages.
      Relevant usages for Kafka are:
      • Client authentication
      • Server authentication
      Kafka brokers need both these usages to be allowed, as for intra-cluster communication every broker will behave as both the client and the server towards other brokers. It is not uncommon for corporate CAs to have a signing profile for webservers and use this for Kafka as well, which will only contain the serverAuth usage value and cause the SSL handshake to fail.
    2. Intermediate Certificates
      Corporate Root CAs are often kept offline for security reasons. To enable day-to-day usage, so called intermediate CAs are created, which are then used to sign the final certificates. When importing a certificate into the keystore that was signed by an intermediate CA it is necessary to provide the entire chain of trust up to the root CA. This can be done by simply cating the certificate files into one combined certificate file and then importing this with keytool.
    3. Failure to copy extension fields
      CA operators are often hesitant to copy requested extension fields from CSRs and prefer to specify these themselves, as this makes it harder for a malicious party to obtain certificates with potentially misleading or fraudulent values. It is advisable to double check signed certificates, whether these contain all requested SAN fields to enable proper hostname verification. The following command can be used to print certificate details to the console, which should be compared with what was originally requested:
      > openssl x509 -in certificate.crt -text -noout
  5. Configuring Kafka Brokers

    If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary. Following SSL configs are needed on the broker side Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set access to the truststore is still available, but integrity checking is disabled. Optional settings that are worth considering:
    listeners=PLAINTEXT://host.name:port,SSL://host.name:port
    ssl.keystore.location=/var/private/ssl/server.keystore.jks
    ssl.keystore.password=test1234
    ssl.key.password=test1234
    ssl.truststore.location=/var/private/ssl/server.truststore.jks
    ssl.truststore.password=test1234
    1. ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested and client without certs can still connect. The usage of "requested" is discouraged as it provides a false sense of security and misconfigured clients will still connect successfully.)
    2. ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. (Default is an empty list)
    3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)
    4. ssl.keystore.type=JKS
    5. ssl.truststore.type=JKS
    6. ssl.secure.random.implementation=SHA1PRNG
    If you want to enable SSL for inter-broker communication, add the following to the server.properties file (it defaults to PLAINTEXT)
    security.inter.broker.protocol=SSL

    Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the JCE Unlimited Strength Jurisdiction Policy Files must be obtained and installed in the JDK/JRE. See the JCA Providers Documentation for more information.

    The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the implementation used with ssl.secure.random.implementation. However, there are performance issues with some implementations (notably, the default chosen on Linux systems, NativePRNG, utilizes a global lock). In cases where performance of SSL connections becomes an issue, consider explicitly setting the implementation to be used. The SHA1PRNG implementation is non-blocking, and has shown very good performance characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per-broker).

    Once you start the broker you should be able to see the following in server.log:
    with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)
    To check quickly if the server keystore and truststore are set up properly you can run the following command (Note: TLSv1 should be listed under ssl.enabled.protocols). In the output of this command you should see the server's certificate; if the certificate does not show up or if there are any other error messages then your keystore is not set up properly.
    > openssl s_client -debug -connect localhost:9093 -tls1
    -----BEGIN CERTIFICATE-----
    {variable sized random bytes}
    -----END CERTIFICATE-----
    subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha Chintalapani
    issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com
  6. Configuring Kafka Clients

    SSL is supported only for the new Kafka Producer and Consumer, the older API is not supported. The configs for SSL will be the same for both producer and consumer.
    If client authentication is not required in the broker, then the following is a minimal configuration example: Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set access to the truststore is still available, but integrity checking is disabled. If client authentication is required, then a keystore must be created like in step 1 and the following must also be configured: Other configuration settings that may also be needed depending on our requirements and the broker configuration:
    security.protocol=SSL
    ssl.truststore.location=/var/private/ssl/client.truststore.jks
    ssl.truststore.password=test1234
    ssl.keystore.location=/var/private/ssl/client.keystore.jks
    ssl.keystore.password=test1234
    ssl.key.password=test1234
    1. ssl.provider (Optional). The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.
    2. ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
    3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side
    4. ssl.truststore.type=JKS
    5. ssl.keystore.type=JKS

    Examples using console-producer and console-consumer:
    > kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
    > kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties

7.4 Authentication using SASL

  1. JAAS configuration

    Kafka uses the Java Authentication and Authorization Service (JAAS) for SASL configuration.

    1. JAAS configuration for Kafka brokers

      KafkaServer is the section name in the JAAS file used by each KafkaServer/Broker. This section provides SASL configuration options for the broker including any SASL client connections made by the broker for inter-broker communication. If multiple listeners are configured to use SASL, the section name may be prefixed with the listener name in lower-case followed by a period, e.g. sasl_ssl.KafkaServer.

      Client section is used to authenticate a SASL connection with zookeeper. It also allows the brokers to set SASL ACL on zookeeper nodes which locks these nodes down so that only the brokers can modify it. It is necessary to have the same principal name across all brokers. If you want to use a section name other than Client, set the system property zookeeper.sasl.clientconfig to the appropriate name (e.g., -Dzookeeper.sasl.clientconfig=ZkClient).

      ZooKeeper uses "zookeeper" as the service name by default. If you want to change this, set the system property zookeeper.sasl.client.username to the appropriate name (e.g., -Dzookeeper.sasl.client.username=zk).

      Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix. For example,

      listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required     username="admin"     password="admin-secret";
      listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required     username="admin"     password="admin-secret"     user_admin="admin-secret"     user_alice="alice-secret";
      If JAAS configuration is defined at different levels, the order of precedence used is:
      • Broker configuration property listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config
      • {listenerName}.KafkaServer section of static JAAS configuration
      • KafkaServer section of static JAAS configuration
      Note that ZooKeeper JAAS config may only be configured using static JAAS configuration.

      See GSSAPI (Kerberos), PLAIN, SCRAM or OAUTHBEARER for example broker configurations.

    2. JAAS configuration for Kafka clients

      Clients may configure JAAS using the client configuration property sasl.jaas.config or using the static JAAS config file similar to brokers.

      1. JAAS configuration using client configuration property

        Clients may specify JAAS configuration as a producer or consumer property without creating a physical configuration file. This mode also enables different producers and consumers within the same JVM to use different credentials by specifying different properties for each client. If both the static JAAS configuration system property java.security.auth.login.config and the client property sasl.jaas.config are specified, the client property will be used.

        See GSSAPI (Kerberos), PLAIN, SCRAM or OAUTHBEARER for example configurations.
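        To illustrate the per-client behavior described above, the following hedged sketch creates two producers in the same JVM that authenticate with different credentials via sasl.jaas.config. It assumes the PLAIN mechanism is enabled on the broker; the user bob and its password are hypothetical additions for the sake of the example.
        import java.util.Properties;
        import org.apache.kafka.clients.CommonClientConfigs;
        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerConfig;
        import org.apache.kafka.common.config.SaslConfigs;
        import org.apache.kafka.common.serialization.StringSerializer;

        public class PerClientJaasExample {
            private static Properties base() {
                Properties props = new Properties();
                props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
                props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
                props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
                props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
                props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
                return props;
            }

            public static void main(String[] args) {
                Properties alice = base();
                alice.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"alice\" password=\"alice-secret\";");

                Properties bob = base();
                bob.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"bob\" password=\"bob-secret\";");

                // The two producers authenticate as different users, even though they share the JVM.
                try (KafkaProducer<String, String> p1 = new KafkaProducer<>(alice);
                     KafkaProducer<String, String> p2 = new KafkaProducer<>(bob)) {
                    // ... use p1 as alice and p2 as bob ...
                }
            }
        }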

      2. JAAS configuration using static config file
        To configure SASL authentication on the clients using static JAAS config file:
        1. Add a JAAS config file with a client login section named KafkaClient. Configure a login module in KafkaClient for the selected mechanism as described in the examples for setting up GSSAPI (Kerberos), PLAIN, SCRAM or OAUTHBEARER. For example, GSSAPI credentials may be configured as:
          KafkaClient {
              com.sun.security.auth.module.Krb5LoginModule required
              useKeyTab=true
              storeKey=true
              keyTab="/etc/security/keytabs/kafka_client.keytab"
              principal="kafka-client-1@EXAMPLE.COM";
          };
        2. Pass the JAAS config file location as JVM parameter to each client JVM. For example:
          -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf
  2. SASL configuration

    SASL may be used with PLAINTEXT or SSL as the transport layer using the security protocol SASL_PLAINTEXT or SASL_SSL respectively. If SASL_SSL is used, then SSL must also be configured.

    1. SASL mechanisms
      Kafka supports the following SASL mechanisms:
    2. SASL configuration for Kafka brokers
      1. Configure a SASL port in server.properties, by adding at least one of SASL_PLAINTEXT or SASL_SSL to the listeners parameter, which contains one or more comma-separated values:
        listeners=SASL_PLAINTEXT://host.name:port
        If you are only configuring a SASL port (or if you want the Kafka brokers to authenticate each other using SASL) then make sure you set the same SASL protocol for inter-broker communication:
        security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
      2. Select one or more supported mechanisms to enable in the broker and follow the steps to configure SASL for the mechanism. To enable multiple mechanisms in the broker, follow the steps here.
    3. SASL configuration for Kafka clients

      SASL authentication is only supported for the new Java Kafka producer and consumer, the older API is not supported.

      To configure SASL authentication on the clients, select a SASL mechanism that is enabled in the broker for client authentication and follow the steps to configure SASL for the selected mechanism.

      Note: When establishing connections to brokers via SASL, clients may perform a reverse DNS lookup of the broker address. Due to how the JRE implements reverse DNS lookups, clients may observe slow SASL handshakes if fully qualified domain names are not used, for both the client's bootstrap.servers and a broker's advertised.listeners.

  3. Authentication using SASL/Kerberos

    1. Prerequisites
      1. Kerberos
        If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one, your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Redhat). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.
      2. Create Kerberos Principals
        If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
        If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:
        > sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
        > sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
      3. Make sure all hosts are reachable using hostnames - it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.
    2. Configuring Kafka Brokers
      1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab). The KafkaServer section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored; it allows the broker to login using the keytab specified in this section. See notes for more details on ZooKeeper SASL configuration.
        KafkaServer {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            keyTab="/etc/security/keytabs/kafka_server.keytab"
            principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
        };
        
        // Zookeeper client authentication
        Client {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            keyTab="/etc/security/keytabs/kafka_server.keytab"
            principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
        };
      2. Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see here for more details):
        -Djava.security.krb5.conf=/etc/kafka/krb5.conf
        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
      3. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting kafka broker.
      4. Configure SASL port and SASL mechanisms in server.properties as described here. For example:
        listeners=SASL_PLAINTEXT://host.name:port
        security.inter.broker.protocol=SASL_PLAINTEXT
        sasl.mechanism.inter.broker.protocol=GSSAPI
        sasl.enabled.mechanisms=GSSAPI
        We must also configure the service name in server.properties, which should match the principal name of the Kafka brokers. In the above example, the principal is "kafka/kafka1.hostname.com@EXAMPLE.COM", so:
        sasl.kerberos.service.name=kafka
    3. Configuring Kafka Clients
      To configure SASL authentication on the clients:
      1. Clients (producers, consumers, connect workers, etc) will authenticate to the cluster with their own principal (usually with the same name as the user running the client), so obtain or create these principals as needed. Then configure the JAAS configuration property for each client. Different clients within a JVM may run as different users by specifying different principals. The property sasl.jaas.config in producer.properties or consumer.properties describes how clients like producer and consumer can connect to the Kafka broker. The following is an example configuration for a client using a keytab (recommended for long-running processes):
        sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required     useKeyTab=true     storeKey=true      keyTab="/etc/security/keytabs/kafka_client.keytab"     principal="kafka-client-1@EXAMPLE.COM";
        For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used along with "useTicketCache=true" as in:
        sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required     useTicketCache=true;
        JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.
      2. Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting kafka client.
      3. Optionally pass the krb5 file locations as JVM parameters to each client JVM (see here for more details):
        -Djava.security.krb5.conf=/etc/kafka/krb5.conf
      4. Configure the following properties in producer.properties or consumer.properties:
        security.protocol=SASL_PLAINTEXT (or SASL_SSL)
        sasl.mechanism=GSSAPI
        sasl.kerberos.service.name=kafka
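        As an illustration, the following hedged sketch applies the same client-side settings programmatically to a Java consumer using a keytab-based JAAS configuration; the group id, topic and class name are illustrative only.
        import java.time.Duration;
        import java.util.Collections;
        import java.util.Properties;
        import org.apache.kafka.clients.CommonClientConfigs;
        import org.apache.kafka.clients.consumer.ConsumerConfig;
        import org.apache.kafka.clients.consumer.KafkaConsumer;
        import org.apache.kafka.common.config.SaslConfigs;
        import org.apache.kafka.common.serialization.StringDeserializer;

        public class KerberosConsumerExample {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka1.hostname.com:9092");
                props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
                props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT"); // or SASL_SSL
                props.put(SaslConfigs.SASL_MECHANISM, "GSSAPI");
                props.put(SaslConfigs.SASL_KERBEROS_SERVICE_NAME, "kafka");
                props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "com.sun.security.auth.module.Krb5LoginModule required "
                        + "useKeyTab=true storeKey=true "
                        + "keyTab=\"/etc/security/keytabs/kafka_client.keytab\" "
                        + "principal=\"kafka-client-1@EXAMPLE.COM\";");

                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(Collections.singletonList("test"));
                    consumer.poll(Duration.ofSeconds(5)).forEach(r -> System.out.println(r.value()));
                }
            }
        }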
  4. Authentication using SASL/PLAIN

    SASL/PLAIN is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication. Kafka supports a default implementation for SASL/PLAIN which can be extended for production use as described here.

    Under the default implementation of principal.builder.class, the username is used as the authenticated Principal for configuration of ACLs etc.
    1. Configuring Kafka Brokers
      1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
        KafkaServer {
            org.apache.kafka.common.security.plain.PlainLoginModule required
            username="admin"
            password="admin-secret"
            user_admin="admin-secret"
            user_alice="alice-secret";
        };
        This configuration defines two users (admin and alice). The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication. The set of properties user_userName defines the passwords for all users that connect to the broker and the broker validates all client connections including those from other brokers using these properties.
      2. Pass the JAAS config file location as JVM parameter to each Kafka broker:
        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
      3. Configure SASL port and SASL mechanisms in server.properties as described here. For example:
        listeners=SASL_SSL://host.name:port
        security.inter.broker.protocol=SASL_SSL
        sasl.mechanism.inter.broker.protocol=PLAIN
        sasl.enabled.mechanisms=PLAIN
    2. Configuring Kafka Clients
      To configure SASL authentication on the clients:
      1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the PLAIN mechanism:
        sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required     username="alice"     password="alice-secret";

        The options username and password are used by clients to configure the user for client connections. In this example, clients connect to the broker as user alice. Different clients within a JVM may connect as different users by specifying different user names and passwords in sasl.jaas.config.

        JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

      2. Configure the following properties in producer.properties or consumer.properties:
        security.protocol=SASL_SSL
        sasl.mechanism=PLAIN
    3. Use of SASL/PLAIN in production
      • SASL/PLAIN should be used only with SSL as transport layer to ensure that clear passwords are not transmitted on the wire without encryption.
      • The default implementation of SASL/PLAIN in Kafka specifies usernames and passwords in the JAAS configuration file as shown here. From Kafka version 2.0 onwards, you can avoid storing clear passwords on disk by configuring your own callback handlers that obtain username and password from an external source using the configuration options sasl.server.callback.handler.class and sasl.client.callback.handler.class.
      • In production systems, external authentication servers may implement password authentication. From Kafka version 2.0 onwards, you can plug in your own callback handlers that use external authentication servers for password verification by configuring sasl.server.callback.handler.class.
  5. Authentication using SASL/SCRAM

    Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL mechanisms that addresses the security concerns with traditional mechanisms that perform username/password authentication like PLAIN and DIGEST-MD5. The mechanism is defined in RFC 5802. Kafka supports SCRAM-SHA-256 and SCRAM-SHA-512 which can be used with TLS to perform secure authentication. Under the default implementation of principal.builder.class, the username is used as the authenticated Principal for configuration of ACLs etc. The default SCRAM implementation in Kafka stores SCRAM credentials in Zookeeper and is suitable for use in Kafka installations where Zookeeper is on a private network. Refer to Security Considerations for more details.

    1. Creating SCRAM Credentials

      The SCRAM implementation in Kafka uses Zookeeper as credential store. Credentials can be created in Zookeeper using kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created by adding a config with the mechanism name. Credentials for inter-broker communication must be created before Kafka brokers are started. Client credentials may be created and updated dynamically and updated credentials will be used to authenticate new connections.

      Create SCRAM credentials for user alice with password alice-secret:

      > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice

      The default iteration count of 4096 is used if iterations are not specified. A random salt is created and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey are stored in Zookeeper. See RFC 5802 for details on SCRAM identity and the individual fields.

      The following examples also require a user admin for inter-broker communication which can be created using:

      > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin

      Existing credentials may be listed using the --describe option:

      > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice

      Credentials may be deleted for one or more SCRAM mechanisms using the --alter --delete-config option:

      > bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice
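      On a running cluster, SCRAM credentials can also be managed through the Admin API (KIP-554, available since Kafka 2.7) instead of talking to Zookeeper directly. A hedged sketch, assuming the admin client itself authenticates with a mechanism other than the SCRAM credential being created:
      import java.util.Arrays;
      import java.util.List;
      import java.util.Properties;
      import java.util.concurrent.ExecutionException;
      import org.apache.kafka.clients.admin.Admin;
      import org.apache.kafka.clients.admin.AdminClientConfig;
      import org.apache.kafka.clients.admin.ScramCredentialInfo;
      import org.apache.kafka.clients.admin.ScramMechanism;
      import org.apache.kafka.clients.admin.UserScramCredentialAlteration;
      import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

      public class ScramCredentialExample {
          public static void main(String[] args) throws ExecutionException, InterruptedException {
              Properties props = new Properties();
              props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              // The usual security settings (client.properties) would also be added here.

              try (Admin admin = Admin.create(props)) {
                  // Roughly equivalent to the kafka-configs.sh --alter --add-config example above.
                  List<UserScramCredentialAlteration> alterations = Arrays.asList(
                      new UserScramCredentialUpsertion("alice",
                          new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192), "alice-secret"),
                      new UserScramCredentialUpsertion("alice",
                          new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_512, 4096), "alice-secret"));
                  admin.alterUserScramCredentials(alterations).all().get();
              }
          }
      }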
    2. Configuring Kafka Brokers
      1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example: The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication.
        KafkaServer {
            org.apache.kafka.common.security.scram.ScramLoginModule required
            username="admin"
            password="admin-secret";
        };
      2. Pass the JAAS config file location as JVM parameter to each Kafka broker:
        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
      3. Configure SASL port and SASL mechanisms in server.properties as described here. For example:
        listeners=SASL_SSL://host.name:port
        security.inter.broker.protocol=SASL_SSL
        sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 (or SCRAM-SHA-512)
        sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)
    3. Configuring Kafka Clients
      To configure SASL authentication on the clients:
      1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the SCRAM mechanisms:
        sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required     username="alice"     password="alice-secret";

        The options username and password are used by clients to configure the user for client connections. In this example, clients connect to the broker as user alice. Different clients within a JVM may connect as different users by specifying different user names and passwords in sasl.jaas.config.

        JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

      2. Configure the following properties in producer.properties or consumer.properties:
        security.protocol=SASL_SSL
        sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
    4. Security Considerations for SASL/SCRAM
      • The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This is suitable for production use in installations where Zookeeper is secure and on a private network.
      • Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count of 4096. Strong hash functions combined with strong passwords and high iteration counts protect against brute force attacks if Zookeeper security is compromised.
      • SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.
      • From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers by configuring sasl.server.callback.handler.class in installations where Zookeeper is not secure.
      • For more details on security considerations, refer to RFC 5802.
  6. Authentication using SASL/OAUTHBEARER

    The OAuth 2 Authorization Framework "enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf." The SASL OAUTHBEARER mechanism enables the use of the framework in a SASL (i.e. a non-HTTP) context; it is defined in RFC 7628. The default OAUTHBEARER implementation in Kafka creates and validates Unsecured JSON Web Tokens and is only suitable for use in non-production Kafka installations. Refer to Security Considerations for more details.

    Under the default implementation of principal.builder.class, the principalName of OAuthBearerToken is used as the authenticated Principal for configuration of ACLs etc.
    1. Configuring Kafka Brokers
      1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example: The property unsecuredLoginStringClaim_sub in the KafkaServer section is used by the broker when it initiates connections to other brokers. In this example, admin will appear in the subject (sub) claim and will be the user for inter-broker communication.
        KafkaServer {
            org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
            unsecuredLoginStringClaim_sub="admin";
        };
      2. Pass the JAAS config file location as JVM parameter to each Kafka broker:
        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
      3. Configure SASL port and SASL mechanisms in server.properties as described here. For example:
        listeners=SASL_SSL://host.name:port (or SASL_PLAINTEXT if non-production)
        security.inter.broker.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
        sasl.mechanism.inter.broker.protocol=OAUTHBEARER
        sasl.enabled.mechanisms=OAUTHBEARER
    2. Configuring Kafka Clients
      To configure SASL authentication on the clients:
      1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the OAUTHBEARER mechanisms:
        sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required     unsecuredLoginStringClaim_sub="alice";

        The option unsecuredLoginStringClaim_sub is used by clients to configure the subject (sub) claim, which determines the user for client connections. In this example, clients connect to the broker as user alice. Different clients within a JVM may connect as different users by specifying different subject (sub) claims in sasl.jaas.config.

        JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

      2. Configure the following properties in producer.properties or consumer.properties:
        security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
        sasl.mechanism=OAUTHBEARER
      3. The default implementation of SASL/OAUTHBEARER depends on the jackson-databind library. Since it's an optional dependency, users have to configure it as a dependency via their build tool.
    3. Unsecured Token Creation Options for SASL/OAUTHBEARER
      • The default implementation of SASL/OAUTHBEARER in Kafka creates and validates Unsecured JSON Web Tokens. While suitable only for non-production use, it does provide the flexibility to create arbitrary tokens in a DEV or TEST environment.
      • Here are the various supported JAAS module options on the client side (and on the broker side if OAUTHBEARER is the inter-broker protocol):
        JAAS Module Option for Unsecured Token Creation Documentation
        unsecuredLoginStringClaim_<claimname>="value" Creates a String claim with the given name and value. Any valid claim name can be specified except 'iat' and 'exp' (these are automatically generated).
        unsecuredLoginNumberClaim_<claimname>="value" Creates a Number claim with the given name and value. Any valid claim name can be specified except 'iat' and 'exp' (these are automatically generated).
        unsecuredLoginListClaim_<claimname>="value" Creates a String List claim with the given name and values parsed from the given value where the first character is taken as the delimiter. For example: unsecuredLoginListClaim_fubar="|value1|value2". Any valid claim name can be specified except 'iat' and 'exp' (these are automatically generated).
        unsecuredLoginExtension_<extensionname>="value" Creates a String extension with the given name and value. For example: unsecuredLoginExtension_traceId="123". A valid extension name is any sequence of lowercase or uppercase alphabet characters. In addition, the "auth" extension name is reserved. A valid extension value is any combination of characters with ASCII codes 1-127.
        unsecuredLoginPrincipalClaimName Set to a custom claim name if you wish the name of the String claim holding the principal name to be something other than 'sub'.
        unsecuredLoginLifetimeSeconds Set to an integer value if the token expiration is to be set to something other than the default value of 3600 seconds (which is 1 hour). The 'exp' claim will be set to reflect the expiration time.
        unsecuredLoginScopeClaimName Set to a custom claim name if you wish the name of the String or String List claim holding any token scope to be something other than 'scope'.
    4. Unsecured Token Validation Options for SASL/OAUTHBEARER
      • Here are the various supported JAAS module options on the broker side for Unsecured JSON Web Token validation:
        JAAS Module Option for Unsecured Token Validation Documentation
        unsecuredValidatorPrincipalClaimName="value" Set to a non-empty value if you wish a particular String claim holding a principal name to be checked for existence; the default is to check for the existence of the 'sub' claim.
        unsecuredValidatorScopeClaimName="value" Set to a custom claim name if you wish the name of the String or String List claim holding any token scope to be something other than 'scope'.
        unsecuredValidatorRequiredScope="value" Set to a space-delimited list of scope values if you wish the String/String List claim holding the token scope to be checked to make sure it contains certain values.
        unsecuredValidatorAllowableClockSkewMs="value" Set to a positive integer value if you wish to allow up to some number of positive milliseconds of clock skew (the default is 0).
      • The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments) using custom login and SASL Server callback handlers.
      • For more details on security considerations, refer to RFC 6749, Section 10.
    5. Token Refresh for SASL/OAUTHBEARER
      Kafka periodically refreshes any token before it expires so that the client can continue to make connections to brokers. The parameters that impact how the refresh algorithm operates are specified as part of the producer/consumer/broker configuration and are as follows. See the documentation for these properties elsewhere for details. The default values are usually reasonable, in which case these configuration parameters would not need to be explicitly set.
      Producer/Consumer/Broker Configuration Property
      sasl.login.refresh.window.factor
      sasl.login.refresh.window.jitter
      sasl.login.refresh.min.period.seconds
      sasl.login.refresh.min.buffer.seconds
    6. Secure/Production Use of SASL/OAUTHBEARER
      Production use cases will require writing an implementation of org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback and declaring it via either the sasl.login.callback.handler.class configuration option for a non-broker client or via the listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class configuration option for brokers (when SASL/OAUTHBEARER is the inter-broker protocol).

      Production use cases will also require writing an implementation of org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback and declaring it via the listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class broker configuration option.
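      For orientation only, here is a heavily abbreviated sketch of what such a login callback handler might look like on the client side. The class name and the fetchToken() logic are hypothetical; a real implementation would obtain a signed JWT from your authorization server rather than returning a placeholder value.
      import java.io.IOException;
      import java.util.Collections;
      import java.util.List;
      import java.util.Map;
      import java.util.Set;
      import javax.security.auth.callback.Callback;
      import javax.security.auth.callback.UnsupportedCallbackException;
      import javax.security.auth.login.AppConfigurationEntry;
      import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
      import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken;
      import org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback;

      public class MyOAuthLoginCallbackHandler implements AuthenticateCallbackHandler {

          @Override
          public void configure(Map<String, ?> configs, String saslMechanism,
                                List<AppConfigurationEntry> jaasConfigEntries) {
              // Read the token endpoint URL, client id/secret, etc. from the JAAS options here.
          }

          @Override
          public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
              for (Callback callback : callbacks) {
                  if (callback instanceof OAuthBearerTokenCallback) {
                      ((OAuthBearerTokenCallback) callback).token(fetchToken());
                  } else {
                      throw new UnsupportedCallbackException(callback);
                  }
              }
          }

          // Hypothetical: wrap a token retrieved from the authorization server.
          private OAuthBearerToken fetchToken() {
              final long now = System.currentTimeMillis();
              return new OAuthBearerToken() {
                  @Override public String value() { return "<compact JWT from the authorization server>"; }
                  @Override public Set<String> scope() { return Collections.emptySet(); }
                  @Override public long lifetimeMs() { return now + 3600_000L; }
                  @Override public String principalName() { return "alice"; }
                  @Override public Long startTimeMs() { return now; }
              };
          }

          @Override
          public void close() { }
      }
      Such a handler would then be declared via sasl.login.callback.handler.class (clients) or the listener-prefixed variant mentioned above (brokers).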

    7. Security Considerations for SASL/OAUTHBEARER
      • The default implementation of SASL/OAUTHBEARER in Kafka creates and validates Unsecured JSON Web Tokens. This is suitable only for non-production use.
      • OAUTHBEARER should be used in production environments only with TLS-encryption to prevent interception of tokens.
      • The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments) using custom login and SASL Server callback handlers as described above.
      • For more details on OAuth 2 security considerations in general, refer to RFC 6749, Section 10.
  7. Enabling multiple SASL mechanisms in a broker

    1. Specify configuration for the login modules of all enabled mechanisms in the KafkaServer section of the JAAS config file. For example:
      KafkaServer {
          com.sun.security.auth.module.Krb5LoginModule required
          useKeyTab=true
          storeKey=true
          keyTab="/etc/security/keytabs/kafka_server.keytab"
          principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
      
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="admin"
          password="admin-secret"
          user_admin="admin-secret"
          user_alice="alice-secret";
      };
    2. Enable the SASL mechanisms in server.properties:
      sasl.enabled.mechanisms=GSSAPI,PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER
    3. Specify the SASL security protocol and mechanism for inter-broker communication in server.properties if required:
      security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
      sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechanisms)
    4. Follow the mechanism-specific steps in GSSAPI (Kerberos), PLAIN, SCRAM and OAUTHBEARER to configure SASL for the enabled mechanisms.
  8. Modifying SASL mechanism in a Running Cluster

    SASL mechanism can be modified in a running cluster using the following sequence:

    1. Enable new SASL mechanism by adding the mechanism to sasl.enabled.mechanisms in server.properties for each broker. Update JAAS config file to include both mechanisms as described here. Incrementally bounce the cluster nodes.
    2. Restart clients using the new mechanism.
    3. To change the mechanism of inter-broker communication (if this is required), set sasl.mechanism.inter.broker.protocol in server.properties to the new mechanism and incrementally bounce the cluster again.
    4. To remove old mechanism (if this is required), remove the old mechanism from sasl.enabled.mechanisms in server.properties and remove the entries for the old mechanism from JAAS config file. Incrementally bounce the cluster again.
  9. Authentication using Delegation Tokens

    Delegation token based authentication is a lightweight authentication mechanism to complement existing SASL/SSL methods. Delegation tokens are shared secrets between kafka brokers and clients. Delegation tokens will help processing frameworks to distribute the workload to available workers in a secure environment without the added cost of distributing Kerberos TGT/keytabs or keystores when 2-way SSL is used. See KIP-48 for more details.

    Under the default implementation of principal.builder.class, the owner of the delegation token is used as the authenticated Principal for configuration of ACLs etc.

    Typical steps for delegation token usage are:

    1. User authenticates with the Kafka cluster via SASL or SSL, and obtains a delegation token. This can be done using Admin APIs or using kafka-delegation-tokens.sh script.
    2. User securely passes the delegation token to Kafka clients for authenticating with the Kafka cluster.
    3. Token owner/renewer can renew/expire the delegation tokens.
    1. Token Management

      A secret is used to generate and verify delegation tokens. This is supplied using config option delegation.token.secret.key. The same secret key must be configured across all the brokers. If the secret is not set or set to empty string, brokers will disable the delegation token authentication.

      In the current implementation, token details are stored in Zookeeper, which is suitable for use in Kafka installations where Zookeeper is on a private network. Also, currently this secret is stored as plain text in the server.properties config file. We intend to make these configurable in a future Kafka release.

      A token has a current life, and a maximum renewable life. By default, tokens must be renewed once every 24 hours for up to 7 days. These can be configured using delegation.token.expiry.time.ms and delegation.token.max.lifetime.ms config options.

      Tokens can also be cancelled explicitly. If a token is not renewed by the token’s expiration time or if token is beyond the max life time, it will be deleted from all broker caches as well as from zookeeper.

    2. Creating Delegation Tokens

      Tokens can be created by using Admin APIs or using the kafka-delegation-tokens.sh script. Delegation token requests (create/renew/expire/describe) should be issued only on SASL or SSL authenticated channels. Tokens cannot be requested if the initial authentication is done through a delegation token. A user can create a token for themselves, or for a different owner by specifying the --owner-principal parameter. Owners/renewers can renew or expire tokens. Owners/renewers can always describe their own tokens. To describe other tokens, a DESCRIBE_TOKEN permission needs to be added on the User resource representing the owner of the token. kafka-delegation-tokens.sh script examples are given below.

      Create a delegation token:

      > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1

      Create a delegation token for a different owner:

      > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1 --owner-principal User:owner1

      Renew a delegation token:

      > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --renew    --renew-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK

      Expire a delegation token:

      > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --expire   --expiry-time-period -1   --command-config client.properties  --hmac ABCDEFGHIJK

      Existing tokens can be described using the --describe option:

      > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --describe --command-config client.properties  --owner-principal User:user1
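      The same operations are available programmatically through the Admin API. The following hedged sketch (assuming the admin client itself authenticates via SASL or SSL as required above) creates a token with User:user1 as renewer and prints the SCRAM login module string a client could then use; it is an illustration, not an official recipe.
      import java.util.Collections;
      import java.util.Properties;
      import java.util.concurrent.ExecutionException;
      import org.apache.kafka.clients.admin.Admin;
      import org.apache.kafka.clients.admin.AdminClientConfig;
      import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
      import org.apache.kafka.common.security.auth.KafkaPrincipal;
      import org.apache.kafka.common.security.token.delegation.DelegationToken;

      public class DelegationTokenExample {
          public static void main(String[] args) throws ExecutionException, InterruptedException {
              Properties props = new Properties();
              props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              // Security settings equivalent to client.properties would be added here.

              try (Admin admin = Admin.create(props)) {
                  DelegationToken token = admin.createDelegationToken(
                          new CreateDelegationTokenOptions().renewers(
                              Collections.singletonList(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "user1"))))
                      .delegationToken().get();

                  // The token id and HMAC go into the client's ScramLoginModule (see Token Authentication below).
                  String jaas = "org.apache.kafka.common.security.scram.ScramLoginModule required"
                      + " username=\"" + token.tokenInfo().tokenId() + "\""
                      + " password=\"" + token.hmacAsBase64String() + "\""
                      + " tokenauth=\"true\";";
                  System.out.println(jaas);
              }
          }
      }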
    3. Token Authentication

      Delegation token authentication piggybacks on the current SASL/SCRAM authentication mechanism. We must enable the SASL/SCRAM mechanism on the Kafka cluster as described here.

      Configuring Kafka Clients:

      1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the token authentication:
        sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required     username="tokenID123"     password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA=="     tokenauth="true";

        The options username and password are used by clients to configure the token id and token HMAC. The option tokenauth is used to indicate token authentication to the server. In this example, clients connect to the broker using token id tokenID123. Different clients within a JVM may connect using different tokens by specifying different token details in sasl.jaas.config.

        JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

    4. Procedure to manually rotate the secret:

      We require a re-deployment when the secret needs to be rotated. During this process, already connected clients will continue to work. But any new connection requests and renew/expire requests with old tokens can fail. Steps are given below.

      1. Expire all existing tokens.
      2. Rotate the secret by performing a rolling upgrade, and
      3. Generate new tokens.

      We intend to automate this in a future Kafka release.

7.5 Authorization and ACLs

Kafka ships with a pluggable authorization framework, which is configured with the authorizer.class.name property in the server configuration. Configured implementations must extend org.apache.kafka.server.authorizer.Authorizer. Kafka provides default implementations which store ACLs in the cluster metadata (either Zookeeper or the KRaft metadata log).
For Zookeeper-based clusters, the provided implementation is configured as follows:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
For KRaft clusters, use the following configuration on all nodes (brokers, controllers, or combined broker/controller nodes):
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
Kafka ACLs are defined in the general format of "Principal {P} is [Allowed|Denied] Operation {O} From Host {H} on any Resource {R} matching ResourcePattern {RP}". You can read more about the ACL structure in KIP-11 and resource patterns in KIP-290. In order to add, remove, or list ACLs, you can use the Kafka ACL CLI kafka-acls.sh. By default, if no ResourcePatterns match a specific Resource R, then R has no associated ACLs, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties:
allow.everyone.if.no.acl.found=true
One can also add super users in server.properties like the following (note that the delimiter is a semicolon since SSL user names may contain commas). The default PrincipalType string "User" is case sensitive.
super.users=User:Bob;User:Alice
KRaft Principal Forwarding
In KRaft clusters, admin requests such as CreateTopics and DeleteTopics are sent to the broker listeners by the client. The broker then forwards the request to the active controller through the first listener configured in controller.listener.names. Authorization of these requests is done on the controller node. This is achieved by way of an Envelope request which packages both the underlying request from the client as well as the client principal. When the controller receives the forwarded Envelope request from the broker, it first authorizes the Envelope request using the authenticated broker principal. Then it authorizes the underlying request using the forwarded principal.
All of this implies that Kafka must understand how to serialize and deserialize the client principal. The authentication framework allows for customized principals by overriding the principal.builder.class configuration. In order for customized principals to work with KRaft, the configured class must implement org.apache.kafka.common.security.auth.KafkaPrincipalSerde so that Kafka knows how to serialize and deserialize the principals. The default implementation org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder uses the Kafka RPC format defined in the source code: clients/src/main/resources/common/message/DefaultPrincipalData.json. For more detail about request forwarding in KRaft, see KIP-590.
Customizing SSL User Name
By default, the SSL user name will be of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". One can change that by setting ssl.principal.mapping.rules to a customized rule in server.properties. This config allows a list of rules for mapping X.500 distinguished names to short names. The rules are evaluated in order and the first rule that matches a distinguished name is used to map it to a short name. Any later rules in the list are ignored.
The format of ssl.principal.mapping.rules is a list where each rule starts with "RULE:" and contains an expression in one of the following formats. The default rule returns the string representation of the X.500 certificate distinguished name. If the distinguished name matches the pattern, then the replacement command is run over the name. This also supports lowercase/uppercase options, to force the translated result to be all lowercase/uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
RULE:pattern/replacement/
RULE:pattern/replacement/[LU]
Example values of ssl.principal.mapping.rules are:
RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,
RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,
RULE:^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$/$1/L,
DEFAULT
The above rules translate the distinguished name "CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "serviceuser" and "CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "adminuser@admin".
For advanced use cases, one can customize the name by setting a customized PrincipalBuilder in server.properties like the following:
principal.builder.class=CustomizedPrincipalBuilderClass
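For orientation, a rough sketch of what such a customized principal builder might look like is shown below. This is a hypothetical example, not the default implementation: it naively extracts the CN from the client certificate on SSL listeners and falls back to the SASL authorization id otherwise. As discussed under KRaft Principal Forwarding above, a custom builder must additionally implement KafkaPrincipalSerde to work in KRaft clusters.
import javax.net.ssl.SSLPeerUnverifiedException;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SaslAuthenticationContext;
import org.apache.kafka.common.security.auth.SslAuthenticationContext;

public class CustomizedPrincipalBuilderClass implements KafkaPrincipalBuilder {

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        if (context instanceof SslAuthenticationContext) {
            try {
                String dn = ((SslAuthenticationContext) context).session().getPeerPrincipal().getName();
                // Very naive CN extraction, for illustration only.
                for (String part : dn.split(",")) {
                    if (part.trim().startsWith("CN=")) {
                        return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, part.trim().substring(3).toLowerCase());
                    }
                }
                return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, dn);
            } catch (SSLPeerUnverifiedException e) {
                throw new KafkaException("Failed to determine peer principal", e);
            }
        } else if (context instanceof SaslAuthenticationContext) {
            return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
                ((SaslAuthenticationContext) context).server().getAuthorizationID());
        }
        return KafkaPrincipal.ANONYMOUS;
    }
}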
Customizing SASL User Name
By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting sasl.kerberos.principal.to.local.rules to a customized rule in server.properties. The format of sasl.kerberos.principal.to.local.rules is a list where each rule works in the same way as auth_to_local in the Kerberos configuration file (krb5.conf). This also supports additional lowercase/uppercase rules, to force the translated result to be all lowercase/uppercase. This is done by adding a "/L" or "/U" to the end of the rule. Each rule starts with RULE: and contains an expression in one of the following formats. See the Kerberos documentation for more details.
RULE:[n:string](regexp)s/pattern/replacement/
RULE:[n:string](regexp)s/pattern/replacement/g
RULE:[n:string](regexp)s/pattern/replacement//L
RULE:[n:string](regexp)s/pattern/replacement/g/L
RULE:[n:string](regexp)s/pattern/replacement//U
RULE:[n:string](regexp)s/pattern/replacement/g/U
An example of adding a rule to properly translate user@MYDOMAIN.COM to user while also keeping the default rule in place is:
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT

Command Line Interface

Kafka Authorization management CLI can be found under bin directory with all the other CLIs. The CLI script is called kafka-acls.sh. Following lists all the options that the script supports:
Option Description Default Option type
--add Indicates to the script that user is trying to add an acl. Action
--remove Indicates to the script that user is trying to remove an acl. Action
--list Indicates to the script that user is trying to list acls. Action
--bootstrap-server A list of host/port pairs to use for establishing the connection to the Kafka cluster. Only one of --bootstrap-server or --authorizer option must be specified. Configuration
--command-config A property file containing configs to be passed to Admin Client. This option can only be used with --bootstrap-server option. Configuration
--cluster Indicates to the script that the user is trying to interact with acls on the singular cluster resource. ResourcePattern
--topic [topic-name] Indicates to the script that the user is trying to interact with acls on topic resource pattern(s). ResourcePattern
--group [group-name] Indicates to the script that the user is trying to interact with acls on consumer-group resource pattern(s) ResourcePattern
--transactional-id [transactional-id] The transactionalId to which ACLs should be added or removed. A value of * indicates the ACLs should apply to all transactionalIds. ResourcePattern
--delegation-token [delegation-token] Delegation token to which ACLs should be added or removed. A value of * indicates ACL should apply to all tokens. ResourcePattern
--user-principal [user-principal] A user resource to which ACLs should be added or removed. This is currently supported in relation with delegation tokens. A value of * indicates ACL should apply to all users. ResourcePattern
--resource-pattern-type [pattern-type] Indicates to the script the type of resource pattern, (for --add), or resource pattern filter, (for --list and --remove), the user wishes to use.
When adding acls, this should be a specific pattern type, e.g. 'literal' or 'prefixed'.
When listing or removing acls, a specific pattern type filter can be used to list or remove acls from a specific type of resource pattern, or the filter values of 'any' or 'match' can be used, where 'any' will match any pattern type, but will match the resource name exactly, and 'match' will perform pattern matching to list or remove all acls that affect the supplied resource(s).
WARNING: 'match', when used in combination with the '--remove' switch, should be used with care.
literal Configuration
--allow-principal Principal is in PrincipalType:name format that will be added to ACL with Allow permission. Default PrincipalType string "User" is case sensitive.
You can specify multiple --allow-principal in a single command.
Principal
--deny-principal Principal is in PrincipalType:name format that will be added to ACL with Deny permission. Default PrincipalType string "User" is case sensitive.
You can specify multiple --deny-principal in a single command.
Principal
--principal Principal is in PrincipalType:name format that will be used along with --list option. Default PrincipalType string "User" is case sensitive. This will list the ACLs for the specified principal.
You can specify multiple --principal in a single command.
Principal
--allow-host IP address from which principals listed in --allow-principal will have access. If --allow-principal is specified, defaults to *, which translates to "all hosts". Host
--deny-host IP address from which principals listed in --deny-principal will be denied access. If --deny-principal is specified, defaults to *, which translates to "all hosts". Host
--operation Operation that will be allowed or denied.
Valid values are:
  • Read
  • Write
  • Create
  • Delete
  • Alter
  • Describe
  • ClusterAction
  • DescribeConfigs
  • AlterConfigs
  • IdempotentWrite
  • CreateTokens
  • DescribeTokens
  • All
All Operation
--producer Convenience option to add/remove acls for producer role. This will generate acls that allows WRITE, DESCRIBE and CREATE on topic. Convenience
--consumer Convenience option to add/remove acls for consumer role. This will generate acls that allows READ, DESCRIBE on topic and READ on consumer-group. Convenience
--idempotent Enable idempotence for the producer. This should be used in combination with the --producer option.
Note that idempotence is enabled automatically if the producer is authorized to a particular transactional-id.
Convenience
--force Convenience option to assume yes to all queries and do not prompt. Convenience
--authorizer (DEPRECATED: not supported in KRaft) Fully qualified class name of the authorizer. kafka.security.authorizer.AclAuthorizer Configuration
--authorizer-properties (DEPRECATED: not supported in KRaft) key=val pairs that will be passed to the authorizer for initialization. For the default authorizer in ZK clusters, the example values are: zookeeper.connect=localhost:2181 Configuration
--zk-tls-config-file (DEPRECATED: not supported in KRaft) Identifies the file where ZooKeeper client TLS connectivity properties for the authorizer are defined. Any properties other than the following (with or without an "authorizer." prefix) are ignored: zookeeper.clientCnxnSocket, zookeeper.ssl.cipher.suites, zookeeper.ssl.client.enable, zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm, zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type, zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location, zookeeper.ssl.truststore.password, zookeeper.ssl.truststore.type Configuration

Examples

  • Adding Acls
    Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with following options:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
    By default, all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host options. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3 we can do so using the following commands:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:'*' --allow-host '*' --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic
    Note that --allow-host and --deny-host only support IP addresses (hostnames are not supported). The above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly one can add acls to the cluster by specifying --cluster and to a consumer group by specifying --group [group-name]. You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.1". You can do that by using the wildcard resource '*', e.g. by executing the CLI with following options:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Peter --allow-host 198.51.200.1 --producer --topic '*'
    You can add acls on prefixed resource patterns, e.g. suppose you want to add an acl "Principal User:Jane is allowed to produce to any Topic whose name starts with 'Test-' from any host". You can do that by executing the CLI with following options:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed
    Note, --resource-pattern-type defaults to 'literal', which only affects resources with the exact same name or, in the case of the wildcard resource name '*', a resource with any name.
  • Removing Acls
    Removing acls is pretty much the same. The only difference is that instead of the --add option users will have to specify the --remove option. To remove the acls added by the first example above we can execute the CLI with following options:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
    If you want to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed
  • List Acls
    We can list acls for any resource by specifying the --list option with the resource. To list all acls on the literal resource pattern Test-topic, we can execute the CLI with following options:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic Test-topic
    However, this will only return the acls that have been added to this exact resource pattern. Other acls can exist that affect access to the topic, e.g. any acls on the topic wildcard '*', or any acls on prefixed resource patterns. Acls on the wildcard resource pattern can be queried explicitly:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic '*'
    However, it is not necessarily possible to explicitly query for acls on prefixed resource patterns that match Test-topic as the name of such patterns may not be known. We can list all acls affecting Test-topic by using '--resource-pattern-type match', e.g.
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic Test-topic --resource-pattern-type match
    This will list acls on all matching literal, wildcard and prefixed resource patterns.
  • Adding or removing a principal as producer or consumer
    The most common use cases for acl management are adding/removing a principal as a producer or consumer, so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic we can execute the following command:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --producer --topic Test-topic
    Similarly, to add User:Bob as a consumer of Test-topic with consumer group Group-1 we just have to pass the --consumer option:
    > bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1
    Note that for the consumer option we must also specify the consumer group. In order to remove a principal from a producer or consumer role we just need to pass the --remove option.
  • Admin API based acl management
    Users having Alter permission on ClusterResource can use Admin API for ACL management. kafka-acls.sh script supports AdminClient API to manage ACLs without interacting with zookeeper/authorizer directly. All the above examples can be executed by using --bootstrap-server option. For example:
    bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --producer --topic Test-topic
    bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1
    bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --list --topic Test-topic
    bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:tokenRequester --operation CreateTokens --user-principal "owner1"
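    Equivalently, ACLs can be managed directly from Java code via the Admin API. A hedged sketch (security settings omitted; the topic and principal mirror the examples above):
    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclBindingFilter;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;

    public class AclAdminExample {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Security settings equivalent to /tmp/adminclient-configs.conf would be added here.

            try (Admin admin = Admin.create(props)) {
                // "Principal User:Bob is allowed to Write to Topic Test-topic from any host"
                AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "Test-topic", PatternType.LITERAL),
                    new AccessControlEntry("User:Bob", "*", AclOperation.WRITE, AclPermissionType.ALLOW));
                admin.createAcls(Collections.singletonList(binding)).all().get();

                // List every ACL the authorizer currently holds.
                admin.describeAcls(AclBindingFilter.ANY).values().get().forEach(System.out::println);
            }
        }
    }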

Authorization Primitives

Protocol calls are usually performing some operations on certain resources in Kafka. It is required to know the operations and resources to set up effective protection. In this section we'll list these operations and resources, then list the combination of these with the protocols to see the valid scenarios.

Operations in Kafka

There are a few operation primitives that can be used to build up privileges. These can be matched up with certain resources to allow specific protocol calls for a given user. These are:

  • Read
  • Write
  • Create
  • Delete
  • Alter
  • Describe
  • ClusterAction
  • DescribeConfigs
  • AlterConfigs
  • IdempotentWrite
  • CreateTokens
  • DescribeTokens
  • All
Resources in Kafka

The operations above can be applied on certain resources which are described below.

  • Topic: this simply represents a Topic. All protocol calls that are acting on topics (such as reading, writing them) require the corresponding privilege to be added. If there is an authorization error with a topic resource, then a TOPIC_AUTHORIZATION_FAILED (error code: 29) will be returned.
  • Group: this represents the consumer groups in the brokers. All protocol calls that are working with consumer groups, like joining a group must have privileges with the group in subject. If the privilege is not given then a GROUP_AUTHORIZATION_FAILED (error code: 30) will be returned in the protocol response.
  • Cluster: this resource represents the cluster. Operations that are affecting the whole cluster, like controlled shutdown are protected by privileges on the Cluster resource. If there is an authorization problem on a cluster resource, then a CLUSTER_AUTHORIZATION_FAILED (error code: 31) will be returned.
  • TransactionalId: this resource represents actions related to transactions, such as committing. If any error occurs, then a TRANSACTIONAL_ID_AUTHORIZATION_FAILED (error code: 53) will be returned by brokers.
  • DelegationToken: this represents the delegation tokens in the cluster. Actions, such as describing delegation tokens, can be protected by a privilege on the DelegationToken resource. Since these objects have somewhat special behavior in Kafka, it is recommended to read KIP-48 and the related upstream documentation at Authentication using Delegation Tokens.
  • User: CreateToken and DescribeToken operations can be granted to User resources to allow creating and describing tokens for other users. More info can be found in KIP-373.
Operations and Resources on Protocols

In the below table we'll list the valid operations on resources that are executed by the Kafka API protocols.

Protocol (API key) Operation Resource Note
PRODUCE (0) Write TransactionalId A transactional producer which has its transactional.id set requires this privilege.
PRODUCE (0) IdempotentWrite Cluster An idempotent produce action requires this privilege.
PRODUCE (0) Write Topic This applies to a normal produce action.
FETCH (1) ClusterAction Cluster A follower must have ClusterAction on the Cluster resource in order to fetch partition data.
FETCH (1) Read Topic Regular Kafka consumers need READ permission on each partition they are fetching.
LIST_OFFSETS (2) Describe Topic
METADATA (3) Describe Topic
METADATA (3) Create Cluster If topic auto-creation is enabled, then the broker-side API will check for the existence of a Cluster level privilege. If it's found then it'll allow creating the topic, otherwise it'll iterate through the Topic level privileges (see the next one).
METADATA (3) Create Topic This authorizes auto topic creation if enabled but the given user doesn't have a cluster level permission (above).
LEADER_AND_ISR (4) ClusterAction Cluster
STOP_REPLICA (5) ClusterAction Cluster
UPDATE_METADATA (6) ClusterAction Cluster
CONTROLLED_SHUTDOWN (7) ClusterAction Cluster
OFFSET_COMMIT (8) Read Group An offset can only be committed if it's authorized to the given group and the topic too (see below). Group access is checked first, then Topic access.
OFFSET_COMMIT (8) Read Topic Since offset commit is part of the consuming process, it needs privileges for the read action.
OFFSET_FETCH (9) Describe Group Similarly to OFFSET_COMMIT, the application must have privileges on group and topic level too to be able to fetch. However in this case it requires describe access instead of read. Group access is checked first, then Topic access.
OFFSET_FETCH (9) Describe Topic
FIND_COORDINATOR (10) Describe Group The FIND_COORDINATOR request can be of "Group" type in which case it is looking for consumer group coordinators. This privilege would represent the Group mode.
FIND_COORDINATOR (10) Describe TransactionalId This applies only on transactional producers and checked when a producer tries to find the transaction coordinator.
JOIN_GROUP (11) Read Group
HEARTBEAT (12) Read Group
LEAVE_GROUP (13) Read Group
SYNC_GROUP (14) Read Group
DESCRIBE_GROUPS (15) Describe Group
LIST_GROUPS (16) Describe Cluster When the broker checks to authorize a list_groups request it first checks for this cluster level authorization. If none found then it proceeds to check the groups individually. This operation doesn't return CLUSTER_AUTHORIZATION_FAILED.
LIST_GROUPS (16) Describe Group If none of the groups are authorized, then just an empty response will be sent back instead of an error. This operation doesn't return CLUSTER_AUTHORIZATION_FAILED. This is applicable from the 2.1 release.
SASL_HANDSHAKE (17) The SASL handshake is part of the authentication process and therefore it's not possible to apply any kind of authorization here.
API_VERSIONS (18) The API_VERSIONS request is part of the Kafka protocol handshake and happens on connection and before any authentication. Therefore it's not possible to control this with authorization.
CREATE_TOPICS (19) Create Cluster If there is no cluster level authorization then it won't return CLUSTER_AUTHORIZATION_FAILED but will fall back to topic level authorization, which is just below. That will throw an error if there is a problem.
CREATE_TOPICS (19) Create Topic This is applicable from the 2.0 release.
DELETE_TOPICS (20) Delete Topic
DELETE_RECORDS (21) Delete Topic
INIT_PRODUCER_ID (22) Write TransactionalId
INIT_PRODUCER_ID (22) IdempotentWrite Cluster
OFFSET_FOR_LEADER_EPOCH (23) ClusterAction Cluster If there is no cluster level privilege for this operation, then it'll check for topic level one.
OFFSET_FOR_LEADER_EPOCH (23) Describe Topic This is applicable from the 2.1 release.
ADD_PARTITIONS_TO_TXN (24) Write TransactionalId This API is only applicable to transactional requests. It first checks for the Write action on the TransactionalId resource, then it checks the Topic in subject (below).
ADD_PARTITIONS_TO_TXN (24) Write Topic
ADD_OFFSETS_TO_TXN (25) Write TransactionalId Similarly to ADD_PARTITIONS_TO_TXN this is only applicable to transactional requests. It first checks for Write action on the TransactionalId resource, then it checks whether it can Read on the given group (below).
ADD_OFFSETS_TO_TXN (25) Read Group
END_TXN (26) Write TransactionalId
WRITE_TXN_MARKERS (27) ClusterAction Cluster
TXN_OFFSET_COMMIT (28) Write TransactionalId
TXN_OFFSET_COMMIT (28) Read Group
TXN_OFFSET_COMMIT (28) Read Topic
DESCRIBE_ACLS (29) Describe Cluster
CREATE_ACLS (30) Alter Cluster
DELETE_ACLS (31) Alter Cluster
DESCRIBE_CONFIGS (32) DescribeConfigs Cluster If broker configs are requested, then the broker will check cluster level privileges.
DESCRIBE_CONFIGS (32) DescribeConfigs Topic If topic configs are requested, then the broker will check topic level privileges.
ALTER_CONFIGS (33) AlterConfigs Cluster If broker configs are altered, then the broker will check cluster level privileges.
ALTER_CONFIGS (33) AlterConfigs Topic If topic configs are altered, then the broker will check topic level privileges.
ALTER_REPLICA_LOG_DIRS (34) Alter Cluster
DESCRIBE_LOG_DIRS (35) Describe Cluster An empty response will be returned on authorization failure.
SASL_AUTHENTICATE (36) SASL_AUTHENTICATE is part of the authentication process and therefore it's not possible to apply any kind of authorization here.
CREATE_PARTITIONS (37) Alter Topic
CREATE_DELEGATION_TOKEN (38) Creating delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section.
CREATE_DELEGATION_TOKEN (38) CreateTokens User Allows creating delegation tokens for the User resource.
RENEW_DELEGATION_TOKEN (39) Renewing delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section.
EXPIRE_DELEGATION_TOKEN (40) Expiring delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section.
DESCRIBE_DELEGATION_TOKEN (41) Describe DelegationToken Describing delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section.
DESCRIBE_DELEGATION_TOKEN (41) DescribeTokens User Allows describing delegation tokens of the User resource.
DELETE_GROUPS (42) Delete Group
ELECT_PREFERRED_LEADERS (43) ClusterAction Cluster
INCREMENTAL_ALTER_CONFIGS (44) AlterConfigs Cluster If broker configs are altered, then the broker will check cluster level privileges.
INCREMENTAL_ALTER_CONFIGS (44) AlterConfigs Topic If topic configs are altered, then the broker will check topic level privileges.
ALTER_PARTITION_REASSIGNMENTS (45) Alter Cluster
LIST_PARTITION_REASSIGNMENTS (46) Describe Cluster
OFFSET_DELETE (47) Delete Group
OFFSET_DELETE (47) Read Topic
DESCRIBE_CLIENT_QUOTAS (48) DescribeConfigs Cluster
ALTER_CLIENT_QUOTAS (49) AlterConfigs Cluster
DESCRIBE_USER_SCRAM_CREDENTIALS (50) Describe Cluster
ALTER_USER_SCRAM_CREDENTIALS (51) Alter Cluster
VOTE (52) ClusterAction Cluster
BEGIN_QUORUM_EPOCH (53) ClusterAction Cluster
END_QUORUM_EPOCH (54) ClusterAction Cluster
DESCRIBE_QUORUM (55) Describe Cluster
ALTER_PARTITION (56) ClusterAction Cluster
UPDATE_FEATURES (57) Alter Cluster
ENVELOPE (58) ClusterAction Cluster
FETCH_SNAPSHOT (59) ClusterAction Cluster
DESCRIBE_CLUSTER (60) Describe Cluster
DESCRIBE_PRODUCERS (61) Read Topic
BROKER_REGISTRATION (62) ClusterAction Cluster
BROKER_HEARTBEAT (63) ClusterAction Cluster
UNREGISTER_BROKER (64) Alter Cluster
DESCRIBE_TRANSACTIONS (65) Describe TransactionalId
LIST_TRANSACTIONS (66) Describe TransactionalId
ALLOCATE_PRODUCER_IDS (67) ClusterAction Cluster
CONSUMER_GROUP_HEARTBEAT (68) Read Group

7.6 Incorporating Security Features in a Running Cluster

You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:
  • Incrementally bounce the cluster nodes to open additional secured port(s).
  • Restart clients using the secured rather than PLAINTEXT port (assuming you are securing the client-broker connection).
  • Incrementally bounce the cluster again to enable broker-to-broker security (if this is required)
  • A final incremental bounce to close the PLAINTEXT port.
The specific steps for configuring SSL and SASL are described in sections 7.3 and 7.4. Follow these steps to enable security for your desired protocol(s). The security implementation lets you configure different protocols for both broker-client and broker-broker communication. These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout so brokers and/or clients can continue to communicate. When performing an incremental bounce stop the brokers cleanly via a SIGTERM. It's also good practice to wait for restarted replicas to return to the ISR list before moving onto the next node.

As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092

We then restart the clients, changing their config to point at the newly opened, secured port:

bootstrap.servers = [broker1:9092,...]
security.protocol = SSL
...etc

In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker protocol (which will use the same SSL port):

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
security.inter.broker.protocol=SSL

In the final bounce we secure the cluster by closing the PLAINTEXT port:

listeners=SSL://broker1:9092
security.inter.broker.protocol=SSL

Alternatively we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption throughout (i.e. for broker-broker and broker-client communication) but we'd like to add SASL authentication to the broker-client connection also. We would achieve this by opening two additional ports during the first bounce:

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093

We would then restart the clients, changing their config to point at the newly opened, SASL & SSL secured port:

bootstrap.servers = [broker1:9093,...]
security.protocol = SASL_SSL
...etc

The second server bounce would switch the cluster to use encrypted broker-broker communication via the SSL port we previously opened on port 9092:

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL

The final bounce secures the cluster by closing the PLAINTEXT port:

listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL

ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this are covered in section 7.7.2.

7.7 ZooKeeper Authentication

ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 3.5.x versions. Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either individually or both together -- beginning with version 2.5. See KIP-515: Enable ZK client to use the new TLS supported authentication for more details.

When using mTLS alone, every broker and any CLI tools (such as the ZooKeeper Security Migration Tool) should identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed. This can be changed as described below, but it involves writing and deploying a custom ZooKeeper authentication provider. Generally each certificate should have the same DN but a different Subject Alternative Name (SAN) so that hostname verification of the brokers and any CLI tools by ZooKeeper will succeed.

When using SASL authentication to ZooKeeper together with mTLS, both the SASL identity and either the DN that created the znode (i.e. the creating broker's certificate) or the DN of the Security Migration Tool (if migration was performed after the znode was created) will be ACL'ed, and all brokers and CLI tools will be authorized even if they all use different DNs because they will all use the same ACL'ed SASL identity. It is only when using mTLS authentication alone that all the DNs must match (and SANs become critical -- again, in the absence of writing and deploying a custom ZooKeeper authentication provider as described below).

Use the broker properties file to set TLS configs for brokers as described below.

Use the --zk-tls-config-file <file> option to set TLS configs in the Zookeeper Security Migration Tool. The kafka-acls.sh and kafka-configs.sh CLI tools also support the --zk-tls-config-file <file> option.

Use the -zk-tls-config-file <file> option (note the single-dash rather than double-dash) to set TLS configs for the zookeeper-shell.sh CLI tool.
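The TLS config file itself is a plain properties file holding the same zookeeper.ssl.* client settings a broker would use; a minimal sketch (paths and passwords are placeholders):

zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
zookeeper.ssl.keystore.password=kafka-ks-passwd
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd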

7.7.1 New clusters

7.7.1.1 ZooKeeper SASL Authentication
To enable ZooKeeper SASL authentication on brokers, there are two necessary steps:
  1. Create a JAAS login file and set the appropriate system property to point to it as described above
  2. Set the configuration property zookeeper.set.acl in each broker to true
The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
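For step 1, a minimal sketch of a broker-side JAAS login file (SASL/DIGEST-MD5 shown; the user name, password and file path are placeholders) saved, say, as /path/to/kafka_server_jaas.conf:

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
};

The broker is then started with the matching system property, e.g. by exporting KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf", before setting zookeeper.set.acl=true as in step 2.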
7.7.1.2 ZooKeeper Mutual TLS Authentication
ZooKeeper mTLS authentication can be enabled with or without SASL authentication. As mentioned above, when using mTLS alone, every broker and any CLI tools (such as the ZooKeeper Security Migration Tool) must generally identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed, which means each certificate should have an appropriate Subject Alternative Name (SAN) so that hostname verification of the brokers and any CLI tool by ZooKeeper will succeed.

It is possible to use something other than the DN for the identity of mTLS clients by writing a class that extends org.apache.zookeeper.server.auth.X509AuthenticationProvider and overrides the method protected String getClientId(X509Certificate clientCert). Choose a scheme name and set authProvider.[scheme] in ZooKeeper to be the fully-qualified class name of the custom implementation; then set ssl.authProvider=[scheme] to use it.
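As a sketch of the wiring only (the scheme name and class below are hypothetical and not shipped with ZooKeeper), the corresponding ZooKeeper configuration fragment would look like:

# register the custom provider under the scheme name "sanAuth"
authProvider.sanAuth=com.example.SanBasedX509AuthenticationProvider
# tell the TLS listener to use that scheme instead of the default x509 provider
ssl.authProvider=sanAuth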

Here is a sample (partial) ZooKeeper configuration for enabling TLS authentication. These configurations are described in the ZooKeeper Admin Guide. IMPORTANT: ZooKeeper does not support setting the key password in the ZooKeeper server keystore to a value different from the keystore password itself. Be sure to set the key password to be the same as the keystore password.
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=/path/to/zk/keystore.jks
ssl.keyStore.password=zk-ks-passwd
ssl.trustStore.location=/path/to/zk/truststore.jks
ssl.trustStore.password=zk-ts-passwd

Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with mTLS authentication. These configurations are described above in Broker Configs.

# connect to the ZooKeeper port configured for TLS
zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
# required to use TLS to ZooKeeper (default is false)
zookeeper.ssl.client.enable=true
# required to use TLS to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
# define key/trust stores to use TLS to ZooKeeper; ignored unless zookeeper.ssl.client.enable=true
zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
zookeeper.ssl.keystore.password=kafka-ks-passwd
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd
# tell broker to create ACLs on znodes
zookeeper.set.acl=true
IMPORTANT: ZooKeeper does not support setting the key password in the ZooKeeper client (i.e. broker) keystore to a value different from the keystore password itself. Be sure to set the key password to be the same as the keystore password.

7.7.2 Migrating clusters

If you are running a version of Kafka that does not support security, or are running with security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:
  1. Enable SASL and/or mTLS authentication on ZooKeeper. If enabling mTLS, you would now have both a non-TLS port and a TLS port, like this:
    clientPort=2181
    secureClientPort=2182
    serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
    authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
    ssl.keyStore.location=/path/to/zk/keystore.jks
    ssl.keyStore.password=zk-ks-passwd
    ssl.trustStore.location=/path/to/zk/truststore.jks
    ssl.trustStore.password=zk-ts-passwd
  2. Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs
  3. If you enabled mTLS, disable the non-TLS port in ZooKeeper
  4. Perform a second rolling restart of brokers, this time setting the configuration parameter zookeeper.set.acl to true, which enables the use of secure ACLs when creating znodes
  5. Execute the ZkSecurityMigrator tool. To execute the tool, run the bin/zookeeper-security-migration.sh script with zookeeper.acl set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the --zk-tls-config-file <file> option if you enabled mTLS.

It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:

  1. Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting zookeeper.set.acl to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes
  2. Execute the ZkSecurityMigrator tool. To execute the tool, run the bin/zookeeper-security-migration.sh script with zookeeper.acl set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the --zk-tls-config-file <file> option if you need to set TLS configuration.
  3. If you are disabling mTLS, enable the non-TLS port in ZooKeeper
  4. Perform a second rolling restart of brokers, this time omitting the system property that sets the JAAS login file and/or removing ZooKeeper mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper port) as required
  5. If you are disabling mTLS, disable the TLS port in ZooKeeper
Here is an example of how to run the migration tool:
> bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181

Run this to see the full list of parameters:

> bin/zookeeper-security-migration.sh --help

7.7.3 Migrating the ZooKeeper ensemble

It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble. To do it, we need to perform a rolling restart of the server and set a few properties. See above for mTLS information. Please refer to the ZooKeeper documentation for more detail:
  1. Apache ZooKeeper documentation
  2. Apache ZooKeeper wiki

7.7.4 ZooKeeper Quorum Mutual TLS Authentication

It is possible to enable mTLS authentication between the ZooKeeper servers themselves. Please refer to the ZooKeeper documentation for more detail.

7.8 ZooKeeper Encryption

ZooKeeper connections that use mutual TLS are encrypted. Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka version 2.5) ZooKeeper supports a server-side config ssl.clientAuth (case-insensitive; want/need/none are the valid options, the default is need), and setting this value to none in ZooKeeper allows clients to connect via a TLS-encrypted connection without presenting their own certificate. Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with just TLS encryption. These configurations are described above in Broker Configs.
# connect to the ZooKeeper port configured for TLS
zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
# required to use TLS to ZooKeeper (default is false)
zookeeper.ssl.client.enable=true
# required to use TLS to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
# define trust stores to use TLS to ZooKeeper; ignored unless zookeeper.ssl.client.enable=true
# no need to set keystore information assuming ssl.clientAuth=none on ZooKeeper
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd
# tell broker to create ACLs on znodes (if using SASL authentication, otherwise do not set this)
zookeeper.set.acl=true

8. Kafka Connect

8.1 Overview

Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems. It makes it simple to quickly define connectors that move large collections of data into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency. An export job can deliver data from Kafka topics into secondary storage and query systems or into batch systems for offline analysis.

Kafka Connect features include:

  • A common framework for Kafka connectors - Kafka Connect standardizes integration of other data systems with Kafka, simplifying connector development, deployment, and management
  • Distributed and standalone modes - scale up to a large, centrally managed service supporting an entire organization or scale down to development, testing, and small production deployments
  • REST interface - submit and manage connectors to your Kafka Connect cluster via an easy to use REST API
  • Automatic offset management - with just a little information from connectors, Kafka Connect can manage the offset commit process automatically so connector developers do not need to worry about this error prone part of connector development
  • Distributed and scalable by default - Kafka Connect builds on the existing group management protocol. More workers can be added to scale up a Kafka Connect cluster.
  • Streaming/batch integration - leveraging Kafka's existing capabilities, Kafka Connect is an ideal solution for bridging streaming and batch data systems

8.2 User Guide

The quickstart provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.

Running Kafka Connect

Kafka Connect currently supports two modes of execution: standalone (single process) and distributed.

In standalone mode all work is performed in a single process. This configuration is simpler to setup and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:

> bin/connect-standalone.sh config/connect-standalone.properties [connector1.properties connector2.properties ...]

The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by config/server.properties. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs:

  • bootstrap.servers - List of Kafka servers used to bootstrap connections to Kafka
  • key.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  • value.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  • plugin.path (default empty) - a list of paths that contain Connect plugins (connectors, converters, transformations). Before running quick starts, users must add the absolute path that contains the example FileStreamSourceConnector and FileStreamSinkConnector packaged in connect-file-"version".jar, because these connectors are not included by default to the CLASSPATH or the plugin.path of the Connect worker (see the plugin.path property for examples).

The important configuration options specific to standalone mode are:

  • offset.storage.file.filename - File to store source connector offsets
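
Putting these settings together, a minimal sketch of a standalone worker file (connect-standalone.properties), assuming a local broker; the offsets file and plugin path are placeholders:

bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/usr/local/share/kafka/plugins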

The parameters that are configured here are intended for producers and consumers used by Kafka Connect to access the configuration, offset and status topics. For configuration of the producers used by Kafka source tasks and the consumers used by Kafka sink tasks, the same parameters can be used but need to be prefixed with producer. and consumer. respectively. The only Kafka client parameter that is inherited without a prefix from the worker configuration is bootstrap.servers, which in most cases will be sufficient, since the same cluster is often used for all purposes. A notable exception is a secured cluster, which requires extra parameters to allow connections. These parameters will need to be set up to three times in the worker configuration, once for management access, once for Kafka sources and once for Kafka sinks.

Starting with 2.3.0, client configuration overrides can be configured individually per connector by using the prefixes producer.override. and consumer.override. for Kafka sources or Kafka sinks respectively. These overrides are included with the rest of the connector's configuration properties.
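
For instance, a hypothetical sketch of such overrides inside a source connector's configuration (values are illustrative only):

producer.override.compression.type=lz4
producer.override.batch.size=65536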

The remaining parameters are connector configuration files. You may include as many as you want, but all will execute within the same process (on different threads). You can also choose not to specify any connector configuration files on the command line, and instead use the REST API to create connectors at runtime after your standalone worker starts.

Distributed mode handles automatic balancing of work, allows you to scale up (or down) dynamically, and offers fault tolerance both in the active tasks and for configuration and offset commit data. Execution is very similar to standalone mode:

> bin/connect-distributed.sh config/connect-distributed.properties

The difference is in the class which is started and the configuration parameters which change how the Kafka Connect process decides where to store configurations, how to assign work, and where to store offsets and task statuses. In distributed mode, Kafka Connect stores the offsets, configs and task statuses in Kafka topics. It is recommended to manually create the topics for offsets, configs and statuses in order to achieve the desired number of partitions and replication factors. If the topics are not yet created when starting Kafka Connect, the topics will be auto created with the default number of partitions and replication factor, which may not be best suited for its usage.

In particular, the following configuration parameters, in addition to the common settings mentioned above, are critical to set before starting your cluster:

  • group.id (default connect-cluster) - unique name for the cluster, used in forming the Connect cluster group; note that this must not conflict with consumer group IDs
  • config.storage.topic (default connect-configs) - topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated, compacted topic. You may need to manually create the topic to ensure the correct configuration as auto created topics may have multiple partitions or be automatically configured for deletion rather than compaction
  • offset.storage.topic (default connect-offsets) - topic to use for storing offsets; this topic should have many partitions, be replicated, and be configured for compaction
  • status.storage.topic (default connect-status) - topic to use for storing statuses; this topic can have multiple partitions, and should be replicated and configured for compaction (a combined sketch of these settings follows this list)
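
A minimal sketch combining these distributed-mode settings (connect-distributed.properties), assuming a local broker; partition counts and replication factors are illustrative:

bootstrap.servers=localhost:9092
group.id=connect-cluster
config.storage.topic=connect-configs
config.storage.replication.factor=3
offset.storage.topic=connect-offsets
offset.storage.partitions=25
offset.storage.replication.factor=3
status.storage.topic=connect-status
status.storage.partitions=5
status.storage.replication.factor=3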

Note that in distributed mode the connector configurations are not passed on the command line. Instead, use the REST API described below to create, modify, and destroy connectors.

Configuring Connectors

Connector configurations are simple key-value mappings. In both standalone and distributed mode, they are included in the JSON payload for the REST request that creates (or modifies) the connector. In standalone mode these can also be defined in a properties file and passed to the Connect process on the command line.

Most configurations are connector dependent, so they can't be outlined here. However, there are a few common options:

  • name - Unique name for the connector. Attempting to register again with the same name will fail.
  • connector.class - The Java class for the connector
  • tasks.max - The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot achieve this level of parallelism.
  • key.converter - (optional) Override the default key converter set by the worker.
  • value.converter - (optional) Override the default value converter set by the worker.

The connector.class config supports several formats: the full name or alias of the class for this connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name or use FileStreamSink or FileStreamSinkConnector to make the configuration a bit shorter.

Sink connectors also have a few additional options to control their input. Each sink connector must set one of the following:

  • topics - A comma-separated list of topics to use as input for this connector
  • topics.regex - A Java regular expression of topics to use as input for this connector

For any other options, you should consult the documentation for the connector.
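
For illustration, a minimal sketch of a sink connector properties file using the common options above (FileStreamSink ships with the quickstart examples; the file path and topic are placeholders):

name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=/tmp/test.sink.txt
topics=connect-test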

Transformations

Connectors can be configured with transformations to make lightweight message-at-a-time modifications. They can be convenient for data massaging and event routing.

A transformation chain can be specified in the connector configuration.

  • transforms - List of aliases for the transformation, specifying the order in which the transformations will be applied.
  • transforms.$alias.type - Fully qualified class name for the transformation.
  • transforms.$alias.$transformationSpecificConfig - Configuration properties for the transformation

For example, let's take the built-in file source connector and use a transformation to add a static field.

Throughout the example we'll use the schemaless JSON data format. To use the schemaless format, we changed the following two lines in connect-standalone.properties from true to false:

key.converter.schemas.enable
value.converter.schemas.enable

The file source connector reads each line as a String. We will wrap each line in a Map and then add a second field to identify the origin of the event. To do this, we use two transformations:

  • HoistField to place the input line inside a Map
  • InsertField to add the static field. In this example we'll indicate that the record came from a file connector

After adding the transformations, the connect-file-source.properties file looks as follows:

name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
transforms=MakeMap, InsertSource
transforms.MakeMap.type=org.apache.kafka.connect.transforms.HoistField$Value
transforms.MakeMap.field=line
transforms.InsertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.InsertSource.static.field=data_source
transforms.InsertSource.static.value=test-file-source

All the lines starting with transforms were added for the transformations. You can see the two transformations we created: "InsertSource" and "MakeMap" are aliases that we chose to give the transformations. The transformation types are based on the list of built-in transformations you can see below. Each transformation type has additional configuration: HoistField requires a configuration called "field", which is the name of the field in the map that will include the original String from the file. The InsertField transformation lets us specify the field name and the value that we are adding.

When we ran the file source connector on my sample file without the transformations, and then read them using kafka-console-consumer.sh, the results were:

"foo"
"bar"
"hello world"

We then create a new file connector, this time after adding the transformations to the configuration file. This time, the results will be:

{"line":"foo","data_source":"test-file-source"}
{"line":"bar","data_source":"test-file-source"}
{"line":"hello world","data_source":"test-file-source"}

You can see that the lines we've read are now part of a JSON map, and there is an extra field with the static value we specified. This is just one example of what you can do with transformations.

Included transformations

Several widely-applicable data and routing transformations are included with Kafka Connect:

  • InsertField - Add a field using either static data or record metadata
  • ReplaceField - Filter or rename fields
  • MaskField - Replace field with valid null value for the type (0, empty string, etc) or custom replacement (non-empty string or numeric value only)
  • ValueToKey - Replace the record key with a new key formed from a subset of fields in the record value
  • HoistField - Wrap the entire event as a single field inside a Struct or a Map
  • ExtractField - Extract a specific field from Struct and Map and include only this field in results
  • SetSchemaMetadata - modify the schema name or version
  • TimestampRouter - Modify the topic of a record based on original topic and timestamp. Useful when using a sink that needs to write to different tables or indexes based on timestamps
  • RegexRouter - modify the topic of a record based on original topic, replacement string and a regular expression
  • Filter - Removes messages from all further processing. This is used with a predicate to selectively filter certain messages.
  • InsertHeader - Add a header using static data
  • HeadersFrom - Copy or move fields in the key or value to the record headers
  • DropHeaders - Remove headers by name

Details on how to configure each transformation are listed below:

org.apache.kafka.connect.transforms.InsertField
Insert field(s) using attributes from the record metadata or a configured static value. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.InsertField$Key) or value (org.apache.kafka.connect.transforms.InsertField$Value).
  • offset.field

    Field name for Kafka offset - only applicable to sink connectors.
    Suffix with ! to make this a required field, or ? to keep it optional (the default).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • partition.field

    Field name for Kafka partition. Suffix with ! to make this a required field, or ? to keep it optional (the default).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • static.field

    Field name for static data field. Suffix with ! to make this a required field, or ? to keep it optional (the default).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • static.value

    Static field value, if field name configured.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • timestamp.field

    Field name for record timestamp. Suffix with ! to make this a required field, or ? to keep it optional (the default).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • topic.field

    Field name for Kafka topic. Suffix with ! to make this a required field, or ? to keep it optional (the default).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
org.apache.kafka.connect.transforms.ReplaceField
Filter or rename fields. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.ReplaceField$Key) or value (org.apache.kafka.connect.transforms.ReplaceField$Value).
  • exclude

    Fields to exclude. This takes precedence over the fields to include.

    Type:list
    Default:""
    Valid Values:
    Importance:medium
  • include

    Fields to include. If specified, only these fields will be used.

    Type:list
    Default:""
    Valid Values:
    Importance:medium
  • renames

    Field rename mappings.

    Type:list
    Default:""
    Valid Values:list of colon-delimited pairs, e.g. foo:bar,abc:xyz
    Importance:medium
  • blacklist

    Deprecated. Use exclude instead.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • whitelist

    Deprecated. Use include instead.

    Type:list
    Default:null
    Valid Values:
    Importance:low
org.apache.kafka.connect.transforms.MaskField
Mask specified fields with a valid null value for the field type (i.e. 0, false, empty string, and so on). For numeric and string fields, an optional replacement value can be specified that is converted to the correct type. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.MaskField$Key) or value (org.apache.kafka.connect.transforms.MaskField$Value).
  • fields

    Names of fields to mask.

    Type:list
    Default:
    Valid Values:non-empty list
    Importance:high
  • replacement

    Custom value replacement, that will be applied to all 'fields' values (numeric or non-empty string values only).

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:low
org.apache.kafka.connect.transforms.ValueToKey
Replace the record key with a new key formed from a subset of fields in the record value.
  • fields

    Field names on the record value to extract as the record key.

    Type:list
    Default:
    Valid Values:non-empty list
    Importance:high
org.apache.kafka.connect.transforms.HoistField
Wrap data using the specified field name in a Struct when schema present, or a Map in the case of schemaless data. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.HoistField$Key) or value (org.apache.kafka.connect.transforms.HoistField$Value).
  • field

    Field name for the single field that will be created in the resulting Struct or Map.

    Type:string
    Default:
    Valid Values:
    Importance:medium
org.apache.kafka.connect.transforms.ExtractField
Extract the specified field from a Struct when schema present, or a Map in the case of schemaless data. Any null values are passed through unmodified. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.ExtractField$Key) or value (org.apache.kafka.connect.transforms.ExtractField$Value).
  • field

    Field name to extract.

    Type:string
    Default:
    Valid Values:
    Importance:medium
org.apache.kafka.connect.transforms.SetSchemaMetadata
Set the schema name, version or both on the record's key (org.apache.kafka.connect.transforms.SetSchemaMetadata$Key) or value (org.apache.kafka.connect.transforms.SetSchemaMetadata$Value) schema.
  • schema.name

    Schema name to set.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • schema.version

    Schema version to set.

    Type:int
    Default:null
    Valid Values:
    Importance:high
org.apache.kafka.connect.transforms.TimestampRouter
Update the record's topic field as a function of the original topic value and the record timestamp. This is mainly useful for sink connectors, since the topic field is often used to determine the equivalent entity name in the destination system (e.g. database table or search index name).
  • timestamp.format

    Format string for the timestamp that is compatible with java.text.SimpleDateFormat.

    Type:string
    Default:yyyyMMdd
    Valid Values:
    Importance:high
  • topic.format

    Format string which can contain ${topic} and ${timestamp} as placeholders for the topic and timestamp, respectively.

    Type:string
    Default:${topic}-${timestamp}
    Valid Values:
    Importance:high
org.apache.kafka.connect.transforms.RegexRouter
Update the record topic using the configured regular expression and replacement string. Under the hood, the regex is compiled to a java.util.regex.Pattern. If the pattern matches the input topic, java.util.regex.Matcher#replaceFirst() is used with the replacement string to obtain the new topic.
  • regex

    Regular expression to use for matching.

    Type:string
    Default:
    Valid Values:valid regex
    Importance:high
  • replacement

    Replacement string.

    Type:string
    Default:
    Valid Values:
    Importance:high
org.apache.kafka.connect.transforms.Flatten
Flatten a nested data structure, generating names for each field by concatenating the field names at each level with a configurable delimiter character. Applies to Struct when schema present, or a Map in the case of schemaless data. Array fields and their contents are not modified. The default delimiter is '.'. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.Flatten$Key) or value (org.apache.kafka.connect.transforms.Flatten$Value).
  • delimiter

    Delimiter to insert between field names from the input record when generating field names for the output record

    Type:string
    Default:.
    Valid Values:
    Importance:medium
org.apache.kafka.connect.transforms.Cast
Cast fields or the entire key or value to a specific type, e.g. to force an integer field to a smaller width. Cast from integers, floats, boolean and string to any other type, and cast binary to string (base64 encoded). Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.Cast$Key) or value (org.apache.kafka.connect.transforms.Cast$Value).
  • spec

    List of fields and the type to cast them to of the form field1:type,field2:type to cast fields of Maps or Structs. A single type to cast the entire value. Valid types are int8, int16, int32, int64, float32, float64, boolean, and string. Note that binary fields can only be cast to string.

    Type:list
    Default:
    Valid Values:list of colon-delimited pairs, e.g. foo:bar,abc:xyz
    Importance:high
org.apache.kafka.connect.transforms.TimestampConverter
Convert timestamps between different formats such as Unix epoch, strings, and Connect Date/Timestamp types. Applies to individual fields or to the entire value. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.TimestampConverter$Key) or value (org.apache.kafka.connect.transforms.TimestampConverter$Value).
  • target.type

    The desired timestamp representation: string, unix, Date, Time, or Timestamp

    Type:string
    Default:
    Valid Values:[string, unix, Date, Time, Timestamp]
    Importance:high
  • field

    The field containing the timestamp, or empty if the entire value is a timestamp

    Type:string
    Default:""
    Valid Values:
    Importance:high
  • format

    A SimpleDateFormat-compatible format for the timestamp. Used to generate the output when type=string or used to parse the input if the input is a string.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • unix.precision

    The desired Unix precision for the timestamp: seconds, milliseconds, microseconds, or nanoseconds. Used to generate the output when type=unix or used to parse the input if the input is a Long. Note: This SMT will cause precision loss during conversions from, and to, values with sub-millisecond components.

    Type:string
    Default:milliseconds
    Valid Values:[nanoseconds, microseconds, milliseconds, seconds]
    Importance:low
org.apache.kafka.connect.transforms.Filter
Drops all records, filtering them from subsequent transformations in the chain. This is intended to be used conditionally to filter out records matching (or not matching) a particular Predicate.
org.apache.kafka.connect.transforms.InsertHeader
Add a header to each record.
  • header

    The name of the header.

    Type:string
    Default:
    Valid Values:non-null string
    Importance:high
  • value.literal

    The literal value that is to be set as the header value on all records.

    Type:string
    Default:
    Valid Values:non-null string
    Importance:high
org.apache.kafka.connect.transforms.DropHeaders
Removes one or more headers from each record.
  • headers

    The name of the headers to be removed.

    Type:list
    Default:
    Valid Values:non-empty list
    Importance:high
org.apache.kafka.connect.transforms.HeaderFrom
Moves or copies fields in the key/value of a record into that record's headers. Corresponding elements of fields and headers together identify a field and the header it should be moved or copied to. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.HeaderFrom$Key) or value (org.apache.kafka.connect.transforms.HeaderFrom$Value).
  • fields

    Field names in the record whose values are to be copied or moved to headers.

    Type:list
    Default:
    Valid Values:non-empty list
    Importance:high
  • headers

    Header names, in the same order as the field names listed in the fields configuration property.

    Type:list
    Default:
    Valid Values:non-empty list
    Importance:high
  • operation

    Either move if the fields are to be moved to the headers (removed from the key/value), or copy if the fields are to be copied to the headers (retained in the key/value).

    Type:string
    Default:
    Valid Values:[move, copy]
    Importance:high
Predicates

Transformations can be configured with predicates so that the transformation is applied only to messages which satisfy some condition. In particular, when combined with the Filter transformation predicates can be used to selectively filter out certain messages.

Predicates are specified in the connector configuration.

  • predicates - Set of aliases for the predicates to be applied to some of the transformations.
  • predicates.$alias.type - Fully qualified class name for the predicate.
  • predicates.$alias.$predicateSpecificConfig - Configuration properties for the predicate.

All transformations have the implicit config properties predicate and negate. A particular predicate is associated with a transformation by setting the transformation's predicate config to the predicate's alias. The predicate's value can be reversed using the negate configuration property.

For example, suppose you have a source connector which produces messages to many different topics and you want to:

  • filter out the messages in the 'foo' topic entirely
  • apply the ExtractField transformation with the field name 'other_field' to records in all topics except the topic 'bar'

To do this we need first to filter out the records destined for the topic 'foo'. The Filter transformation removes records from further processing, and can use the TopicNameMatches predicate to apply the transformation only to records in topics which match a certain regular expression. TopicNameMatches's only configuration property is pattern, which is a Java regular expression for matching against the topic name. The configuration would look like this:

transforms=Filter
transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
transforms.Filter.predicate=IsFoo

predicates=IsFoo
predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsFoo.pattern=foo

Next we need to apply ExtractField only when the topic name of the record is not 'bar'. We can't just use TopicNameMatches directly, because that would apply the transformation to matching topic names, not topic names which do not match. The transformation's implicit negate config property allows us to invert the set of records which a predicate matches. Adding the configuration for this to the previous example we arrive at:

transforms=Filter,Extract
transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
transforms.Filter.predicate=IsFoo

transforms.Extract.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.Extract.field=other_field
transforms.Extract.predicate=IsBar
transforms.Extract.negate=true

predicates=IsFoo,IsBar
predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsFoo.pattern=foo

predicates.IsBar.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsBar.pattern=bar

Kafka Connect includes the following predicates:

  • TopicNameMatches - matches records in a topic with a name matching a particular Java regular expression.
  • HasHeaderKey - matches records which have a header with the given key.
  • RecordIsTombstone - matches tombstone records, that is records with a null value.

Details on how to configure each predicate are listed below:

org.apache.kafka.connect.transforms.predicates.HasHeaderKey
A predicate which is true for records with at least one header with the configured name.
  • name

    The header name.

    Type:string
    Default:
    Valid Values:non-empty string
    Importance:medium
org.apache.kafka.connect.transforms.predicates.RecordIsTombstone
A predicate which is true for records which are tombstones (i.e. have null value).
org.apache.kafka.connect.transforms.predicates.TopicNameMatches
A predicate which is true for records with a topic name that matches the configured regular expression.
  • pattern

    A Java regular expression for matching against the name of a record's topic.

    Type:string
    Default:
    Valid Values:non-empty string, valid regex
    Importance:medium

REST API

Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. This REST API is available in both standalone and distributed mode. The REST API server can be configured using the listeners configuration option. This field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https. For example:

listeners=http://localhost:8080,https://localhost:8443

By default, if no listeners are specified, the REST server runs on port 8083 using the HTTP protocol. When using HTTPS, the configuration has to include the SSL configuration. By default, it will use the ssl.* settings. In case it is needed to use a different configuration for the REST API than for connecting to Kafka brokers, the fields can be prefixed with listeners.https. When using the prefix, only the prefixed options will be used and the ssl.* options without the prefix will be ignored. The following fields can be used to configure HTTPS for the REST API (a combined sketch follows this list):

  • ssl.keystore.location
  • ssl.keystore.password
  • ssl.keystore.type
  • ssl.key.password
  • ssl.truststore.location
  • ssl.truststore.password
  • ssl.truststore.type
  • ssl.enabled.protocols
  • ssl.provider
  • ssl.protocol
  • ssl.cipher.suites
  • ssl.keymanager.algorithm
  • ssl.secure.random.implementation
  • ssl.trustmanager.algorithm
  • ssl.endpoint.identification.algorithm
  • ssl.client.auth
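
A hypothetical sketch of a worker that uses a dedicated keystore for its HTTPS listener via the listeners.https prefix (host name, paths and passwords are placeholders):

listeners=https://connect-worker1:8443
listeners.https.ssl.keystore.location=/path/to/connect/keystore.jks
listeners.https.ssl.keystore.password=connect-ks-passwd
listeners.https.ssl.key.password=connect-key-passwd
listeners.https.ssl.truststore.location=/path/to/connect/truststore.jks
listeners.https.ssl.truststore.password=connect-ts-passwd
listeners.https.ssl.client.auth=required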

The REST API is used not only by users to monitor / manage Kafka Connect. In distributed mode, it is also used for the Kafka Connect cross-cluster communication. Some requests received on the follower nodes REST API will be forwarded to the leader node REST API. In case the URI under which a given host is reachable is different from the URI which it listens on, the configuration options rest.advertised.host.name, rest.advertised.port and rest.advertised.listener can be used to change the URI which will be used by the follower nodes to connect with the leader. When using both HTTP and HTTPS listeners, the rest.advertised.listener option can also be used to define which listener will be used for the cross-cluster communication. When using HTTPS for communication between nodes, the same ssl.* or listeners.https options will be used to configure the HTTPS client.
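
For example, a sketch where follower workers should reach this node through an advertised HTTPS address that differs from the bind address (the host name is a placeholder):

rest.advertised.host.name=connect-worker1.internal
rest.advertised.port=8443
rest.advertised.listener=https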

The following are the currently supported REST API endpoints:

  • GET /connectors - return a list of active connectors
  • POST /connectors - create a new connector; the request body should be a JSON object containing a string name field and an object config field with the connector configuration parameters (see the curl sketch after this list)
  • GET /connectors/{name} - get information about a specific connector
  • GET /connectors/{name}/config - get the configuration parameters for a specific connector
  • PUT /connectors/{name}/config - update the configuration parameters for a specific connector
  • GET /connectors/{name}/status - get current status of the connector, including if it is running, failed, paused, etc., which worker it is assigned to, error information if it has failed, and the state of all its tasks
  • GET /connectors/{name}/tasks - get a list of tasks currently running for a connector
  • GET /connectors/{name}/tasks/{taskid}/status - get current status of the task, including if it is running, failed, paused, etc., which worker it is assigned to, and error information if it has failed
  • PUT /connectors/{name}/pause - pause the connector and its tasks, which stops message processing until the connector is resumed. Any resources claimed by its tasks are left allocated, which allows the connector to begin processing data quickly once it is resumed.
  • PUT /connectors/{name}/stop - stop the connector and shut down its tasks, deallocating any resources claimed by its tasks. This is more efficient from a resource usage standpoint than pausing the connector, but can cause it to take longer to begin processing data once resumed.
  • PUT /connectors/{name}/resume - resume a paused or stopped connector (or do nothing if the connector is not paused or stopped)
  • POST /connectors/{name}/restart?includeTasks=<true|false>&onlyFailed=<true|false> - restart a connector and its tasks instances.
    • the "includeTasks" parameter specifies whether to restart the connector instance and task instances ("includeTasks=true") or just the connector instance ("includeTasks=false"), with the default ("false") preserving the same behavior as earlier versions.
    • the "onlyFailed" parameter specifies whether to restart just the instances with a FAILED status ("onlyFailed=true") or all instances ("onlyFailed=false"), with the default ("false") preserving the same behavior as earlier versions.
  • POST /connectors/{name}/tasks/{taskId}/restart - restart an individual task (typically because it has failed)
  • DELETE /connectors/{name} - delete a connector, halting all tasks and deleting its configuration
  • GET /connectors/{name}/topics - get the set of topics that a specific connector is using since the connector was created or since a request to reset its set of active topics was issued
  • PUT /connectors/{name}/topics/reset - send a request to empty the set of active topics of a connector
  • GET /connectors/{name}/offsets - get the current offsets for a connector (see KIP-875 for more details)
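
For example, a minimal request body for POST /connectors that creates a FileStreamSource connector could look like the following; the connector name, file, and topic values are illustrative:

{
  "name": "local-file-source",
  "config": {
    "connector.class": "FileStreamSource",
    "tasks.max": "1",
    "file": "/tmp/test.txt",
    "topic": "connect-test"
  }
}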

Kafka Connect also provides a REST API for getting information about connector plugins:

  • GET /connector-plugins - return a list of connector plugins installed in the Kafka Connect cluster. Note that the API only checks for connectors on the worker that handles the request, which means you may see inconsistent results, especially during a rolling upgrade if you add new connector jars
  • PUT /connector-plugins/{connector-type}/config/validate - validate the provided configuration values against the configuration definition. This API performs per config validation, returns suggested values and error messages during validation.

The following is a supported REST request at the top-level (root) endpoint:

  • GET / - return basic information about the Kafka Connect cluster such as the version of the Connect worker that serves the REST request (including git commit ID of the source code) and the ID of the Kafka cluster it is connected to.

For the complete specification of the REST API, see the OpenAPI documentation.

Error Reporting in Connect

Kafka Connect provides error reporting to handle errors encountered along various stages of processing. By default, any error encountered during conversion or within transformations will cause the connector to fail. Each connector configuration can also enable tolerating such errors by skipping them, optionally writing each error and the details of the failed operation and problematic record (with various levels of detail) to the Connect application log. These mechanisms also capture errors when a sink connector is processing the messages consumed from its Kafka topics, and all of the errors can be written to a configurable "dead letter queue" (DLQ) Kafka topic.

To report errors within a connector's converter, transforms, or within the sink connector itself to the log, set errors.log.enable=true in the connector configuration to log details of each error and problem record's topic, partition, and offset. For additional debugging purposes, set errors.log.include.messages=true to also log the problem record key, value, and headers to the log (note this may log sensitive information).

To report errors within a connector's converter, transforms, or within the sink connector itself to a dead letter queue topic, set errors.deadletterqueue.topic.name, and optionally errors.deadletterqueue.context.headers.enable=true.

By default connectors exhibit "fail fast" behavior immediately upon an error or exception. This is equivalent to adding the following configuration properties with their defaults to a connector configuration:

# disable retries on failure
errors.retry.timeout=0

# do not log the error and their contexts
errors.log.enable=false

# do not record errors in a dead letter queue topic
errors.deadletterqueue.topic.name=

# Fail on first error
errors.tolerance=none

These and other related connector configuration properties can be changed to provide different behavior. For example, the following configuration properties can be added to a connector configuration to set up error handling with multiple retries, logging to the application logs and the my-connector-errors Kafka topic, and tolerating all errors by reporting them rather than failing the connector task:

# retry for at most 10 minutes, waiting up to 30 seconds between consecutive failures
errors.retry.timeout=600000
errors.retry.delay.max.ms=30000

# log error context along with application logs, but do not include configs and messages
errors.log.enable=true
errors.log.include.messages=false

# produce error context into the Kafka topic
errors.deadletterqueue.topic.name=my-connector-errors

# Tolerate all errors.
errors.tolerance=all

Exactly-once support

Kafka Connect is capable of providing exactly-once semantics for sink connectors (as of version 0.11.0) and source connectors (as of version 3.3.0). Please note that support for exactly-once semantics is highly dependent on the type of connector you run. Even if you set all the correct worker properties in the configuration for each node in a cluster, if a connector is not designed to, or cannot take advantage of the capabilities of the Kafka Connect framework, exactly-once may not be possible.

Sink connectors

If a sink connector supports exactly-once semantics, to enable exactly-once at the Connect worker level, you must ensure its consumer group is configured to ignore records in aborted transactions. You can do this by setting the worker property consumer.isolation.level to read_committed or, if running a version of Kafka Connect that supports it, using a connector client config override policy that allows the consumer.override.isolation.level property to be set to read_committed in individual connector configs. There are no additional ACL requirements.
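
As a minimal sketch, the two options described above look like this; choose one depending on whether the setting is applied cluster-wide or per connector:

# In the worker configuration (applies to all sink connectors on the worker):
consumer.isolation.level=read_committed

# Or, in an individual connector configuration (requires a connector client
# config override policy that permits this override):
consumer.override.isolation.level=read_committed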

Source connectors

If a source connector supports exactly-once semantics, you must configure your Connect cluster to enable framework-level support for exactly-once source connectors. Additional ACLs may be necessary if running against a secured Kafka cluster. Note that exactly-once support for source connectors is currently only available in distributed mode; standalone Connect workers cannot provide exactly-once semantics.

Worker configuration

For new Connect clusters, set the exactly.once.source.support property to enabled in the worker config for each node in the cluster. For existing clusters, two rolling upgrades are necessary. During the first upgrade, the exactly.once.source.support property should be set to preparing, and during the second, it should be set to enabled.
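
In worker configuration terms, the upgrade sequence described above is simply:

# First rolling upgrade of an existing cluster:
exactly.once.source.support=preparing

# Second rolling upgrade (and the value to use directly for new clusters):
exactly.once.source.support=enabled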

ACL requirements

With exactly-once source support enabled, the principal for each Connect worker will require the following ACLs:

Operation | Resource Type | Resource Name | Note
Write | TransactionalId | connect-cluster-${groupId}, where ${groupId} is the group.id of the cluster
Describe | TransactionalId | connect-cluster-${groupId}, where ${groupId} is the group.id of the cluster
IdempotentWrite | Cluster | ID of the Kafka cluster that hosts the worker's config topic | The IdempotentWrite ACL has been deprecated as of 2.8 and will only be necessary for Connect clusters running on pre-2.8 Kafka clusters

And the principal for each individual connector will require the following ACLs:

Operation | Resource Type | Resource Name | Note
Write | TransactionalId | ${groupId}-${connector}-${taskId}, for each task that the connector will create, where ${groupId} is the group.id of the Connect cluster, ${connector} is the name of the connector, and ${taskId} is the ID of the task (starting from zero) | A wildcard prefix of ${groupId}-${connector}* can be used for convenience if there is no risk of conflict with other transactional IDs or if conflicts are acceptable to the user
Describe | TransactionalId | ${groupId}-${connector}-${taskId}, for each task that the connector will create, where ${groupId} is the group.id of the Connect cluster, ${connector} is the name of the connector, and ${taskId} is the ID of the task (starting from zero) | A wildcard prefix of ${groupId}-${connector}* can be used for convenience if there is no risk of conflict with other transactional IDs or if conflicts are acceptable to the user
Write | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector's configuration if provided, or the value of the offsets.storage.topic property in the worker's configuration if not
Read | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector's configuration if provided, or the value of the offsets.storage.topic property in the worker's configuration if not
Describe | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector's configuration if provided, or the value of the offsets.storage.topic property in the worker's configuration if not
Create | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector's configuration if provided, or the value of the offsets.storage.topic property in the worker's configuration if not | Only necessary if the offsets topic for the connector does not exist yet
IdempotentWrite | Cluster | ID of the Kafka cluster that the source connector writes to | The IdempotentWrite ACL has been deprecated as of 2.8 and will only be necessary for Connect clusters running on pre-2.8 Kafka clusters

8.3 Connector Development Guide

This guide describes how developers can write new connectors for Kafka Connect to move data between Kafka and other systems. It briefly reviews a few key concepts and then describes how to create a simple connector.

Core Concepts and APIs

Connectors and Tasks

To copy data between Kafka and another system, users create a Connector for the system they want to pull data from or push data to. Connectors come in two flavors: SourceConnectors import data from another system (e.g. JDBCSourceConnector would import a relational database into Kafka) and SinkConnectors export data (e.g. HDFSSinkConnector would export the contents of a Kafka topic to an HDFS file).

Connectors do not perform any data copying themselves: their configuration describes the data to be copied, and the Connector is responsible for breaking that job into a set of Tasks that can be distributed to workers. These Tasks also come in two corresponding flavors: SourceTask and SinkTask.

With an assignment in hand, each Task must copy its subset of the data to or from Kafka. In Kafka Connect, it should always be possible to frame these assignments as a set of input and output streams consisting of records with consistent schemas. Sometimes this mapping is obvious: each file in a set of log files can be considered a stream with each parsed line forming a record using the same schema and offsets stored as byte offsets in the file. In other cases it may require more effort to map to this model: a JDBC connector can map each table to a stream, but the offset is less clear. One possible mapping uses a timestamp column to generate queries incrementally returning new data, and the last queried timestamp can be used as the offset.

Streams and Records

Each stream should be a sequence of key-value records. Both the keys and values can have complex structure -- many primitive types are provided, but arrays, objects, and nested data structures can be represented as well. The runtime data format does not assume any particular serialization format; this conversion is handled internally by the framework.

In addition to the key and value, records (both those generated by sources and those delivered to sinks) have associated stream IDs and offsets. These are used by the framework to periodically commit the offsets of data that have been processed so that in the event of failures, processing can resume from the last committed offsets, avoiding unnecessary reprocessing and duplication of events.

Dynamic Connectors

Not all jobs are static, so Connector implementations are also responsible for monitoring the external system for any changes that might require reconfiguration. For example, in the JDBCSourceConnector example, the Connector might assign a set of tables to each Task. When a new table is created, it must discover this so it can assign the new table to one of the Tasks by updating its configuration. When it notices a change that requires reconfiguration (or a change in the number of Tasks), it notifies the framework and the framework updates any corresponding Tasks.

Developing a Simple Connector

Developing a connector only requires implementing two interfaces, the Connector and Task. A simple example is included with the source code for Kafka in the file package. This connector is meant for use in standalone mode and has implementations of a SourceConnector/SourceTask to read each line of a file and emit it as a record and a SinkConnector/SinkTask that writes each record to a file.

The rest of this section will walk through some code to demonstrate the key steps in creating a connector, but developers should also refer to the full example source code as many details are omitted for brevity.

Connector Example

We'll cover the SourceConnector as a simple example. SinkConnector implementations are very similar. Start by creating the class that inherits from SourceConnector and add a field that will store the configuration information to be propagated to the task(s) (the topic to send data to, and optionally - the filename to read from and the maximum batch size):

public class FileStreamSourceConnector extends SourceConnector {
    private Map<String, String> props;

The easiest method to fill in is taskClass(), which defines the class that should be instantiated in worker processes to actually read the data:

@Override
public Class<? extends Task> taskClass() {
    return FileStreamSourceTask.class;
}

We will define the FileStreamSourceTask class below. Next, we add some standard lifecycle methods, start() and stop():

@Override
public void start(Map<String, String> props) {
    // Initialization logic and setting up of resources can take place in this method.
    // This connector doesn't need to do any of that, but we do log a helpful message to the user.

    this.props = props;
    AbstractConfig config = new AbstractConfig(CONFIG_DEF, props);
    String filename = config.getString(FILE_CONFIG);
    filename = (filename == null || filename.isEmpty()) ? "standard input" : config.getString(FILE_CONFIG);
    log.info("Starting file source connector reading from {}", filename);
}

@Override
public void stop() {
    // Nothing to do since no background monitoring is required.
}

Finally, the real core of the implementation is in taskConfigs(). In this case we are only handling a single file, so even though we may be permitted to generate more tasks as per the maxTasks argument, we return a list with only one entry:

@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
    // Note that the task configs could contain configs additional to or different from the connector configs if needed. For instance,
    // if different tasks have different responsibilities, or if different tasks are meant to process different subsets of the source data stream.
    ArrayList<Map<String, String>> configs = new ArrayList<>();
    // Only one input stream makes sense.
    configs.add(props);
    return configs;
}

Even with multiple tasks, this method implementation is usually pretty simple. It just has to determine the number of input tasks, which may require contacting the remote service it is pulling data from, and then divvy them up. Because some patterns for splitting work among tasks are so common, some utilities are provided in ConnectorUtils to simplify these cases.
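
For instance, a hypothetical multi-table source connector could split its tables across tasks with ConnectorUtils.groupPartitions(); the discoverTables() helper and the "tables" task config key below are assumptions made purely for illustration:

@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
    // Hypothetical: the list of tables discovered from the source system.
    List<String> tables = discoverTables();
    if (tables.isEmpty())
        return Collections.emptyList();

    // ConnectorUtils.groupPartitions splits the list into roughly equal groups.
    int numGroups = Math.min(tables.size(), maxTasks);
    List<List<String>> grouped = ConnectorUtils.groupPartitions(tables, numGroups);

    List<Map<String, String>> taskConfigs = new ArrayList<>(grouped.size());
    for (List<String> taskTables : grouped) {
        Map<String, String> taskConfig = new HashMap<>(props);
        // "tables" is a hypothetical task-level config key naming this task's assignment.
        taskConfig.put("tables", String.join(",", taskTables));
        taskConfigs.add(taskConfig);
    }
    return taskConfigs;
}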

Note that this simple example does not include dynamic input. See the discussion in the next section for how to trigger updates to task configs.

Task Example - Source Task

Next we'll describe the implementation of the corresponding SourceTask. The implementation is short, but too long to cover completely in this guide. We'll use pseudo-code to describe most of the implementation, but you can refer to the source code for the full example.

Just as with the connector, we need to create a class inheriting from the appropriate base Task class. It also has some standard lifecycle methods:

public class FileStreamSourceTask extends SourceTask {
    private String filename;
    private InputStream stream;
    private String topic;
    private int batchSize;

    @Override
    public void start(Map<String, String> props) {
        filename = props.get(FileStreamSourceConnector.FILE_CONFIG);
        stream = openOrThrowError(filename);
        topic = props.get(FileStreamSourceConnector.TOPIC_CONFIG);
        batchSize = Integer.parseInt(props.get(FileStreamSourceConnector.TASK_BATCH_SIZE_CONFIG));
    }

    @Override
    public synchronized void stop() {
        stream.close();
    }

These are slightly simplified versions, but show that these methods should be relatively simple and the only work they should perform is allocating or freeing resources. There are two points to note about this implementation. First, the start() method does not yet handle resuming from a previous offset, which will be addressed in a later section. Second, the stop() method is synchronized. This will be necessary because SourceTasks are given a dedicated thread which they can block indefinitely, so they need to be stopped with a call from a different thread in the Worker.

Next, we implement the main functionality of the task, the poll() method which gets events from the input system and returns a List<SourceRecord>:

@Override
public List<SourceRecord> poll() throws InterruptedException {
    try {
        ArrayList<SourceRecord> records = new ArrayList<>();
        while (streamValid(stream) && records.isEmpty()) {
            LineAndOffset line = readToNextLine(stream);
            if (line != null) {
                Map<String, Object> sourcePartition = Collections.singletonMap("filename", filename);
                Map<String, Object> sourceOffset = Collections.singletonMap("position", streamOffset);
                records.add(new SourceRecord(sourcePartition, sourceOffset, topic, Schema.STRING_SCHEMA, line));
                if (records.size() >= batchSize) {
                    return records;
                }
            } else {
                Thread.sleep(1);
            }
        }
        return records;
    } catch (IOException e) {
        // Underlying stream was killed, probably as a result of calling stop. Allow to return
        // null, and driving thread will handle any shutdown if necessary.
    }
    return null;
}

Again, we've omitted some details, but we can see the important steps: the poll() method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output SourceRecord with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the SourceRecord constructor can also include a specific output partition, a key, and headers.

Note that this implementation uses the normal Java InputStream interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic poll() interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.

Although not used in the example, SourceTask also provides two APIs to commit offsets in the source system: commit and commitRecord. The APIs are provided for source systems which have an acknowledgement mechanism for messages. Overriding these methods allows the source connector to acknowledge messages in the source system, either in bulk or individually, once they have been written to Kafka. The commit API stores the offsets in the source system, up to the offsets that have been returned by poll. The implementation of this API should block until the commit is complete. The commitRecord API saves the offset in the source system for each SourceRecord after it is written to Kafka. As Kafka Connect will record offsets automatically, SourceTasks are not required to implement them. In cases where a connector does need to acknowledge messages in the source system, only one of the APIs is typically required.
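
As a minimal sketch of the per-record variant, a task for a hypothetical source system with per-message acknowledgements might override commitRecord() like this; the ackClient object and the "message.id" offset key are assumptions, not part of the Connect API:

@Override
public void commitRecord(SourceRecord record, RecordMetadata metadata) throws InterruptedException {
    // Hypothetical: the message ID was stored in the record's source offset when it
    // was emitted from poll(). This method is called after the record has been
    // written to Kafka, so it is now safe to acknowledge it upstream.
    Object messageId = record.sourceOffset().get("message.id");
    if (messageId != null) {
        ackClient.acknowledge(messageId); // hypothetical client for the source system
    }
}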

Sink Tasks

The previous section described how to implement a simple SourceTask. Unlike SourceConnector and SinkConnector, SourceTask and SinkTask have very different interfaces because SourceTask uses a pull interface and SinkTask uses a push interface. Both share the common lifecycle methods, but the SinkTask interface is quite different:

public abstract class SinkTask implements Task {
    public void initialize(SinkTaskContext context) {
        this.context = context;
    }

    public abstract void put(Collection<SinkRecord> records);

    public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
    }

The SinkTask documentation contains full details, but this interface is nearly as simple as the SourceTask. The put() method should contain most of the implementation, accepting sets of SinkRecords, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The SinkRecords contain essentially the same information as SourceRecords: Kafka topic, partition, offset, the event key and value, and optional headers.

The flush() method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The offsets parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the flush() operation atomically commits the data and offsets to a final location in HDFS.
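
As a minimal sketch of the buffering pattern described above, a SinkTask for a hypothetical destination client might collect records in put() and write them as one batch in flush(); the storeClient object and its writeBatch method are assumptions:

private final List<SinkRecord> buffer = new ArrayList<>();

@Override
public void put(Collection<SinkRecord> records) {
    // Buffer records; the actual write happens in flush() so a whole batch is sent at once.
    buffer.addAll(records);
}

@Override
public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
    if (buffer.isEmpty())
        return;
    // Hypothetical client for the destination system; writeBatch is assumed to block
    // until the destination has acknowledged the write.
    storeClient.writeBatch(buffer);
    buffer.clear();
}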

Errant Record Reporter

When error reporting is enabled for a connector, the connector can use an ErrantRecordReporter to report problems with individual records sent to a sink connector. The following example shows how a connector's SinkTask subclass might obtain and use the ErrantRecordReporter, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:

private ErrantRecordReporter reporter;

@Override
public void start(Map<String, String> props) {
    ...
    try {
        reporter = context.errantRecordReporter(); // may be null if DLQ not enabled
    } catch (NoSuchMethodException | NoClassDefFoundError e) {
        // Will occur in Connect runtimes earlier than 2.6
        reporter = null;
    }
}

@Override
public void put(Collection<SinkRecord> records) {
    for (SinkRecord record: records) {
        try {
            // attempt to process and send record to data sink
            process(record);
        } catch(Exception e) {
            if (reporter != null) {
                // Send errant record to error reporter
                reporter.report(record, e);
            } else {
                // There's no error reporter, so fail
                throw new ConnectException("Failed on record", e);
            }
        }
    }
}

Resuming from Previous Offsets

The SourceTask implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.

To correctly resume upon startup, the task can use the SourceContext passed into its initialize() method to access the offset data. In initialize(), we would add a bit more code to read the offset (if it exists) and seek to that position:

stream = new FileInputStream(filename);
Map<String, Object> offset = context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
if (offset != null) {
    Long lastRecordedOffset = (Long) offset.get("position");
    if (lastRecordedOffset != null)
        seekToOffset(stream, lastRecordedOffset);
}

Of course, you might need to read many keys for each of the input streams. The OffsetStorageReader interface also allows you to issue bulk reads to efficiently load all offsets, then apply them by seeking each input stream to the appropriate position.
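
A sketch of that bulk pattern, reusing the FILENAME_FIELD and "position" keys from above and assuming hypothetical filenames, streamFor(), and seekToOffset() helpers:

// Build one source-partition map per input file, then load all of their offsets in a single call.
List<Map<String, String>> partitions = new ArrayList<>();
for (String file : filenames)
    partitions.add(Collections.singletonMap(FILENAME_FIELD, file));

Map<Map<String, String>, Map<String, Object>> offsets =
        context.offsetStorageReader().offsets(partitions);

for (Map.Entry<Map<String, String>, Map<String, Object>> entry : offsets.entrySet()) {
    Map<String, Object> offset = entry.getValue();
    if (offset == null)
        continue; // no committed offset for this file yet
    Long lastRecordedOffset = (Long) offset.get("position");
    if (lastRecordedOffset != null)
        seekToOffset(streamFor(entry.getKey().get(FILENAME_FIELD)), lastRecordedOffset);
}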

Exactly-once source connectors
Supporting exactly-once

With the passing of KIP-618, Kafka Connect supports exactly-once source connectors as of version 3.3.0. In order for a source connector to take advantage of this support, it must be able to provide meaningful source offsets for each record that it emits, and resume consumption from the external system at the exact position corresponding to any of those offsets without dropping or duplicating messages.

Defining transaction boundaries

By default, the Kafka Connect framework will create and commit a new Kafka transaction for each batch of records that a source task returns from its poll method. However, connectors can also define their own transaction boundaries, which can be enabled by users by setting the transaction.boundary property to connector in the config for the connector.

If enabled, the connector's tasks will have access to a TransactionContext from their SourceTaskContext, which they can use to control when transactions are aborted and committed.

For example, to commit a transaction at least every ten records:

private int recordsSent;

@Override
public void start(Map<String, String> props) {
    this.recordsSent = 0;
}

@Override
public List<SourceRecord> poll() {
    List<SourceRecord> records = fetchRecords();
    boolean shouldCommit = false;
    for (SourceRecord record : records) {
        if (++this.recordsSent >= 10) {
            shouldCommit = true;
        }
    }
    if (shouldCommit) {
        this.recordsSent = 0;
        this.context.transactionContext().commitTransaction();
    }
    return records;
}

Or to commit a transaction for exactly every tenth record:

private int recordsSent;

@Override
public void start(Map<String, String> props) {
    this.recordsSent = 0;
}

@Override
public List<SourceRecord> poll() {
    List<SourceRecord> records = fetchRecords();
    for (SourceRecord record : records) {
        if (++this.recordsSent % 10 == 0) {
            this.context.transactionContext().commitTransaction(record);
        }
    }
    return records;
}

Most connectors do not need to define their own transaction boundaries. However, it may be useful if files or objects in the source system are broken up into multiple source records, but should be delivered atomically. Additionally, it may be useful if it is impossible to give each source record a unique source offset, as long as every record with a given offset is delivered within a single transaction.

Note that if the user has not enabled connector-defined transaction boundaries in the connector configuration, the TransactionContext returned by context.transactionContext() will be null.

Validation APIs

A few additional preflight validation APIs can be implemented by source connector developers.

Some users may require exactly-once semantics from a connector. In this case, they may set the exactly.once.support property to required in the configuration for the connector. When this happens, the Kafka Connect framework will ask the connector whether it can provide exactly-once semantics with the specified configuration. This is done by invoking the exactlyOnceSupport method on the connector.

If a connector doesn't support exactly-once semantics, it should still implement this method to let users know for certain that it cannot provide exactly-once semantics:

@Override
public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
    // This connector cannot provide exactly-once semantics under any conditions
    return ExactlyOnceSupport.UNSUPPORTED;
}

Otherwise, a connector should examine the configuration, and return ExactlyOnceSupport.SUPPORTED if it can provide exactly-once semantics:

@Override
public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
    // This connector can always provide exactly-once semantics
    return ExactlyOnceSupport.SUPPORTED;
}

Additionally, if the user has configured the connector to define its own transaction boundaries, the Kafka Connect framework will ask the connector whether it can define its own transaction boundaries with the specified configuration, using the canDefineTransactionBoundaries method:

@Override
public ConnectorTransactionBoundaries canDefineTransactionBoundaries(Map<String, String> props) {
    // This connector can always define its own transaction boundaries
    return ConnectorTransactionBoundaries.SUPPORTED;
}

This method should only be implemented for connectors that can define their own transaction boundaries in some cases. If a connector is never able to define its own transaction boundaries, it does not need to implement this method.

Dynamic Input/Output Streams

Kafka Connect is intended to define bulk data copying jobs, such as copying an entire database rather than creating many jobs to copy each table individually. One consequence of this design is that the set of input or output streams for a connector can vary over time.

Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the ConnectorContext object that reconfiguration is necessary. For example, in a SourceConnector:

if (inputsChanged())
    this.context.requestTaskReconfiguration();

The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the SourceConnector this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.

Ideally this code for monitoring changes would be isolated to the Connector and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the Task encounters the issue before the Connector, which will be common if the Connector needs to poll for changes, the Task will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.

SinkConnectors usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. SinkTasks should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple SinkTasks seeing a new input stream for the first time and simultaneously trying to create the new resource. SinkConnectors, on the other hand, will generally require no special code for handling a dynamic set of streams.

Connect Configuration Validation

Kafka Connect allows you to validate connector configurations before submitting a connector to be executed and can provide feedback about errors and recommended values. To take advantage of this, connector developers need to provide an implementation of config() to expose the configuration definition to the framework.

The following code in FileStreamSourceConnector defines the configuration and exposes it to the framework.

static final ConfigDef CONFIG_DEF = new ConfigDef()
    .define(FILE_CONFIG, Type.STRING, null, Importance.HIGH, "Source filename. If not specified, the standard input will be used")
    .define(TOPIC_CONFIG, Type.STRING, ConfigDef.NO_DEFAULT_VALUE, new ConfigDef.NonEmptyString(), Importance.HIGH, "The topic to publish data to")
    .define(TASK_BATCH_SIZE_CONFIG, Type.INT, DEFAULT_TASK_BATCH_SIZE, Importance.LOW,
        "The maximum number of records the source task can read from the file each time it is polled");

public ConfigDef config() {
    return CONFIG_DEF;
}

The ConfigDef class is used for specifying the set of expected configurations. For each configuration, you can specify the name, the type, the default value, the documentation, the group information, the order in the group, the width of the configuration value and the name suitable for display in the UI. Plus, you can provide special validation logic used for single configuration validation by overriding the Validator class. Moreover, there may be dependencies between configurations; for example, the valid values and visibility of a configuration may change according to the values of other configurations. To handle this, ConfigDef allows you to specify the dependents of a configuration and to provide an implementation of Recommender to get valid values and set visibility of a configuration given the current configuration values.
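
As a minimal sketch (the config names and values below are hypothetical, not part of the FileStream connector), a custom Validator and Recommender could look like this:

// Rejects non-positive values for a hypothetical integer config.
static final ConfigDef.Validator POSITIVE = new ConfigDef.Validator() {
    @Override
    public void ensureValid(String name, Object value) {
        if (value == null || ((Integer) value) <= 0)
            throw new ConfigException(name, value, "must be a positive integer");
    }
};

// Recommends codecs for a hypothetical "compression.codec" config and only shows it
// when the hypothetical "compression.enabled" config is set to true.
static final ConfigDef.Recommender CODEC_RECOMMENDER = new ConfigDef.Recommender() {
    @Override
    public List<Object> validValues(String name, Map<String, Object> parsedConfig) {
        return Arrays.<Object>asList("gzip", "snappy", "none");
    }

    @Override
    public boolean visible(String name, Map<String, Object> parsedConfig) {
        return Boolean.TRUE.equals(parsedConfig.get("compression.enabled"));
    }
};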

Also, the validate() method in Connector provides a default validation implementation which returns a list of allowed configurations together with configuration errors and recommended values for each configuration. However, it does not use the recommended values for configuration validation. You may provide an override of the default implementation for customized configuration validation, which may use the recommended values.

Working with Schemas

The FileStream connectors are good examples because they are simple, but they also have trivially structured data -- each line is just a string. Almost all practical connectors will need schemas with more complex data formats.

To create more complex data, you'll need to work with the Kafka Connect data API. Most structured records will need to interact with two classes in addition to primitive types: Schema and Struct.

The API documentation provides a complete reference, but here is a simple example creating a Schema and Struct:

Schema schema = SchemaBuilder.struct().name(NAME)
    .field("name", Schema.STRING_SCHEMA)
    .field("age", Schema.INT_SCHEMA)
    .field("admin", SchemaBuilder.bool().defaultValue(false).build())
    .build();

Struct struct = new Struct(schema)
    .put("name", "Barbara Liskov")
    .put("age", 75);

If you are implementing a source connector, you'll need to decide when and how to create schemas. Where possible, you should avoid recomputing them. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.

However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an ALTER TABLE command. The connector must be able to detect these changes and react appropriately.

Sink connectors are usually simpler because they are consuming data and therefore do not need to create schemas. However, they should take just as much care to validate that the schemas they receive have the expected format. When the schema does not match -- usually indicating the upstream producer is generating invalid data that cannot be correctly translated to the destination system -- sink connectors should throw an exception to indicate this error to the system.

Kafka Connect Administration

Kafka Connect's REST layer provides a set of APIs to enable administration of the cluster. This includes APIs to view the configuration of connectors and the status of their tasks, as well as to alter their current behavior (e.g. changing configuration and restarting tasks).

When a connector is first submitted to the cluster, a rebalance is triggered between the Connect workers in order to distribute the load that consists of the tasks of the new connector. This same rebalancing procedure is also used when connectors increase or decrease the number of tasks they require, when a connector's configuration is changed, or when a worker is added or removed from the group as part of an intentional upgrade of the Connect cluster or due to a failure.

In versions prior to 2.3.0, the Connect workers would rebalance the full set of connectors and their tasks in the cluster as a simple way to make sure that each worker has approximately the same amount of work. This behavior can still be enabled by setting connect.protocol=eager.

Starting with 2.3.0, Kafka Connect is using by default a protocol that performs incremental cooperative rebalancing that incrementally balances the connectors and tasks across the Connect workers, affecting only tasks that are new, to be removed, or need to move from one worker to another. Other tasks are not stopped and restarted during the rebalance, as they would have been with the old protocol.

If a Connect worker leaves the group, intentionally or due to a failure, Connect waits for scheduled.rebalance.max.delay.ms before triggering a rebalance. This delay defaults to five minutes (300000 ms) to tolerate failures or upgrades of workers without immediately redistributing the load of a departing worker. If this worker returns within the configured delay, it gets its previously assigned tasks in full. However, this means that the tasks will remain unassigned until the time specified by scheduled.rebalance.max.delay.ms elapses. If a worker does not return within that time limit, Connect will reassign those tasks among the remaining workers in the Connect cluster.

The new Connect protocol is enabled when all the workers that form the Connect cluster are configured with connect.protocol=compatible, which is also the default value when this property is missing. Therefore, upgrading to the new Connect protocol happens automatically when all the workers upgrade to 2.3.0. A rolling upgrade of the Connect cluster will activate incremental cooperative rebalancing when the last worker joins on version 2.3.0.

You can use the REST API to view the current status of a connector and its tasks, including the ID of the worker to which each was assigned. For example, the GET /connectors/file-source/status request shows the status of a connector named file-source:

{
    "name": "file-source",
    "connector": {
        "state": "RUNNING",
        "worker_id": "192.168.1.208:8083"
    },
    "tasks": [
        {
        "id": 0,
        "state": "RUNNING",
        "worker_id": "192.168.1.209:8083"
        }
    ]
}

Connectors and their tasks publish status updates to a shared topic (configured with status.storage.topic) which all workers in the cluster monitor. Because the workers consume this topic asynchronously, there is typically a (short) delay before a state change is visible through the status API. The following states are possible for a connector or one of its tasks:

  • UNASSIGNED: The connector/task has not yet been assigned to a worker.
  • RUNNING: The connector/task is running.
  • PAUSED: The connector/task has been administratively paused.
  • FAILED: The connector/task has failed (usually by raising an exception, which is reported in the status output).
  • RESTARTING: The connector/task is either actively restarting or is expected to restart soon

In most cases, connector and task states will match, though they may be different for short periods of time when changes are occurring or if tasks have failed. For example, when a connector is first started, there may be a noticeable delay before the connector and its tasks have all transitioned to the RUNNING state. States will also diverge when tasks fail since Connect does not automatically restart failed tasks. To restart a connector/task manually, you can use the restart APIs listed above. Note that if you try to restart a task while a rebalance is taking place, Connect will return a 409 (Conflict) status code. You can retry after the rebalance completes, but it might not be necessary since rebalances effectively restart all the connectors and tasks in the cluster.

Starting with 2.5.0, Kafka Connect uses the status.storage.topic to also store information related to the topics that each connector is using. Connect Workers use these per-connector topic status updates to respond to requests to the REST endpoint GET /connectors/{name}/topics by returning the set of topic names that a connector is using. A request to the REST endpoint PUT /connectors/{name}/topics/reset resets the set of active topics for a connector and allows a new set to be populated, based on the connector's latest pattern of topic usage. Upon connector deletion, the set of the connector's active topics is also deleted. Topic tracking is enabled by default but can be disabled by setting topic.tracking.enable=false. If you want to disallow requests to reset the active topics of connectors during runtime, set the Worker property topic.tracking.allow.reset=false.

It's sometimes useful to temporarily stop the message processing of a connector. For example, if the remote system is undergoing maintenance, it would be preferable for source connectors to stop polling it for new data instead of filling logs with exception spam. For this use case, Connect offers a pause/resume API. While a source connector is paused, Connect will stop polling it for additional records. While a sink connector is paused, Connect will stop pushing new messages to it. The pause state is persistent, so even if you restart the cluster, the connector will not begin message processing again until the task has been resumed. Note that there may be a delay before all of a connector's tasks have transitioned to the PAUSED state since it may take time for them to finish whatever processing they were in the middle of when being paused. Additionally, failed tasks will not transition to the PAUSED state until they have been restarted.

9. Kafka Streams

Kafka Streams is a client library for processing and analyzing data stored in Kafka. It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, exactly-once processing semantics and simple yet efficient management of application state.

Kafka Streams has a low barrier to entry: You can quickly write and run a small-scale proof-of-concept on a single machine; and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads. Kafka Streams transparently handles the load balancing of multiple instances of the same application by leveraging Kafka's parallelism model.
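
As a minimal sketch of how a Kafka Streams application is assembled (the application ID, bootstrap server, and topic names below are placeholders), a topology that uppercases the values of one topic and writes the results to another can be expressed as:

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("input-topic")   // placeholder topic
       .mapValues(value -> value.toUpperCase())
       .to("output-topic");                     // placeholder topic

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
// Close the application cleanly on JVM shutdown.
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));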

To learn more about Kafka Streams, read this section.
