Events are organized and durably stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder. An example topic name could be "payments". Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events. Events in a topic can be read as often as needed—unlike traditional messaging systems, events are not deleted after consumption. Instead, you define for how long Kafka should retain your events through a per-topic configuration setting, after which old events will be discarded. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is perfectly fine.
Topics are partitioned, meaning a topic is spread over a number of "buckets" located on different Kafka brokers. This distributed placement of your data is very important for scalability because it allows client applications to both read and write the data from/to many brokers at the same time. When a new event is published to a topic, it is actually appended to one of the topic's partitions. Events with the same event key (e.g., a customer or vehicle ID) are written to the same partition, and Kafka guarantees that any consumer of a given topic-partition will always read that partition's events in exactly the same order as they were written.
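As an illustration of keyed writes, here is a minimal sketch using the Java Producer API; the broker address, topic name, key, and class name are placeholders rather than anything from the original text:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All events keyed "customer-42" land in the same partition of "payments",
            // so any consumer of that partition reads them in exactly write order.
            producer.send(new ProducerRecord<>("payments", "customer-42", "payment received"));
        }
    }
}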
To make your data fault-tolerant and highly available, every topic can be replicated, even across geo-regions or datacenters, so that there are always multiple brokers that have a copy of the data in case things go wrong or you want to do maintenance on the brokers. A common production setting is a replication factor of 3, i.e., there will always be three copies of your data. This replication is performed at the level of topic-partitions.
This primer should be sufficient for an introduction. The Design section of the documentation explains Kafka's various concepts in full detail, if you are interested.
In addition to command line tooling for management and administration tasks, Kafka has five core APIs for Java and Scala:
The Admin API to manage and inspect topics, brokers, and other Kafka objects.
The Producer API to publish (write) a stream of events to one or more Kafka topics.
The Consumer API to subscribe to (read) one or more topics and to process the stream of events produced to them.
The Kafka Streams API to implement stream processing applications and microservices. It provides higher-level functions to process event streams, including transformations, stateful operations like aggregations and joins, windowing, processing based on event-time, and more. Input is read from one or more topics in order to generate output to one or more topics, effectively transforming the input streams to output streams.
The Kafka Connect API to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka. For example, a connector to a relational database like PostgreSQL might capture every change to a set of tables. However, in practice, you typically don't need to implement your own connectors because the Kafka community already provides hundreds of ready-to-use connectors.
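As an illustration of the Kafka Streams API described above, here is a minimal sketch that reads one topic, transforms each value, and writes the result to another topic; the application id, broker address, topic names, and class name are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-sketch");     // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        // Transform each event and publish the result to another topic,
        // turning the input stream into an output stream.
        input.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start(); // runs until the JVM exits or streams.close() is called
    }
}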
Kafka works well as a replacement for a more traditional message broker.
Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc.).
In comparison to most messaging systems Kafka has better throughput, built-in partitioning, replication, and fault-tolerance, which makes it a good
solution for large scale message processing applications.
In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong
durability guarantees Kafka provides.
In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or
RabbitMQ.
The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds.
This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type.
These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or
offline data warehousing systems for offline processing and reporting.
Activity tracking is often very high volume as many activity messages are generated for each user page view.
Kafka is often used for operational monitoring data.
This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.
Many people use Kafka as a replacement for a log aggregation solution.
Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing.
Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages.
This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption.
In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication,
and much lower end-to-end latency.
Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then
aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic;
further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic;
a final processing stage might attempt to recommend this content to users.
Such processing pipelines create graphs of real-time data flows based on the individual topics.
Starting in 0.10.0.0, a lightweight but powerful stream processing library called Kafka Streams
is available in Apache Kafka to perform such data processing as described above.
Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and
Apache Samza.
Event sourcing is a style of application design where state changes are logged as a
time-ordered sequence of records. Kafka's support for very large stored log data makes it an excellent backend for an application built in this style.
Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing
mechanism for failed nodes to restore their data.
The log compaction feature in Kafka helps support this usage.
In this usage Kafka is similar to the Apache BookKeeper project.
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
This is my first event
This is my second event
You can stop the consumer client with Ctrl-C at any time.
Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
additional events, and see how the events immediately show up in your consumer terminal.
Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
You can easily verify this by opening yet another terminal session and re-running the previous command.
You probably have lots of data in existing systems like relational databases or traditional messaging systems,
along with many applications that already use these systems.
Kafka Connect allows you to continuously ingest
data from external systems into Kafka, and vice versa. It is an extensible tool that runs
connectors, which implement the custom logic for interacting with an external system.
It is thus very easy to integrate existing systems with Kafka. To make this process even easier,
there are hundreds of such connectors readily available.
In this quickstart we'll see how to run Kafka Connect with simple connectors that import data
from a file to a Kafka topic and export data from a Kafka topic to a file.
First, make sure to add connect-file-3.5.0.jar to the plugin.path property in the Connect worker's configuration.
For the purpose of this quickstart we'll use a relative path and consider the connectors' package as an uber jar, which works when the quickstart commands are run from the installation directory.
However, it's worth noting that for production deployments using absolute paths is always preferable. See plugin.path for a detailed description of how to set this config.
Edit the config/connect-standalone.properties file, add or change the plugin.path configuration property to match the following, and save the file:
plugin.path=libs/connect-file-3.5.0.jar
There are a plethora of tools that integrate with Kafka outside the main distribution. The ecosystem page lists many of these, including stream processing systems, Hadoop integration, monitoring, and deployment tools.
Upgraded the dependency, snappy-java, to a version which is not vulnerable to
CVE-2023-34455.
You can find more information about the CVE at Kafka CVE list.
Fixed a regression introduced in 3.3.0, which caused security.protocol configuration values to be restricted to
upper case only. After the fix, security.protocol values are case insensitive.
See KAFKA-15053 for details.
Kafka Streams has introduced a new state store type, versioned key-value stores,
for storing multiple record versions per key, thereby enabling timestamped retrieval
operations to return the latest record (per key) as of a specified timestamp.
See KIP-889
and KIP-914
for more details.
If the new store type is used in the DSL, improved processing semantics are applied as described in
KIP-914.
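As a rough sketch of how such a store can be created and queried, following the builder methods added by KIP-889 (the store name, retention period, and serdes below are illustrative, not from the original text):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.VersionedKeyValueStore;

public class VersionedStoreSketch {
    public static void main(String[] args) {
        // Persistent versioned store retaining one day of old record versions per key.
        StoreBuilder<VersionedKeyValueStore<String, String>> builder =
            Stores.versionedKeyValueStoreBuilder(
                Stores.persistentVersionedKeyValueStore("prices-versioned", Duration.ofDays(1)),
                Serdes.String(),
                Serdes.String());
        // Attached to a Topology via addStateStore(builder), a processor can then ask:
        //   VersionedRecord<String> asOf = store.get("some-key", someTimestampMs);
        // which returns the latest record version for the key as of that timestamp.
        System.out.println("Built store builder: " + builder.name());
    }
}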
KTable aggregation semantics have been further improved via
KIP-904,
now avoiding spurious intermediate results.
Kafka Streams' ProductionExceptionHandler is improved via
KIP-399,
now also covering serialization errors.
MirrorMaker now uses incrementalAlterConfigs API by default to synchronize topic configurations instead of the deprecated alterConfigs API.
A new setting called use.incremental.alter.configs is introduced to allow users to control which API to use.
This new setting is marked deprecated and will be removed in the next major release, when the incrementalAlterConfigs API will always be used.
See KIP-894 for more details.
The JmxTool, EndToEndLatency, StreamsResetter, ConsumerPerformance and ClusterTool have been migrated to the tools module.
The 'kafka.tools' package is deprecated and will change to 'org.apache.kafka.tools' in the next major release.
See KAFKA-14525 for more details.
If you are upgrading from a version prior to 2.1.x, please see the note in step 5 below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
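inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION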
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 3.5.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 3.5 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
If you are upgrading from a version prior to 3.3.0, please see the note in step 3 below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.
For a rolling upgrade:
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
Once the cluster's behavior and performance has been verified, bump the metadata.version by running
./bin/kafka-features.sh upgrade --metadata 3.5
Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
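inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION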
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 3.4.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 3.4 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
If you are upgrading from a version prior to 3.3.0, please see the note below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.
For a rolling upgrade:
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
Once the cluster's behavior and performance has been verified, bump the metadata.version by running
./bin/kafka-features.sh upgrade --metadata 3.4
Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.
Since Apache Kafka 3.4.0, we have added a system property ("org.apache.kafka.disallowed.login.modules") to disable the use of problematic
login modules in SASL JAAS configurations. Also, by default, "com.sun.security.auth.module.JndiLoginModule" is disabled as of Apache Kafka 3.4.0.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
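inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION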
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 3.3.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 3.3 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
If you are upgrading from a version prior to 3.3.1, please see the note below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.
For a rolling upgrade:
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
Once the cluster's behavior and performance has been verified, bump the metadata.version by running
./bin/kafka-features.sh upgrade --metadata 3.3
Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.
KRaft mode is production ready for new clusters. See KIP-833
for more details (including limitations).
The partitioner used by default for records with no keys has been improved to avoid pathological behavior when one or more brokers are slow.
The new logic may affect the batching behavior, which can be tuned using the batch.size and/or linger.ms configuration settings.
The previous behavior can be restored by setting partitioner.class=org.apache.kafka.clients.producer.internals.DefaultPartitioner.
See KIP-794 for more details.
There is now a slightly different upgrade process for KRaft clusters than for ZK-based clusters, as described above.
Introduced a new addMetricIfAbsent API to Metrics which creates a new Metric if not existing or returns the same metric
if already registered. Note that this behaviour is different from the addMetric API, which throws an IllegalArgumentException when
trying to create an already existing metric. (See KIP-843
for more details).
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
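inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION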
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 3.2.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 3.2 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
Idempotence for the producer is enabled by default if no conflicting configurations are set. When producing to brokers older than 2.8.0,
the IDEMPOTENT_WRITE permission is required. Check the compatibility
section of KIP-679
for details. In 3.0.0 and 3.1.0, a bug prevented this default from being applied,
which meant that idempotence remained disabled unless the user had explicitly set enable.idempotence to true
(see KAFKA-13598 for more details).
This issue was fixed and the default is properly applied in 3.0.1, 3.1.1, and 3.2.0.
A notable exception is Connect, which by default disables idempotent behavior for all of its
producers in order to uniformly support using a wide range of Kafka broker versions.
Users can change this behavior to enable idempotence for some or all producers
via Connect worker and/or connector configuration. Connect may enable idempotent producers
by default in a future major release.
Kafka has replaced log4j with reload4j due to security concerns.
This only affects modules that specify a logging backend (connect-runtime and kafka-tools are two such examples).
A number of modules, including kafka-clients, leave it to the application to specify the logging backend.
More information can be found at reload4j.
Projects that depend on the affected modules from the Kafka project should use
slf4j-log4j12 version 1.7.35 or above or
slf4j-reload4j to avoid
possible compatibility issues originating from the logging framework.
The example connectors, FileStreamSourceConnector and FileStreamSinkConnector, have been
removed from the default classpath. To use them in Kafka Connect standalone or distributed mode they need to be
explicitly added, for example CLASSPATH=./lib/connect-file-3.2.0.jar ./bin/connect-distributed.sh.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
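inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION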
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 3.1.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 3.1 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
Idempotence for the producer is enabled by default if no conflicting configurations are set. When producing to brokers older than 2.8.0,
the IDEMPOTENT_WRITE permission is required. Check the compatibility
section of KIP-679
for details. A bug prevented the producer idempotence default from being applied, which meant that it remained disabled unless the user had
explicitly set enable.idempotence to true. See KAFKA-13598 for
more details. This issue was fixed and the default is properly applied.
A notable exception is Connect, which by default disables idempotent behavior for all of its
producers in order to uniformly support using a wide range of Kafka broker versions.
Users can change this behavior to enable idempotence for some or all producers
via Connect worker and/or connector configuration. Connect may enable idempotent producers
by default in a future major release.
Kafka has replaced log4j with reload4j due to security concerns.
This only affects modules that specify a logging backend (connect-runtime and kafka-tools are two such examples).
A number of modules, including kafka-clients, leave it to the application to specify the logging backend.
More information can be found at reload4j.
Projects that depend on the affected modules from the Kafka project should use
slf4j-log4j12 version 1.7.35 or above or
slf4j-reload4j to avoid
possible compatibility issues originating from the logging framework.
The following metrics have been deprecated: bufferpool-wait-time-total, io-waittime-total,
and iotime-total. Please use bufferpool-wait-time-ns-total, io-wait-time-ns-total,
and io-time-ns-total instead. See KIP-773
for more details.
IBP 3.1 introduces topic IDs to FetchRequest as a part of
KIP-516.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
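inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION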
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 3.0.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 3.0 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
Idempotence for the producer is enabled by default if no conflicting configurations are set. When producing to brokers older than 2.8.0,
the IDEMPOTENT_WRITE permission is required. Check the compatibility
section of KIP-679
for details. A bug prevented the producer idempotence default from being applied, which meant that it remained disabled unless the user had
explicitly set enable.idempotence to true. See KAFKA-13598 for
more details. This issue was fixed and the default is properly applied.
The producer has stronger delivery guarantees by default: idempotence is enabled and acks is set to all instead of 1.
See KIP-679 for details.
In 3.0.0 and 3.1.0, a bug prevented the idempotence default from being applied, which meant that it remained disabled unless the user had explicitly set
enable.idempotence to true. Note that the bug did not affect the acks=all change. See KAFKA-13598 for more details.
This issue was fixed and the default is properly applied in 3.0.1, 3.1.1, and 3.2.0.
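For illustration, a minimal sketch that spells these defaults out explicitly, which also restores idempotence on the releases affected by KAFKA-13598; the broker address and class name are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerDefaults {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The 3.0 defaults, set explicitly: idempotence on and acks=all.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) calls now get idempotent, acks=all delivery semantics.
        }
    }
}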
Java 8 and Scala 2.12 support have been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0.
See KIP-750
and KIP-751 for more details.
ZooKeeper has been upgraded to version 3.6.3.
A preview of KRaft mode is available, though upgrading to it from the 2.8 Early Access release is not possible. See
the config/kraft/README.md file for details.
The release tarball no longer includes test, sources, javadoc and test sources jars. These are still published to the Maven Central repository.
The default value for the consumer configuration session.timeout.ms was increased from 10s to 45s. See
KIP-735 for more details.
The broker configuration log.message.format.version and topic configuration message.format.version have been deprecated.
The value of both configurations is always assumed to be 3.0 if inter.broker.protocol.version is 3.0 or higher.
If log.message.format.version or message.format.version are set, we recommend clearing them at the same time as the
inter.broker.protocol.version upgrade to 3.0. This will avoid potential compatibility issues if the
inter.broker.protocol.version is downgraded. See KIP-724 for more details.
The Streams API removed all APIs that were deprecated in version 2.5.0 or earlier.
For a complete list of removed APIs, see the detailed Kafka Streams upgrade notes.
Kafka Streams no longer has a compile time dependency on "connect:json" module (KAFKA-5146).
Projects that were relying on this transitive dependency will have to explicitly declare it.
Custom principal builder implementations specified through principal.builder.class must now implement the
KafkaPrincipalSerde interface to allow for forwarding between brokers. See KIP-590 for more details about the usage of KafkaPrincipalSerde.
A number of deprecated classes, methods and tools have been removed from the clients, connect, core and tools modules:
The Scala Authorizer, SimpleAclAuthorizer and related classes have been removed. Please use the Java
Authorizer and AclAuthorizer instead.
The Metric#value() method was removed (KAFKA-12573).
The Sum and Total classes were removed (KAFKA-12584).
Please use WindowedSum and CumulativeSum instead.
The Count and SampledTotal classes were removed. Please use WindowedCount and WindowedSum
respectively instead.
The PrincipalBuilder, DefaultPrincipalBuilder and ResourceFilter classes were removed.
Various constants and constructors were removed from SslConfigs, SaslConfigs, AclBinding and
AclBindingFilter.
The Admin.electPreferredLeaders() methods were removed. Please use Admin.electLeaders instead.
The kafka-preferred-replica-election command line tool was removed. Please use kafka-leader-election instead.
The --zookeeper option was removed from the kafka-topics and kafka-reassign-partitions command line tools.
Please use --bootstrap-server instead.
The ConfigEntry constructor was removed (KAFKA-12577).
Please use the remaining public constructor instead.
The default config value for the client.dns.lookup client config has been removed. In the unlikely
event that you set this config explicitly, we recommend leaving the config unset (use_all_dns_ips is used by default).
The ExtendedDeserializer and ExtendedSerializer classes have been removed. Please use
Deserializer and Serializer instead.
The close(long, TimeUnit) method was removed from the producer, consumer and admin client. Please use
close(Duration).
The ConsumerConfig.addDeserializerToConfig and ProducerConfig.addSerializerToConfig methods
were removed. These methods were not intended to be public API and there is no replacement.
The NoOffsetForPartitionException.partition() method was removed. Please use partitions()
instead.
The default partition.assignment.strategy is changed to "[RangeAssignor, CooperativeStickyAssignor]",
which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list.
Please check the client upgrade path guide here for more detail.
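As a hedged sketch of that upgrade path (consumer construction omitted; only the strategy config is shown, and the class name is a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.RangeAssignor;

public class AssignorUpgradeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The 3.0 default, spelled out: RangeAssignor is used, but
        // CooperativeStickyAssignor is already advertised by every member.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                RangeAssignor.class.getName() + "," + CooperativeStickyAssignor.class.getName());
        // A single rolling bounce that removes RangeAssignor from the list
        // switches the group to cooperative rebalancing:
        // props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        //         CooperativeStickyAssignor.class.getName());
    }
}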
The Scala kafka.common.MessageFormatter was removed. Please use the Java org.apache.kafka.common.MessageFormatter.
The MessageFormatter.init(Properties) method was removed. Please use configure(Map) instead.
The checksum() method has been removed from ConsumerRecord and RecordMetadata. The message
format v2, which has been the default since 0.11, moved the checksum from the record to the record batch. As such, these methods
don't make sense and no replacements exist.
The ChecksumMessageFormatter class was removed. It is not part of the public API, but it may have been used
with kafka-console-consumer.sh. It reported the checksum of each record, which has not been supported
since message format v2.
The org.apache.kafka.clients.consumer.internals.PartitionAssignor class has been removed. Please use
org.apache.kafka.clients.consumer.ConsumerPartitionAssignor instead.
The quota.producer.default and quota.consumer.default configurations were removed (KAFKA-12591).
Dynamic quota defaults must be used instead.
The port and host.name configurations were removed. Please use listeners instead.
The advertised.port and advertised.host.name configurations were removed. Please use advertised.listeners instead.
The deprecated worker configurations rest.host.name and rest.port were removed (KAFKA-12482) from the Kafka Connect worker configuration.
Please use listeners instead.
The Producer#sendOffsetsToTransaction(Map offsets, String consumerGroupId) method has been deprecated. Please use
Producer#sendOffsetsToTransaction(Map offsets, ConsumerGroupMetadata metadata) instead, where the
ConsumerGroupMetadata can be retrieved via KafkaConsumer#groupMetadata() for stronger semantics. Note that the full set of consumer group metadata is only
understood by brokers of version 2.5 or higher, so you must upgrade your Kafka cluster to get the stronger semantics. Otherwise, you can just pass
in new ConsumerGroupMetadata(consumerGroupId) to work with older brokers. See KIP-732 for more details.
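A minimal sketch of the new overload inside a transactional consume-process-produce loop (producer/consumer construction and the offsets map are assumed to exist; the class and method names are placeholders):

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.TopicPartition;

public class TxnOffsetsSketch {
    // Commit processed records and their consumed offsets in one transaction.
    static void commitBatch(KafkaProducer<String, String> producer,
                            KafkaConsumer<String, String> consumer,
                            Map<TopicPartition, OffsetAndMetadata> offsets) {
        producer.beginTransaction();
        // ... producer.send(...) calls for the processed records go here ...
        ConsumerGroupMetadata metadata = consumer.groupMetadata(); // stronger fencing semantics
        producer.sendOffsetsToTransaction(offsets, metadata);      // new, non-deprecated overload
        producer.commitTransaction();
    }
}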
The Connect internal.key.converter and internal.value.converter properties have been completely removed.
The use of these Connect worker properties has been deprecated since version 2.0.0.
Workers are now hardcoded to use the JSON converter with schemas.enable set to false. If your cluster has been using
a different internal key or value converter, you can follow the migration steps outlined in KIP-738
to safely upgrade your Connect cluster to 3.0.
The Connect-based MirrorMaker (MM2) includes changes to support IdentityReplicationPolicy, enabling replication without renaming topics.
The existing DefaultReplicationPolicy is still used by default, but identity replication can be enabled via the
replication.policy configuration property. This is especially useful for users migrating from the older MirrorMaker (MM1), or for
use-cases with simple one-way replication topologies where topic renaming is undesirable. Note that IdentityReplicationPolicy, unlike
DefaultReplicationPolicy, cannot prevent replication cycles based on topic names, so take care to avoid cycles when constructing your
replication topology.
The original MirrorMaker (MM1) and related classes have been deprecated. Please use the Connect-based
MirrorMaker (MM2), as described in the
Geo-Replication section.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
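inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION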
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.8.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.8 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
The 2.8.0 release added a new method to the Authorizer interface introduced in
KIP-679.
The motivation is to unblock our future plan to enable the strongest message delivery guarantee by default.
Custom authorizers should consider providing a more efficient implementation that supports audit logging and any custom configs or access rules.
IBP 2.8 introduces topic IDs to topics as a part of
KIP-516.
When using ZooKeeper, this information is stored in the TopicZNode. If the cluster is downgraded to a previous IBP or version,
future topics will not get topic IDs and it is not guaranteed that topics will retain their topic IDs in ZooKeeper.
This means that upon upgrading again, some or all topics will be assigned new IDs.
Kafka Streams introduces a type-safe split() operator as a substitute for the deprecated KStream#branch() method
(cf. KIP-418).
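A minimal sketch of the new operator (topic name, branch names, and predicate are placeholders):

import java.util.Map;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

public class SplitSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic"); // placeholder topic
        // split() names each branch; the result map keys are the prefix plus the branch name.
        Map<String, KStream<String, String>> branches = stream
                .split(Named.as("split-"))
                .branch((key, value) -> value.startsWith("A"), Branched.as("a"))
                .defaultBranch(Branched.as("rest"));
        KStream<String, String> aRecords = branches.get("split-a");
        KStream<String, String> theRest = branches.get("split-rest");
    }
}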
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
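inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION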
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.7.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.7 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
The 2.7.0 release includes the core Raft implementation specified in
KIP-595.
There is a separate "raft" module containing most of the logic. Until integration with the
controller is complete, there is a standalone server that users can use for testing the
performance of the Raft implementation. See the README.md in the raft module for details.
KIP-651 adds support
for using PEM files for key and trust stores.
KIP-612 adds support
for enforcing broker-wide and per-listener connection create rates. The 2.7.0 release contains
the first part of KIP-612 with dynamic configuration coming in the 2.8.0 release.
The ability to throttle topic and partition creation and topic deletion, to prevent a cluster
from being harmed, was added via
KIP-599.
When new features become available in Kafka there are two main issues:
How do Kafka clients become aware of broker capabilities?
How does the broker decide which features to enable?
KIP-584
provides a flexible and operationally easy solution for client discovery, feature gating and rolling upgrades using a single restart.
The ability to print record offsets and headers with the ConsoleConsumer is now possible
via KIP-431.
The addition of KIP-554
continues progress towards the goal of ZooKeeper removal from Kafka: you no longer have to connect
directly to ZooKeeper to manage SCRAM credentials.
Altering non-reconfigurable configs of existing listeners causes InvalidRequestException.
By contrast, the previous (unintended) behavior would have caused the updated configuration to be persisted,
but it wouldn't
take effect until the broker was restarted. See KAFKA-10479 for more discussion.
See DynamicBrokerConfig.DynamicSecurityConfigs and SocketServer.ListenerReconfigurableConfigs
for the supported reconfigurable configs of existing listeners.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
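inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION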
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.6.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.6 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
Kafka Streams adds a new processing mode (requires broker 2.5 or newer) that improves application
scalability using exactly-once guarantees
(cf. KIP-447).
TLSv1.3 has been enabled by default for Java 11 or newer. The client and server will negotiate TLSv1.3 if
both support it and fall back to TLSv1.2 otherwise. See
KIP-573 for more details.
The default value for the client.dns.lookup configuration has been changed from
default to use_all_dns_ips. If a hostname resolves to multiple IP addresses, clients and brokers will now
attempt to connect to each IP in sequence until the connection is successfully established. See
KIP-602
for more details.
NotLeaderForPartitionException has been deprecated and replaced with NotLeaderOrFollowerException.
Fetch requests and other requests intended only for the leader or follower return NOT_LEADER_OR_FOLLOWER(6) instead of REPLICA_NOT_AVAILABLE(9)
if the broker is not a replica, ensuring that this transient error during reassignments is handled by all clients as a retriable exception.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
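inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION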
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.5.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.5 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
There are several notable changes to the kafka-reassign-partitions.sh reassignment tool
following the completion of
KIP-455.
This tool now requires the --additional flag to be provided when changing the throttle of an
active reassignment. Reassignment cancellation is now possible using the
--cancel command. Finally, reassignment with
--zookeeper has been deprecated in favor of --bootstrap-server. See the KIP for more detail.
When RebalanceProtocol#COOPERATIVE is used, Consumer#poll can still return data
while it is in the middle of a rebalance for those partitions still owned by the consumer; in addition
Consumer#commitSync now may throw a non-fatal RebalanceInProgressException to notify
users of such an event, in order to distinguish it from the fatal CommitFailedException and allow
users to complete the ongoing rebalance and then reattempt committing offsets for those still-owned partitions.
For improved resiliency in typical network environments, the default value of
zookeeper.session.timeout.ms has been increased from 6s to 18s and
replica.lag.time.max.ms from 10s to 30s.
A new DSL operator cogroup() has been added for aggregating multiple streams together at once.
Added a new KStream.toTable() API to translate an input event stream into a KTable.
Added a new Serde type Void to represent null keys or null values from an input topic.
Deprecated UsePreviousTimeOnInvalidTimestamp and replaced it with UsePartitionTimeOnInvalidTimestamp.
Improved exactly-once semantics by adding a pending offset fencing mechanism and stronger transactional commit
consistency check, which greatly simplifies the implementation of a scalable exactly-once application.
We also added a new exactly-once semantics code example under the
examples folder. Check out
KIP-447
for the full details.
Added a new public API KafkaStreams.queryMetadataForKey(String, K, Serializer) to get detailed information on the key being queried.
It provides information about the partition number where the key resides in addition to hosts containing the active and standby partitions for the key.
Provided support to query stale stores (for high availability) and the stores belonging to a specific partition by deprecating KafkaStreams.store(String, QueryableStoreType) and replacing it with KafkaStreams.store(StoreQueryParameters).
Added a new public API to access lag information for stores local to an instance with KafkaStreams.allLocalStorePartitionLags().
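A minimal sketch of the new KStream.toTable() API from the list above (topic and class names are placeholders):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class ToTableSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> updates = builder.stream("user-updates"); // placeholder topic
        // toTable() treats the stream as a changelog: the latest value per key wins.
        KTable<String, String> latest = updates.toTable();
    }
}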
Scala 2.11 is no longer supported. See
KIP-531
for details.
All Scala classes from the kafka.security.auth package have been deprecated. See
KIP-504
for details of the new Java authorizer API added in 2.4.0. Note that
kafka.security.auth.Authorizer and kafka.security.auth.SimpleAclAuthorizer were deprecated in 2.4.0.
TLSv1 and TLSv1.1 have been disabled by default since these have known security vulnerabilities. Only TLSv1.2 is now
enabled by default. You can continue to use TLSv1 and TLSv1.1 by explicitly enabling these in the configuration options
ssl.protocol and ssl.enabled.protocols.
ZooKeeper has been upgraded to 3.5.7, and a ZooKeeper upgrade from 3.4.X to 3.5.7 can fail if there are no snapshot files in the 3.4 data directory.
This usually happens in test upgrades where ZooKeeper 3.5.7 is trying to load an existing 3.4 data dir in which no snapshot file has been created.
For more details about the issue please refer to ZOOKEEPER-3056.
A fix is given in ZOOKEEPER-3056, which is to set the
snapshot.trust.empty=true config in zookeeper.properties before the upgrade.
ZooKeeper version 3.5.7 supports TLS-encrypted connectivity to ZooKeeper both with or without client certificates,
and additional Kafka configurations are available to take advantage of this.
See KIP-515 for details.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
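inter.broker.protocol.version=CURRENT_KAFKA_VERSION
log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION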
If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.4.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.4 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
Additional Upgrade Notes:
ZooKeeper has been upgraded to 3.5.6. ZooKeeper upgrade from 3.4.X to 3.5.6 can fail if there are no snapshot files in 3.4 data directory.
This usually happens in test upgrades where ZooKeeper 3.5.6 is trying to load an existing 3.4 data dir in which no snapshot file has been created.
For more details about the issue please refer to ZOOKEEPER-3056.
A fix is given in ZOOKEEPER-3056, which is to set the snapshot.trust.empty=true
config in zookeeper.properties before the upgrade. However, we have observed data loss in standalone cluster upgrades when using the
snapshot.trust.empty=true config. For more details about the issue please refer to ZOOKEEPER-3644.
So we recommend the safe workaround of copying an empty snapshot file to the 3.4 data directory,
if there are no snapshot files in the 3.4 data directory. For more details about the workaround please refer to the ZooKeeper Upgrade FAQ.
An embedded Jetty-based AdminServer was added in ZooKeeper 3.5.
AdminServer is enabled by default in ZooKeeper and is started on port 8080.
AdminServer is disabled by default in the ZooKeeper config (zookeeper.properties) provided by the Apache Kafka distribution.
Make sure to update your local zookeeper.properties file with admin.enableServer=false if you wish to disable the AdminServer.
Please refer to the AdminServer config to configure the AdminServer.
A new Admin API has been added for partition reassignments. Due to changing the way Kafka propagates reassignment information,
it is possible to lose reassignment state in failure edge cases while upgrading to the new version. It is not recommended to start reassignments while upgrading.
ZooKeeper has been upgraded from 3.4.14 to 3.5.6. TLS and dynamic reconfiguration are supported by the new version.
The bin/kafka-preferred-replica-election.sh command line tool has been deprecated. It has been replaced by bin/kafka-leader-election.sh.
The electPreferredLeaders methods in the Java AdminClient class have been deprecated in favor of the electLeaders methods.
Scala code leveraging the NewTopic(String, int, short) constructor with literal values will need to explicitly call toShort on the second literal.
The argument in the GroupAuthorizationException(String) constructor is now used to specify an exception message.
Previously it referred to the group that failed authorization. This was done for consistency with other exception types and to
avoid potential misuse. The TopicAuthorizationException(String) constructor, which was previously used for a single
unauthorized topic, was changed similarly.
The internal PartitionAssignor interface has been deprecated and replaced with a new ConsumerPartitionAssignor in the public API. Some
methods/signatures are slightly different between the two interfaces. Users implementing a custom PartitionAssignor should migrate to the new interface as soon as possible.
The DefaultPartitioner now uses a sticky partitioning strategy. This means that records for a specific topic with null keys and no assigned partition
will be sent to the same partition until the batch is ready to be sent. When a new batch is created, a new partition is chosen. This decreases latency to produce, but
it may result in uneven distribution of records across partitions in edge cases. Generally users will not be impacted, but this difference may be noticeable in tests and
other situations producing records for a very short amount of time.
The blocking KafkaConsumer#committed methods have been extended to allow a list of partitions as input parameters rather than a single partition.
This enables fewer request/response iterations between clients and brokers fetching the committed offsets for the consumer group.
The old overloaded functions are deprecated and we recommend users make their code changes to leverage the new methods (details
can be found in KIP-520).
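A minimal sketch of the batched lookup, assuming an already-configured KafkaConsumer named consumer and a placeholder topic name:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// One round trip instead of one committed(TopicPartition) call per partition.
Set<TopicPartition> partitions = new HashSet<>(Arrays.asList(
    new TopicPartition("payments", 0),
    new TopicPartition("payments", 1)));
Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(partitions);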
We've introduced a new INVALID_RECORD error in the produce response to distinguish it from the CORRUPT_MESSAGE error.
To be more concrete, previously when a batch of records was sent as part of a single request to the broker and one or more of the records failed
validation due to various causes (mismatched magic bytes, CRC checksum errors, null key for log compacted topics, etc.), the whole batch would be rejected
with the same and misleading CORRUPT_MESSAGE, and the caller of the producer client would see the corresponding exception either from
the future object of RecordMetadata returned from the send call or in the Callback#onCompletion(RecordMetadata metadata, Exception exception).
Now with the new error code and improved error messages of the exception, producer callers are better informed about the root cause of why their sent records failed.
We are introducing incremental cooperative rebalancing to the clients' group protocol, which allows consumers to keep all of their assigned partitions during a rebalance
and at the end revoke only those which must be migrated to another consumer for overall cluster balance. The ConsumerCoordinator will choose the latest
RebalanceProtocol that is commonly supported by all of the consumer's supported assignors. You can use the new built-in CooperativeStickyAssignor or plug in your own custom cooperative assignor. To do
so you must implement the ConsumerPartitionAssignor interface and include RebalanceProtocol.COOPERATIVE in the list returned by ConsumerPartitionAssignor#supportedProtocols.
Your custom assignor can then leverage the ownedPartitions field in each consumer's Subscription to give partitions back to their previous owners whenever possible. Note that when
a partition is to be reassigned to another consumer, it must be removed from the new assignment until it has been revoked from its original owner. Any consumer that has to revoke a partition will trigger
a followup rebalance to allow the revoked partition to safely be assigned to its new owner. See the
ConsumerPartitionAssignor RebalanceProtocol javadocs for more information.
To upgrade from the old (eager) protocol, which always revokes all partitions before rebalancing, to cooperative rebalancing, you must follow a specific upgrade path to get all clients on the same
ConsumerPartitionAssignor that supports the cooperative protocol. This can be done with two rolling bounces, using the CooperativeStickyAssignor for the example: during the first one, add "cooperative-sticky" to the list of supported assignors
for each member (without removing the previous assignor -- note that if previously using the default, you must include that explicitly as well). You then bounce and/or upgrade it.
Once the entire group is on 2.4+ and all members have the "cooperative-sticky" among their supported assignors, remove the other assignor(s) and perform a second rolling bounce so that by the end all members support only the
cooperative protocol. For further details on the cooperative rebalancing protocol and upgrade path, see KIP-429. A sketch of the two bounces follows below.
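As a sketch, the two bounces could set partition.assignment.strategy as follows, assuming the group previously ran on the default range assignor:

# first rolling bounce: support both protocols
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor,org.apache.kafka.clients.consumer.RangeAssignor
# second rolling bounce: cooperative only
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor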
There are some behavioral changes to the ConsumerRebalanceListener, as well as a new API. Exceptions thrown during any of the listener's three callbacks will no longer be swallowed, and will instead be re-thrown
all the way up to the Consumer.poll() call. The onPartitionsLost method has been added to allow users to react to abnormal circumstances where a consumer may have lost ownership of its partitions
(such as a missed rebalance) and cannot commit offsets. By default, this will simply call the existing onPartitionsRevoked API to align with previous behavior. Note however that onPartitionsLost will not
be called when the set of lost partitions is empty. This means that no callback will be invoked at the beginning of the first rebalance of a new consumer joining the group.
The semantics of the ConsumerRebalanceListener's callbacks are further changed when following the cooperative rebalancing protocol described above. In addition to onPartitionsLost,
onPartitionsRevoked will also never be called when the set of revoked partitions is empty. That callback will generally be invoked only at the end of a rebalance, and only on the set of partitions that are being moved to another consumer. The
onPartitionsAssigned callback will however always be called, even with an empty set of partitions, as a way to notify users of a rebalance event (this is true for both cooperative and eager). For details on
the new callback semantics, see the ConsumerRebalanceListener javadocs.
The Scala trait kafka.security.auth.Authorizer has been deprecated and replaced with a new Java API
org.apache.kafka.server.authorizer.Authorizer. The authorizer implementation class
kafka.security.auth.SimpleAclAuthorizer has also been deprecated and replaced with a new
implementation kafka.security.authorizer.AclAuthorizer. AclAuthorizer uses features
supported by the new API to improve authorization logging and is compatible with SimpleAclAuthorizer.
For more details, see KIP-504.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, 2.0.x, or 2.1.x, and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.3.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.3 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
We are introducing a new rebalancing protocol for Kafka Connect based on
incremental cooperative rebalancing.
The new protocol does not require stopping all the tasks during a rebalancing phase between Connect workers. Instead, only the tasks that need to be exchanged
between workers are stopped and they are started in a follow up rebalance. The new Connect protocol is enabled by default beginning with 2.3.0.
For more details on how it works and how to enable the old behavior of eager rebalancing, check out the
incremental cooperative rebalancing design.
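For example, a worker can be pinned to the old behavior via its worker configuration (connect.protocol also accepts "compatible", the default, which negotiates the newest protocol all workers support):

connect.protocol=eager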
We are introducing static membership for consumers. This feature reduces unnecessary rebalances during normal application upgrades or rolling bounces.
For more details on how to use it, check out the static membership design.
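A sketch of enabling it in a consumer configuration; the id must be unique per member and stable across restarts (the value below is a placeholder):

group.instance.id=payments-consumer-1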
The Kafka Streams DSL switches the store types it uses. While this change is mainly transparent to users, there are some corner cases that may require code changes.
See the Kafka Streams upgrade section for more details.
Kafka Streams 2.3.0 requires 0.11 message format or higher and does not work with older message format.
If you are upgrading from a version prior to 2.1.x, please see the note below about the change to the schema used to store consumer offsets.
Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, or 2.0.x and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.2.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.2 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
The default consumer group id has been changed from the empty string ("") to null. Consumers who use the new default group id will not be able to subscribe to topics,
and fetch or commit offsets. The empty string as consumer group id is deprecated but will be supported until a future major release. Old clients that rely on the empty string group id will now
have to explicitly provide it as part of their consumer config. For more information see
KIP-289.
The bin/kafka-topics.sh command line tool is now able to connect directly to brokers with --bootstrap-server instead of ZooKeeper. The old
--zookeeper option is still available for now. Please read KIP-377 for more information.
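For example (broker address and topic name are placeholders):

bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic payments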
Kafka Streams depends on a newer version of RocksDB that requires macOS 10.13 or higher.
Note that 2.1.x contains a change to the internal schema used to store consumer offsets. Once the upgrade is
complete, it will not be possible to downgrade to previous versions. See the rolling upgrade notes below for more detail.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, or 2.0.x and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
It is still possible to downgrade at this point if there are any problems.
Once the cluster's behavior and performance has been verified, bump the protocol version by editing
inter.broker.protocol.version and setting it to 2.1.
Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
protocol version, it will no longer be possible to downgrade the cluster to an older version.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.1 on each broker and restart them one by one. Note that the older Scala clients,
which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
(or to take advantage of exactly once semantics),
the newer Java clients must be used.
Additional Upgrade Notes:
Offset expiration semantics have slightly changed in this version. According to the new semantics, offsets of partitions in a group will
not be removed while the group is subscribed to the corresponding topic and is still active (has active consumers). If a group becomes
empty, all its offsets will be removed after the default offset retention period (or the one set by the broker) has passed (unless the group becomes
active again). Offsets associated with standalone (simple) consumers, which do not use Kafka group management, will be removed after the default
offset retention period (or the one set by the broker) has passed since their last commit.
The default for the console consumer's enable.auto.commit property when no group.id is provided is now set to false.
This is to avoid polluting the consumer coordinator cache, as the auto-generated group is not likely to be used by other consumers.
The default value for the producer's retries config was changed to Integer.MAX_VALUE, as we introduced
delivery.timeout.ms in KIP-91,
which sets an upper bound on the total time between sending a record and receiving acknowledgement from the broker. By default,
the delivery timeout is set to 2 minutes.
By default, MirrorMaker now overrides delivery.timeout.ms to Integer.MAX_VALUE when
configuring the producer. If you have overridden the value of retries in order to fail faster,
you will instead need to override delivery.timeout.ms.
The ListGroup API now expects, as a recommended alternative, Describe Group access to the groups a user should be able to list.
Even though the old Describe Cluster access is still supported for backward compatibility, using it for this API is not advised.
KIP-336 deprecates the ExtendedSerializer and ExtendedDeserializer interfaces and
promotes the usage of Serializer and Deserializer. ExtendedSerializer and ExtendedDeserializer were introduced with
KIP-82 to provide record headers for serializers and deserializers
in a Java 7 compatible fashion. These interfaces have now been consolidated, as Java 7 support has since been dropped.
Jetty has been upgraded to 9.4.12, which excludes TLS_RSA_* ciphers by default because they do not support forward
secrecy, see https://github.com/eclipse/jetty.project/issues/2807 for more information.
Unclean leader election is automatically enabled by the controller when the unclean.leader.election.enable config is dynamically updated using the per-topic config override.
The AdminClient has added a method AdminClient#metrics(). Now any application using the AdminClient can gain more information
and insight by viewing the metrics captured from the AdminClient. For more information
see KIP-324.
Kafka now supports Zstandard compression from KIP-110.
You must upgrade the broker as well as clients to make use of it. Consumers prior to 2.1.0 will not be able to read from topics which use
Zstandard compression, so you should not enable it for a topic until all downstream consumers are upgraded. See the KIP for more detail.
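Once brokers and all downstream consumers are on 2.1+, a producer can opt in via its configuration (a sketch):

compression.type=zstd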
Kafka 2.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
you guarantee no downtime during the upgrade. However, please review the notable changes in 2.0.0 before upgrading.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
If you are upgrading from 0.11.0.x, 1.0.x, or 1.1.x and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 2.0.
Restart the brokers one by one for the new protocol version to take effect.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 2.0 on each broker and restart them one by one. Note that the older Scala consumer
does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to
take advantage of exactly once semantics), the newer Java consumer must be used.
Additional Upgrade Notes:
If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start
with the new protocol by default.
Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after.
Similarly for the message format version.
If you are using Java 8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities.
Hot-swapping the jar-file only might not work.
ACLs should not be added to prefixed resources,
(added in KIP-290),
until all brokers in the cluster have been updated.
NOTE: any prefixed ACLs added to a cluster, even after the cluster is fully upgraded, will be ignored should the cluster be downgraded again.
KIP-186 increases the default offset retention time from 1 day to 7 days. This makes it less likely to "lose" offsets in an application that commits infrequently. It also increases the active set of offsets and therefore can increase memory usage on the broker. Note that the console consumer currently enables offset commit by default and can be the source of a large number of offsets which this change will now preserve for 7 days instead of 1. You can preserve the existing behavior by setting the broker config offsets.retention.minutes to 1440.
Support for Java 7 has been dropped, Java 8 is now the minimum version required.
The default value for ssl.endpoint.identification.algorithm was changed to https, which performs hostname verification (man-in-the-middle attacks are possible otherwise). Set ssl.endpoint.identification.algorithm to an empty string to restore the previous behaviour.
KAFKA-5674 extends the lower interval of the max.connections.per.ip minimum to zero and therefore allows IP-based filtering of inbound connections.
KIP-272
added an API version tag to the metric kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower|...}.
This metric now becomes kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower|...},version={0|1|2|3|...}. This will impact
JMX monitoring tools that do not automatically aggregate. To get the total count for a specific request type, the tool needs to be
updated to aggregate across different versions.
KIP-225 changed the metric "records.lag" to use tags for topic and partition. The original version with the name format "{topic}-{partition}.records-lag" has been removed.
The Scala consumers, which have been deprecated since 0.11.0.0, have been removed. The Java consumer has been the recommended option
since 0.10.0.0. Note that the Scala consumers in 1.1.0 (and older) will continue to work even if the brokers are upgraded to 2.0.0.
The Scala producers, which have been deprecated since 0.10.0.0, have been removed. The Java producer has been the recommended option
since 0.9.0.0. Note that the behaviour of the default partitioner in the Java producer differs from the default partitioner
in the Scala producers. Users migrating should consider configuring a custom partitioner that retains the previous behaviour.
Note that the Scala producers in 1.1.0 (and older) will continue to work even if the brokers are upgraded to 2.0.0.
MirrorMaker and ConsoleConsumer no longer support the Scala consumer, they always use the Java consumer.
The ConsoleProducer no longer supports the Scala producer, it always uses the Java producer.
A number of deprecated tools that rely on the Scala clients have been removed: ReplayLogProducer, SimpleConsumerPerformance, SimpleConsumerShell, ExportZkOffsets, ImportZkOffsets, UpdateOffsetsInZK, VerifyConsumerRebalance.
The deprecated kafka.tools.ProducerPerformance has been removed, please use org.apache.kafka.tools.ProducerPerformance.
New Kafka Streams configuration parameter upgrade.from added that allows rolling bounce upgrade from older versions.
KIP-284 changed the retention time for Kafka Streams repartition topics by setting its default value to Long.MAX_VALUE.
Updated ProcessorStateManager APIs in Kafka Streams for registering state stores to the processor topology. For more details please read the Streams Upgrade Guide.
In earlier releases, Connect's worker configuration required the internal.key.converter and internal.value.converter properties.
In 2.0, these are no longer required and default to the JSON converter.
You may safely remove these properties from your Connect standalone and distributed worker configurations:
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter.schemas.enable=false
KIP-266 adds a new consumer configuration default.api.timeout.ms
to specify the default timeout to use for KafkaConsumer APIs that could block. The KIP also adds overloads for such blocking
APIs to support specifying a specific timeout to use for each of them instead of using the default timeout set by default.api.timeout.ms.
In particular, a new poll(Duration) API has been added which does not block for dynamic partition assignment.
The old poll(long) API has been deprecated and will be removed in a future version. Overloads have also been added
for other KafkaConsumer methods like partitionsFor, listTopics, offsetsForTimes,
beginningOffsets, endOffsets and close that take in a Duration.
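For instance, callers migrating off the deprecated overload would switch to the bounded variant (a sketch, assuming an already-subscribed KafkaConsumer named consumer):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Deprecated: consumer.poll(100) could block indefinitely waiting for assignment.
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));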
Also as part of KIP-266, the default value of request.timeout.ms has been changed to 30 seconds.
The previous value was a little higher than 5 minutes to account for the maximum time that a rebalance would take.
Now we treat the JoinGroup request in the rebalance as a special case and use a value derived from max.poll.interval.ms
for the request timeout. All other request types use the timeout defined by request.timeout.ms.
The internal method kafka.admin.AdminClient.deleteRecordsBefore has been removed. Users are encouraged to migrate to org.apache.kafka.clients.admin.AdminClient.deleteRecords.
The AclCommand tool's --producer convenience option uses the KIP-277 finer grained ACL on the given topic.
KIP-176 removes
the --new-consumer option for all consumer based tools. This option is redundant since the new consumer is automatically
used if --bootstrap-server is defined.
KIP-290 adds the ability
to define ACLs on prefixed resources, e.g. any topic starting with 'foo'.
KIP-283 improves message down-conversion
handling on the Kafka broker, which has typically been a memory-intensive operation. The KIP adds a mechanism by which the operation becomes less memory intensive
by down-converting chunks of partition data at a time, which helps put an upper bound on memory consumption. With this improvement, there is a change in
FetchResponse protocol behavior where the broker could send an oversized message batch towards the end of the response with an invalid offset.
Such oversized messages must be ignored by consumer clients, as is done by KafkaConsumer.
KIP-283 also adds new topic and broker configurations message.downconversion.enable and log.message.downconversion.enable respectively
to control whether down-conversion is enabled. When disabled, the broker does not perform any down-conversion and instead sends an
UNSUPPORTED_VERSION error to the client.
Dynamic broker configuration options can be stored in ZooKeeper using kafka-configs.sh before brokers are started.
This option can be used to avoid storing clear passwords in server.properties as all password configs may be stored encrypted in ZooKeeper.
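As a sketch (the broker id, password value, and encoder secret are placeholders):

bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type brokers --entity-name 0 --alter \
  --add-config ssl.key.password=my-key-password,password.encoder.secret=my-encoder-secret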
ZooKeeper hosts are now re-resolved if a connection attempt fails. But if your ZooKeeper host names resolve
to multiple addresses and some of them are not reachable, then you may need to increase the connection timeout
zookeeper.connection.timeout.ms.
Upgrading your Streams application from 1.1 to 2.0 does not require a broker upgrade.
A Kafka Streams 2.0 application can connect to 2.0, 1.1, 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
Note that in 2.0 we have removed the public APIs that were deprecated prior to 1.0; users leveraging those deprecated APIs need to make code changes accordingly.
See Streams API changes in 2.0.0 for more details.
Kafka 1.1.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
you guarantee no downtime during the upgrade. However, please review the notable changes in 1.1.0 before upgrading.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
If you are upgrading from 0.11.0.x or 1.0.x and you have not overridden the message format, then you only need to override
the inter-broker protocol version.
inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0 or 1.0).
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 1.1.
Restart the brokers one by one for the new protocol version to take effect.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 1.1 on each broker and restart them one by one. Note that the older Scala consumer
does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to
take advantage of exactly once semantics), the newer Java consumer must be used.
Additional Upgrade Notes:
If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start
with the new protocol by default.
Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after.
Similarly for the message format version.
If you are using Java 8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities.
Hot-swapping the jar-file only might not work.
The kafka artifact in Maven no longer depends on log4j or slf4j-log4j12. Similarly to the kafka-clients artifact, users
can now choose the logging back-end by including the appropriate slf4j module (slf4j-log4j12, logback, etc.). The release
tarball still includes log4j and slf4j-log4j12.
KIP-225 changed the metric "records.lag" to use tags for topic and partition. The original version with the name format "{topic}-{partition}.records-lag" is deprecated and will be removed in 2.0.0.
Kafka Streams is more robust against broker communication errors. Instead of stopping the Kafka Streams client with a fatal exception,
Kafka Streams tries to self-heal and reconnect to the cluster. Using the new AdminClient you have better control of how often
Kafka Streams retries and can configure
fine-grained timeouts (instead of hard coded retries as in older versions).
Kafka Streams rebalance time was reduced further, making Kafka Streams more responsive.
Kafka Connect now supports message headers in both sink and source connectors, and the ability to manipulate them via simple message transforms. Connectors must be changed to explicitly use them. A new HeaderConverter is introduced to control how headers are (de)serialized, and the new "SimpleHeaderConverter" is used by default to use string representations of values.
kafka.tools.DumpLogSegments now automatically sets the deep-iteration option if the print-data-log option is enabled
explicitly or implicitly due to any of the other options like decoder.
Upgrading your Streams application from 1.0 to 1.1 does not require a broker upgrade.
A Kafka Streams 1.1 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
Kafka 1.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
you guarantee no downtime during the upgrade. However, please review the notable changes in 1.0.0 before upgrading.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
If you are upgrading from 0.11.0.x and you have not overridden the message format, you must set
both the message format version and the inter-broker protocol version to 0.11.0.
inter.broker.protocol.version=0.11.0
log.message.format.version=0.11.0
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 1.0.
Restart the brokers one by one for the new protocol version to take effect.
If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
change log.message.format.version to 1.0 on each broker and restart them one by one. If you are upgrading from
0.11.0 and log.message.format.version is set to 0.11.0, you can update the config and skip the rolling restart.
Note that the older Scala consumer does not support the new message format introduced in 0.11, so to avoid the
performance cost of down-conversion (or to take advantage of exactly once semantics),
the newer Java consumer must be used.
Additional Upgrade Notes:
If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start
with the new protocol by default.
Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after.
Similarly for the message format version.
Restored binary compatibility of AdminClient's Options classes (e.g. CreateTopicsOptions, DeleteTopicsOptions, etc.) with
0.11.0.x. Binary (but not source) compatibility had been broken inadvertently in 1.0.0.
Topic deletion is now enabled by default, since the functionality is now stable. Users who wish
to retain the previous behavior should set the broker config delete.topic.enable to false. Keep in mind that topic deletion removes data and the operation is not reversible (i.e. there is no "undelete" operation).
For topics that support timestamp search, if no offset can be found for a partition, that partition is now included in the search result with a null offset value. Previously, the partition was not included in the map.
This change was made to make the search behavior consistent with the case of topics not supporting timestamp search.
If the inter.broker.protocol.version is 1.0 or later, a broker will now stay online to serve replicas
on live log directories even if there are offline log directories. A log directory may become offline due to an IOException
caused by hardware failure. Users need to monitor the per-broker metric offlineLogDirectoryCount to check
whether there is an offline log directory.
Added KafkaStorageException which is a retriable exception. KafkaStorageException will be converted to NotLeaderForPartitionException in the response
if the version of the client's FetchRequest or ProducerRequest does not support KafkaStorageException.
-XX:+DisableExplicitGC was replaced by -XX:+ExplicitGCInvokesConcurrent in the default JVM settings. This helps
avoid out of memory exceptions during allocation of native memory by direct buffers in some cases.
The overridden handleError method implementations have been removed from the following deprecated classes in
the kafka.api package: FetchRequest, GroupCoordinatorRequest, OffsetCommitRequest,
OffsetFetchRequest, OffsetRequest, ProducerRequest, and TopicMetadataRequest.
This was only intended for use on the broker, but it is no longer in use and the implementations have not been maintained.
A stub implementation has been retained for binary compatibility.
The Java clients and tools now accept any string as a client-id.
The deprecated kafka-consumer-offset-checker.sh tool has been removed. Use kafka-consumer-groups.sh to get consumer group details.
SimpleAclAuthorizer now logs access denials to the authorizer log by default.
Authentication failures are now reported to clients as one of the subclasses of AuthenticationException.
No retries will be performed if a client connection fails authentication.
Custom SaslServer implementations may throw SaslAuthenticationException to provide an error
message to return to clients indicating the reason for authentication failure. Implementors should take care not to include
any security-critical information in the exception message that should not be leaked to unauthenticated clients.
The app-info mbean registered with JMX to provide version and commit id will be deprecated and replaced with
metrics providing these attributes.
Kafka metrics may now contain non-numeric values. org.apache.kafka.common.Metric#value() has been deprecated and
will return 0.0 in such cases to minimise the probability of breaking users who read the value of every client
metric (via a MetricsReporter implementation or by calling the metrics() method).
org.apache.kafka.common.Metric#metricValue() can be used to retrieve numeric and non-numeric metric values.
Every Kafka rate metric now has a corresponding cumulative count metric with the suffix -total
to simplify downstream processing. For example, records-consumed-rate has a corresponding
metric named records-consumed-total.
Mx4j will only be enabled if the system property kafka_mx4jenable is set to true. Due to a logic
inversion bug, it was previously enabled by default and disabled if kafka_mx4jenable was set to true.
The org.apache.kafka.common.security.auth package in the clients jar has been made public and added to the javadocs.
Internal classes which had previously been located in this package have been moved elsewhere.
When using an Authorizer and a user doesn't have required permissions on a topic, the broker
will return TOPIC_AUTHORIZATION_FAILED errors to requests irrespective of topic existence on the broker.
If the user has the required permissions and the topic doesn't exist, then the UNKNOWN_TOPIC_OR_PARTITION
error code will be returned.
The config/consumer.properties file was updated to use new consumer config properties.
KIP-112: LeaderAndIsrRequest v1 introduces a partition-level is_new field.
KIP-112: UpdateMetadataRequest v4 introduces a partition-level offline_replicas field.
KIP-112: MetadataResponse v5 introduces a partition-level offline_replicas field.
KIP-112: ProduceResponse v4 introduces error code for KafkaStorageException.
KIP-112: FetchResponse v6 introduces error code for KafkaStorageException.
KIP-152:
SaslAuthenticate request has been added to enable reporting of authentication failures. This request will
be used if the SaslHandshake request version is greater than 0.
Upgrading your Streams application from 0.11.0 to 1.0 does not require a broker upgrade.
A Kafka Streams 1.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
However, Kafka Streams 1.0 requires 0.10 message format or newer and does not work with older message formats.
If you are monitoring streams metrics, you will need to make some changes to the metric names in your reporting and monitoring code, because the metrics sensor hierarchy was changed.
There are a few public APIs, including ProcessorContext#schedule(), Processor#punctuate(), KStreamBuilder, and TopologyBuilder, that are being deprecated by new APIs.
We recommend making the corresponding code changes, which should be very minor since the new APIs look quite similar, when you upgrade.
Upgrading your Streams application from 0.10.2 to 1.0 does not require a broker upgrade.
A Kafka Streams 1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
If you are monitoring streams metrics, you will need to make some changes to the metric names in your reporting and monitoring code, because the metrics sensor hierarchy was changed.
There are a few public APIs, including ProcessorContext#schedule(), Processor#punctuate(), KStreamBuilder, and TopologyBuilder, that are being deprecated by new APIs.
We recommend making the corresponding code changes, which should be very minor since the new APIs look quite similar, when you upgrade.
If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement config parameters, as these configs are deprecated.
Upgrading your Streams application from 0.10.1 to 1.0 does not require a broker upgrade.
A Kafka Streams 1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
You need to recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
If you are monitoring streams metrics, you will need to make some changes to the metric names in your reporting and monitoring code, because the metrics sensor hierarchy was changed.
There are a few public APIs, including ProcessorContext#schedule(), Processor#punctuate(), KStreamBuilder, and TopologyBuilder, that are being deprecated by new APIs.
We recommend making the corresponding code changes, which should be very minor since the new APIs look quite similar, when you upgrade.
If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement config parameters, as these configs are deprecated.
If you use a custom (i.e., user implemented) timestamp extractor, you will need to update this code, because the TimestampExtractor interface was changed.
If you register custom metrics, you will need to update this code, because the StreamsMetric interface was changed.
Upgrading your Streams application from 0.10.0 to 1.0 does require a broker upgrade, because a Kafka Streams 1.0 application can only connect to 1.0, 0.11.0, 0.10.2, or 0.10.1 brokers.
Upgrading from 0.10.0.x to 1.0.2 requires two rolling bounces with config upgrade.from="0.10.0" set for the first upgrade phase
(cf. KIP-268).
As an alternative, an offline upgrade is also possible.
prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 1.0.2
bounce each instance of your application once
prepare your newly deployed 1.0.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
bounce each instance of your application once more to complete the upgrade
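A hedged sketch of the first-bounce configuration change in code (all other required settings elided):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// First rolling bounce: new 1.0.2 binaries that still speak the 0.10.0 protocol.
props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "0.10.0");
// Before the second rolling bounce, remove this setting again.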
Upgrading from 0.10.0.x to 1.0.0 or 1.0.1 requires an offline upgrade (rolling bounce upgrade is not supported)
stop all old (0.10.0.x) application instances
update your code and swap old code and jar file with new code and new jar file
restart all new (1.0.0 or 1.0.1) application instances
Kafka 0.11.0.0 introduces a new message format version as well as wire protocol changes. By following the recommended rolling upgrade plan below,
you guarantee no downtime during the upgrade. However, please review the notable changes in 0.11.0.0 before upgrading.
Starting with version 0.10.2, Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.11.0
clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the
Kafka cluster before upgrading your clients. Version 0.11.0 brokers support 0.8.x and newer clients.
For a rolling upgrade:
Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have
not overridden the message format previously, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1 or 0.10.2).
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.11.0, but
do not change log.message.format.version yet.
Restart the brokers one by one for the new protocol version to take effect.
Once all (or most) consumers have been upgraded to 0.11.0 or later, then change log.message.format.version to 0.11.0 on each
broker and restart them one by one. Note that the older Scala consumer does not support the new message format, so to avoid
the performance cost of down-conversion (or to take advantage of exactly once semantics),
the new Java consumer must be used.
Additional Upgrade Notes:
If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start
with the new protocol by default.
Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after.
Similarly for the message format version.
It is also possible to enable the 0.11.0 message format on individual topics using the topic admin tool (bin/kafka-topics.sh)
prior to updating the global setting log.message.format.version.
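A sketch of the per-topic override (ZooKeeper address and topic name are placeholders; later releases prefer bin/kafka-configs.sh for topic config changes):

bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic payments --config message.format.version=0.11.0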
If you are upgrading from a version prior to 0.10.0, it is NOT necessary to first update the message format to 0.10.0
before you switch to 0.11.0.
Upgrading your Streams application from 0.10.2 to 0.11.0 does not require a broker upgrade.
A Kafka Streams 0.11.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement config parameters, as these configs are deprecated.
Upgrading your Streams application from 0.10.1 to 0.11.0 does not require a broker upgrade.
A Kafka Streams 0.11.0 application can connect to 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
You need to recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
If you specify customized key.serde, value.serde and timestamp.extractor in configs, it is recommended to use their replacement config parameters, as these configs are deprecated.
If you use a custom (i.e., user implemented) timestamp extractor, you will need to update this code, because the TimestampExtractor interface was changed.
If you register custom metrics, you will need to update this code, because the StreamsMetric interface was changed.
Upgrading your Streams application from 0.10.0 to 0.11.0 does require a broker upgrade because a Kafka Streams 0.11.0 application can only connect to 0.11.0, 0.10.2, or 0.10.1 brokers.
Upgrading from 0.10.0.x to 0.11.0.3 requires two rolling bounces with config upgrade.from="0.10.0" set for the first upgrade phase
(cf. KIP-268).
As an alternative, an offline upgrade is also possible.
prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.11.0.3
bounce each instance of your application once
prepare your newly deployed 0.11.0.3 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
bounce each instance of your application once more to complete the upgrade
Upgrading from 0.10.0.x to 0.11.0.0, 0.11.0.1, or 0.11.0.2 requires an offline upgrade (rolling bounce upgrade is not supported)
stop all old (0.10.0.x) application instances
update your code and swap old code and jar file with new code and new jar file
restart all new (0.11.0.0 , 0.11.0.1, or 0.11.0.2) application instances
Unclean leader election is now disabled by default. The new default favors durability over availability. Users who wish
to retain the previous behavior should set the broker config unclean.leader.election.enable to true.
Producer configs block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms have been
removed. They were initially deprecated in Kafka 0.9.0.0.
The broker config offsets.topic.replication.factor is now enforced upon auto topic creation. Internal
auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this
replication factor requirement.
When compressing data with snappy, the producer and broker will use the compression scheme's default block size (2 x 32 KB)
instead of 1 KB in order to improve the compression ratio. There have been reports of data compressed with the smaller
block size being 50% larger than when compressed with the larger block size. For the snappy case, a producer with 5000
partitions will require an additional 315 MB of JVM heap.
Similarly, when compressing data with gzip, the producer and broker will use 8 KB instead of 1 KB as the buffer size. The default
for gzip is excessively low (512 bytes).
The broker configuration max.message.bytes now applies to the total size of a batch of messages.
Previously the setting applied to batches of compressed messages, or to non-compressed messages individually.
A message batch may consist of only a single message, so in most cases, the limitation on the size of
individual messages is only reduced by the overhead of the batch format. However, there are some subtle implications
for message format conversion (see below for more detail). Note also
that while previously the broker would ensure that at least one message is returned in each fetch request (regardless of the
total and partition-level fetch sizes), the same behavior now applies to one message batch.
GC log rotation is enabled by default, see KAFKA-3754 for details.
Deprecated constructors of RecordMetadata, MetricName and Cluster classes have been removed.
Added user headers support through a new Headers interface providing user headers read and write access.
ProducerRecord and ConsumerRecord expose the new Headers API via the Headers headers() method call.
ExtendedSerializer and ExtendedDeserializer interfaces are introduced to support serialization and deserialization for headers. Headers will be ignored if the configured serializer and deserializer are not the above classes.
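A minimal sketch of attaching a header on the producer side (topic, key, and header name are placeholders):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;

ProducerRecord<String, String> record =
    new ProducerRecord<>("payments", "key-1", "value-1");
// Header names are strings; header values are raw bytes.
record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));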
A new config, group.initial.rebalance.delay.ms, was introduced.
This config specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
The default value for this is 3 seconds.
During development and testing it might be desirable to set this to 0 in order to not delay test execution time.
The org.apache.kafka.common.Cluster#partitionsForTopic, partitionsForNode and availablePartitionsForTopic methods
will return an empty list instead of null (which is considered a bad practice) in case the metadata for the required topic does not exist.
Streams API configuration parameters timestamp.extractor, key.serde, and value.serde were deprecated and
replaced by default.timestamp.extractor, default.key.serde, and default.value.serde, respectively.
For offset commit failures in the Java consumer's commitAsync APIs, we no longer expose the underlying
cause when instances of RetriableCommitFailedException are passed to the commit callback. See
KAFKA-5052 for more detail.
Kafka 0.11.0 includes support for idempotent and transactional capabilities in the producer. Idempotent delivery
ensures that messages are delivered exactly once to a particular topic partition during the lifetime of a single producer.
Transactional delivery allows producers to send data to multiple partitions such that either all messages are successfully
delivered, or none of them are. Together, these capabilities enable "exactly once semantics" in Kafka. More details on these
features are available in the user guide, but below we add a few specific notes on enabling them in an upgraded cluster.
Note that enabling EoS is not required and there is no impact on the broker's behavior if unused.
Only the new Java producer and consumer support exactly once semantics.
These features depend crucially on the 0.11.0 message format. Attempting to use them
on an older format will result in unsupported version errors.
Transaction state is stored in a new internal topic __transaction_state. This topic is not created until
the first attempt to use a transactional request API. Similar to the consumer offsets topic, there are several settings
to control the topic's configuration. For example, transaction.state.log.min.isr controls the minimum ISR for
this topic. See the configuration section in the user guide for a full list of options.
For secure clusters, the transactional APIs require new ACLs which can be turned on with the bin/kafka-acls.sh tool.
EoS in Kafka introduces new request APIs and modifies several existing ones. See
KIP-98
for the full details.
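Tying the notes above together, a minimal transactional producer might look like the following sketch (broker address, topic, and transactional.id are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", StringSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
props.put("enable.idempotence", "true");              // exactly-once per partition
props.put("transactional.id", "my-transactional-id"); // enables transactions

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();
producer.beginTransaction();
producer.send(new ProducerRecord<>("payments", "key", "value"));
producer.commitTransaction();
producer.close();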
The 0.11.0 message format includes several major enhancements in order to support better delivery semantics for the producer
(see KIP-98)
and improved replication fault tolerance
(see KIP-101).
Although the new format contains more information to make these improvements possible, we have made the batch format much
more efficient. As long as the number of messages per batch is more than 2, you can expect lower overall overhead. For smaller
batches, however, there may be a small performance impact. See here for the results of our
initial performance analysis of the new message format. You can also find more detail on the message format in the
KIP-98 proposal.
One of the notable differences in the new message format is that even uncompressed messages are stored together as a single batch.
This has a few implications for the broker configuration max.message.bytes, which limits the size of a single batch. First,
if an older client produces messages to a topic partition using the old format, and the messages are individually smaller than
max.message.bytes, the broker may still reject them after they are merged into a single batch during the up-conversion process.
Generally this can happen when the aggregate size of the individual messages is larger than max.message.bytes. There is a similar
effect for older consumers reading messages down-converted from the new format: if the fetch size is not set at least as large as
max.message.bytes, the consumer may not be able to make progress even if the individual uncompressed messages are smaller
than the configured fetch size. This behavior does not impact the Java client for 0.10.1.0 and later since it uses an updated fetch protocol
which ensures that at least one message can be returned even if it exceeds the fetch size. To get around these problems, you should ensure
1) that the producer's batch size is not set larger than max.message.bytes, and 2) that the consumer's fetch size is set at
least as large as max.message.bytes.
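For instance, given a topic-level limit of max.message.bytes=1000000 (an illustrative value, not a recommendation), client settings consistent with the two rules above would look like:

    # producer: a batch must not exceed the topic's max.message.bytes
    batch.size=1000000
    # new Java consumer: fetch at least one full batch (older consumers use fetch.message.max.bytes)
    fetch.max.bytes=1000000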
Most of the discussion on the performance impact of upgrading to the 0.10.0 message format
remains pertinent to the 0.11.0 upgrade. This mainly affects clusters that are not secured with TLS since "zero-copy" transfer
is already not possible in that case. In order to avoid the cost of down-conversion, you should ensure that consumer applications
are upgraded to the latest 0.11.0 client. Significantly, since the old consumer has been deprecated in 0.11.0.0, it does not support
the new message format. You must upgrade to use the new consumer to use the new message format without the cost of down-conversion.
Note that 0.11.0 consumers support backwards compatibility with 0.10.0 brokers and upward, so it is possible to upgrade the
clients first before the brokers.
0.10.2.0 has wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade.
However, please review the notable changes in 0.10.2.0 before upgrading.
Starting with version 0.10.2, Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.10.2
clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the
Kafka cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients.
For a rolling upgrade:
Update server.properties file on all brokers and add the following property:
inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0 or 0.10.1).
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.10.2.
If your previous message format is 0.10.0, change log.message.format.version to 0.10.2 (this is a no-op as the message format is the same for 0.10.0, 0.10.1 and 0.10.2).
If your previous message format version is lower than 0.10.0, do not change log.message.format.version yet - this parameter should only change once all consumers have been upgraded to 0.10.0.0 or later.
Restart the brokers one by one for the new protocol version to take effect.
If log.message.format.version is still lower than 0.10.0 at this point, wait until all consumers have been upgraded to 0.10.0 or later,
then change log.message.format.version to 0.10.2 on each broker and restart them one by one.
Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.
Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
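As an illustrative sketch of the two phases in server.properties for a cluster moving from 0.10.0 to 0.10.2 (the version values shown are examples):

    # phase 1: set before swapping in the 0.10.2 code on each broker
    inter.broker.protocol.version=0.10.0
    # phase 2: after every broker is running 0.10.2
    inter.broker.protocol.version=0.10.2
    log.message.format.version=0.10.2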
Upgrading your Streams application from 0.10.1 to 0.10.2 does not require a broker upgrade.
A Kafka Streams 0.10.2 application can connect to 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
You need to recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
If you use a custom (i.e., user implemented) timestamp extractor, you will need to update this code, because the TimestampExtractor interface was changed.
If you register custom metrics, you will need to update this code, because the StreamsMetric interface was changed.
Upgrading your Streams application from 0.10.0 to 0.10.2 does require a broker upgrade because a Kafka Streams 0.10.2 application can only connect to 0.10.2 or 0.10.1 brokers.
There are a couple of API changes that are not backward compatible (cf. Streams API changes in 0.10.2 for more details).
Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
Upgrading from 0.10.0.x to 0.10.2.2 requires two rolling bounces with config upgrade.from="0.10.0" set for first upgrade phase
(cf. KIP-268).
As an alternative, an offline upgrade is also possible.
prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.10.2.2
bounce each instance of your application once
prepare your newly deployed 0.10.2.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
bounce each instance of your application once more to complete the upgrade
Upgrading from 0.10.0.x to 0.10.2.0 or 0.10.2.1 requires an offline upgrade (rolling bounce upgrade is not supported)
stop all old (0.10.0.x) application instances
update your code and swap old code and jar file with new code and new jar file
restart all new (0.10.2.0 or 0.10.2.1) application instances
The default values for two configurations of the StreamsConfig class were changed to improve the resiliency of Kafka Streams applications. The internal Kafka Streams producer retries default value was changed from 0 to 10. The internal Kafka Streams consumer max.poll.interval.ms default value was changed from 300000 to Integer.MAX_VALUE.
The Java clients (producer and consumer) have acquired the ability to communicate with older brokers. Version 0.10.2 clients
can talk to version 0.10.0 or newer brokers. Note that some features are not available or are limited when older brokers
are used.
Several methods on the Java consumer may now throw InterruptException if the calling thread is interrupted.
Please refer to the KafkaConsumer Javadoc for a more in-depth explanation of this change.
The Java consumer now shuts down gracefully. By default, the consumer waits up to 30 seconds to complete pending requests.
A new close API with timeout has been added to KafkaConsumer to control the maximum wait time.
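For example, a sketch of bounding the shutdown wait using the two-argument overload added at the time (the 10-second value is arbitrary):

    import java.util.concurrent.TimeUnit;
    // Wait at most 10 seconds for pending requests, then force the consumer closed
    consumer.close(10, TimeUnit.SECONDS);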
Multiple regular expressions separated by commas can be passed to MirrorMaker with the new Java consumer via the --whitelist option. This
makes the behaviour consistent with MirrorMaker when used with the old Scala consumer.
Upgrading your Streams application from 0.10.1 to 0.10.2 does not require a broker upgrade.
A Kafka Streams 0.10.2 application can connect to 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though).
The Zookeeper dependency was removed from the Streams API. The Streams API now uses the Kafka protocol to manage internal topics instead of
modifying Zookeeper directly. This eliminates the need for privileges to access Zookeeper directly and "StreamsConfig.ZOOKEEPER_CONFIG"
should not be set in the Streams app any more. If the Kafka cluster is secured, Streams apps must have the required security privileges to create new topics.
Several new fields including "security.protocol", "connections.max.idle.ms", "retry.backoff.ms", "reconnect.backoff.ms" and "request.timeout.ms" were added to
the StreamsConfig class. Users should pay attention to the default values and set these if needed. For more details please refer to 3.5 Kafka Streams Configs.
0.10.1.0 has wire protocol changes. By following the recommended rolling upgrade plan below, you guarantee no downtime during the upgrade.
However, please notice the Potential breaking changes in 0.10.1.0 before upgrade.
Note: Because new protocols are introduced, it is important to upgrade your Kafka clusters before upgrading your clients (i.e. 0.10.1.x clients
only support 0.10.1.x or later brokers while 0.10.1.x brokers also support older clients).
For a rolling upgrade:
Update server.properties file on all brokers and add the following property:
inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2.0, 0.9.0.0 or 0.10.0.0).
Upgrade the brokers one at a time: shut down the broker, update the code, and restart it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.10.1.0.
If your previous message format is 0.10.0, change log.message.format.version to 0.10.1 (this is a no-op as the message format is the same for both 0.10.0 and 0.10.1).
If your previous message format version is lower than 0.10.0, do not change log.message.format.version yet - this parameter should only change once all consumers have been upgraded to 0.10.0.0 or later.
Restart the brokers one by one for the new protocol version to take effect.
If log.message.format.version is still lower than 0.10.0 at this point, wait until all consumers have been upgraded to 0.10.0 or later,
then change log.message.format.version to 0.10.1 on each broker and restart them one by one.
Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.
Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
The log retention time is no longer based on last modified time of the log segments. Instead it will be based on the largest timestamp of the messages in a log segment.
The log rolling time no longer depends on the log segment create time. Instead it is now based on the timestamp in the messages. More specifically, if the timestamp of the first message in the segment is T, the log will be rolled out when a new message has a timestamp greater than or equal to T + log.roll.ms
The number of open file handlers of 0.10.0 will increase by ~33% because of the addition of time index files for each segment.
The time index and offset index share the same index size configuration. Since each time index entry is 1.5x the size of an offset index entry, users may need to increase log.index.size.max.bytes to avoid potential frequent log rolling.
Due to the increased number of index files, on some brokers with a large number of log segments (e.g. >15K), the log loading process during broker startup could be longer. Based on our experiment, setting num.recovery.threads.per.data.dir to one may reduce the log loading time.
Upgrading your Streams application from 0.10.0 to 0.10.1 does require a broker upgrade because a Kafka Streams 0.10.1 application can only connect to 0.10.1 brokers.
There are a couple of API changes that are not backward compatible (cf. Streams API changes in 0.10.1 for more details).
Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
Upgrading from 0.10.0.x to 0.10.1.2 requires two rolling bounces with config upgrade.from="0.10.0" set for first upgrade phase
(cf. KIP-268).
As an alternative, an offline upgrade is also possible.
prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.10.1.2
bounce each instance of your application once
prepare your newly deployed 0.10.1.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
bounce each instance of your application once more to complete the upgrade
Upgrading from 0.10.0.x to 0.10.1.0 or 0.10.1.1 requires an offline upgrade (rolling bounce upgrade is not supported)
stop all old (0.10.0.x) application instances
update your code and swap old code and jar file with new code and new jar file
restart all new (0.10.1.0 or 0.10.1.1) application instances
The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release
and will be removed in a future major release.
The --new-consumer/--new.consumer switch is no longer required to use tools like MirrorMaker and the Console Consumer with the new consumer; one simply
needs to pass a Kafka broker to connect to instead of the ZooKeeper ensemble. In addition, usage of the Console Consumer with the old consumer has been deprecated and it will be
removed in a future major release.
Kafka clusters can now be uniquely identified by a cluster id. It will be automatically generated when a broker is upgraded to 0.10.1.0. The cluster id is available via the kafka.server:type=KafkaServer,name=ClusterId metric and it is part of the Metadata response. Serializers, client interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface.
The BrokerState "RunningAsController" (value 4) has been removed. Due to a bug, a broker would only be in this state briefly before transitioning out of it and hence the impact of the removal should be minimal. The recommended way to detect if a given broker is the controller is via the kafka.controller:type=KafkaController,name=ActiveControllerCount metric.
The new Java Consumer now allows users to search offsets by timestamp on partitions.
The new Java Consumer now supports heartbeating from a background thread. There is a new configuration
max.poll.interval.ms which controls the maximum time between poll invocations before the consumer
will proactively leave the group (5 minutes by default). The value of the configuration request.timeout.ms
(default to 30 seconds) must always be smaller than max.poll.interval.ms (default to 5 minutes),
since that is the maximum time that a JoinGroup request can block on the server while the consumer is rebalancing.
Finally, the default value of session.timeout.ms has been adjusted down to
10 seconds, and the default value of max.poll.records has been changed to 500.
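Expressed as consumer configuration lines, the new defaults described above are:

    max.poll.interval.ms=300000
    session.timeout.ms=10000
    max.poll.records=500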
When using an Authorizer and a user doesn't have Describe authorization on a topic, the broker will no
longer return TOPIC_AUTHORIZATION_FAILED errors to requests since this leaks topic names. Instead, the UNKNOWN_TOPIC_OR_PARTITION
error code will be returned. This may cause unexpected timeouts or delays when using the producer and consumer since
Kafka clients will typically retry automatically on unknown topic errors. You should consult the client logs if you
suspect this could be happening.
Fetch responses have a size limit by default (50 MB for consumers and 10 MB for replication). The existing per partition limits also apply (1 MB for consumers
and replication). Note that neither of these limits is an absolute maximum as explained in the next point.
Consumers and replicas can make progress if a message larger than the response/partition size limit is found. More concretely, if the first message in the
first non-empty partition of the fetch is larger than either or both limits, the message will still be returned.
Overloaded constructors were added to kafka.api.FetchRequest and kafka.javaapi.FetchRequest to allow the caller to specify the
order of the partitions (since order is significant in v3). The previously existing constructors were deprecated and the partitions are shuffled before
the request is sent to avoid starvation issues.
ListOffsetRequest v1 supports accurate offset search based on timestamps.
MetadataResponse v2 introduces a new field: "cluster_id".
FetchRequest v3 supports limiting the response size (in addition to the existing per partition limit), it returns messages
bigger than the limits if required to make progress and the order of partitions in the request is now significant.
JoinGroup v1 introduces a new field: "rebalance_timeout".
0.10.0.0 has potential breaking changes (please review before upgrading) and possible performance impact following the upgrade. By following the recommended rolling upgrade plan below, you guarantee no downtime and no performance impact during and following the upgrade.
Note: Because new protocols are introduced, it is important to upgrade your Kafka clusters before upgrading your clients.
Notes to clients with version 0.9.0.0: Due to a bug introduced in 0.9.0.0,
clients that depend on ZooKeeper (old Scala high-level Consumer and MirrorMaker if used with the old consumer) will not
work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9.0.1 before brokers are upgraded to
0.10.0.x. This step is not necessary for 0.8.X or 0.9.0.1 clients.
For a rolling upgrade:
Update server.properties file on all brokers and add the following property:
inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2 or 0.9.0.0).
Upgrade the brokers. This can be done a broker at a time by simply bringing it down, updating the code, and restarting it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.10.0.0. NOTE: You shouldn't touch log.message.format.version yet - this parameter should only change once all consumers have been upgraded to 0.10.0.0
Restart the brokers one by one for the new protocol version to take effect.
Once all consumers have been upgraded to 0.10.0, change log.message.format.version to 0.10.0 on each broker and restart them one by one.
Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.
Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
The message format in 0.10.0 includes a new timestamp field and uses relative offsets for compressed messages.
The on disk message format can be configured through log.message.format.version in the server.properties file.
The default on-disk message format is 0.10.0. If a consumer client is on a version before 0.10.0.0, it only understands
message formats before 0.10.0. In this case, the broker is able to convert messages from the 0.10.0 format to an earlier format
before sending the response to the consumer on an older version. However, the broker can't use zero-copy transfer in this case.
Reports from the Kafka community on the performance impact have shown CPU utilization going from 20% before to 100% after an upgrade, which forced an immediate upgrade of all clients to bring performance back to normal.
To avoid such message conversion before consumers are upgraded to 0.10.0.0, one can set log.message.format.version to 0.8.2 or 0.9.0 when upgrading the broker to 0.10.0.0. This way, the broker can still use zero-copy transfer to send the data to the old consumers. Once consumers are upgraded, one can change the message format to 0.10.0 on the broker and enjoy the new message format that includes new timestamp and improved compression.
The conversion is supported to ensure compatibility and can be useful to support a few apps that have not updated to newer clients yet, but is impractical to support all consumer traffic on even an overprovisioned cluster. Therefore, it is critical to avoid the message conversion as much as possible when brokers have been upgraded but the majority of clients have not.
For clients that are upgraded to 0.10.0.0, there is no performance impact.
Note: By setting the message format version, one certifies that all existing messages are on or below that
message format version. Otherwise consumers before 0.10.0.0 might break. In particular, after the message format
is set to 0.10.0, one should not change it back to an earlier format as it may break consumers on versions before 0.10.0.0.
Note: Due to the additional timestamp introduced in each message, producers sending small messages may see a
message throughput degradation because of the increased overhead.
Likewise, replication now transmits an additional 8 bytes per message.
If you're running close to the network capacity of your cluster, it's possible that you'll overwhelm the network cards
and see failures and performance issues due to the overload.
Note: If you have enabled compression on producers, you may notice reduced producer throughput and/or
lower compression rate on the broker in some cases. When receiving compressed messages, 0.10.0
brokers avoid recompressing the messages, which in general reduces the latency and improves the throughput. In
certain cases, however, this may reduce the batching size on the producer, which could lead to worse throughput. If this
happens, users can tune linger.ms and batch.size of the producer for better throughput. In addition, the producer buffer
used for compressing messages with snappy is smaller than the one used by the broker, which may have a negative
impact on the compression ratio for the messages on disk. We intend to make this configurable in a future Kafka
release.
Starting from Kafka 0.10.0.0, the message format version in Kafka is represented as the Kafka version. For example, message format 0.9.0 refers to the highest message version supported by Kafka 0.9.0.
Message format 0.10.0 has been introduced and it is used by default. It includes a timestamp field in the messages and relative offsets are used for compressed messages.
ProduceRequest/Response v2 has been introduced and it is used by default to support message format 0.10.0
FetchRequest/Response v2 has been introduced and it is used by default to support message format 0.10.0
MessageFormatter interface was changed from def writeTo(key: Array[Byte], value: Array[Byte], output: PrintStream) to def writeTo(consumerRecord: ConsumerRecord[Array[Byte], Array[Byte]], output: PrintStream)
MessageReader interface was changed from def readMessage(): KeyedMessage[Array[Byte], Array[Byte]] to def readMessage(): ProducerRecord[Array[Byte], Array[Byte]]
MessageFormatter's package was changed from kafka.tools to kafka.common
MessageReader's package was changed from kafka.tools to kafka.common
MirrorMakerMessageHandler no longer exposes the handle(record: MessageAndMetadata[Array[Byte], Array[Byte]]) method as it was never called.
The 0.7 KafkaMigrationTool is no longer packaged with Kafka. If you need to migrate from 0.7 to 0.10.0, please migrate to 0.8 first and then follow the documented upgrade process to upgrade from 0.8 to 0.10.0.
The new consumer has standardized its APIs to accept java.util.Collection as the sequence type for method parameters. Existing code may have to be updated to work with the 0.10.0 client library.
LZ4-compressed message handling was changed to use an interoperable framing specification (LZ4f v1.5.1).
To maintain compatibility with old clients, this change only applies to Message format 0.10.0 and later.
Clients that Produce/Fetch LZ4-compressed messages using v0/v1 (Message format 0.9.0) should continue
to use the 0.9.0 framing implementation. Clients that use Produce/Fetch protocols v2 or later
should use interoperable LZ4f framing. A list of interoperable LZ4 libraries is available at https://www./
Starting from Kafka 0.10.0.0, a new client library named Kafka Streams is available for stream processing on data stored in Kafka topics. This new client library only works with 0.10.x and upward versioned brokers due to message format changes mentioned above. For more information please read Streams documentation.
The default value of the configuration parameter receive.buffer.bytes is now 64K for the new consumer.
The new consumer now exposes the configuration parameter exclude.internal.topics to restrict internal topics (such as the consumer offsets topic) from accidentally being included in regular expression subscriptions. By default, it is enabled.
The old Scala producer has been deprecated. Users should migrate their code to the Java producer included in the kafka-clients JAR as soon as possible.
0.9.0.0 has potential breaking changes (please review before upgrading) and an inter-broker protocol change from previous versions. This means that upgraded brokers and clients may not be compatible with older versions. It is important that you upgrade your Kafka cluster before upgrading your clients. If you are using MirrorMaker, downstream clusters should be upgraded first as well.
For a rolling upgrade:
Update server.properties file on all brokers and add the following property: inter.broker.protocol.version=0.8.2.X
Upgrade the brokers. This can be done a broker at a time by simply bringing it down, updating the code, and restarting it.
Once the entire cluster is upgraded, bump the protocol version by editing inter.broker.protocol.version and setting it to 0.9.0.0.
Restart the brokers one by one for the new protocol version to take effect
Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.
Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.
Broker IDs above 1000 are now reserved by default to automatically assigned broker IDs. If your cluster has existing broker IDs above that threshold make sure to increase the reserved.broker.max.id broker configuration property accordingly.
Configuration parameter replica.lag.max.messages was removed. Partition leaders will no longer consider the number of lagging messages when deciding which replicas are in sync.
Configuration parameter replica.lag.time.max.ms now refers not just to the time passed since last fetch request from replica, but also to time since the replica last caught up. Replicas that are still fetching messages from leaders but did not catch up to the latest messages in replica.lag.time.max.ms will be considered out of sync.
Compacted topics no longer accept messages without key and an exception is thrown by the producer if this is attempted. In 0.8.x, a message without key would cause the log compaction thread to subsequently complain and quit (and stop compacting all compacted topics).
MirrorMaker no longer supports multiple target clusters. As a result it will only accept a single --consumer.config parameter. To mirror multiple source clusters, you will need at least one MirrorMaker instance per source cluster, each with its own consumer configuration.
Tools packaged under org.apache.kafka.clients.tools.* have been moved to org.apache.kafka.tools.*. All included scripts will still function as usual, only custom code directly importing these classes will be affected.
The default Kafka JVM performance options (KAFKA_JVM_PERFORMANCE_OPTS) have been changed in kafka-run-class.sh.
The kafka-topics.sh script (kafka.admin.TopicCommand) now exits with non-zero exit code on failure.
The kafka-topics.sh script (kafka.admin.TopicCommand) will now print a warning when topic names risk metric collisions due to the use of a '.' or '_' in the topic name, and error in the case of an actual collision.
The kafka-console-producer.sh script (kafka.tools.ConsoleProducer) will use the Java producer instead of the old Scala producer by default, and users have to specify 'old-producer' to use the old producer.
By default, all command line tools will print all logging messages to stderr instead of stdout.
The new broker id generation feature can be disabled by setting broker.id.generation.enable to false.
Configuration parameter log.cleaner.enable is now true by default. This means topics with a cleanup.policy=compact will now be compacted by default, and 128 MB of heap will be allocated to the cleaner process via log.cleaner.dedupe.buffer.size. You may want to review log.cleaner.dedupe.buffer.size and the other log.cleaner configuration values based on your usage of compacted topics.
The default value of the configuration parameter fetch.min.bytes for the new consumer is now 1.
Deprecations in 0.9.0.0
Altering topic configuration from the kafka-topics.sh script (kafka.admin.TopicCommand) has been deprecated. Going forward, please use the kafka-configs.sh script (kafka.admin.ConfigCommand) for this functionality.
The kafka-consumer-offset-checker.sh (kafka.tools.ConsumerOffsetChecker) has been deprecated. Going forward, please use kafka-consumer-groups.sh (kafka.admin.ConsumerGroupCommand) for this functionality.
The kafka.tools.ProducerPerformance class has been deprecated. Going forward, please use org.apache.kafka.tools.ProducerPerformance for this functionality (kafka-producer-perf-test.sh will also be changed to use the new class).
The producer config block.on.buffer.full has been deprecated and will be removed in a future release. Currently its default value has been changed to false. The KafkaProducer will no longer throw BufferExhaustedException but instead will use the max.block.ms value to block, after which it will throw a TimeoutException. If the block.on.buffer.full property is set to true explicitly, it will set max.block.ms to Long.MAX_VALUE and metadata.fetch.timeout.ms will not be honoured.
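For example, instead of the deprecated flag, a producer would bound blocking explicitly (the 60000 value is only an example):

    # replaces block.on.buffer.full=true
    max.block.ms=60000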
Release 0.7 is incompatible with newer releases. Major changes were made to the API, ZooKeeper data structures, protocol, and configuration in order to add replication (which was missing in 0.7). The upgrade from 0.7 to later versions requires a special tool for migration. This migration can be done without downtime.
The Producer API allows applications to send streams of data to topics in the Kafka cluster.
The Consumer API allows applications to read streams of data from topics in the Kafka cluster.
The Streams API allows transforming streams of data from input topics to output topics.
The Connect API allows implementing connectors that continually pull from some source system or application into Kafka or push from Kafka into some sink system or application.
The Admin API allows managing and inspecting topics, brokers, and other Kafka objects.
Kafka exposes all its functionality over a language independent protocol which has clients available in many programming languages. However, only the Java clients are maintained as part of the main Kafka project; the others are available as independent open source projects. A list of non-Java clients is available here.
When using Scala you may optionally include the kafka-streams-scala library. Additional documentation on using the Kafka Streams DSL for Scala is available in the developer guide.
To use Kafka Streams DSL for Scala for Scala 2.13 you can use the following maven dependency:
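The dependency block itself did not survive in this text; a sketch of the usual coordinates would be (the version element is a placeholder for the Kafka release in use):

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams-scala_2.13</artifactId>
        <version><!-- Kafka version --></version>
    </dependency>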
The Connect API allows implementing connectors that continually pull from some source data system into Kafka or push from Kafka into some sink data system.
Many users of Connect won't need to use this API directly, though; they can use pre-built connectors without needing to write any code. Additional information on using Connect is available here.
Those who want to implement custom connectors can see the javadoc.
Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address. Also unlike listeners, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used.
Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by `leader.imbalance.check.interval.seconds`. If the leader imbalance exceeds `leader.imbalance.per.broker.percentage`, leader rebalance to the preferred leader for partitions is triggered.
The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between zookeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1.
Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.
Name of the listener used for communication between the controller and brokers. A broker will use the control.plane.listener.name to locate the endpoint in the listeners list, to listen for connections from the controller. For example, if a broker's config is:

    listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094
    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER

On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL". On the controller side, when it discovers a broker's published endpoints through zookeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish a connection to the broker. For example, if the broker's published endpoints on zookeeper are:

    "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"]

and the controller's config is:

    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER

then the controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker. If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections. If explicitly configured, the value cannot be the same as the value of inter.broker.listener.name.
A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When communicating with the controller quorum, the broker will always use the first listener in this list. Note: The ZK-based controller should not set this configuration.
Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections
Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; Maximum time without receiving fetch from a majority of the quorum before asking around to see if there's a new epoch for leader
Map of id/endpoint information for the set of voters in a comma-separated list of `{id}@{host}:{port}` entries. For example: `1@localhost:9092,2@localhost:9093,3@localhost:9094`
A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful when the authorizer is dependent on the cluster itself for bootstrapping, as is the case for the StandardAuthorizer (which stores ACLs in the metadata log.) By default, all listeners included in controller.listener.names will also be early start listeners. A listener should not appear in this list if it accepts external traffic.
The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage.
Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Listener names and port numbers must be unique. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093
The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used
The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used
The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.
The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level config max.message.bytes.
This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is placed in the first log directory from log.dirs.
This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot. The default value is 20971520. To generate snapshots based on the time elapsed, see the metadata.log.max.snapshot.interval.ms configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.
This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not included in the latest snapshot. A value of zero disables time based snapshot generation. The default value is 3600000. To generate snapshots based on the number of metadata bytes, see the metadata.log.max.record.bytes.between.snapshots configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.
The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.
The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.
When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
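For instance, the typical scenario above could be created as follows (a sketch: the broker address and topic name are placeholders, and the flags assume a reasonably recent kafka-topics.sh):

    bin/kafka-topics.sh --create --topic my-topic \
      --bootstrap-server localhost:9092 \
      --partitions 3 --replication-factor 3 \
      --config min.insync.replicas=2

The producer would then set acks=all to get the stronger durability guarantee.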
The node ID associated with the roles this process is playing when `process.roles` is non-empty. This is required configuration when running in KRaft mode.
The number of threads that the server uses for receiving requests from the network and sending responses to the network. Note: each listener (except for controller listener) creates its own thread pool.
Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound by num.replica.fetchers multiplied by the number of brokers in the cluster. Increasing this value can increase the degree of I/O parallelism in the follower and leader broker at the cost of higher CPU and memory utilization.
Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.
For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic. For standalone consumers (using manual assignment), offsets will be expired after this retention period has elapsed since the time of last commit. Note that when a group is deleted via the delete-group request, its committed offsets will also be deleted without extra retention period; also when a topic is deleted via the delete-topic request, upon propagated metadata update any group's committed offsets for that topic will also be deleted without extra retention period.
The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.
The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applicable for clusters in KRaft (Kafka Raft) mode (instead of ZooKeeper). Leave this config undefined or empty for Zookeeper clusters.
The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low throughput topics
If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, then the broker will return an error in InitProducerIdRequest. This prevents a client from using too large of a timeout, which can stall consumers reading from topics included in the transaction.
Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).
The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.
The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. Transactional IDs will not expire while the transaction is still ongoing.
Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000
Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying.
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or set to empty string, brokers will disable the delegation token support.
The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.
The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.
The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.
Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time.
Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a new version. Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check MetadataVersion for the full list.
The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan).
Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size.
Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions
The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.
The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact"
Specify the message format version the broker will use to append messages to the logs. The value should be a valid MetadataVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check MetadataVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.
The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.
The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate. Broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached.
The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections. Broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if broker-wide limit is reached. The least recently used connection on another listener will be closed in this case.
The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.
The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up.
The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.
The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.
The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.
JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.
The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.
The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.
The maximum receive size allowed before and during initial SASL authentication. Default receive size is 512KB. GSSAPI limits requests to 64K, but we allow up to 512KB by default for custom SASL mechanisms. In practice, PLAIN, SCRAM and OAUTH mechanisms can use much smaller limits.
Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time.
The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.
The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.
The maximum number of pending connections on the socket. On Linux, you may also need to configure the `somaxconn` and `tcp_max_syn_backlog` kernel parameters accordingly for the configuration to take effect.
A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
Configures the Kafka broker to request client authentication. The following settings are common:
ssl.client.auth=required If set to required, client authentication is required.
ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set, a client can choose not to provide authentication information about itself.
ssl.client.auth=none This means client authentication is not needed.
The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.
The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates
Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'
The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.
The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.
Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property.
Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty); other values to set may include zookeeper.ssl.cipher.suites, zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm, zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type, zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location, zookeeper.ssl.truststore.password, zookeeper.ssl.truststore.type
Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase).
Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to ZooKeeper will fail.
Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore.
Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase).
Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase).
Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore.
The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface.
The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization.
Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.
The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, the <user> and <client-id> quotas that are stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.
Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.
The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface.
A list of classes to use as Yammer metrics custom reporters. The reporters should implement the kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends the kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention.
Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: `INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fallback to the generic config (i.e. ssl.keystore.location). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use.
This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.
This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is 0, no-op records are not appended to the metadata partition. The default value is 500.
A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise.
The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transaction associated to them is still ongoing. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic's retention settings. Setting this value the same or higher than delivery.timeout.ms can help prevent expiration during retries and protect against message duplication, but the default should be reasonable for most use cases.
The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.
The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.
The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory
A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.
Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name).
Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property.
Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank).
Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name).
Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named system property.zookeeper.ssl.protocol
(Type: string; Default: TLSv1.2; Importance: low; Update Mode: read-only)
More details about broker configuration can be found in the scala class kafka.server.KafkaConfig.
From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker. See the Dynamic Update Mode column in Broker Configs for the update mode of each broker config.
read-only: Requires a broker restart for update
per-broker: May be updated dynamically for each broker
cluster-wide: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.
To alter the current broker configs for broker id 0 (for example, the number of log cleaner threads):
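A typical invocation might look like the following (the bootstrap server localhost:9092 is a placeholder for your own):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2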
To describe the current dynamic broker configs for broker id 0:
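For example (same assumed bootstrap server):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe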
To delete a config override and revert to the statically configured or default value for broker id 0 (for example,
the number of log cleaner threads):
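For example (same assumed bootstrap server):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads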
Some configs may be configured as a cluster-wide default to maintain consistent values across the whole cluster. All brokers
in the cluster will process the cluster default update. For example, to update log cleaner threads on all brokers:
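For example (same assumed bootstrap server):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2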
To describe the currently configured dynamic cluster-wide default configs:
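For example (same assumed bootstrap server):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe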
All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing).
If a config value is defined at different levels, the following order of precedence is used:
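From highest to lowest, the precedence is:
Dynamic per-broker config stored in ZooKeeper
Dynamic cluster-wide default config stored in ZooKeeper
Static broker config from server.properties
Kafka default, see broker configs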
Password config values that are dynamically updated are encrypted before storing in ZooKeeper. The broker config password.encoder.secret must be configured in server.properties to enable dynamic update of password configs. The secret may be different on different brokers.
The secret used for password encoding may be rotated with a rolling restart of brokers. The old secret used for encoding passwords currently in ZooKeeper must be provided in the static broker config password.encoder.old.secret and the new secret must be provided in password.encoder.secret. All dynamic password configs stored in ZooKeeper will be re-encoded with the new secret when the broker starts up.
In Kafka 1.1.x, all dynamically updated password configs must be provided in every alter request when updating configs using kafka-configs.sh even if the password config is not being altered. This constraint will be removed in a future release.
Updating Password Configs in ZooKeeper Before Starting Brokers
From Kafka 2.0.0 onwards, kafka-configs.sh enables dynamic broker configs to be updated using ZooKeeper before starting brokers for bootstrapping. This enables all password configs to be stored in encrypted form, avoiding the need for clear passwords in server.properties. The broker config password.encoder.secret must also be specified if any password configs are included in the alter command. Additional encryption parameters may also be specified. Password encoder configs will not be persisted in ZooKeeper. For example, to store the SSL key password for listener INTERNAL on broker 0:
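A typical invocation might look like the following (the ZooKeeper connect string, key password and encoder secret are placeholders):

> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type brokers --entity-name 0 --alter --add-config 'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'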
The configuration listener.name.internal.ssl.key.password will be persisted in ZooKeeper in encrypted form using the provided encoder configs. The encoder secret and iterations are not persisted in ZooKeeper.
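Updating SSL Keystore of an Existing Listener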
Brokers may be configured with SSL keystores with short validity periods to reduce the risk of compromised certificates.
Keystores may be updated dynamically without restarting the broker. The config name must be prefixed with the listener prefix listener.name.{listenerName}. so that only the keystore config of a specific listener is updated. The following configs may be updated in a single alter request at per-broker level:
ssl.keystore.type
ssl.keystore.location
ssl.keystore.password
ssl.key.password
If the listener is the inter-broker listener, the update is allowed only if the new keystore is trusted by the truststore
configured for that listener. For other listeners, no trust validation is performed on the keystore by the broker. Certificates
must be signed by the same certificate authority that signed the old certificate to avoid any client authentication failures.
Updating SSL Truststore of an Existing Listener
Broker truststores may be updated dynamically without restarting the broker to add or remove certificates.
Updated truststore will be used to authenticate new client connections. The config name must be prefixed with the listener prefix listener.name.{listenerName}. so that only the truststore config of a specific listener is updated. The following configs may be updated in a single alter request at per-broker level:
ssl.truststore.type
ssl.truststore.location
ssl.truststore.password
If the listener is the inter-broker listener, the update is allowed only if the existing keystore for that listener is trusted by
the new truststore. For other listeners, no trust validation is performed by the broker before the update. Removal of CA certificates
used to sign client certificates from the new truststore can lead to client authentication failures.
Updating Default Topic Configuration
Default topic configuration options used by brokers may be updated without broker restart. The configs are applied to topics
without a topic config override for the equivalent per-topic config. One or more of these configs may be overridden at
cluster-default level used by all brokers.
log.segment.bytes
log.roll.ms
log.roll.hours
log.roll.jitter.ms
log.roll.jitter.hours
log.index.size.max.bytes
log.flush.interval.messages
log.flush.interval.ms
log.retention.bytes
log.retention.ms
log.retention.minutes
log.retention.hours
log.index.interval.bytes
log.cleaner.delete.retention.ms
log.cleaner.min.compaction.lag.ms
log.cleaner.max.compaction.lag.ms
log.cleaner.min.cleanable.ratio
log.cleanup.policy
log.segment.delete.delay.ms
unclean.leader.election.enable
min.insync.replicas
max.message.bytes
compression.type
log.preallocate
log.message.timestamp.type
log.message.timestamp.difference.max.ms
From Kafka version 2.0.0 onwards, unclean leader election is automatically enabled by the controller when the config unclean.leader.election.enable is dynamically updated. In Kafka version 1.1.x, changes to unclean.leader.election.enable take effect only when a new controller is elected. Controller re-election may be forced by running:
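For example, assuming ZooKeeper is reachable at localhost (newer ZooKeeper shells use deleteall in place of rmr):

> bin/zookeeper-shell.sh localhost
rmr /controller

Updating Log Cleaner Configs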
Log cleaner configs may be updated dynamically at cluster-default level used by all brokers. The changes take effect
on the next iteration of log cleaning. One or more of these configs may be updated:
log.cleaner.threads
log.cleaner.io.max.bytes.per.second
log.cleaner.dedupe.buffer.size
log.cleaner.io.buffer.size
log.cleaner.io.buffer.load.factor
log.cleaner.backoff.ms
Updating Thread Configs
The size of various thread pools used by the broker may be updated dynamically at cluster-default level used by all brokers.
Updates are restricted to the range currentSize / 2 to currentSize * 2 to ensure that config updates are handled gracefully.
num.network.threads
num.io.threads
num.replica.fetchers
num.recovery.threads.per.data.dir
log.cleaner.threads
background.threads
Updating ConnectionQuota Configs
The maximum number of connections allowed for a given IP/host by the broker may be updated dynamically at cluster-default level used by all brokers.
The changes will apply to new connection creations, and the existing connection counts will be taken into account by the new limits.
max.connections.per.ip
max.connections.per.ip.overrides
Adding and Removing Listeners
Listeners may be added or removed dynamically. When a new listener is added, security configs of the listener must be provided as listener configs with the listener prefix listener.name.{listenerName}.. If the new listener uses SASL, the JAAS configuration of the listener must be provided using the JAAS configuration property sasl.jaas.config with the listener and mechanism prefix. See JAAS configuration for Kafka brokers for details.
In Kafka version 1.1.x, the listener used by the inter-broker listener may not be updated dynamically. To update the inter-broker listener to a new listener, the new listener may be added on all brokers without restarting the broker. A rolling restart is then required to update inter.broker.listener.name.
In addition to all the security configs of new listeners, the following configs may be updated dynamically at per-broker level:
listeners
advertised.listeners
listener.security.protocol.map
Inter-broker listener must be configured using the static broker configuration inter.broker.listener.name or security.inter.broker.protocol.
Configurations pertinent to topics have both a server default as well as an optional per-topic override. If no per-topic configuration is given the server default is used. The override can be set at topic creation time by giving one or more --config options. This example creates a topic named my-topic with a custom max message size and flush rate:
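A typical invocation, assuming a broker at localhost:9092:

> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1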
Overrides can also be changed or set later using the alter configs command. This example updates the max message size for my-topic:
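For example (same assumed broker address):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --add-config max.message.bytes=128000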
To check overrides set on the topic you can do:
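> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe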
To remove an override you can do:
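> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --delete-config max.message.bytes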
The following are the topic-level configurations. The server's default configuration for this property is given under the Server Default Property heading. A given server default config value only applies to a topic if it does not have an explicit topic config override.
This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old segments when their retention time or size limit has been reached. The "compact" policy will enable log compaction, which retains the latest value for each key. It is also possible to specify both policies in a comma-separated list (e.g. "delete,compact"). In this case, old segments will be discarded per the retention time and size configuration, while retained segments will be compacted.
Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.
The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan).
This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section).
This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient.
A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.
This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.
A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.
The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.
[DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is always assumed to be `3.0` if `inter.broker.protocol.version` is 3.0 or higher (the actual config value is ignored). Otherwise, the value should be a valid ApiVersion. Some examples are: 0.10.0, 1.1, 2.8, 3.0. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.
The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.
This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period.
When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
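As a sketch of that scenario (the topic name and broker address are illustrative):

> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic durable-topic --partitions 3 --replication-factor 3 --config min.insync.replicas=2

with the producer configured with acks=all.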
This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.
This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied.
This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention.
This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.
This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.
This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.
This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.
The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, lz4, or zstd. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).
Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior.
Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.
Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates
Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'
The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.
The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.
The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.
No attempt will be made to batch records larger than this size.
Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.
A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.
Note: This setting gives the upper bound of the batch size to be sent. If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up. The linger.ms setting defaults to 0, which means we'll immediately send out a record even if the accumulated batch size is under this batch.size setting.
Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.
An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms.
The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay—that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.
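For instance, a producer that tolerates a few milliseconds of extra latency in exchange for fewer, larger requests might use settings like:

linger.ms=5
batch.size=16384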
The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout.
The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.
A class to use to determine which partition records are sent to when producing. Available options are:
If not set, the default partitioning logic is used. This strategy will try sticking to a partition until at least batch.size bytes are produced to the partition. It works with the strategy:
If no partition is specified but a key is present, choose a partition based on a hash of the key
If no partition or key is present, choose the sticky partition that changes when at least batch.size bytes are produced to the partition.
org.apache.kafka.clients.producer.RoundRobinPartitioner: This partitioning strategy sends each record in a series of consecutive records to a different partition (no matter if the 'key' is provided or not), until we run out of partitions and start over again. Note: There's a known issue that will cause uneven distribution when a new batch is created. Please check KAFKA-9965 for more detail.
Implementing the org.apache.kafka.clients.producer.Partitioner interface allows you to plug in a custom partitioner.
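As a minimal sketch of a custom partitioner (the class name and routing rule are purely illustrative):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class ExamplePartitioner implements Partitioner {
    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Route all unkeyed records to partition 0.
        if (keyBytes == null || numPartitions == 1) {
            return 0;
        }
        // Spread keyed records over the remaining partitions using a
        // non-negative, stable hash of the serialized key.
        int hash = java.util.Arrays.hashCode(keyBytes) & 0x7fffffff;
        return 1 + hash % (numPartitions - 1);
    }

    @Override
    public void close() { }
}

The class would then be registered via the producer's partitioner.class configuration, using the fully qualified class name.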
When set to 'true' the producer won't use record keys to choose a partition. If 'false', producer would choose a partition based on a hash of the key when a key is present. Note: this setting has no effect if a custom partitioner is used.
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.
JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.
The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.
The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.
The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.
The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.
The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed:
acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost.
acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting.
Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.
When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'.
Idempotence is enabled by default if no conflicting configurations are set. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown.
A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.
The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this configuration is set to be greater than 1 and enable.idempotence is set to false, there is a risk of message reordering after a failed send due to retries (i.e., if retries are enabled); if retries are disabled or if enable.idempotence is set to true, ordering will be preserved. Additionally, enabling idempotence requires the value of this configuration to be less than or equal to 5. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.
A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster brokers. If 'false', the producer will try to distribute messages uniformly. Note: this setting has no effect if a custom partitioner is used.
If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats that partition as not available. If the value is 0, this logic is disabled. Note: this setting has no effect if a custom partitioner is used or partitioner.adaptive.partitioning.enable is set to 'false'.
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.
The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.
A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory
The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
The maximum amount of time in milliseconds that a transaction will remain open before the coordinator proactively aborts it. The start of the transaction is set at the time that the first partition is added to it. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an InvalidTxnTimeoutException error.
The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionalId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers, which is the recommended setting for production; for development you can change this by adjusting the broker setting transaction.state.log.replication.factor.
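To make the transactional flow concrete, here is a minimal producer sketch; the bootstrap address, TransactionalId, and topic name are illustrative placeholders, not values from this document.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TransactionalProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder address
            props.put("transactional.id", "payments-app-1");   // placeholder TransactionalId
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();  // waits for transactions from prior sessions with this id to finish
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("payments", "key", "value"));
                producer.commitTransaction(); // or producer.abortTransaction() on error
            }
        }
    }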
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as that many byte(s) of data is available or the fetch request times out waiting for data to arrive. Setting this to a larger value will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.
A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.
The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
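As a rough illustration of the 1/3 guideline above, a consumer might pair a 45-second session timeout with a 15-second heartbeat interval; the numbers are examples, not defaults taken from this document.

    import java.util.Properties;

    public class GroupTimeoutsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("session.timeout.ms", "45000");    // must fall in the broker-allowed range
            props.put("heartbeat.interval.ms", "15000"); // ~1/3 of session.timeout.ms
        }
    }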
Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates
Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'
The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.
The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.
Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it using `auto.create.topics.enable` broker configuration. This configuration must be set to `false` when using brokers older than 0.11.0
What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted); a consumer sketch follows the options below:
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw exception to the consumer if no previous offset is found for the consumer's group
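A minimal consumer sketch using the earliest reset policy; the group id, topic, and address are placeholders.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetResetSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "payments-readers");        // placeholder
            props.put("auto.offset.reset", "earliest");       // start from the beginning when no committed offset exists
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("payments"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
                }
            }
        }
    }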
Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.
Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter.
Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic.
The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.
Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions.
Further, when in read_committed mode, the seekToEnd method will return the LSO.
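A brief sketch of opting into committed-only reads; only the isolation.level line is the point here, and everything else about the consumer setup is assumed.

    import java.util.Properties;

    public class IsolationLevelSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("isolation.level", "read_committed"); // poll() stops at the last stable offset (LSO)
            // read_uncommitted (the default) would also return records from aborted transactions
        }
    }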
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown.
The maximum number of records returned in a single call to poll(). Note that max.poll.records does not impact the underlying fetching behavior. The consumer will cache the records from each fetch request and returns them incrementally from each poll.
A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used. Available options are:
org.apache.kafka.clients.consumer.RangeAssignor: Assigns partitions on a per-topic basis.
org.apache.kafka.clients.consumer.RoundRobinAssignor: Assigns partitions to consumers in a round-robin fashion.
org.apache.kafka.clients.consumer.StickyAssignor: Guarantees an assignment that is maximally balanced while preserving as many existing partition assignments as possible.
org.apache.kafka.clients.consumer.CooperativeStickyAssignor: Follows the same StickyAssignor logic, but allows for cooperative rebalancing.
The default assignor is [RangeAssignor, CooperativeStickyAssignor], which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list.
Implementing the org.apache.kafka.clients.consumer.ConsumerPartitionAssignor interface allows you to plug in a custom assignment strategy; a configuration sketch follows the type summary below.
Type: list; Default: class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
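For example, a group can opt into cooperative rebalancing by naming the cooperative assignor; a custom ConsumerPartitionAssignor implementation would be listed the same way. Sketch only; nothing here is cluster-specific.

    import java.util.Properties;

    public class AssignorConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // During a rolling upgrade, list both assignors first, then remove RangeAssignor in a second bounce.
            props.put("partition.assignment.strategy",
                    "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");
        }
    }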
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
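For instance, a client using the PLAIN mechanism could embed the login module and its options directly in the config value; the credentials below are placeholders.

    import java.util.Properties;

    public class JaasConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            // loginModuleClass controlFlag (optionName=optionValue)*;
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"alice\" password=\"alice-secret\";"); // placeholder credentials
        }
    }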
The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.
The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.
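Putting the OAUTHBEARER endpoint settings together, a broker-side validation setup and a client-side token endpoint might look as follows. All URLs and the audience/issuer values are placeholders, and the provider-specific login module options are omitted; the key names used here are the standard ones for the settings described above, stated as an assumption.

    import java.util.Properties;

    public class OauthEndpointsSketch {
        public static void main(String[] args) {
            // Broker side: where to fetch keys for JWT signature verification, plus claim checks.
            Properties broker = new Properties();
            broker.put("sasl.oauthbearer.jwks.endpoint.url", "https://auth.example.com/jwks");    // placeholder
            broker.put("sasl.oauthbearer.expected.audience", "kafka-cluster");                    // placeholder
            broker.put("sasl.oauthbearer.expected.issuer", "https://auth.example.com/");          // placeholder

            // Client side: the issuer's token endpoint used to obtain a JWT at login.
            Properties client = new Properties();
            client.put("sasl.mechanism", "OAUTHBEARER");
            client.put("sasl.oauthbearer.token.endpoint.url", "https://auth.example.com/oauth2/token"); // placeholder
        }
    }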
The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.
The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.
The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.
The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.
Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'
The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
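These two settings trade latency for throughput together: the server responds once fetch.min.bytes have accumulated or this wait elapses, whichever comes first. An illustrative pairing (values are examples, not defaults):

    import java.util.Properties;

    public class FetchTuningSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("fetch.min.bytes", "65536"); // wait for ~64 KiB to accumulate...
            props.put("fetch.max.wait.ms", "500"); // ...but never block longer than 500 ms
        }
    }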
A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
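A toy model of the exponential reconnect backoff with 20% jitter described above; this is illustrative arithmetic under assumed values, not Kafka's internal implementation.

    import java.util.Random;

    public class BackoffModelSketch {
        public static void main(String[] args) {
            long base = 50;   // assumed reconnect.backoff.ms
            long max = 1000;  // assumed reconnect.backoff.max.ms
            Random random = new Random();
            for (int failures = 0; failures < 6; failures++) {
                long backoff = Math.min(max, base * (1L << failures));    // doubles per consecutive failure, capped
                double jitter = 1.0 + (random.nextDouble() - 0.5) * 0.4;  // +/-20% to avoid connection storms
                System.out.printf("failure %d -> wait ~%d ms%n", failures, (long) (backoff * jitter));
            }
        }
    }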
The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.
The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.
A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory
The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones. To enable exactly-once source support on a new cluster, set this property to 'enabled'. To enable support on an existing cluster, first set this property to 'preparing' on every worker in the cluster, then set it to 'enabled'. A rolling upgrade may be used for both changes. For more information on this feature, see the exactly-once source support documentation.
The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.
The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates
Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'
The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.
The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.
Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.
Class name or alias of an implementation of ConnectorClientConfigOverridePolicy. Defines what client configurations can be overridden by the connector. The default implementation is `All`, meaning connector configurations can override all client properties. The other possible policies in the framework include `None` to disallow connectors from overriding client properties, and `Principal` to allow connectors to override only client principals.
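Under the default `All` policy, a connector can override its embedded clients via prefixed properties; the producer.override. prefix used below is an assumption stated for illustration, as are the connector name and class.

    import java.util.HashMap;
    import java.util.Map;

    public class ClientOverrideSketch {
        public static void main(String[] args) {
            // Connector configs are normally submitted as JSON to the Connect REST API;
            // this map just illustrates the shape of such a config.
            Map<String, String> config = new HashMap<>();
            config.put("name", "example-source");                     // placeholder
            config.put("connector.class", "FileStreamSource");        // placeholder
            config.put("producer.override.compression.type", "lz4");  // honored only if the policy allows it
        }
    }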
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.
The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.
The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.
The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.
When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.
Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.
Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.
List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property).
Type: list; Default: null; Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443
Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.
An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the ConfigProvider interface allows you to replace variable references in connector configurations, such as for externalized secrets.
HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.
The algorithm to use for generating internal request keys. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.
Type: string; Default: HmacSHA256; Valid Values: Any KeyGenerator algorithm supported by the worker JVM
The algorithm used to sign internal requests. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.
A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signature.algorithm property. The algorithm(s) '[HmacSHA256]' will be used as a default on JVMs that provide them; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.
Type: list; Default: HmacSHA256; Valid Values: A list of one or more MAC algorithms, each supported by the worker JVM
List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084
Type: list; Default: http://:8083; Valid Values: List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. This property has no effect for source connectors running with exactly-once support.
List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: a) directories immediately containing jars with plugins and their dependencies, b) uber-jars with plugins and their dependencies, c) directories immediately containing the package directory structure of classes of plugins and their dependencies. Note: symlinks will be followed to discover dependencies or plugins. Example: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors. Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables.
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma
Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the ConnectRestExtension interface allows you to inject user-defined resources, such as filters, into Connect's REST API. Typically used to add custom capabilities like logging, security, etc.
The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.
The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned
The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.
The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.
A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
Configures the Kafka broker to request client authentication. The following settings are common (a broker-side sketch follows this list):
ssl.client.auth=required If set to required, client authentication is required.
ssl.client.auth=requested This means client authentication is optional; unlike required, the client can choose not to provide authentication information about itself.
ssl.client.auth=none This means client authentication is not needed.
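A broker-side sketch of requiring mutual TLS; in practice these values live in the broker's properties file, and the paths and passwords below are placeholders.

    import java.util.Properties;

    public class ClientAuthSketch {
        public static void main(String[] args) {
            Properties broker = new Properties();
            broker.put("ssl.client.auth", "required");                              // clients must authenticate
            broker.put("ssl.keystore.location", "/etc/kafka/broker.keystore.jks");  // placeholder path
            broker.put("ssl.keystore.password", "changeit");                        // placeholder
            broker.put("ssl.truststore.location", "/etc/kafka/broker.truststore.jks"); // placeholder path
            broker.put("ssl.truststore.password", "changeit");                      // placeholder
        }
    }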
The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory
The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, then they are waited on sequentially.
Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with `topic.creation.` properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically.
Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter
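For example, a minimal sink connector configuration using the short alias; the connector name, topic, and file path are placeholders.

    import java.util.HashMap;
    import java.util.Map;

    public class ConnectorClassSketch {
        public static void main(String[] args) {
            Map<String, String> config = new HashMap<>();
            config.put("name", "local-file-sink");           // placeholder
            config.put("connector.class", "FileStreamSink"); // alias for org.apache.kafka.connect.file.FileStreamSinkConnector
            config.put("tasks.max", "1");
            config.put("topics", "payments");                // placeholder topic
            config.put("file", "/tmp/payments.out");         // placeholder path
        }
    }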
The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.
The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.
Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.
If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.
Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.
Permitted values are requested, required. If set to "required", forces a preflight check for the connector to ensure that it can provide exactly-once semantics with the given configuration. Some connectors may be capable of providing exactly-once semantics but not signal to Connect that they support this; in that case, documentation for the connector should be consulted carefully before creating it, and the value for this property should be set to "requested". Additionally, if the value is set to "required" but the worker that performs preflight validation does not have exactly-once support enabled for source connectors, requests to create or validate the connector will fail.
Permitted values are: poll, interval, connector. If set to 'poll', a new producer transaction will be started and committed for every batch of records that each task from this connector provides to Connect. If set to 'connector', relies on connector-defined transaction boundaries; note that not all connectors are capable of defining their own transaction boundaries, and in that case, attempts to instantiate a connector with this value will fail. Finally, if set to 'interval', commits transactions only after a user-defined time interval has passed.
If 'transaction.boundary' is set to 'interval', determines the interval for producer transaction commits by connector tasks. If unset, defaults to the value of the worker-level 'offset.flush.interval.ms' property. It has no effect if a different transaction.boundary is specified.
The name of a separate offsets topic to use for this connector. If empty or not specified, the worker’s global offsets topic name will be used. If specified, the offsets topic will be created if it does not already exist on the Kafka cluster targeted by this connector (which may be different from the one used for the worker's global offsets topic if the bootstrap.servers property of the connector's producer has been overridden from the worker's). Only applicable in distributed mode; in standalone mode, setting this property will have no effect.
Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter
Regular expression giving topics to consume. Under the hood, the regex is compiled to a java.util.regex.Pattern. Only one of topics or topics.regex should be specified.
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.
The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.
The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.
The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.
Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.
If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.
Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.
The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. The topic name is blank by default, which means that no messages are to be recorded in the DLQ.
If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers from the original record, all error context header keys will start with __connect.errors.
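Combining the error-handling properties above, a tolerant sink connector that routes bad records to a dead letter queue might be configured as follows; the DLQ topic name is a placeholder.

    import java.util.HashMap;
    import java.util.Map;

    public class DlqConfigSketch {
        public static void main(String[] args) {
            Map<String, String> config = new HashMap<>();
            config.put("errors.tolerance", "all");                               // skip problematic records
            config.put("errors.log.enable", "true");                             // log every tolerated error
            config.put("errors.deadletterqueue.topic.name", "dlq-payments");     // placeholder DLQ topic
            config.put("errors.deadletterqueue.context.headers.enable", "true"); // add __connect.errors.* headers
        }
    }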
An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active task assignment. Upon assignment, it will still restore the rest of the changelog before processing. To avoid a pause in processing during rebalances, this config should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.
An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern <client.id>-StreamThread-<threadSequenceNumber>-<consumer|producer|restore-consumer>.
Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.
Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if the default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde.
Default class for key that implements the interface. This configuration will be read if and only if configuration is set to Note when list serde class is used, one needs to set the inner serde class that implements the interface via 'default.list.key.serde.inner'java.util.Listdefault.key.serdeorg.apache.kafka.common.serialization.Serdes.ListSerdeorg.apache.kafka.common.serialization.Serde
Default inner class of list serde for value that implements the interface. This configuration will be read if and only if configuration is set to org.apache.kafka.common.serialization.Serdedefault.value.serdeorg.apache.kafka.common.serialization.Serdes.ListSerde
Default class for value that implements the interface. This configuration will be read if and only if configuration is set to Note when list serde class is used, one needs to set the inner serde class that implements the interface via 'default.list.value.serde.inner'java.util.Listdefault.value.serdeorg.apache.kafka.common.serialization.Serdes.ListSerdeorg.apache.kafka.common.serialization.Serde
Default serializer / deserializer class for value that implements the interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as wellorg.apache.kafka.common.serialization.Serdeorg.apache.kafka.common.serialization.Serde
This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in milliseconds a stream task will stay idle when it is fully caught up on some (but not all) input partitions to wait for producers to send additional records and avoid potential out-of-order record processing across multiple input streams. The default (zero) does not wait for producers to send more records, but it does wait to fetch data that is already present on the brokers. This default means that for records that are already present on the brokers, Streams will process them in timestamp order. Set to -1 to disable idling entirely and process any locally available data, even though doing so may produce out-of-order processing.
The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Must be at least 1. Note that one warmup replica corresponds to one Stream Task. Furthermore, note that each warmup replica can only be promoted to an active task during a rebalance (normally during a so-called probing rebalance, which occurs at a frequency specified by the `probing.rebalance.interval.ms` config). This means that the maximum rate at which active tasks can be migrated from one Kafka Streams instance to another instance is given by (`max.warmup.replicas` / `probing.rebalance.interval.ms`).
The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers version 2.5 or higher). Deprecated options are exactly_once (requires brokers version 0.11.0 or higher) and exactly_once_beta (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default, which is the recommended setting for production; for development you can change this by adjusting the broker settings transaction.state.log.replication.factor and transaction.state.log.min.isr.
List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over each client tag dimension.
The replication factor for change log topics and repartition topics created by the stream processing application. The default of -1 (meaning: use broker default replication factor) requires broker version 2.4 or newer.
The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0ms, a task would raise an error for the first internal error. For any timeout larger than 0ms, a task will retry at least once before an error is raised.
A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: "NO_OPTIMIZATION", "OPTIMIZE", or a comma-separated list of specific optimizations: "REUSE_KTABLE_SOURCE_TOPICS", "MERGE_REPARTITION_TOPICS", "SINGLE_STORE_SELF_JOIN". "NO_OPTIMIZATION" by default.
Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.
The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the position (i.e., offsets) of the processor. For exactly-once processing, it means to commit the transaction, which includes saving the position and making the committed data in the output topic visible to consumers with isolation level read_committed. (Note: if processing.guarantee is set to exactly_once_v2 or exactly_once, the default value is 100, otherwise the default value is 30000.)
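To illustrate several of the configs above, here is a minimal sketch of a Streams properties file; the application id, broker addresses, and serde choices are placeholders, not recommendations:
# Identifies this application; also used as the client-id prefix, group id, and changelog topic prefix
application.id=my-streams-app
# Initial brokers used to discover the full cluster
bootstrap.servers=broker1:9092,broker2:9092
# Default serdes for record keys and values
default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
# Exactly-once processing; note this lowers the default commit.interval.ms to 100
processing.guarantee=exactly_once_v2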
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute.
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at least this value since the last purge, but may be delayed until later. (Note: unlike commit.interval.ms, the default for this value remains unchanged when processing.guarantee is set to exactly_once_v2.)
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or `MAX_VALUE` and use corresponding timeout parameters to control how long a client should retry a request.
The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed.
Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4" (for upgrading from the corresponding old version).
Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface. Note that setting this config in a KafkaStreams application would result in an error, as it is meant to be used only from the plain consumer client.
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates
Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'
The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.
The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.
Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.
An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter.
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
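For example, a client using the PLAIN mechanism might set the following; the username and password are placeholders:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";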
The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.
The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.
The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.
The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.
The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.
The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or `MAX_VALUE` and use corresponding timeout parameters to control how long a client should retry a request.
The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.
The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.
The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.
A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory
The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
Kafka supports some configuration that can be enabled through Java system properties. System properties are usually set by passing the -D flag to the Java virtual machine in which Kafka components are running.
Below are the supported system properties.
This system property is used to disable problematic login module usage in SASL JAAS configuration. This property accepts a comma-separated list of loginModule names. By default the com.sun.security.auth.module.JndiLoginModule loginModule is disabled.
If users want to enable JndiLoginModule, they need to explicitly reset the system property as shown below. We advise users to validate configurations and only allow trusted JNDI configurations. For more details, see CVE-2023-25194.
-Dorg.apache.kafka.disallowed.login.modules=
To disable more loginModules, update the system property with comma-separated loginModule names. Make sure to explicitly add the JndiLoginModule module name to the comma-separated list, as shown below.
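For example (the second module name below is a placeholder for whichever additional module you want to disable):
-Dorg.apache.kafka.disallowed.login.modules=com.sun.security.auth.module.JndiLoginModule,com.example.CustomLoginModule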
The message log maintained by the broker is itself just a directory of files, each populated by a sequence of message sets that have been written to disk in the same format used by producers and consumers.
Maintaining this common format allows optimization of the most important operation: the network transfer of persistent log chunks. Modern unix operating systems offer a highly optimized code path for transferring data
out of pagecache to a socket; in Linux this is done with the sendfile system call.
To understand the impact of sendfile, it is important to understand the common data path for transferring data from file to socket:
The operating system reads data from the disk into pagecache in kernel space
The application reads the data from kernel space into a user-space buffer
The application writes the data back into kernel space into a socket buffer
The operating system copies the data from the socket buffer to the NIC buffer, where it is sent over the network
This is clearly inefficient: there are four copies and two system calls. Using sendfile, this re-copying is avoided by allowing the OS to send the data from pagecache directly to the network. So in this optimized
path, only the final copy to the NIC buffer is needed.
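As a minimal, hypothetical illustration of this optimized path (not Kafka's actual code): on the JVM, sendfile is exposed through FileChannel.transferTo. The file name and destination address below are placeholders.
import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class ZeroCopySend {
    public static void main(String[] args) throws Exception {
        // Hypothetical log segment file and destination; placeholders only
        try (FileChannel file = new FileInputStream("segment.log").getChannel();
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9092))) {
            long position = 0;
            long remaining = file.size();
            // transferTo delegates the copy to the OS (sendfile on Linux),
            // so the data never passes through a user-space buffer
            while (remaining > 0) {
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}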
Static membership aims to improve the availability of stream applications, consumer groups, and other applications built on top of the group rebalance protocol.
The rebalance protocol relies on the group coordinator to allocate entity IDs to group members. These generated IDs are ephemeral and will change when members restart and rejoin.
For consumer-based applications, this "dynamic membership" can cause a large percentage of tasks to be re-assigned to different instances during administrative operations
such as code deploys, configuration updates, and periodic restarts. For large state applications, shuffled tasks need a long time to recover their local state before processing,
which causes the application to become partially or entirely unavailable. Motivated by this observation, Kafka's group management protocol allows group members to provide persistent entity IDs.
Group membership remains unchanged based on those IDs, so no rebalance is triggered.
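For example, a consumer opts into static membership by setting a unique group.instance.id per instance; the values below are placeholders:
group.id=my-group
# Must be unique per consumer instance within the group
group.instance.id=consumer-instance-1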
The network layer is a fairly straight-forward NIO server, and will not be described in great detail. The sendfile implementation is done by giving the TransferableRecords interface a writeTo method. This allows the file-backed message set to use the more efficient transferTo implementation instead of an in-process buffered write. The threading model is a single acceptor thread and N processor threads which each handle a fixed number of connections. This design has been pretty thoroughly tested elsewhere and found to be simple to implement and fast. The protocol is kept quite simple to allow for future implementation of clients in other languages.
baseOffset: int64
batchLength: int32
partitionLeaderEpoch: int32
magic: int8 (current magic value is 2)
crc: int32
attributes: int16
bit 0~2:
0: no compression
1: gzip
2: snappy
3: lz4
4: zstd
bit 3: timestampType
bit 4: isTransactional (0 means not transactional)
bit 5: isControlBatch (0 means not a control batch)
bit 6: hasDeleteHorizonMs (0 means baseTimestamp is not set as the delete horizon for compaction)
bit 7~15: unused
lastOffsetDelta: int32
baseTimestamp: int64
maxTimestamp: int64
producerId: int64
producerEpoch: int16
baseSequence: int32
records: [Record]
The use of the message offset as the message ID is unusual. Our original idea was to use a GUID generated by the producer, and maintain a mapping from GUID to offset on each broker. But since a consumer must maintain an ID for each server, the global uniqueness of the GUID provides no value. Furthermore, the complexity of maintaining a mapping from a random ID to an offset requires a heavy-weight index structure which must be synchronized with disk, essentially requiring a full persistent random-access data structure. Thus, to simplify the lookup structure, we decided to use a simple per-partition atomic counter which could be coupled with the partition ID and node ID to uniquely identify a message; this makes the lookup structure simpler, though multiple seeks per consumer request are still likely. However, once we settled on a counter, the jump to directly using the offset seemed natural: after all, both are monotonically increasing integers unique to a partition. Since the offset is hidden from the consumer API, this decision is ultimately an implementation detail, and we went with the more efficient approach.
Reads are done by giving the 64-bit logical offset of a message and an S-byte maximum chunk size. This returns an iterator over the messages contained in the S-byte buffer. S is intended to be larger than any single message, but in the event of an abnormally large message, the read can be retried multiple times, each time doubling the buffer size, until the message is read successfully. A maximum message and buffer size can be specified to make the server reject messages larger than some size, and to give the client a bound on the maximum it needs to ever read to get a complete message. It is likely that the read buffer ends with a partial message; this is easily detected by the size delimiting.
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --execute
Current partition replica assignment
{"version":1,
"partitions":[{"topic":"foo1","partition":0,"replicas":[2,1]},
{"topic":"foo1","partition":1,"replicas":[1,3]},
{"topic":"foo1","partition":2,"replicas":[3,4]},
{"topic":"foo2","partition":0,"replicas":[4,2]},
{"topic":"foo2","partition":1,"replicas":[2,1]},
{"topic":"foo2","partition":2,"replicas":[1,3]}]
}
Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for foo1-0,foo1-1,foo1-2,foo2-0,foo2-1,foo2-2
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] is completed
Reassignment of partition [foo1,1] is still in progress
Reassignment of partition [foo1,2] is still in progress
Reassignment of partition [foo2,0] is completed
Reassignment of partition [foo2,1] is completed
Reassignment of partition [foo2,2] is completed
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --execute
Current partition replica assignment
{"version":1,
"partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]},
{"topic":"foo2","partition":1,"replicas":[3,4]}]
}
Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for foo1-0,foo2-1
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] is completed
Reassignment of partition [foo2,1] is completed
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute
Current partition replica assignment
{"version":1,
"partitions":[{"topic":"foo","partition":0,"replicas":[5]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignment for foo-0
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --verify
Status of partition reassignment:
Reassignment of partition [foo,0] is completed
$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --additional --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
The inter-broker throttle limit was set to 700000000 B/s
> bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --verify --reassignment-json-file bigger-cluster.json
Status of partition reassignment:
Reassignment of partition [my-topic,1] is completed
Reassignment of partition [my-topic,0] is completed
Clearing broker-level throttles on brokers 1,2,3
Clearing topic-level throttles on topic my-topic
> bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers
Configs for brokers '2' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000
This shows the throttle applied to both the leader and follower side of the replication protocol. By default both sides
are assigned the same throttled throughput value.
To view the list of throttled replicas:
> bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type topics
Configs for topic 'my-topic' are leader.replication.throttled.replicas=1:102,0:101,
follower.replication.throttled.replicas=1:101,0:102
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1
Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type clients --entity-name clientA
Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users
Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-type clients
Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
# Run in secondary's data center, reading from the remote `primary` cluster
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters secondary
# Basic settings
clusters: west-1, west-2, east-1, east-2, north-1, north-2
west-1.bootstrap.servers = ...
west-2.bootstrap.servers = ...
east-1.bootstrap.servers = ...
east-2.bootstrap.servers = ...
north-1.bootstrap.servers = ...
north-2.bootstrap.servers = ...
# Replication flows for Active/Active in West DC
west-1->west-2.enabled = true
west-2->west-1.enabled = true
# Replication flows for Active/Active in East DC
east-1->east-2.enabled = true
east-2->east-1.enabled = true
# Replication flows for Active/Active in North DC
north-1->north-2.enabled = true
north-2->north-1.enabled = true
# Replication flows for XDCR via west-1, east-1, north-1
west-1->east-1.enabled = true
west-1->north-1.enabled = true
east-1->west-1.enabled = true
east-1->north-1.enabled = true
north-1->west-1.enabled = true
north-1->east-1.enabled = true
Then, in each data center, start one or more MirrorMaker processes as follows:
# In West DC:
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters west-1 west-2
# In East DC:
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters east-1 east-2
# In North DC:
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters north-1 north-2
# Note: The cluster alias us-west must be defined in the configuration file
$ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters us-west
# MBean: kafka.connect.mirror:type=MirrorSourceConnector,target=([-.w]+),topic=([-.w]+),partition=([0-9]+)
record-count # number of records replicated source -> target
record-age-ms # age of records when they are replicated
record-age-ms-min
record-age-ms-max
record-age-ms-avg
replication-latency-ms # time it takes records to propagate source->target
replication-latency-ms-min
replication-latency-ms-max
replication-latency-ms-avg
byte-rate # average number of bytes/sec in replicated records
# MBean: kafka.connect.mirror:type=MirrorCheckpointConnector,source=([-.w]+),target=([-.w]+)
checkpoint-latency-ms # time it takes to replicate consumer offsets
checkpoint-latency-ms-min
checkpoint-latency-ms-max
checkpoint-latency-ms-avg
# Grant permissions to user Alice
$ bin/kafka-acls.sh --bootstrap-server broker1:9092 --add --allow-principal User:Alice --producer --resource-pattern-type prefixed --topic acme.infosec.
Client quotas: Kafka supports different types of (per-user principal) client quotas. Because a client's quotas apply irrespective of which topics the client is writing to or reading from, they are a convenient and effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a user's impact on broker CPU usage by limiting the time a broker spends on the request handling path for that user, after which throttling kicks in. In many situations, isolating users with request rate quotas has a bigger impact in multi-tenant clusters than setting incoming/outgoing network bandwidth quotas, because excessive broker CPU usage for processing requests reduces the effective bandwidth the broker can serve. Furthermore, administrators can also define quotas on topic operations (such as create, delete, and alter) to prevent Kafka clusters from being overwhelmed by highly concurrent topic operations (see KIP-599 and the quota type controller_mutation_rate).
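For example, per-user network bandwidth and request rate quotas can be set with kafka-configs.sh; the values and user name are placeholders:
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1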
# ZooKeeper
zookeeper.connect=[list of ZooKeeper servers]
# Log configuration
num.partitions=8
default.replication.factor=3
log.dir=[List of directories. Kafka should have its own dedicated disk(s) or SSD(s).]
# Other configurations
broker.id=[An integer. Start with 0 and increment by 1 for each new broker.]
listeners=[list of listeners]
auto.create.topics.enable=false
min.insync.replicas=2
queued.max.requests=[number of concurrent requests]
Rejected byte rate per topic, due to the record batch size being greater than max.message.bytes configuration. Omitting 'topic=(...)' will yield the all-topic rate.
Message validation failure rate due to no key specified for compacted topic
If a broker goes down, ISR for some of the partitions will
shrink. When that broker is up again, ISR will be expanded
once the replicas are fully caught up. Other than that, the
expected value for both ISR shrink rate and expansion rate is 0.
The number of connections disconnected on a processor due to a client not re-authenticating and then using the connection beyond its expiration time for anything other than re-authentication
ideally 0 when re-authentication is enabled, implying there are no longer any older, pre-2.2.0 clients connecting to this (listener, processor) combination
The total number of connections disconnected, across all processors, due to a client not re-authenticating and then using the connection beyond its expiration time for anything other than re-authentication
Two attributes. throttle-time indicates the amount of time in ms the client was throttled. Ideally = 0.
byte-rate indicates the data produce/consume rate of the client in bytes/sec.
For (user, client-id) quotas, both user and client-id are specified. If per-client-id quota is applied to the client, user is not specified. If per-user quota is applied, client-id is not specified.
Request quota metrics per (user, client-id), user or client-id
Two attributes. throttle-time indicates the amount of time in ms the client was throttled. Ideally = 0.
request-time indicates the percentage of time spent in broker network and I/O threads to process requests from client group.
For (user, client-id) quotas, both user and client-id are specified. If per-client-id quota is applied to the client, user is not specified. If per-user quota is applied, client-id is not specified.
Requests exempt from throttling
kafka.server:type=Request
exempt-throttle-time indicates the percentage of time spent in broker network and I/O threads to process requests
that are exempt from throttling.
maximum time, in milliseconds, it took to load offsets and group metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
average time, in milliseconds, it took to load offsets and group metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
maximum time, in milliseconds, it took to load transaction metadata from the transaction log partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
average time, in milliseconds, it took to load transaction metadata from the transaction log partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
The difference between now and the timestamp of the last record from the cluster metadata partition that was applied by the controller.
For active Controllers the value of this lag is always zero.
The average end-to-end latency of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.
Every broker and controller must set the controller.quorum.voters property. The node ID supplied in the controller.quorum.voters property must match the corresponding ID on the controller servers. For example, on controller 1, node.id must be set to 1, and so on. Each node ID must be unique across all the servers in a particular cluster. No two servers can have the same node ID, regardless of their process.roles values.
In a KRaft cluster, a broker is any server which has the broker role enabled
in process.roles, and a controller is any server which has the controller
role enabled. Listener configuration depends on the role. The listener defined by
inter.broker.listener.name is used exclusively for requests between brokers.
Controllers, on the other hand, must use a separate listener, which is defined by the
controller.listener.names configuration. This cannot be set to the same
value as the inter-broker listener.
Controllers receive requests both from other controllers and from brokers. For
this reason, even if a server does not have the controller role enabled
(i.e. it is just a broker), it must still define the controller listener along with
any security properties that are needed to configure it. For example, we might
use the following configuration on a standalone broker:
The controller listener is still configured in this example to use the
SASL_SSL security protocol, but it is not included in listeners since the broker
does not expose the controller listener itself. The port that will be used in this case
comes from the controller.quorum.voters configuration, which defines
the complete list of controllers.
For KRaft servers which have both the broker and controller role enabled, the configuration
is similar. The only difference is that the controller listener must be included in
listeners:
It is a requirement for the port defined in controller.quorum.voters to
exactly match one of the exposed controller listeners. For example, here the
CONTROLLER listener is bound to port 9093. The connection string
defined by controller.quorum.voters must then also use port 9093,
as it does here.
The controller will accept requests on all listeners defined by controller.listener.names.
Typically there would be just one controller listener, but it is possible to have more.
For example, this provides a way to change the active listener from one port or security
protocol to another through a roll of the cluster (one roll to expose the new listener,
and one roll to remove the old listener). When multiple controller listeners are defined,
the first one in the list will be used for outbound requests.
It is conventional in Kafka to use a separate listener for clients. This allows the
inter-cluster listeners to be isolated at the network level. In the case of the controller
listener in KRaft, the listener should be isolated since clients do not work with it
anyway. Clients are expected to connect to any other listener configured on a broker.
Any requests that are bound for the controller will be forwarded as described below.
In the following section, this document covers how to enable SSL
on a listener for encryption as well as authentication. The subsequent section will then
cover additional authentication mechanisms using SASL.
Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed.
The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates and configure Kafka to use these.
The first step of deploying one or more brokers with SSL support is to generate a public/private keypair for every server.
Since Kafka expects all keys and certificates to be stored in keystores, we will use Java's keytool command for this task.
The tool supports two different keystore formats: the Java-specific JKS format, which has by now been deprecated, and PKCS12.
PKCS12 is the default format as of Java version 9; to ensure this format is used regardless of the Java version in use, all following
commands explicitly specify the PKCS12 format.
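A sketch of the generation command; the bracketed values are the two placeholders explained below:
$ keytool -keystore {keystorefile} -alias localhost -validity {validity} -genkeypair -keyalg RSA -storetype pkcs12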
You need to specify two parameters in the above command:
keystorefile: the keystore file that stores the keys (and later the certificate) for this broker. The keystore file contains the private
and public keys of this broker, therefore it needs to be kept safe. Ideally this step is run on the Kafka broker that the key will be
used on, as this key should never be transmitted/leave the server that it is intended for.
validity: the validity period of the key, in days. Please note that this differs from the validity period for the certificate, which
will be determined in Signing the certificate. You can use the same key to request multiple
certificates: if your key has a validity of 10 years, but your CA will only sign certificates that are valid for one year, you
can use the same key with 10 certificates over time.
To obtain a certificate that can be used with the private key that was just created a certificate signing request needs to be created. This
signing request, when signed by a trusted CA results in the actual certificate which can then be installed in the keystore and used for
authentication purposes.
To generate certificate signing requests run the following command for all server keystores created so far.
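A sketch of this command; the keystore and output file names, FQDN, and IP address are placeholders:
$ keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file -ext SAN=DNS:{FQDN},IP:{IPADDRESS}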
This command assumes that you want to add hostname information to the certificate; if this is not the case, you can omit the SAN extension parameter (-ext SAN=...). Please see below for more information on this.
Host name verification, when enabled, is the process of checking attributes from the certificate that is presented by the server you are
connecting to against the actual hostname or ip address of that server to ensure that you are indeed connecting to the correct server.
The main reason for this check is to prevent man-in-the-middle attacks.
For Kafka, this check has been disabled by default for a long time, but as of Kafka 2.0.0 host name verification of servers is enabled by default
for client connections as well as inter-broker connections.
Server host name verification may be disabled by setting ssl.endpoint.identification.algorithm to an empty string.
For dynamically configured broker listeners, hostname verification may be disabled using kafka-configs.sh:
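A sketch of such a command; the broker ID and listener name are placeholders:
$ bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="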
Normally there is no good reason to disable hostname verification apart from being the quickest way to "just get it to work" followed
by the promise to "fix it later when there is more time"!
Getting hostname verification right is not that hard when done at the right time, but gets much harder once the cluster is up and
running - do yourself a favor and do it now!
If host name verification is enabled, clients will verify the server's fully qualified domain name (FQDN) or ip address against one of the following two fields:
While Kafka checks both fields, usage of the common name field for hostname verification has been
deprecated since 2000 and should be avoided if possible. In addition the
SAN field is much more flexible, allowing for multiple DNS and IP entries to be declared in a certificate.
Another advantage is that if the SAN field is used for hostname verification the common name can be set to a more meaningful value for
authorization purposes. Since we need the SAN field to be contained in the signed certificate, it will be specified when generating the
signing request. It can also be specified when generating the keypair, but this will not automatically be copied into the signing request.
To add a SAN field append the following argument to the keytool command:
-ext SAN=DNS:{FQDN},IP:{IPADDRESS}
After this step each machine in the cluster has a public/private key pair which can already be used to encrypt traffic and a certificate
signing request, which is the basis for creating a certificate. To add authentication capabilities this signing request needs to be signed
by a trusted authority, which will be created in this step.
A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government
stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is
authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to
forge. Thus, as long as the CA is a genuine and trusted authority, the clients have a strong assurance that they are connecting to the authentic
machines.
For this guide we will act as our own Certificate Authority. When setting up a production cluster in a corporate environment these certificates would
usually be signed by a corporate CA that is trusted throughout the company. Please see Common Pitfalls in
Production for some things to consider for this case.
Due to a bug in OpenSSL, the x509 module will not copy requested
extension fields from CSRs into the final certificate. Since we want the SAN extension to be present in our certificate to enable hostname
verification, we'll use the ca module instead. This requires some additional configuration to be in place before we generate our
CA keypair.
Save the following listing into a file called openssl-ca.cnf and adjust the values for validity and common attributes as necessary.
HOME = .
RANDFILE = $ENV::HOME/.rnd
####################################################################
[ ca ]
default_ca = CA_default # The default ca section
[ CA_default ]
base_dir = .
certificate = $base_dir/cacert.pem # The CA certificate
private_key = $base_dir/cakey.pem # The CA private key
new_certs_dir = $base_dir # Location for new certs after signing
database = $base_dir/index.txt # Database index file
serial = $base_dir/serial.txt # The current serial number
default_days = 1000 # How long to certify for
default_crl_days = 30 # How long before next CRL
default_md = sha256 # Use public key default MD
preserve = no # Keep passed DN ordering
x509_extensions = ca_extensions # The extensions to add to the cert
email_in_dn = no # Don't concat the email in the DN
copy_extensions = copy # Required to copy SANs from CSR to cert
####################################################################
[ req ]
default_bits = 4096
default_keyfile = cakey.pem
distinguished_name = ca_distinguished_name
x509_extensions = ca_extensions
string_mask = utf8only
####################################################################
[ ca_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = DE
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Test Province
localityName = Locality Name (eg, city)
localityName_default = Test Town
organizationName = Organization Name (eg, company)
organizationName_default = Test Company
organizationalUnitName = Organizational Unit (eg, division)
organizationalUnitName_default = Test Unit
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = Test Name
emailAddress = Email Address
emailAddress_default = test@test.com
####################################################################
[ ca_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical, CA:true
keyUsage = keyCertSign, cRLSign
####################################################################
[ signing_policy ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
####################################################################
[ signing_req ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
Then create a database and serial number file; these will be used to keep track of which certificates were signed with this CA. Both of
these are simply text files that reside in the same directory as your CA keys.
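For example, matching the file names used in openssl-ca.cnf above:
$ echo 01 > serial.txt
$ touch index.txt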
With these steps done you are now ready to generate your CA that will be used to sign certificates later.
The CA is simply a public/private key pair and certificate that is signed by itself, and is only intended to sign other certificates.
This keypair should be kept very safe; if someone gains access to it, they can create and sign certificates that will be trusted by your
infrastructure, which means they will be able to impersonate anybody when connecting to any service that trusts this CA.
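A sketch of the CA generation command, using the openssl-ca.cnf from above:
$ openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM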
The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:
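For example (the truststore file name is a placeholder):
$ keytool -keystore client.truststore.jks -alias CARoot -import -file cacert.pem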
Note:
If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the
Kafka brokers config then you must provide a truststore for the Kafka brokers as well and it should have
all the CA certificates that clients' keys were signed by.
In contrast to the keystore in step 1 that stores each machine's own identity, the truststore of a client stores all the certificates
that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that
certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This
attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates
in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all
other machines.
Then sign it with the CA:
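A sketch of the signing command; the bracketed names are placeholders defined below:
$ openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out {server certificate} -infiles {certificate signing request}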
Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:
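For example (keystore and file names are placeholders):
$ keytool -keystore {keystore} -alias CARoot -import -file {CA certificate}
$ keytool -keystore {keystore} -alias localhost -import -file {server certificate}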
The definitions of the parameters are the following:
certificate signing request: the csr created with the server key
server certificate: the file to write the signed certificate of the server to
This will leave you with one truststore called truststore.jks - this can be the same for all clients and brokers and does not
contain any sensitive information, so there is no need to secure this.
Additionally you will have one server.keystore.jks file per node which contains that node's keys, certificate and your CA's certificate,
please refer to Configuring Kafka Brokers and Configuring Kafka Clients
for information on how to use these files.
For some tooling assistance on this topic, please check out the easyRSA project which has
extensive scripting in place to help with these steps.
SSL key and certificates in PEM format
From 2.7.0 onwards, SSL key and trust stores can be configured for Kafka brokers and clients directly in the configuration in PEM format.
This avoids the need to store separate files on the file system and benefits from password protection features of Kafka configuration.
PEM may also be used as the store type for file-based key and trust stores, in addition to JKS and PKCS12. To configure a PEM key store directly in the
broker or client configuration, the private key in PEM format should be provided in ssl.keystore.key and the certificate chain in PEM format
should be provided in ssl.keystore.certificate.chain. To configure a trust store, trust certificates, e.g. the public certificate of the CA,
should be provided in ssl.truststore.certificates. Since PEM is typically stored as multi-line base-64 strings, the configuration value
can be included in Kafka configuration as multi-line strings with lines terminating in backslash ('\') for line continuation.
Store password configs ssl.keystore.password and ssl.truststore.password are not used for PEM.
If the private key is encrypted using a password, the key password must be provided in ssl.key.password. Private keys may be provided
in unencrypted form without a password. In production deployments, configs should be encrypted or
externalized using the password protection feature in Kafka in this case. Note that the default SSL engine factory has limited capabilities for decryption
of encrypted private keys when external tools like OpenSSL are used for encryption. Third party libraries like BouncyCastle may be integrated with a
custom SslEngineFactory to support a wider range of encrypted private keys.
The above paragraphs show the process to create your own CA and use it to sign certificates for your cluster.
While very useful for sandbox, dev, test, and similar systems, this is usually not the correct process to create certificates for a production
cluster in a corporate environment.
Enterprises will normally operate their own CA, and users can send in CSRs to be signed with this CA. This has the benefit that users are not
responsible for keeping the CA secure, and that there is a central authority that everybody can trust.
However, it also takes away a lot of control over the process of signing certificates from the user. Quite often the people operating corporate
CAs will apply tight restrictions on certificates that can cause issues when trying to use these certificates with Kafka.
Extended Key Usage
Certificates may contain an extension
field that controls the purpose for which the certificate can be used. If this field is empty, there are no restrictions on the usage,
but if any usage is specified in here, valid SSL implementations have to enforce these usages.
Relevant usages for Kafka are:
Client authentication
Server authentication
Kafka brokers need both these usages to be allowed, as for intra-cluster communication every broker will behave as both the client and
the server towards other brokers. It is not uncommon for corporate CAs to have a signing profile for webservers and use this for Kafka as
well, which will only contain the serverAuth usage value and cause the SSL handshake to fail.
Intermediate Certificates
Corporate Root CAs are often kept offline for security reasons. To enable day-to-day usage, so called intermediate CAs are created, which
are then used to sign the final certificates. When importing a certificate into the keystore that was signed by an intermediate CA it is
necessary to provide the entire chain of trust up to the root CA. This can be done by simply concatenating the certificate files into one
combined certificate file and then importing this with keytool, as shown below.
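For example (file names are placeholders; the server certificate comes first, followed by the intermediate and root CA certificates):
$ cat signed-cert.pem intermediate-ca.pem root-ca.pem > cert-chain.pem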
Failure to copy extension fields
CA operators are often hesitant to copy requested extension fields from CSRs and prefer to specify these themselves, as this makes it
harder for a malicious party to obtain certificates with potentially misleading or fraudulent values.
It is advisable to double-check signed certificates to see whether they contain all requested SAN fields, to enable proper hostname verification.
The following command can be used to print certificate details to the console, which should be compared with what was originally requested:
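A sketch of such a command (the file name is a placeholder):
$ openssl x509 -in signed-cert.pem -text -noout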
If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary.
The following SSL configs are needed on the broker side:
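A minimal sketch; host names, paths, and passwords are placeholders:
listeners=PLAINTEXT://host.name:9092,SSL://host.name:9093
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=keystore-password
ssl.key.password=key-password
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=truststore-password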
Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Optional settings that are worth considering:
ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested and client without certs can still connect. The usage of "requested" is discouraged as it provides a false sense of security and misconfigured clients will still connect successfully.)
ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. (Default is an empty list)
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG
If you want to enable SSL for inter-broker communication, add the following to the server.properties file (it defaults to PLAINTEXT)
security.inter.broker.protocol=SSL
Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the JCE Unlimited Strength Jurisdiction Policy Files must be obtained and installed in the JDK/JRE. See the
JCA Providers Documentation for more information.
The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the
implementation used with ssl.secure.random.implementation. However, there are performance issues with some implementations (notably, the
default chosen on Linux systems, NativePRNG, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
consider explicitly setting the implementation to be used. The SHA1PRNG implementation is non-blocking, and has shown very good performance
characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per-broker).
Once you start the broker you should be able to see in the server.log:
with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)
To check quickly if the server keystore and truststore are set up properly you can run the following command (Note: TLSv1 should be listed under ssl.enabled.protocols):
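# adjust host and port to match your SSL listener
> openssl s_client -debug -connect localhost:9093 -tls1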
In the output of this command you should see the server's certificate. If the certificate does not show up, or if there are any other error messages, then your keystore is not set up properly.
SSL is supported only for the new Kafka producer and consumer; the older APIs are not supported. The configs for SSL will be the same for both producer and consumer.
If client authentication is not required in the broker, then the following is a minimal configuration example:
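# a minimal sketch; paths and passwords are illustrative
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234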
Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set access to the truststore is still available, but integrity checking is disabled.
If client authentication is required, then a keystore must be created like in step 1 and the following must also be configured:
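# illustrative paths and passwords
ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234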
Other configuration settings that may also be needed depending on your requirements and the broker configuration:
ssl.provider (Optional). The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.
ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side
ssl.truststore.type=JKS
ssl.keystore.type=JKS
Examples using console-producer and console-consumer:
> kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
> kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties
KafkaServer is the section name in the JAAS file used by each
KafkaServer/Broker. This section provides SASL configuration options
for the broker including any SASL client connections made by the broker
for inter-broker communication. If multiple listeners are configured to use
SASL, the section name may be prefixed with the listener name in lower-case
followed by a period, e.g. sasl_ssl.KafkaServer.
Client section is used to authenticate a SASL connection with
zookeeper. It also allows the brokers to set SASL ACLs on zookeeper
nodes, which locks these nodes down so that only the brokers can
modify them. It is necessary to have the same principal name across all
brokers. If you want to use a section name other than Client, set the
system property zookeeper.sasl.clientconfig to the appropriate
name (e.g., -Dzookeeper.sasl.clientconfig=ZkClient).
ZooKeeper uses "zookeeper" as the service name by default. If you
want to change this, set the system property
zookeeper.sasl.client.username to the appropriate name
(e.g., -Dzookeeper.sasl.client.username=zk).
Brokers may also configure JAAS using the broker configuration property sasl.jaas.config.
The property name must be prefixed with the listener prefix including the SASL mechanism,
i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one
login module may be specified in the config value. If multiple mechanisms are configured on a
listener, configs must be provided for each mechanism using the listener and mechanism prefix.
For example:
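# a sketch for a SCRAM-SHA-256 mechanism on a SASL_SSL listener; credentials are placeholders
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";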
Clients may specify JAAS configuration as a producer or consumer property sasl.jaas.config without
creating a physical configuration file. This mode also enables different producers
and consumers within the same JVM to use different credentials by specifying
different properties for each client. If both the static JAAS configuration system property
java.security.auth.login.config and the client property sasl.jaas.config
are specified, the client property will be used.
To configure SASL authentication on the clients using static JAAS config file:
Add a JAAS config file with a client login section named KafkaClient. Configure
a login module in KafkaClient for the selected mechanism as described in the examples
for setting up GSSAPI (Kerberos),
PLAIN,
SCRAM or
OAUTHBEARER.
For example, GSSAPI
credentials may be configured as:
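// a sketch; the keytab path and principal are placeholders
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client-1@EXAMPLE.COM";
};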
SASL may be used with PLAINTEXT or SSL as the transport layer using the
security protocol SASL_PLAINTEXT or SASL_SSL respectively. If SASL_SSL is
used, then SSL must also be configured.
Configure a SASL port in server.properties, by adding at least one of
SASL_PLAINTEXT or SASL_SSL to the listeners parameter, which
contains one or more comma-separated values:
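listeners=SASL_PLAINTEXT://host.name:port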
If you are only configuring a SASL port (or if you want
the Kafka brokers to authenticate each other using SASL) then make sure
you set the same SASL protocol for inter-broker communication:
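security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)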
Select one or more supported mechanisms
to enable in the broker and follow the steps to configure SASL for the mechanism.
To enable multiple mechanisms in the broker, follow the steps
here.
SASL authentication is only supported for the new Java Kafka producer and
consumer; the older APIs are not supported.
To configure SASL authentication on the clients, select a SASL
mechanism that is enabled in
the broker for client authentication and follow the steps to configure SASL
for the selected mechanism.
Note: When establishing connections to brokers via SASL, clients may perform a reverse
DNS lookup of the broker address. Due to how the JRE implements reverse
DNS lookups, clients may observe slow SASL handshakes if fully qualified domain
names are not used, for both the client's bootstrap.servers and a broker's
advertised.listeners.
Kerberos
If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one, your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Redhat). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.
Create Kerberos Principals
If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:
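A sketch ({hostname}, {REALM} and {keytabname} are placeholders):
> sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
> sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"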
Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):
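// a sketch; keytab paths and principals are placeholders
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};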
KafkaServer section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
allows the broker to login using the keytab specified in this section. See notes for more details on Zookeeper SASL configuration.
Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.
Configure SASL port and SASL mechanisms in server.properties as described here. For example:
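# host/port are placeholders
listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI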
We must also configure the service name in server.properties, which should match the principal name of the kafka brokers. In the above example, principal is "kafka/kafka1.hostname.com@EXAMPLE.com", so:
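sasl.kerberos.service.name=kafka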
Clients (producers, consumers, connect workers, etc) will authenticate to the cluster with their
own principal (usually with the same name as the user running the client), so obtain or create
these principals as needed. Then configure the JAAS configuration property sasl.jaas.config for each client.
Different clients within a JVM may run as different users by specifying different principals.
The sasl.jaas.config property in producer.properties or consumer.properties describes
how clients like producer and consumer can connect to the Kafka Broker. The following is an example
configuration for a client using a keytab (recommended for long-running processes):
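# keytab path and principal are placeholders
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/etc/security/keytabs/kafka_client.keytab" \
    principal="kafka-client-1@EXAMPLE.COM";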
For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used
along with "useTicketCache=true" as in:
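sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true;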
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described here. Clients use the login section named
KafkaClient. This option allows only one user for all client connections from a JVM.
SASL/PLAIN is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication.
Kafka supports a default implementation for SASL/PLAIN which can be extended for production use as described here.
Under the default implementation of principal.builder.class, the username is used as the authenticated Principal for configuration of ACLs etc.
Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
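// a sketch; passwords are placeholders
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};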
This configuration defines two users (admin and alice). The properties username and password
in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example,
admin is the user for inter-broker communication. The set of properties user_userName defines
the passwords for all users that connect to the broker and the broker validates all client connections including
those from other brokers using these properties.
Configure the JAAS configuration property sasl.jaas.config for each client in producer.properties or consumer.properties.
The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
The following is an example configuration for a client for the PLAIN mechanism:
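sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";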
The options username and password are used by clients to configure
the user for client connections. In this example, clients connect to the broker as user alice.
Different clients within a JVM may connect as different users by specifying different user names
and passwords in sasl.jaas.config.
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described here. Clients use the login section named
KafkaClient. This option allows only one user for all client connections from a JVM.
Configure the following properties in producer.properties or consumer.properties:
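security.protocol=SASL_SSL
sasl.mechanism=PLAIN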
SASL/PLAIN should be used only with SSL as transport layer to ensure that clear passwords are not transmitted on the wire without encryption.
The default implementation of SASL/PLAIN in Kafka specifies usernames and passwords in the JAAS configuration file as shown
here. From Kafka version 2.0 onwards, you can avoid storing clear passwords on disk
by configuring your own callback handlers that obtain username and password from an external source using the configuration options
sasl.server.callback.handler.class and sasl.client.callback.handler.class.
In production systems, external authentication servers may implement password authentication. From Kafka version 2.0 onwards,
you can plug in your own callback handlers that use external authentication servers for password verification by configuring
sasl.server.callback.handler.class.
Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL mechanisms that
addresses the security concerns with traditional mechanisms that perform username/password authentication
like PLAIN and DIGEST-MD5. The mechanism is defined in RFC 5802.
Kafka supports SCRAM-SHA-256 and SCRAM-SHA-512 which
can be used with TLS to perform secure authentication. Under the default implementation of principal.builder.class, the username is used as the authenticated
Principal for configuration of ACLs etc. The default SCRAM implementation in Kafka
stores SCRAM credentials in Zookeeper and is suitable for use in Kafka installations where Zookeeper
is on a private network. Refer to Security Considerations
for more details.
The SCRAM implementation in Kafka uses Zookeeper as credential store. Credentials can be created in
Zookeeper using kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created
by adding a config with the mechanism name. Credentials for inter-broker communication must be created
before Kafka brokers are started. Client credentials may be created and updated dynamically and updated
credentials will be used to authenticate new connections.
Create SCRAM credentials for user alice with password alice-secret:
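A sketch (the ZooKeeper address and passwords are placeholders):
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice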
The default iteration count of 4096 is used if iterations are not specified. A random salt is created,
and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in Zookeeper.
See RFC 5802 for details on SCRAM identity and the individual fields.
The following examples also require a user admin for inter-broker communication which can be created using:
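> bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin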
Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
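// a sketch; the password is a placeholder
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};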
The properties username and password in the KafkaServer section are used by
the broker to initiate connections to other brokers. In this example, admin is the user for
inter-broker communication.
Configure the JAAS configuration property sasl.jaas.config for each client in producer.properties or consumer.properties.
The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
The following is an example configuration for a client for the SCRAM mechanisms:
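sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="alice" \
    password="alice-secret";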
The options username and password are used by clients to configure
the user for client connections. In this example, clients connect to the broker as user alice.
Different clients within a JVM may connect as different users by specifying different user names
and passwords in sasl.jaas.config.
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described here. Clients use the login section named
KafkaClient. This option allows only one user for all client connections from a JVM.
Configure the following properties in producer.properties or consumer.properties:
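security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)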
The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This
is suitable for production use in installations where Zookeeper is secure and on a private network.
Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count
of 4096. Strong hash functions combined with strong passwords and high iteration counts protect
against brute force attacks if Zookeeper security is compromised.
SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This
protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.
From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers
by configuring sasl.server.callback.handler.class in installations where Zookeeper is not secure.
For more details on security considerations, refer to
RFC 5802.
The OAuth 2 Authorization Framework "enables a third-party application to obtain limited access to an HTTP service,
either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP
service, or by allowing the third-party application to obtain access on its own behalf." The SASL OAUTHBEARER mechanism
enables the use of the framework in a SASL (i.e. a non-HTTP) context; it is defined in RFC 7628.
The default OAUTHBEARER implementation in Kafka creates and validates Unsecured JSON Web Tokens
and is only suitable for use in non-production Kafka installations. Refer to Security Considerations
for more details.
Under the default implementation of principal.builder.class, the principalName of OAuthBearerToken is used as the authenticated Principal for configuration of ACLs etc.
Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
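KafkaServer {
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
    unsecuredLoginStringClaim_sub="admin";
};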
The property unsecuredLoginStringClaim_sub in the KafkaServer section is used by
the broker when it initiates connections to other brokers. In this example, admin will appear in the
subject (sub) claim and will be the user for inter-broker communication.
Configure the JAAS configuration property sasl.jaas.config for each client in producer.properties or consumer.properties.
The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
The following is an example configuration for a client for the OAUTHBEARER mechanisms:
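sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
    unsecuredLoginStringClaim_sub="alice";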
The option unsecuredLoginStringClaim_sub is used by clients to configure
the subject (sub) claim, which determines the user for client connections.
In this example, clients connect to the broker as user alice.
Different clients within a JVM may connect as different users by specifying different subject (sub)
claims in sasl.jaas.config.
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described here. Clients use the login section named
KafkaClient. This option allows only one user for all client connections from a JVM.
Configure the following properties in producer.properties or consumer.properties:
security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
sasl.mechanism=OAUTHBEARER
The default implementation of SASL/OAUTHBEARER depends on the jackson-databind library.
Since it's an optional dependency, users have to configure it as a dependency via their build tool.
The default implementation of SASL/OAUTHBEARER in Kafka creates and validates Unsecured JSON Web Tokens.
While suitable only for non-production use, it does provide the flexibility to create arbitrary tokens in a DEV or TEST environment.
Here are the various supported JAAS module options on the client side (and on the broker side if OAUTHBEARER is the inter-broker protocol):
JAAS Module Option for Unsecured Token Creation
Documentation
unsecuredLoginStringClaim_<claimname>="value"
Creates a String claim with the given name and value. Any valid
claim name can be specified except 'iat' and 'exp' (these are
automatically generated).
unsecuredLoginNumberClaim_<claimname>="value"
Creates a Number claim with the given name and value. Any valid
claim name can be specified except 'iat' and 'exp' (these are
automatically generated).
unsecuredLoginListClaim_<claimname>="value"
Creates a String List claim with the given name and values parsed
from the given value where the first character is taken as the delimiter. For
example: unsecuredLoginListClaim_fubar="|value1|value2". Any valid
claim name can be specified except 'iat' and 'exp' (these are
automatically generated).
unsecuredLoginExtension_<extensionname>="value"
Creates a String extension with the given name and value.
For example: unsecuredLoginExtension_traceId="123". A valid extension name
is any sequence of lowercase or uppercase alphabet characters. In addition, the "auth" extension name is reserved.
A valid extension value is any combination of characters with ASCII codes 1-127.
unsecuredLoginPrincipalClaimName
Set to a custom claim name if you wish the name of the String
claim holding the principal name to be something other than 'sub'.
unsecuredLoginLifetimeSeconds
Set to an integer value if the token expiration is to be set to something
other than the default value of 3600 seconds (which is 1 hour). The
'exp' claim will be set to reflect the expiration time.
unsecuredLoginScopeClaimName
Set to a custom claim name if you wish the name of the String or
String List claim holding any token scope to be something other than
'scope'.
Here are the various supported JAAS module options on the broker side for Unsecured JSON Web Token validation:
JAAS Module Option for Unsecured Token Validation
Documentation
unsecuredValidatorPrincipalClaimName="value"
Set to a non-empty value if you wish a particular String claim
holding a principal name to be checked for existence; the default is to check
for the existence of the 'sub' claim.
unsecuredValidatorScopeClaimName="value"
Set to a custom claim name if you wish the name of the String or
String List claim holding any token scope to be something other than
'scope'.
unsecuredValidatorRequiredScope="value"
Set to a space-delimited list of scope values if you wish the
String/String List claim holding the token scope to be checked to
make sure it contains certain values.
unsecuredValidatorAllowableClockSkewMs="value"
Set to a positive integer value if you wish to allow up to some number of
positive milliseconds of clock skew (the default is 0).
The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments)
using custom login and SASL Server callback handlers.
Kafka periodically refreshes any token before it expires so that the client can continue to make
connections to brokers. The parameters that impact how the refresh algorithm
operates are specified as part of the producer/consumer/broker configuration
and are as follows (see the documentation for these properties elsewhere for
details). The default values are usually reasonable, in which case these
configuration parameters do not need to be explicitly set:
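sasl.login.refresh.window.factor
sasl.login.refresh.window.jitter
sasl.login.refresh.min.period.seconds
sasl.login.refresh.buffer.seconds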
Production use cases will require writing an implementation of
org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of
org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback and declaring it via either the
sasl.login.callback.handler.class configuration option for a
non-broker client or via the
listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class
configuration option for brokers (when SASL/OAUTHBEARER is the inter-broker
protocol).
Production use cases will also require writing an implementation of
org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of
org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback and declaring it via the
listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class
broker configuration option.
The default implementation of SASL/OAUTHBEARER in Kafka creates and validates Unsecured JSON Web Tokens.
This is suitable only for non-production use.
OAUTHBEARER should be used in production environments only with TLS-encryption to prevent interception of tokens.
The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments)
using custom login and SASL Server callback handlers as described above.
For more details on OAuth 2 security considerations in general, refer to RFC 6749, Section 10.
SASL mechanism can be modified in a running cluster using the following sequence:
Enable new SASL mechanism by adding the mechanism to sasl.enabled.mechanisms in server.properties for each broker. Update JAAS config file to include both
mechanisms as described here. Incrementally bounce the cluster nodes.
Restart clients using the new mechanism.
To change the mechanism of inter-broker communication (if this is required), set sasl.mechanism.inter.broker.protocol in server.properties to the new mechanism and
incrementally bounce the cluster again.
To remove old mechanism (if this is required), remove the old mechanism from sasl.enabled.mechanisms in server.properties and remove the entries for the
old mechanism from JAAS config file. Incrementally bounce the cluster again.
Delegation token based authentication is a lightweight authentication mechanism to complement existing SASL/SSL
methods. Delegation tokens are shared secrets between kafka brokers and clients. Delegation tokens will help processing
frameworks to distribute the workload to available workers in a secure environment without the added cost of distributing
Kerberos TGT/keytabs or keystores when 2-way SSL is used. See KIP-48
for more details.
Under the default implementation of principal.builder.class, the owner of the delegation token is used as the authenticated Principal for configuration of ACLs etc.
Typical steps for delegation token usage are:
User authenticates with the Kafka cluster via SASL or SSL, and obtains a delegation token. This can be done
using Admin APIs or using kafka-delegation-tokens.sh script.
User securely passes the delegation token to Kafka clients for authenticating with the Kafka cluster.
Token owner/renewer can renew/expire the delegation tokens.
A secret is used to generate and verify delegation tokens. This is supplied using config
option delegation.token.secret.key. The same secret key must be configured across all the brokers.
If the secret is not set or set to empty string, brokers will disable the delegation token authentication.
In the current implementation, token details are stored in Zookeeper, which is suitable for use in Kafka installations where
Zookeeper is on a private network. Also, currently this secret is stored as plain text in the server.properties
config file. We intend to make these configurable in a future Kafka release.
A token has a current life, and a maximum renewable life. By default, tokens must be renewed once every 24 hours
for up to 7 days. These can be configured using delegation.token.expiry.time.ms
and delegation.token.max.lifetime.ms config options.
Tokens can also be cancelled explicitly. If a token is not renewed by the token’s expiration time or if token
is beyond the max life time, it will be deleted from all broker caches as well as from zookeeper.
Tokens can be created by using Admin APIs or using kafka-delegation-tokens.sh script.
Delegation token requests (create/renew/expire/describe) should be issued only on SASL or SSL authenticated channels.
Tokens cannot be requested if the initial authentication is done through delegation token.
A token can be created by the user for that user or others as well by specifying the --owner-principal parameter.
Owner/Renewers can renew or expire tokens. Owner/renewers can always describe their own tokens.
To describe other tokens, a DESCRIBE_TOKEN permission needs to be added on the User resource representing the owner of the token.
kafka-delegation-tokens.sh script examples are given below.
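A sketch of typical invocations (the HMAC value, principals and client config file are placeholders):
# create a token for the current user
> bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1
# renew a token
> bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --renew --renew-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK
# expire a token
> bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --expire --expiry-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK
# describe tokens for a given user
> bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --describe --command-config client.properties --owner-principal User:user1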
Delegation token authentication piggybacks on the current SASL/SCRAM authentication mechanism. We must enable
the SASL/SCRAM mechanism on the Kafka cluster as described here.
Configuring Kafka Clients:
Configure the JAAS configuration property sasl.jaas.config for each client in producer.properties or consumer.properties.
The login module describes how the clients like producer and consumer can connect to the Kafka Broker.
The following is an example configuration for a client for the token authentication:
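# token id and HMAC are placeholders obtained when the token was created
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="tokenID123" \
    password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA==" \
    tokenauth="true";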
The options username and password are used by clients to configure the token id and
token HMAC. The option tokenauth is used to indicate token authentication to the server.
In this example, clients connect to the broker using token id tokenID123. Different clients within a
JVM may connect using different tokens by specifying different token details in sasl.jaas.config.
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described here. Clients use the login section named
KafkaClient. This option allows only one user for all client connections from a JVM.
We require a re-deployment when the secret needs to be rotated. During this process, already connected clients
will continue to work. But any new connection requests and renew/expire requests with old tokens can fail. Steps are given below.
Expire all existing tokens.
Rotate the secret by a rolling upgrade, and
Generate new tokens.
We intend to automate this in a future Kafka release.
Kafka ships with a pluggable authorization framework, which is configured with the authorizer.class.name property in the server configuration.
Configured implementations must extend org.apache.kafka.server.authorizer.Authorizer.
Kafka provides default implementations which store ACLs in the cluster metadata (either Zookeeper or the KRaft metadata log).
For Zookeeper-based clusters, the provided implementation is configured as follows:
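authorizer.class.name=kafka.security.authorizer.AclAuthorizer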
For KRaft clusters, use the following configuration on all nodes (brokers, controllers, or combined broker/controller nodes):
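authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer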
Kafka ACLs are defined in the general format of "Principal {P} is [Allowed|Denied] Operation {O} From Host {H} on any Resource {R} matching ResourcePattern {RP}".
You can read more about the ACL structure in KIP-11 and
resource patterns in KIP-290.
In order to add, remove, or list ACLs, you can use the Kafka ACL CLI, kafka-acls.sh. By default, if no ResourcePatterns match a specific Resource R,
then R has no associated ACLs, and therefore no one other than super users is allowed to access R.
If you want to change that behavior, you can include the following in server.properties.
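allow.everyone.if.no.acl.found=true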
One can also add super users in server.properties like the following (note that the delimiter is semicolon since SSL user names may contain comma). Default PrincipalType string "User" is case sensitive.
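super.users=User:Bob;User:Alice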
In KRaft clusters, admin requests such as CreateTopics and DeleteTopics are sent to the broker listeners by the client. The broker then forwards the request to the active controller through the first listener configured in controller.listener.names.
Authorization of these requests is done on the controller node. This is achieved by way of an Envelope request which packages both the underlying request from the client as well as the client principal.
When the controller receives the forwarded Envelope request from the broker, it first authorizes the Envelope request using the authenticated broker principal.
Then it authorizes the underlying request using the forwarded principal.
All of this implies that Kafka must understand how to serialize and deserialize the client principal. The authentication framework allows for customized principals by overriding the principal.builder.class configuration.
In order for customized principals to work with KRaft, the configured class must implement org.apache.kafka.common.security.auth.KafkaPrincipalSerde so that Kafka knows how to serialize and deserialize the principals.
The default implementation org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder uses the Kafka RPC format defined in the source code: clients/src/main/resources/common/message/DefaultPrincipalData.json.
For more detail about request forwarding in KRaft, see KIP-590.
By default, the SSL user name will be of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". One can change that by setting ssl.principal.mapping.rules to a customized rule in server.properties.
This config allows a list of rules for mapping an X.500 distinguished name to a short name. The rules are evaluated in order, and the first rule that matches a distinguished name is used to map it to a short name. Any later rules in the list are ignored.
The format of ssl.principal.mapping.rules is a list where each rule starts with "RULE:" and contains an expression in the formats below. The default rule will return the
string representation of the X.500 certificate distinguished name. If the distinguished name matches the pattern, then the replacement command will be run over the name.
This also supports lowercase/uppercase options, to force the translated result to be all lowercase/uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
Example values are:
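# illustrative patterns matching the translations described below
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,DEFAULT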
Above rules translate distinguished name "CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "serviceuser"
and "CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "adminuser@admin".
For advanced use cases, one can customize the name by setting a customized PrincipalBuilder in server.properties like the following.
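# the class name is a placeholder for your own implementation
principal.builder.class=CustomizedPrincipalBuilderClass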
By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting sasl.kerberos.principal.to.local.rules to a customized rule in server.properties.
The format of sasl.kerberos.principal.to.local.rules is a list where each rule works in the same way as the auth_to_local in the Kerberos configuration file (krb5.conf). This also supports an additional lowercase/uppercase rule, to force the translated result to be all lowercase/uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
Each rule starts with RULE: and contains an expression in the following formats. See the kerberos documentation for more details.
An example of adding a rule to properly translate user@MYDOMAIN.COM to user while also keeping the default rule in place is:
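sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT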
Kafka Authorization management CLI can be found under bin directory with all the other CLIs. The CLI script is called kafka-acls.sh. Following lists all the options that the script supports:
Option
Description
Default
Option type
--add
Indicates to the script that user is trying to add an acl.
Action
--remove
Indicates to the script that user is trying to remove an acl.
Action
--list
Indicates to the script that user is trying to list acls.
Action
--bootstrap-server
A list of host/port pairs to use for establishing the connection to the Kafka cluster. Only one of --bootstrap-server or --authorizer option must be specified.
Configuration
--command-config
A property file containing configs to be passed to Admin Client. This option can only be used with --bootstrap-server option.
Configuration
--cluster
Indicates to the script that the user is trying to interact with acls on the singular cluster resource.
ResourcePattern
--topic [topic-name]
Indicates to the script that the user is trying to interact with acls on topic resource pattern(s).
ResourcePattern
--group [group-name]
Indicates to the script that the user is trying to interact with acls on consumer-group resource pattern(s)
ResourcePattern
--transactional-id [transactional-id]
The transactionalId to which ACLs should be added or removed. A value of * indicates the ACLs should apply to all transactionalIds.
ResourcePattern
--delegation-token [delegation-token]
Delegation token to which ACLs should be added or removed. A value of * indicates ACL should apply to all tokens.
ResourcePattern
--user-principal [user-principal]
A user resource to which ACLs should be added or removed. This is currently supported in relation with delegation tokens.
A value of * indicates ACL should apply to all users.
ResourcePattern
--resource-pattern-type [pattern-type]
Indicates to the script the type of resource pattern, (for --add), or resource pattern filter, (for --list and --remove), the user wishes to use.
When adding acls, this should be a specific pattern type, e.g. 'literal' or 'prefixed'.
When listing or removing acls, a specific pattern type filter can be used to list or remove acls from a specific type of resource pattern,
or the filter values of 'any' or 'match' can be used, where 'any' will match any pattern type, but will match the resource name exactly,
and 'match' will perform pattern matching to list or remove all acls that affect the supplied resource(s).
WARNING: 'match', when used in combination with the '--remove' switch, should be used with care.
literal
Configuration
--allow-principal
Principal is in PrincipalType:name format that will be added to ACL with Allow permission. Default PrincipalType string "User" is case sensitive. You can specify multiple --allow-principal in a single command.
Principal
--deny-principal
Principal is in PrincipalType:name format that will be added to ACL with Deny permission. Default PrincipalType string "User" is case sensitive. You can specify multiple --deny-principal in a single command.
Principal
--principal
Principal is in PrincipalType:name format that will be used along with --list option. Default PrincipalType string "User" is case sensitive. This will list the ACLs for the specified principal. You can specify multiple --principal in a single command.
Principal
--allow-host
IP address from which principals listed in --allow-principal will have access.
if --allow-principal is specified defaults to * which translates to "all hosts"
Host
--deny-host
IP address from which principals listed in --deny-principal will be denied access.
if --deny-principal is specified defaults to * which translates to "all hosts"
Host
--operation
Operation that will be allowed or denied.
Valid values are:
Read
Write
Create
Delete
Alter
Describe
ClusterAction
DescribeConfigs
AlterConfigs
IdempotentWrite
CreateTokens
DescribeTokens
All
All
Operation
--producer
Convenience option to add/remove acls for producer role. This will generate acls that allows WRITE,
DESCRIBE and CREATE on topic.
Convenience
--consumer
Convenience option to add/remove acls for consumer role. This will generate acls that allows READ,
DESCRIBE on topic and READ on consumer-group.
Convenience
--idempotent
Enable idempotence for the producer. This should be used in combination with the --producer option.
Note that idempotence is enabled automatically if the producer is authorized to a particular transactional-id.
Convenience
--force
Convenience option to assume yes to all queries and do not prompt.
Convenience
--authorizer
(DEPRECATED: not supported in KRaft) Fully qualified class name of the authorizer.
kafka.security.authorizer.AclAuthorizer
Configuration
--authorizer-properties
(DEPRECATED: not supported in KRaft) key=val pairs that will be passed to authorizer for initialization. For the default authorizer in ZK clusters, the example values are: zookeeper.connect=localhost:2181
Configuration
--zk-tls-config-file
(DEPRECATED: not supported in KRaft) Identifies the file where ZooKeeper client TLS connectivity properties for the authorizer are defined.
Any properties other than the following (with or without an "authorizer." prefix) are ignored:
zookeeper.clientCnxnSocket, zookeeper.ssl.cipher.suites, zookeeper.ssl.client.enable,
zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm,
zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type,
zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location,
zookeeper.ssl.truststore.password, zookeeper.ssl.truststore.type
Adding Acls
Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with following options:
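> bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-Topic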
By default, all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host option. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3 we can do so using following commands:
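> bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:'*' --allow-host '*' --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic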
Note that --allow-host and --deny-host only support IP addresses (hostnames are not supported).
Above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly user can add acls to cluster by specifying --cluster and to a consumer group by specifying --group [group-name].
You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.0"
You can do that by using the wildcard resource '*', e.g. by executing the CLI with following options:
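> bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Peter --allow-host 198.51.200.0 --producer --topic '*'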
You can add acls on prefixed resource patterns, e.g. suppose you want to add an acl "Principal User:Jane is allowed to produce to any Topic whose name starts with 'Test-' from any host".
You can do that by executing the CLI with following options:
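> bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed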
Note, --resource-pattern-type defaults to 'literal', which only affects resources with the exact same name or, in the case of the wildcard resource name '*', a resource with any name.
Removing Acls
Removing acls is pretty much the same. The only difference is instead of --add option users will have to specify --remove option. To remove the acls added by the first example above we can execute the CLI with following options:
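> bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-Topic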
If you want to remove the acl added to the prefixed resource pattern above we can execute the CLI with following options:
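> bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed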
List Acls
We can list acls for any resource by specifying the --list option with the resource. To list all acls on the literal resource pattern Test-topic, we can execute the CLI with following options:
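> bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic Test-topic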
However, this will only return the acls that have been added to this exact resource pattern. Other acls can exist that affect access to the topic,
e.g. any acls on the topic wildcard '*', or any acls on prefixed resource patterns. Acls on the wildcard resource pattern can be queried explicitly:
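> bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic '*'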
However, it is not necessarily possible to explicitly query for acls on prefixed resource patterns that match Test-topic as the name of such patterns may not be known.
We can list all acls affecting Test-topic by using '--resource-pattern-type match', e.g.
> bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic Test-topic --resource-pattern-type match
This will list acls on all matching literal, wildcard and prefixed resource patterns.
Adding or removing a principal as producer or consumer
The most common use case for acl management are adding/removing a principal as producer or consumer so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic we can execute the following command:
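> bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --producer --topic Test-topic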
Similarly to add Alice as a consumer of Test-topic with consumer group Group-1 we just have to pass --consumer option:
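> bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Alice --consumer --topic Test-topic --group Group-1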
Note that for consumer option we must also specify the consumer group.
In order to remove a principal from producer or consumer role we just need to pass --remove option.
Admin API based acl management
Users having Alter permission on ClusterResource can use Admin API for ACL management. kafka-acls.sh script supports AdminClient API to manage ACLs without interacting with zookeeper/authorizer directly.
All the above examples can be executed by using --bootstrap-server option. For example:
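> bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --producer --topic Test-topic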
Protocol calls usually perform operations on certain resources in Kafka. It is necessary to know the
operations and resources to set up effective protection. In this section we'll list these operations and
resources, then list the combination of these with the protocols to see the valid scenarios.
There are a few operation primitives that can be used to build up privileges. These can be matched up with
certain resources to allow specific protocol calls for a given user. These are:
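These are the same values accepted by the CLI's --operation switch:
Read
Write
Create
Delete
Alter
Describe
ClusterAction
DescribeConfigs
AlterConfigs
IdempotentWrite
CreateTokens
DescribeTokens
All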
The operations above can be applied on certain resources which are described below.
Topic: this simply represents a Topic. All protocol calls that are acting on topics (such as reading,
writing them) require the corresponding privilege to be added. If there is an authorization error with a
topic resource, then a TOPIC_AUTHORIZATION_FAILED (error code: 29) will be returned.
Group: this represents the consumer groups in the brokers. All protocol calls that are working with
consumer groups, like joining a group must have privileges with the group in subject. If the privilege is not
given then a GROUP_AUTHORIZATION_FAILED (error code: 30) will be returned in the protocol response.
Cluster: this resource represents the cluster. Operations that are affecting the whole cluster, like
controlled shutdown are protected by privileges on the Cluster resource. If there is an authorization problem
on a cluster resource, then a CLUSTER_AUTHORIZATION_FAILED (error code: 31) will be returned.
TransactionalId: this resource represents actions related to transactions, such as committing.
If any error occurs, then a TRANSACTIONAL_ID_AUTHORIZATION_FAILED (error code: 53) will be returned by brokers.
DelegationToken: this represents the delegation tokens in the cluster. Actions, such as describing
delegation tokens could be protected by a privilege on the DelegationToken resource. Since these objects have
somewhat special behavior in Kafka, it is recommended to read
KIP-48
and the related upstream documentation at Authentication using Delegation Tokens.
User: CreateToken and DescribeToken operations can be granted to User resources to allow creating and describing
tokens for other users. More info can be found in KIP-373.
In the below table we'll list the valid operations on resources that are executed by the Kafka API protocols.
Protocol (API key)
Operation
Resource
Note
PRODUCE (0)
Write
TransactionalId
A transactional producer which has its transactional.id set requires this privilege.
PRODUCE (0)
IdempotentWrite
Cluster
An idempotent produce action requires this privilege.
PRODUCE (0)
Write
Topic
This applies to a normal produce action.
FETCH (1)
ClusterAction
Cluster
A follower must have ClusterAction on the Cluster resource in order to fetch partition data.
FETCH (1)
Read
Topic
Regular Kafka consumers need READ permission on each partition they are fetching.
LIST_OFFSETS (2)
Describe
Topic
METADATA (3)
Describe
Topic
METADATA (3)
Create
Cluster
If topic auto-creation is enabled, then the broker-side API will check for the existence of a Cluster
level privilege. If it's found then it'll allow creating the topic, otherwise it'll iterate through the
Topic level privileges (see the next one).
METADATA (3)
Create
Topic
This authorizes auto topic creation if enabled but the given user doesn't have a cluster level
permission (above).
LEADER_AND_ISR (4)
ClusterAction
Cluster
STOP_REPLICA (5)
ClusterAction
Cluster
UPDATE_METADATA (6)
ClusterAction
Cluster
CONTROLLED_SHUTDOWN (7)
ClusterAction
Cluster
OFFSET_COMMIT (8)
Read
Group
An offset can only be committed if it's authorized to the given group and the topic too (see below).
Group access is checked first, then Topic access.
OFFSET_COMMIT (8)
Read
Topic
Since offset commit is part of the consuming process, it needs privileges for the read action.
OFFSET_FETCH (9)
Describe
Group
Similarly to OFFSET_COMMIT, the application must have privileges on group and topic level too to be able
to fetch. However in this case it requires describe access instead of read. Group access is checked first,
then Topic access.
OFFSET_FETCH (9)
Describe
Topic
FIND_COORDINATOR (10)
Describe
Group
The FIND_COORDINATOR request can be of "Group" type, in which case it is looking for consumer group
coordinators. This privilege would represent the Group mode.
FIND_COORDINATOR (10)
Describe
TransactionalId
This applies only on transactional producers and checked when a producer tries to find the transaction
coordinator.
JOIN_GROUP (11)
Read
Group
HEARTBEAT (12)
Read
Group
LEAVE_GROUP (13)
Read
Group
SYNC_GROUP (14)
Read
Group
DESCRIBE_GROUPS (15)
Describe
Group
LIST_GROUPS (16)
Describe
Cluster
When the broker checks to authorize a list_groups request it first checks for this cluster
level authorization. If none found then it proceeds to check the groups individually. This operation
doesn't return CLUSTER_AUTHORIZATION_FAILED.
LIST_GROUPS (16)
Describe
Group
If none of the groups are authorized, then just an empty response will be sent back instead
of an error. This operation doesn't return CLUSTER_AUTHORIZATION_FAILED. This is applicable from the
2.1 release.
SASL_HANDSHAKE (17)
The SASL handshake is part of the authentication process and therefore it's not possible to
apply any kind of authorization here.
API_VERSIONS (18)
The API_VERSIONS request is part of the Kafka protocol handshake and happens on connection
and before any authentication. Therefore it's not possible to control this with authorization.
CREATE_TOPICS (19)
Create
Cluster
If there is no cluster level authorization then it won't return CLUSTER_AUTHORIZATION_FAILED but
fall back to use topic level, which is just below. That'll throw error if there is a problem.
CREATE_TOPICS (19)
Create
Topic
This is applicable from the 2.0 release.
DELETE_TOPICS (20)
Delete
Topic
DELETE_RECORDS (21)
Delete
Topic
INIT_PRODUCER_ID (22)
Write
TransactionalId
INIT_PRODUCER_ID (22)
IdempotentWrite
Cluster
OFFSET_FOR_LEADER_EPOCH (23)
ClusterAction
Cluster
If there is no cluster level privilege for this operation, then it'll check for topic level one.
OFFSET_FOR_LEADER_EPOCH (23)
Describe
Topic
This is applicable from the 2.1 release.
ADD_PARTITIONS_TO_TXN (24)
Write
TransactionalId
This API is only applicable to transactional requests. It first checks for the Write action on the
TransactionalId resource, then it checks the Topic in subject (below).
ADD_PARTITIONS_TO_TXN (24)
Write
Topic
ADD_OFFSETS_TO_TXN (25)
Write
TransactionalId
Similarly to ADD_PARTITIONS_TO_TXN, this is only applicable to transactional requests. It first checks
for Write action on the TransactionalId resource, then it checks whether it can Read on the given group
(below).
ADD_OFFSETS_TO_TXN (25)
Read
Group
END_TXN (26)
Write
TransactionalId
WRITE_TXN_MARKERS (27)
ClusterAction
Cluster
TXN_OFFSET_COMMIT (28)
Write
TransactionalId
TXN_OFFSET_COMMIT (28)
Read
Group
TXN_OFFSET_COMMIT (28)
Read
Topic
DESCRIBE_ACLS (29)
Describe
Cluster
CREATE_ACLS (30)
Alter
Cluster
DELETE_ACLS (31)
Alter
Cluster
DESCRIBE_CONFIGS (32)
DescribeConfigs
Cluster
If broker configs are requested, then the broker will check cluster level privileges.
DESCRIBE_CONFIGS (32)
DescribeConfigs
Topic
If topic configs are requested, then the broker will check topic level privileges.
ALTER_CONFIGS (33)
AlterConfigs
Cluster
If broker configs are altered, then the broker will check cluster level privileges.
ALTER_CONFIGS (33)
AlterConfigs
Topic
If topic configs are altered, then the broker will check topic level privileges.
ALTER_REPLICA_LOG_DIRS (34)
Alter
Cluster
DESCRIBE_LOG_DIRS (35)
Describe
Cluster
An empty response will be returned on authorization failure.
SASL_AUTHENTICATE (36)
SASL_AUTHENTICATE is part of the authentication process and therefore it's not possible to
apply any kind of authorization here.
You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:
Incrementally bounce the cluster nodes to open additional secured port(s).
Restart clients using the secured rather than PLAINTEXT port (assuming you are securing the client-broker connection).
Incrementally bounce the cluster again to enable broker-to-broker security (if this is required)
A final incremental bounce to close the PLAINTEXT port.
The specific steps for configuring SSL and SASL are described in sections 7.3 and 7.4.
Follow these steps to enable security for your desired protocol(s).
The security implementation lets you configure different protocols for both broker-client and broker-broker communication.
These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout so brokers and/or clients can continue to communicate.
When performing an incremental bounce, stop the brokers cleanly via a SIGTERM. It's also good practice to wait for restarted replicas to return to the ISR list before moving on to the next node.
As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:
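# illustrative host and ports
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092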
We then restart the clients, changing their config to point at the newly opened, secured port:
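bootstrap.servers=[broker1:9092,...]
security.protocol=SSL
...etc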
In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker protocol (which will use the same SSL port):
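listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
security.inter.broker.protocol=SSL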
In the final bounce we secure the cluster by closing the PLAINTEXT port:
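listeners=SSL://broker1:9092
security.inter.broker.protocol=SSL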
Alternatively we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption throughout (i.e. for broker-broker and broker-client communication) but we'd like to add SASL authentication to the broker-client connection also. We would achieve this by opening two additional ports during the first bounce:
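listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093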
We would then restart the clients, changing their config to point at the newly opened, SASL & SSL secured port:
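bootstrap.servers=[broker1:9093,...]
security.protocol=SASL_SSL
...etc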
The second server bounce would switch the cluster to use encrypted broker-broker communication via the SSL port we previously opened on port 9092:
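listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL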
The final bounce secures the cluster by closing the PLAINTEXT port.
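listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL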
ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this are covered in section 7.7.2.
ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 3.5.x versions.
Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either individually or both together --
beginning with version 2.5. See
KIP-515: Enable ZK client to use the new TLS supported authentication
for more details.
When using mTLS alone, every broker and any CLI tools (such as the ZooKeeper Security Migration Tool)
should identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed.
This can be changed as described below, but it involves writing and deploying a custom ZooKeeper authentication provider.
Generally each certificate should have the same DN but a different Subject Alternative Name (SAN)
so that hostname verification of the brokers and any CLI tools by ZooKeeper will succeed.
When using SASL authentication to ZooKeeper together with mTLS, both the SASL identity and
either the DN that created the znode (i.e. the creating broker's certificate)
or the DN of the Security Migration Tool (if migration was performed after the znode was created)
will be ACL'ed, and all brokers and CLI tools will be authorized even if they all use different DNs
because they will all use the same ACL'ed SASL identity.
It is only when using mTLS authentication alone that all the DNs must match (and SANs become critical --
again, in the absence of writing and deploying a custom ZooKeeper authentication provider as described below).
Use the broker properties file to set TLS configs for brokers as described below.
Use the --zk-tls-config-file <file> option to set TLS configs in the Zookeeper Security Migration Tool.
The kafka-acls.sh and kafka-configs.sh CLI tools also support the --zk-tls-config-file <file> option.
Use the -zk-tls-config-file <file> option (note the single-dash rather than double-dash)
to set TLS configs for the zookeeper-shell.sh CLI tool.
To enable ZooKeeper SASL authentication on brokers, there are two necessary steps:
Create a JAAS login file and set the appropriate system property to point to it as described above
Set the configuration property zookeeper.set.acl in each broker to true
The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
ZooKeeper mTLS authentication can be enabled with or without SASL authentication. As mentioned above,
when using mTLS alone, every broker and any CLI tools (such as the ZooKeeper Security Migration Tool)
must generally identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed, which means
each certificate should have an appropriate Subject Alternative Name (SAN) so that
hostname verification of the brokers and any CLI tool by ZooKeeper will succeed.
It is possible to use something other than the DN for the identity of mTLS clients by writing a class that
extends org.apache.zookeeper.server.auth.X509AuthenticationProvider and overrides the method
protected String getClientId(X509Certificate clientCert).
Choose a scheme name and set authProvider.[scheme] in ZooKeeper to be the fully-qualified class name
of the custom implementation; then set ssl.authProvider=[scheme] to use it.
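As an illustration, such a provider might look like the following minimal sketch (the class name, the CN-based identity, and the naive DN parsing are illustrative; the superclass constructor signature should be verified against your ZooKeeper version):
import java.security.cert.X509Certificate;

import org.apache.zookeeper.common.X509Exception;
import org.apache.zookeeper.server.auth.X509AuthenticationProvider;

// Illustrative custom provider that identifies mTLS clients by the certificate's
// Common Name rather than by the full Distinguished Name.
public class CnX509AuthenticationProvider extends X509AuthenticationProvider {

    // The no-arg superclass constructor may throw X509Exception.
    public CnX509AuthenticationProvider() throws X509Exception {
        super();
    }

    @Override
    protected String getClientId(X509Certificate clientCert) {
        String dn = clientCert.getSubjectX500Principal().getName();
        // Naive CN extraction for illustration only; production code should
        // parse the DN with a proper RDN parser.
        for (String rdn : dn.split(",")) {
            if (rdn.trim().startsWith("CN=")) {
                return rdn.trim().substring(3);
            }
        }
        return dn; // fall back to the full DN
    }
}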
Here is a sample (partial) ZooKeeper configuration for enabling TLS authentication.
These configurations are described in the
ZooKeeper Admin Guide.
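# sample (partial) zoo.cfg for TLS authentication; paths and passwords are placeholders
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=/path/to/zk/keystore.jks
ssl.keyStore.password=zk-ks-passwd
ssl.trustStore.location=/path/to/zk/truststore.jks
ssl.trustStore.password=zk-ts-passwd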
IMPORTANT: ZooKeeper does not support setting the key password in the ZooKeeper server keystore
to a value different from the keystore password itself.
Be sure to set the key password to be the same as the keystore password.
Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with mTLS authentication.
These configurations are described above in Broker Configs.
# connect to the ZooKeeper port configured for TLS
zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
# required to use TLS to ZooKeeper (default is false)
zookeeper.ssl.client.enable=true
# required to use TLS to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
# define key/trust stores to use TLS to ZooKeeper; ignored unless zookeeper.ssl.client.enable=true
zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
zookeeper.ssl.keystore.password=kafka-ks-passwd
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd
# tell broker to create ACLs on znodes
zookeeper.set.acl=true
IMPORTANT: ZooKeeper does not support setting the key password in the ZooKeeper client (i.e. broker) keystore
to a value different from the keystore password itself.
Be sure to set the key password to be the same as the keystore password.
If you are running a version of Kafka that does not support security, or are simply running with security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:
Enable SASL and/or mTLS authentication on ZooKeeper. If enabling mTLS, you would now have both a non-TLS port and a TLS port, like this:
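# zoo.cfg sketch: both ports open during the migration
clientPort=2181
secureClientPort=2182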
Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs
If you enabled mTLS, disable the non-TLS port in ZooKeeper
Perform a second rolling restart of brokers, this time setting the configuration parameter zookeeper.set.acl to true, which enables the use of secure ACLs when creating znodes
Execute the ZkSecurityMigrator tool. To execute the tool, run the script bin/zookeeper-security-migration.sh with zookeeper.acl set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the --zk-tls-config-file <file> option if you enabled mTLS.
It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:
Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting zookeeper.set.acl to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes
Execute the ZkSecurityMigrator tool. To execute the tool, run the script bin/zookeeper-security-migration.sh with zookeeper.acl set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the --zk-tls-config-file <file> option if you need to set TLS configuration.
If you are disabling mTLS, enable the non-TLS port in ZooKeeper
Perform a second rolling restart of brokers, this time omitting the system property that sets the JAAS login file and/or removing ZooKeeper mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper port) as required
If you are disabling mTLS, disable the TLS port in ZooKeeper
Here is an example of how to run the migration tool:
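# illustrative; point --zookeeper.connect at your ensemble
bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181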
It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble. To do it, we need to perform a rolling restart of the server and set a few properties. See above for mTLS information, and please refer to the ZooKeeper documentation for more detail.
ZooKeeper connections that use mutual TLS are encrypted.
Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka version 2.5) ZooKeeper supports a server-side config
ssl.clientAuth (case-insensitively: want/need/none are the valid options, the default is need),
and setting this value to none in ZooKeeper allows clients to connect via a TLS-encrypted connection
without presenting their own certificate. Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with just TLS encryption.
These configurations are described above in Broker Configs.
# connect to the ZooKeeper port configured for TLS
zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
# required to use TLS to ZooKeeper (default is false)
zookeeper.ssl.client.enable=true
# required to use TLS to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
# define trust stores to use TLS to ZooKeeper; ignored unless zookeeper.ssl.client.enable=true
# no need to set keystore information assuming ssl.clientAuth=none on ZooKeeper
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd
# tell broker to create ACLs on znodes (if using SASL authentication, otherwise do not set this)
zookeeper.set.acl=true
Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems. It makes it simple to quickly define connectors that move large collections of data into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency. An export job can deliver data from Kafka topics into secondary storage and query systems or into batch systems for offline analysis.
Kafka Connect features include:
A common framework for Kafka connectors - Kafka Connect standardizes integration of other data systems with Kafka, simplifying connector development, deployment, and management
Distributed and standalone modes - scale up to a large, centrally managed service supporting an entire organization or scale down to development, testing, and small production deployments
REST interface - submit and manage connectors to your Kafka Connect cluster via an easy to use REST API
Automatic offset management - with just a little information from connectors, Kafka Connect can manage the offset commit process automatically so connector developers do not need to worry about this error prone part of connector development
Distributed and scalable by default - Kafka Connect builds on the existing group management protocol. More workers can be added to scale up a Kafka Connect cluster.
Streaming/batch integration - leveraging Kafka's existing capabilities, Kafka Connect is an ideal solution for bridging streaming and batch data systems
The quickstart provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.
Kafka Connect currently supports two modes of execution: standalone (single process) and distributed.
In standalone mode all work is performed in a single process. This configuration is simpler to set up and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:
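# worker config first, then any number of connector configs
bin/connect-standalone.sh config/connect-standalone.properties [connector1.properties connector2.properties ...]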
The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by config/server.properties. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs:
bootstrap.servers - List of Kafka servers used to bootstrap connections to Kafka
key.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
value.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
plugin.path (default empty) - a list of paths that contain Connect plugins (connectors, converters, transformations). Before running quick starts, users must add the absolute path that contains the example FileStreamSourceConnector and FileStreamSinkConnector packaged in connect-file-"version".jar, because these connectors are not included by default in the CLASSPATH or the plugin.path of the Connect worker (see the plugin.path property for examples).
The important configuration options specific to standalone mode are:
offset.storage.file.filename - File to store source connector offsets
The parameters that are configured here are intended for producers and consumers used by Kafka Connect to access the configuration, offset and status topics. For configuration of the producers used by Kafka source tasks and the consumers used by Kafka sink tasks, the same parameters can be used but need to be prefixed with producer. and consumer. respectively. The only Kafka client parameter that is inherited without a prefix from the worker configuration is bootstrap.servers, which in most cases will be sufficient, since the same cluster is often used for all purposes. A notable exception is a secured cluster, which requires extra parameters to allow connections. These parameters will need to be set up to three times in the worker configuration, once for management access, once for Kafka sources and once for Kafka sinks.
Starting with 2.3.0, client configuration overrides can be configured individually per connector by using the prefixes producer.override. and consumer.override. for Kafka sources or Kafka sinks respectively. These overrides are included with the rest of the connector's configuration properties.
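For example (a sketch; the specific properties shown are illustrative):
# in the worker config: applies to all source-task producers and sink-task consumers
producer.security.protocol=SSL
consumer.security.protocol=SSL
# in a connector config: per-connector override (requires a permissive override policy on the worker)
producer.override.compression.type=lz4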
The remaining parameters are connector configuration files. You may include as many as you want, but all will execute within the same process (on different threads). You can also choose not to specify any connector configuration files on the command line, and instead use the REST API to create connectors at runtime after your standalone worker starts.
Distributed mode handles automatic balancing of work, allows you to scale up (or down) dynamically, and offers fault tolerance both in the active tasks and for configuration and offset commit data. Execution is very similar to standalone mode:
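# distributed mode takes only the worker config; connectors are created via the REST API
bin/connect-distributed.sh config/connect-distributed.properties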
The difference is in the class which is started and the configuration parameters which change how the Kafka Connect process decides where to store configurations, how to assign work, and where to store offsets and task statuses. In the distributed mode, Kafka Connect stores the offsets, configs and task statuses in Kafka topics. It is recommended to manually create the topics for offsets, configs and statuses in order to achieve the desired number of partitions and replication factors. If the topics are not yet created when starting Kafka Connect, the topics will be auto created with the default number of partitions and replication factor, which may not be best suited for its usage.
In particular, the following configuration parameters, in addition to the common settings mentioned above, are critical to set before starting your cluster:
group.id (default connect-cluster) - unique name for the cluster, used in forming the Connect cluster group; note that this must not conflict with consumer group IDs
config.storage.topic (default connect-configs) - topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated, compacted topic. You may need to manually create the topic to ensure the correct configuration as auto created topics may have multiple partitions or be automatically configured for deletion rather than compaction
offset.storage.topic (default connect-offsets) - topic to use for storing offsets; this topic should have many partitions, be replicated, and be configured for compaction
status.storage.topic (default connect-status) - topic to use for storing statuses; this topic can have multiple partitions, and should be replicated and configured for compaction
Note that in distributed mode the connector configurations are not passed on the command line. Instead, use the REST API described below to create, modify, and destroy connectors.
Connector configurations are simple key-value mappings. In both standalone and distributed mode, they are included in the JSON payload for the REST request that creates (or modifies) the connector. In standalone mode these can also be defined in a properties file and passed to the Connect process on the command line.
Most configurations are connector dependent, so they can't be outlined here. However, there are a few common options:
name - Unique name for the connector. Attempting to register again with the same name will fail.
connector.class - The Java class for the connector
tasks.max - The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot achieve this level of parallelism.
key.converter - (optional) Override the default key converter set by the worker.
value.converter - (optional) Override the default value converter set by the worker.
The connector.class config supports several formats: the full name or alias of the class for this connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name or use FileStreamSink or FileStreamSinkConnector to make the configuration a bit shorter.
Sink connectors also have a few additional options to control their input. Each sink connector must set one of the following:
topics - A comma-separated list of topics to use as input for this connector
topics.regex - A Java regular expression of topics to use as input for this connector
For any other options, you should consult the documentation for the connector.
Connectors can be configured with transformations to make lightweight message-at-a-time modifications. They can be convenient for data massaging and event routing.
A transformation chain can be specified in the connector configuration.
transforms - List of aliases for the transformation, specifying the order in which the transformations will be applied.
transforms.$alias.type - Fully qualified class name for the transformation.
transforms.$alias.$transformationSpecificConfig - Configuration properties for the transformation
For example, let's take the built-in file source connector and use a transformation to add a static field.
Throughout the example we'll use the schemaless JSON data format. To use the schemaless format, we changed the following two lines in connect-standalone.properties from true to false:
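# in connect-standalone.properties
key.converter.schemas.enable=false
value.converter.schemas.enable=false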
The file source connector reads each line as a String. We will wrap each line in a Map and then add a second field to identify the origin of the event. To do this, we use two transformations:
HoistField to place the input line inside a Map
InsertField to add the static field. In this example we'll indicate that the record came from a file connector
After adding the transformations, the connect-file-source.properties file looks as follows:
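# a sketch of the connector config after adding the two transformations
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
transforms=MakeMap, InsertSource
transforms.MakeMap.type=org.apache.kafka.connect.transforms.HoistField$Value
transforms.MakeMap.field=line
transforms.InsertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.InsertSource.static.field=data_source
transforms.InsertSource.static.value=test-file-source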
All the lines starting with transforms were added for the transformations. You can see the two transformations we created: "InsertSource" and "MakeMap" are aliases that we chose to give the transformations. The transformation types are based on the list of built-in transformations you can see below. Each transformation type has additional configuration: HoistField requires a configuration called "field", which is the name of the field in the map that will include the original String from the file. The InsertField transformation lets us specify the field name and the value that we are adding.
When we ran the file source connector on a sample file without the transformations, and then read the records using kafka-console-consumer.sh, the results were:
"foo"
"bar"
"hello world"
We then create a new file connector, this time after adding the transformations to the configuration file. This time, assuming the example transformation settings shown above, the results will be:
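{"line":"foo","data_source":"test-file-source"}
{"line":"bar","data_source":"test-file-source"}
{"line":"hello world","data_source":"test-file-source"}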
You can see that the lines we've read are now part of a JSON map, and there is an extra field with the static value we specified. This is just one example of what you can do with transformations.
Several widely-applicable data and routing transformations are included with Kafka Connect:
InsertField - Add a field using either static data or record metadata
ReplaceField - Filter or rename fields
MaskField - Replace field with valid null value for the type (0, empty string, etc) or custom replacement (non-empty string or numeric value only)
ValueToKey - Replace the record key with a new key formed from a subset of fields in the record value
HoistField - Wrap the entire event as a single field inside a Struct or a Map
ExtractField - Extract a specific field from Struct and Map and include only this field in results
SetSchemaMetadata - Modify the schema name or version
TimestampRouter - Modify the topic of a record based on original topic and timestamp. Useful when using a sink that needs to write to different tables or indexes based on timestamps
RegexRouter - Modify the topic of a record based on original topic, replacement string and a regular expression
Filter - Removes messages from all further processing. This is used with a predicate to selectively filter certain messages.
InsertHeader - Add a header using static data
HeadersFrom - Copy or move fields in the key or value to the record headers
DropHeaders - Remove headers by name
Details on how to configure each transformation are listed below:
Insert field(s) using attributes from the record metadata or a configured static value. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.InsertField$Key) or value (org.apache.kafka.connect.transforms.InsertField$Value).
Filter or rename fields. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.ReplaceField$Key) or value (org.apache.kafka.connect.transforms.ReplaceField$Value).
Mask specified fields with a valid null value for the field type (i.e. 0, false, empty string, and so on). For numeric and string fields, an optional replacement value can be specified that is converted to the correct type. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.MaskField$Key) or value (org.apache.kafka.connect.transforms.MaskField$Value).
Wrap data using the specified field name in a Struct when schema present, or a Map in the case of schemaless data. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.HoistField$Key) or value (org.apache.kafka.connect.transforms.HoistField$Value).
Extract the specified field from a Struct when schema present, or a Map in the case of schemaless data. Any null values are passed through unmodified. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.ExtractField$Key) or value (org.apache.kafka.connect.transforms.ExtractField$Value).
Set the schema name, version or both on the record's key (org.apache.kafka.connect.transforms.SetSchemaMetadata$Key) or value (org.apache.kafka.connect.transforms.SetSchemaMetadata$Value) schema.
Update the record's topic field as a function of the original topic value and the record timestamp. This is mainly useful for sink connectors, since the topic field is often used to determine the equivalent entity name in the destination system (e.g. database table or search index name).
Update the record topic using the configured regular expression and replacement string. Under the hood, the regex is compiled to a java.util.regex.Pattern. If the pattern matches the input topic, java.util.regex.Matcher#replaceFirst() is used with the replacement string to obtain the new topic.
Flatten a nested data structure, generating names for each field by concatenating the field names at each level with a configurable delimiter character. Applies to Struct when schema present, or a Map in the case of schemaless data. Array fields and their contents are not modified. The default delimiter is '.'. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.Flatten$Key) or value (org.apache.kafka.connect.transforms.Flatten$Value).
Cast fields or the entire key or value to a specific type, e.g. to force an integer field to a smaller width. Cast from integers, floats, boolean and string to any other type, and cast binary to string (base64 encoded). Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.Cast$Key) or value (org.apache.kafka.connect.transforms.Cast$Value).
spec - List of fields and the type to cast them to, of the form field1:type,field2:type to cast fields of Maps or Structs, or a single type to cast the entire value. Valid types are int8, int16, int32, int64, float32, float64, boolean, and string. Note that binary fields can only be cast to string. Type: list. Default: none. Valid Values: list of colon-delimited pairs, e.g. foo:bar,abc:xyz.
Convert timestamps between different formats such as Unix epoch, strings, and Connect Date/Timestamp types. Applies to individual fields or to the entire value. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.TimestampConverter$Key) or value (org.apache.kafka.connect.transforms.TimestampConverter$Value).
format - A SimpleDateFormat-compatible format for the timestamp. Used to generate the output when type=string or used to parse the input if the input is a string.
unix.precision - The desired Unix precision for the timestamp: seconds, milliseconds, microseconds, or nanoseconds. Used to generate the output when type=unix or used to parse the input if the input is a Long. Note: This SMT will cause precision loss during conversions from, and to, values with sub-millisecond components.
Drops all records, filtering them from subsequent transformations in the chain. This is intended to be used conditionally to filter out records matching (or not matching) a particular Predicate.
Moves or copies fields in the key/value of a record into that record's headers. Corresponding elements of fields and headers together identify a field and the header it should be moved or copied to. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.HeaderFrom$Key) or value (org.apache.kafka.connect.transforms.HeaderFrom$Value).
The operation config is either move, if the fields are to be moved to the headers (removed from the key/value), or copy, if the fields are to be copied to the headers (retained in the key/value).
Transformations can be configured with predicates so that the transformation is applied only to messages which satisfy some condition. In particular, when combined with the Filter transformation predicates can be used to selectively filter out certain messages.
Predicates are specified in the connector configuration.
predicates - Set of aliases for the predicates to be applied to some of the transformations.
predicates.$alias.type - Fully qualified class name for the predicate.
predicates.$alias.$predicateSpecificConfig - Configuration properties for the predicate.
All transformations have the implicit config properties predicate and negate. A particular predicate is associated with a transformation by setting the transformation's predicate config to the predicate's alias. The predicate's value can be reversed using the negate configuration property.
For example, suppose you have a source connector which produces messages to many different topics and you want to:
filter out the messages in the 'foo' topic entirely
apply the ExtractField transformation with the field name 'other_field' to records in all topics except the topic 'bar'
To do this we need first to filter out the records destined for the topic 'foo'. The Filter transformation removes records from further processing, and can use the TopicNameMatches predicate to apply the transformation only to records in topics which match a certain regular expression. TopicNameMatches's only configuration property is pattern, which is a Java regular expression for matching against the topic name. The configuration would look like this:
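# a sketch of the relevant connector configuration
transforms=Filter
transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
transforms.Filter.predicate=IsFoo
predicates=IsFoo
predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsFoo.pattern=foo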
Next we need to apply ExtractField only when the topic name of the record is not 'bar'. We can't just use TopicNameMatches directly, because that would apply the transformation to matching topic names, not topic names which do not match. The transformation's implicit negate config property allows us to invert the set of records which a predicate matches. Adding the configuration for this to the previous example we arrive at:
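# a sketch of the combined configuration
transforms=Filter,Extract
transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
transforms.Filter.predicate=IsFoo
transforms.Extract.type=org.apache.kafka.connect.transforms.ExtractField$Value
transforms.Extract.field=other_field
transforms.Extract.predicate=IsBar
transforms.Extract.negate=true
predicates=IsFoo,IsBar
predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsFoo.pattern=foo
predicates.IsBar.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.IsBar.pattern=bar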
Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. This REST API is available in both standalone and distributed mode. The REST API server can be configured using the listeners configuration option.
This field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https.
For example:
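listeners=http://localhost:8080,https://localhost:8443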
By default, if no listeners are specified, the REST server runs on port 8083 using the HTTP protocol. When using HTTPS, the configuration has to include the SSL configuration.
By default, it will use the ssl.* settings. In case it is needed to use different configuration for the REST API than for connecting to Kafka brokers, the fields can be prefixed with listeners.https.
When using the prefix, only the prefixed options will be used and the ssl.* options without the prefix will be ignored. The following fields can be used to configure HTTPS for the REST API:
ssl.keystore.location
ssl.keystore.password
ssl.keystore.type
ssl.key.password
ssl.truststore.location
ssl.truststore.password
ssl.truststore.type
ssl.enabled.protocols
ssl.provider
ssl.protocol
ssl.cipher.suites
ssl.keymanager.algorithm
ssl.secure.random.implementation
ssl.trustmanager.algorithm
ssl.endpoint.identification.algorithm
ssl.client.auth
The REST API is used not only by users to monitor / manage Kafka Connect. In distributed mode, it is also used for the Kafka Connect cross-cluster communication. Some requests received on the follower nodes REST API will be forwarded to the leader node REST API.
In case the URI under which a given host is reachable is different from the URI on which it listens, the configuration options rest.advertised.host.name, rest.advertised.port and rest.advertised.listener
can be used to change the URI which will be used by the follower nodes to connect with the leader. When using both HTTP and HTTPS listeners, the rest.advertised.listener option can also be used to define which listener
will be used for the cross-cluster communication. When using HTTPS for communication between nodes, the same ssl.* or listeners.https options will be used to configure the HTTPS client.
The following are the currently supported REST API endpoints:
GET /connectors - return a list of active connectors
POST /connectors - create a new connector; the request body should be a JSON object containing a string name field and an object config field with the connector configuration parameters
GET /connectors/{name} - get information about a specific connector
GET /connectors/{name}/config - get the configuration parameters for a specific connector
PUT /connectors/{name}/config - update the configuration parameters for a specific connector
GET /connectors/{name}/status - get current status of the connector, including if it is running, failed, paused, etc., which worker it is assigned to, error information if it has failed, and the state of all its tasks
GET /connectors/{name}/tasks - get a list of tasks currently running for a connector
GET /connectors/{name}/tasks/{taskid}/status - get current status of the task, including if it is running, failed, paused, etc., which worker it is assigned to, and error information if it has failed
PUT /connectors/{name}/pause - pause the connector and its tasks, which stops message processing until the connector is resumed. Any resources claimed by its tasks are left allocated, which allows the connector to begin processing data quickly once it is resumed.
PUT /connectors/{name}/stop - stop the connector and shut down its tasks, deallocating any resources claimed by its tasks. This is more efficient from a resource usage standpoint than pausing the connector, but can cause it to take longer to begin processing data once resumed.
PUT /connectors/{name}/resume - resume a paused or stopped connector (or do nothing if the connector is not paused or stopped)
POST /connectors/{name}/restart?includeTasks=<true|false>&onlyFailed=<true|false> - restart a connector and its tasks instances.
the "includeTasks" parameter specifies whether to restart the connector instance and task instances ("includeTasks=true") or just the connector instance ("includeTasks=false"), with the default ("false") preserving the same behavior as earlier versions.
the "onlyFailed" parameter specifies whether to restart just the instances with a FAILED status ("onlyFailed=true") or all instances ("onlyFailed=false"), with the default ("false") preserving the same behavior as earlier versions.
POST /connectors/{name}/tasks/{taskId}/restart - restart an individual task (typically because it has failed)
DELETE /connectors/{name} - delete a connector, halting all tasks and deleting its configuration
GET /connectors/{name}/topics - get the set of topics that a specific connector is using since the connector was created or since a request to reset its set of active topics was issued
PUT /connectors/{name}/topics/reset - send a request to empty the set of active topics of a connector
GET /connectors/{name}/offsets - get the current offsets for a connector (see KIP-875 for more details)
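As an illustration of using this API, a connector could be created like this (the worker URL, connector name, and settings here are examples):
curl -X POST -H "Content-Type: application/json" \
  --data '{"name": "local-file-sink", "config": {"connector.class": "FileStreamSink", "tasks.max": "1", "file": "test.sink.txt", "topics": "connect-test"}}' \
  http://localhost:8083/connectors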
Kafka Connect also provides a REST API for getting information about connector plugins:
GET /connector-plugins - return a list of connector plugins installed in the Kafka Connect cluster. Note that the API only checks for connectors on the worker that handles the request, which means you may see inconsistent results, especially during a rolling upgrade if you add new connector jars
PUT /connector-plugins/{connector-type}/config/validate - validate the provided configuration values against the configuration definition. This API performs per config validation, returns suggested values and error messages during validation.
The following is a supported REST request at the top-level (root) endpoint:
GET / - return basic information about the Kafka Connect cluster such as the version of the Connect worker that serves the REST request (including git commit ID of the source code) and the Kafka cluster ID that it is connected to.
Kafka Connect provides error reporting to handle errors encountered along various stages of processing. By default, any error encountered during conversion or within transformations will cause the connector to fail. Each connector configuration can also enable tolerating such errors by skipping them, optionally writing each error and the details of the failed operation and problematic record (with various levels of detail) to the Connect application log. These mechanisms also capture errors when a sink connector is processing the messages consumed from its Kafka topics, and all of the errors can be written to a configurable "dead letter queue" (DLQ) Kafka topic.
To report errors within a connector's converter, transforms, or within the sink connector itself to the log, set errors.log.enable=true in the connector configuration to log details of each error and problem record's topic, partition, and offset. For additional debugging purposes, set errors.log.include.messages=true to also log the problem record key, value, and headers to the log (note this may log sensitive information).
To report errors within a connector's converter, transforms, or within the sink connector itself to a dead letter queue topic, set errors.deadletterqueue.topic.name, and optionally errors.deadletterqueue.context.headers.enable=true.
By default connectors exhibit "fail fast" behavior immediately upon an error or exception. This is equivalent to adding the following configuration properties with their defaults to a connector configuration:
# disable retries on failure
errors.retry.timeout=0
# do not log the error and their contexts
errors.log.enable=false
# do not record errors in a dead letter queue topic
errors.deadletterqueue.topic.name=
# Fail on first error
errors.tolerance=none
These and other related connector configuration properties can be changed to provide different behavior. For example, the following configuration properties can be added to a connector configuration to set up error handling with multiple retries, logging to the application logs and the my-connector-errors Kafka topic, and tolerating all errors by reporting them rather than failing the connector task:
# retry for at most 10 minutes, waiting up to 30 seconds between consecutive failures
errors.retry.timeout=600000
errors.retry.delay.max.ms=30000
# log error context along with application logs, but do not include configs and messages
errors.log.enable=true
errors.log.include.messages=false
# produce error context into the Kafka topic
errors.deadletterqueue.topic.name=my-connector-errors
# Tolerate all errors.
errors.tolerance=all
Kafka Connect is capable of providing exactly-once semantics for sink connectors (as of version 0.11.0) and source connectors (as of version 3.3.0). Please note that support for exactly-once semantics is highly dependent on the type of connector you run. Even if you set all the correct worker properties in the configuration for each node in a cluster, if a connector is not designed to, or cannot take advantage of the capabilities of the Kafka Connect framework, exactly-once may not be possible.
If a sink connector supports exactly-once semantics, to enable exactly-once at the Connect worker level, you must ensure its consumer group is configured to ignore records in aborted transactions. You can do this by setting the worker property consumer.isolation.level to read_committed or, if running a version of Kafka Connect that supports it, using a connector client config override policy that allows the consumer.override.isolation.level property to be set to read_committed in individual connector configs. There are no additional ACL requirements.
If a source connector supports exactly-once semantics, you must configure your Connect cluster to enable framework-level support for exactly-once source connectors. Additional ACLs may be necessary if running against a secured Kafka cluster. Note that exactly-once support for source connectors is currently only available in distributed mode; standalone Connect workers cannot provide exactly-once semantics.
Worker configuration
For new Connect clusters, set the exactly.once.source.support property to enabled in the worker config for each node in the cluster. For existing clusters, two rolling upgrades are necessary. During the first upgrade, the exactly.once.source.support property should be set to preparing, and during the second, it should be set to enabled.
ACL requirements
With exactly-once source support enabled, the principal for each Connect worker will require the following ACLs:
Write and Describe on the TransactionalId connect-cluster-${groupId}, where ${groupId} is the group.id of the cluster
IdempotentWrite on the Cluster whose ID is that of the Kafka cluster that hosts the worker's config topic; the IdempotentWrite ACL has been deprecated as of 2.8 and is only necessary for Connect clusters running on pre-2.8 Kafka clusters
And the principal for each individual connector will require the following ACLs:
Write and Describe on the TransactionalId ${groupId}-${connector}-${taskId}, for each task that the connector will create, where ${groupId} is the group.id of the Connect cluster, ${connector} is the name of the connector, and ${taskId} is the ID of the task (starting from zero); a wildcard prefix of ${groupId}-${connector}* can be used for convenience if there is no risk of conflict with other transactional IDs or if conflicts are acceptable to the user
Write, Read, and Describe on the offsets Topic used by the connector, which is either the value of the offsets.storage.topic property in the connector’s configuration if provided, or the value of the offsets.storage.topic property in the worker’s configuration if not
Create on the same offsets Topic, only necessary if the offsets topic for the connector does not exist yet
IdempotentWrite on the Cluster whose ID is that of the Kafka cluster that the source connector writes to; the IdempotentWrite ACL has been deprecated as of 2.8 and is only necessary for Connect clusters running on pre-2.8 Kafka clusters
This guide describes how developers can write new connectors for Kafka Connect to move data between Kafka and other systems. It briefly reviews a few key concepts and then describes how to create a simple connector.
To copy data between Kafka and another system, users create a Connector for the system they want to pull data from or push data to. Connectors come in two flavors: SourceConnectors, which import data from another system (e.g. JDBCSourceConnector would import a relational database into Kafka), and SinkConnectors, which export data (e.g. HDFSSinkConnector would export the contents of a Kafka topic to an HDFS file).
Connectors do not perform any data copying themselves: their configuration describes the data to be copied, and the Connector is responsible for breaking that job into a set of Tasks that can be distributed to workers. These Tasks also come in two corresponding flavors: SourceTask and SinkTask.
With an assignment in hand, each Task must copy its subset of the data to or from Kafka. In Kafka Connect, it should always be possible to frame these assignments as a set of input and output streams consisting of records with consistent schemas. Sometimes this mapping is obvious: each file in a set of log files can be considered a stream with each parsed line forming a record using the same schema and offsets stored as byte offsets in the file. In other cases it may require more effort to map to this model: a JDBC connector can map each table to a stream, but the offset is less clear. One possible mapping uses a timestamp column to generate queries incrementally returning new data, and the last queried timestamp can be used as the offset.
Each stream should be a sequence of key-value records. Both the keys and values can have complex structure -- many primitive types are provided, but arrays, objects, and nested data structures can be represented as well. The runtime data format does not assume any particular serialization format; this conversion is handled internally by the framework.
In addition to the key and value, records (both those generated by sources and those delivered to sinks) have associated stream IDs and offsets. These are used by the framework to periodically commit the offsets of data that have been processed so that in the event of failures, processing can resume from the last committed offsets, avoiding unnecessary reprocessing and duplication of events.
Not all jobs are static, so Connector implementations are also responsible for monitoring the external system for any changes that might require reconfiguration. For example, in the JDBCSourceConnector example, the Connector might assign a set of tables to each Task. When a new table is created, it must discover this so it can assign the new table to one of the Tasks by updating its configuration. When it notices a change that requires reconfiguration (or a change in the number of Tasks), it notifies the framework and the framework updates any corresponding Tasks.
Developing a connector only requires implementing two interfaces, the Connector and Task. A simple example is included with the source code for Kafka in the file package. This connector is meant for use in standalone mode and has implementations of a SourceConnector/SourceTask to read each line of a file and emit it as a record and a SinkConnector/SinkTask that writes each record to a file.
The rest of this section will walk through some code to demonstrate the key steps in creating a connector, but developers should also refer to the full example source code as many details are omitted for brevity.
We'll cover the SourceConnector as a simple example. SinkConnector implementations are very similar. Start by creating the class that inherits from SourceConnector and add a field that will store the configuration information to be propagated to the task(s) (the topic to send data to, and optionally - the filename to read from and the maximum batch size):
public class FileStreamSourceConnector extends SourceConnector {
private Map<String, String> props;
The easiest method to fill in is taskClass(), which defines the class that should be instantiated in worker processes to actually read the data:
@Override
public Class<? extends Task> taskClass() {
return FileStreamSourceTask.class;
}
We will define the FileStreamSourceTask class below. Next, we add some standard lifecycle methods, start() and stop():
@Override
public void start(Map<String, String> props) {
// Initialization logic and setting up of resources can take place in this method.
// This connector doesn't need to do any of that, but we do log a helpful message to the user.
this.props = props;
AbstractConfig config = new AbstractConfig(CONFIG_DEF, props);
String filename = config.getString(FILE_CONFIG);
filename = (filename == null || filename.isEmpty()) ? "standard input" : config.getString(FILE_CONFIG);
log.info("Starting file source connector reading from {}", filename);
}
@Override
public void stop() {
// Nothing to do since no background monitoring is required.
}
Finally, the real core of the implementation is in taskConfigs(). In this case we are only
handling a single file, so even though we may be permitted to generate more tasks as per the
maxTasks argument, we return a list with only one entry:
@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
// Note that the task configs could contain configs additional to or different from the connector configs if needed. For instance,
// if different tasks have different responsibilities, or if different tasks are meant to process different subsets of the source data stream).
ArrayList<Map<String, String>> configs = new ArrayList<>();
// Only one input stream makes sense.
configs.add(props);
return configs;
}
Even with multiple tasks, this method implementation is usually pretty simple. It just has to determine the number of input tasks, which may require contacting the remote service it is pulling data from, and then divvy them up. Because some patterns for splitting work among tasks are so common, some utilities are provided in ConnectorUtils to simplify these cases.
Note that this simple example does not include dynamic input. See the discussion in the next section for how to trigger updates to task configs.
Next we'll describe the implementation of the corresponding SourceTask. The implementation is short, but too long to cover completely in this guide. We'll use pseudo-code to describe most of the implementation, but you can refer to the source code for the full example.
Just as with the connector, we need to create a class inheriting from the appropriate base Task class. It also has some standard lifecycle methods:
public class FileStreamSourceTask extends SourceTask {
private String filename;
private InputStream stream;
private String topic;
private int batchSize;
@Override
public void start(Map<String, String> props) {
filename = props.get(FileStreamSourceConnector.FILE_CONFIG);
stream = openOrThrowError(filename);
topic = props.get(FileStreamSourceConnector.TOPIC_CONFIG);
batchSize = Integer.parseInt(props.get(FileStreamSourceConnector.TASK_BATCH_SIZE_CONFIG));
}
@Override
public synchronized void stop() {
stream.close();
}
These are slightly simplified versions, but show that these methods should be relatively simple and the only work they should perform is allocating or freeing resources. There are two points to note about this implementation. First, the start() method does not yet handle resuming from a previous offset, which will be addressed in a later section. Second, the stop() method is synchronized. This will be necessary because SourceTasks are given a dedicated thread which they can block indefinitely, so they need to be stopped with a call from a different thread in the Worker.
Next, we implement the main functionality of the task, the poll() method which gets events from the input system and returns a List<SourceRecord>:
@Override
public List<SourceRecord> poll() throws InterruptedException {
try {
ArrayList<SourceRecord> records = new ArrayList<>();
while (streamValid(stream) && records.isEmpty()) {
LineAndOffset line = readToNextLine(stream);
if (line != null) {
Map<String, Object> sourcePartition = Collections.singletonMap("filename", filename);
Map<String, Object> sourceOffset = Collections.singletonMap("position", streamOffset);
records.add(new SourceRecord(sourcePartition, sourceOffset, topic, Schema.STRING_SCHEMA, line));
if (records.size() >= batchSize) {
return records;
}
} else {
Thread.sleep(1);
}
}
return records;
} catch (IOException e) {
// Underlying stream was killed, probably as a result of calling stop. Allow to return
// null, and driving thread will handle any shutdown if necessary.
}
return null;
}
Again, we've omitted some details, but we can see the important steps: the poll() method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output SourceRecord with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the SourceRecord constructor can also include a specific output partition, a key, and headers.
Note that this implementation uses the normal Java InputStream interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic poll() interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
Although not used in the example, SourceTask also provides two APIs to commit offsets in the source system: commit and commitRecord. The APIs are provided for source systems which have an acknowledgement mechanism for messages. Overriding these methods allows the source connector to acknowledge messages in the source system, either in bulk or individually, once they have been written to Kafka.
The commit API stores the offsets in the source system, up to the offsets that have been returned by poll. The implementation of this API should block until the commit is complete. The commitRecord API saves the offset in the source system for each SourceRecord after it is written to Kafka. As Kafka Connect will record offsets automatically, SourceTasks are not required to implement them. In cases where a connector does need to acknowledge messages in the source system, only one of the APIs is typically required.
The previous section described how to implement a simple SourceTask. Unlike SourceConnector and SinkConnector, SourceTask and SinkTask have very different interfaces because SourceTask uses a pull interface and SinkTask uses a push interface. Both share the common lifecycle methods, but the SinkTask interface is quite different:
public abstract class SinkTask implements Task {
public void initialize(SinkTaskContext context) {
this.context = context;
}
public abstract void put(Collection<SinkRecord> records);
public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
}
The SinkTask documentation contains full details, but this interface is nearly as simple as the SourceTask. The put() method should contain most of the implementation, accepting sets of SinkRecords, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The SinkRecords contain essentially the same information as SourceRecords: Kafka topic, partition, offset, the event key and value, and optional headers.
The flush() method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The offsets parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once
delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the flush() operation atomically commits the data and offsets to a final location in HDFS.
When error reporting is enabled for a connector, the connector can use an ErrantRecordReporter to report problems with individual records sent to a sink connector. The following example shows how a connector's SinkTask subclass might obtain and use the ErrantRecordReporter, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
private ErrantRecordReporter reporter;
@Override
public void start(Map<String, String> props) {
...
try {
reporter = context.errantRecordReporter(); // may be null if DLQ not enabled
} catch (NoSuchMethodError | NoClassDefFoundError e) {
// Will occur in Connect runtimes earlier than 2.6
reporter = null;
}
}
@Override
public void put(Collection<SinkRecord> records) {
for (SinkRecord record: records) {
try {
// attempt to process and send record to data sink
process(record);
} catch(Exception e) {
if (reporter != null) {
// Send errant record to error reporter
reporter.report(record, e);
} else {
// There's no error reporter, so fail
throw new ConnectException("Failed on record", e);
}
}
}
}
The SourceTask implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.
To correctly resume upon startup, the task can use the SourceContext passed into its initialize() method to access the offset data. In initialize(), we would add a bit more code to read the offset (if it exists) and seek to that position:
stream = new FileInputStream(filename);
Map<String, Object> offset = context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
if (offset != null) {
Long lastRecordedOffset = (Long) offset.get("position");
if (lastRecordedOffset != null)
seekToOffset(stream, lastRecordedOffset);
}
Of course, you might need to read many keys for each of the input streams. The OffsetStorageReader interface also allows you to issue bulk reads to efficiently load all offsets, then apply them by seeking each input stream to the appropriate position.
With the passing of KIP-618, Kafka Connect supports exactly-once source connectors as of version 3.3.0. In order for a source connector to take advantage of this support, it must be able to provide meaningful source offsets for each record that it emits, and resume consumption from the external system at the exact position corresponding to any of those offsets without dropping or duplicating messages.
Defining transaction boundaries
By default, the Kafka Connect framework will create and commit a new Kafka transaction for each batch of records that a source task returns from its poll method. However, connectors can also define their own transaction boundaries, which can be enabled by users by setting the transaction.boundary property to connector in the config for the connector.
If enabled, the connector's tasks will have access to a TransactionContext from their SourceTaskContext, which they can use to control when transactions are aborted and committed.
For example, to commit a transaction at least every ten records:
private int recordsSent;
@Override
public void start(Map<String, String> props) {
this.recordsSent = 0;
}
@Override
public List<SourceRecord> poll() {
List<SourceRecord> records = fetchRecords();
boolean shouldCommit = false;
for (SourceRecord record : records) {
if (++this.recordsSent >= 10) {
shouldCommit = true;
}
}
if (shouldCommit) {
this.recordsSent = 0;
this.context.transactionContext().commitTransaction();
}
return records;
}
Or to commit a transaction for exactly every tenth record:
private int recordsSent;

@Override
public void start(Map<String, String> props) {
    this.recordsSent = 0;
}

@Override
public List<SourceRecord> poll() {
    List<SourceRecord> records = fetchRecords();
    for (SourceRecord record : records) {
        if (++this.recordsSent % 10 == 0) {
            this.context.transactionContext().commitTransaction(record);
        }
    }
    return records;
}
Most connectors do not need to define their own transaction boundaries. However, it may be useful if files or objects in the source system are broken up into multiple source records but should be delivered atomically. Additionally, it may be useful when it is impossible to give each source record a unique source offset, as long as every record with a given offset is delivered within a single transaction.
Note that if the user has not enabled connector-defined transaction boundaries in the connector configuration, the TransactionContext returned by context.transactionContext() will be null.
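A defensive sketch of that check inside a task, where record stands in for whatever record the task just produced:
TransactionContext transactionContext = context.transactionContext();
if (transactionContext != null) {
    // Connector-defined boundaries are enabled; commit after this record
    transactionContext.commitTransaction(record);
}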
Validation APIs
A few additional preflight validation APIs can be implemented by source connector developers.
Some users may require exactly-once semantics from a connector. In this case, they may set the exactly.once.support property to required in the configuration for the connector. When this happens, the Kafka Connect framework will ask the connector whether it can provide exactly-once semantics with the specified configuration. This is done by invoking the exactlyOnceSupport method on the connector.
If a connector doesn't support exactly-once semantics, it should still implement this method to let users know for certain that it cannot provide exactly-once semantics:
@Override
public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
    // This connector cannot provide exactly-once semantics under any conditions
    return ExactlyOnceSupport.UNSUPPORTED;
}
Otherwise, a connector should examine the configuration, and return ExactlyOnceSupport.SUPPORTED if it can provide exactly-once semantics:
@Override
public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
    // This connector can always provide exactly-once semantics
    return ExactlyOnceSupport.SUPPORTED;
}
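A connector whose guarantees depend on its configuration can instead inspect the supplied properties. In this sketch, snapshot.mode is a purely hypothetical connector-specific setting:
@Override
public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
    // Hypothetical: exactly-once only holds when snapshotting is disabled
    if ("none".equals(props.get("snapshot.mode"))) {
        return ExactlyOnceSupport.SUPPORTED;
    }
    return ExactlyOnceSupport.UNSUPPORTED;
}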
Additionally, if the user has configured the connector to define its own transaction boundaries, the Kafka Connect framework will ask the connector whether it can define its own transaction boundaries with the specified configuration, using the canDefineTransactionBoundaries method:
@Override
public ConnectorTransactionBoundaries canDefineTransactionBoundaries(Map<String, String> props) {
    // This connector can always define its own transaction boundaries
    return ConnectorTransactionBoundaries.SUPPORTED;
}
This method should only be implemented for connectors that can define their own transaction boundaries in some cases. If a connector is never able to define its own transaction boundaries, it does not need to implement this method.
Kafka Connect is intended to define bulk data copying jobs, such as copying an entire database rather than creating many jobs to copy each table individually. One consequence of this design is that the set of input or output streams for a connector can vary over time.
Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the ConnectorContext object that reconfiguration is necessary. For example, in a SourceConnector:
if (inputsChanged())
    this.context.requestTaskReconfiguration();
The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the SourceConnector this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
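A minimal sketch of such a monitoring thread inside a SourceConnector follows; inputsChanged() is the same hypothetical change detector as above, the 30-second interval is arbitrary, and the executor types come from java.util.concurrent:
// uses java.util.concurrent.{Executors, ScheduledExecutorService, TimeUnit}
private ScheduledExecutorService monitor;

@Override
public void start(Map<String, String> props) {
    // Poll the source system for structural changes on a background thread
    monitor = Executors.newSingleThreadScheduledExecutor();
    monitor.scheduleAtFixedRate(() -> {
        if (inputsChanged())                      // hypothetical change detector
            context.requestTaskReconfiguration();
    }, 0, 30, TimeUnit.SECONDS);                  // interval chosen arbitrarily
}

@Override
public void stop() {
    monitor.shutdownNow();
}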
Ideally this code for monitoring changes would be isolated to the Connector and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the Task encounters the issue before the Connector, which will be common if the Connector needs to poll for changes, the Task will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.
SinkConnectors usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. SinkTasks should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple SinkTasks seeing a new input stream for the first time and simultaneously trying to create the new resource. SinkConnectors, on the other hand, will generally require no special code for handling a dynamic set of streams.
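One common way to defuse that race, sketched here for a hypothetical JDBC-backed sink, is to make resource creation idempotent so that concurrent tasks cannot collide:
// Hypothetical JDBC sink: idempotent DDL means it is harmless if two
// SinkTasks both try to create the table for a newly seen input stream.
try (Statement stmt = connection.createStatement()) {
    stmt.executeUpdate("CREATE TABLE IF NOT EXISTS " + tableName
            + " (record_key VARCHAR(255), record_value TEXT)");
}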
Kafka Connect allows you to validate connector configurations before submitting a connector to be executed and can provide feedback about errors and recommended values. To take advantage of this, connector developers need to provide an implementation of config() to expose the configuration definition to the framework.
The following code in FileStreamSourceConnector defines the configuration and exposes it to the framework.
static final ConfigDef CONFIG_DEF = new ConfigDef()
    .define(FILE_CONFIG, Type.STRING, null, Importance.HIGH, "Source filename. If not specified, the standard input will be used")
    .define(TOPIC_CONFIG, Type.STRING, ConfigDef.NO_DEFAULT_VALUE, new ConfigDef.NonEmptyString(), Importance.HIGH, "The topic to publish data to")
    .define(TASK_BATCH_SIZE_CONFIG, Type.INT, DEFAULT_TASK_BATCH_SIZE, Importance.LOW,
        "The maximum number of records the source task can read from the file each time it is polled");

public ConfigDef config() {
    return CONFIG_DEF;
}
The ConfigDef class is used for specifying the set of expected configurations. For each configuration, you can specify the name, the type, the default value, the documentation, the group information, the order in the group, the width of the configuration value, and the name suitable for display in the UI. In addition, you can provide special validation logic used for single configuration validation by overriding the Validator class. Moreover, there may be dependencies between configurations; for example, the valid values and visibility of a configuration may change according to the values of other configurations. To handle this, ConfigDef allows you to specify the dependents of a configuration and to provide an implementation of Recommender to get valid values and set visibility of a configuration given the current configuration values.
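As a sketch of single-configuration validation, a custom Validator for a hypothetical positive-integer setting might look like this:
public static class PositiveInt implements ConfigDef.Validator {
    @Override
    public void ensureValid(String name, Object value) {
        // Reject missing or non-positive values (value arrives already parsed as Integer)
        if (value == null || (Integer) value <= 0)
            throw new ConfigException(name, value, "Value must be a positive integer");
    }
}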
Also, the validate() method in Connector provides a default validation implementation which returns a list of allowed configurations together with configuration errors and recommended values for each configuration. However, it does not use the recommended values for configuration validation. You may provide an override of the default implementation for customized configuration validation, which may use the recommended values.
The FileStream connectors are good examples because they are simple, but they also have trivially structured data -- each line is just a string. Almost all practical connectors will need schemas with more complex data formats.
To create more complex data, you'll need to work with the Kafka Connect data API. Most structured records will need to interact with two classes in addition to primitive types: Schema and Struct.
The API documentation provides a complete reference, but here is a simple example creating a Schema and Struct:
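(A minimal sketch using Connect's SchemaBuilder; NAME is a presumed schema-name constant, and the field names and values are illustrative.)
Schema schema = SchemaBuilder.struct().name(NAME)
    .field("name", Schema.STRING_SCHEMA)
    .field("age", Schema.INT32_SCHEMA)
    .field("admin", SchemaBuilder.bool().defaultValue(false).build())
    .build();

Struct struct = new Struct(schema)
    .put("name", "Barbara Liskov")
    .put("age", 75);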
If you are implementing a source connector, you'll need to decide when and how to create schemas. Where possible, you should avoid recomputing them. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.
However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an ALTER TABLE command. The connector must be able to detect these changes and react appropriately.
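A common pattern is to cache one Schema per table and rebuild it only when a change is detected. In this sketch, schemaChanged() and buildSchemaFor() are hypothetical helpers backed by the connector's own change-detection logic:
private final Map<String, Schema> schemaCache = new HashMap<>();

private Schema schemaFor(String table) {
    Schema cached = schemaCache.get(table);
    if (cached == null || schemaChanged(table)) {  // hypothetical change check
        cached = buildSchemaFor(table);            // hypothetical builder
        schemaCache.put(table, cached);
    }
    return cached;
}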
Sink connectors are usually simpler because they are consuming data and therefore do not need to create schemas. However, they should take just as much care to validate that the schemas they receive have the expected format. When the schema does not match -- usually indicating the upstream producer is generating invalid data that cannot be correctly translated to the destination system -- sink connectors should throw an exception to indicate this error to the system.
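A sketch of such a check in a sink task's put() path, where the expectation of a STRUCT value is an assumption made for illustration:
Schema valueSchema = record.valueSchema();
if (valueSchema == null || valueSchema.type() != Schema.Type.STRUCT) {
    // Upstream data does not match what this sink can translate; fail loudly
    throw new ConnectException("Unexpected value schema: " + valueSchema);
}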
Kafka Connect's REST layer provides a set of APIs to enable administration of the cluster. This includes APIs to view the configuration of connectors and the status of their tasks, as well as to alter their current behavior (e.g. changing configuration and restarting tasks).
When a connector is first submitted to the cluster, a rebalance is triggered between the Connect workers in order to distribute the load that consists of the tasks of the new connector. This same rebalancing procedure is also used when connectors increase or decrease the number of tasks they require, when a connector's configuration is changed, or when a worker is added or removed from the group as part of an intentional upgrade of the Connect cluster or due to a failure.
In versions prior to 2.3.0, the Connect workers would rebalance the full set of connectors and their tasks in the cluster as a simple way to make sure that each worker has approximately the same amount of work. This behavior can still be enabled by setting connect.protocol=eager.
Starting with 2.3.0, Kafka Connect uses by default a protocol that performs incremental cooperative rebalancing, which incrementally balances the connectors and tasks across the Connect workers, affecting only tasks that are new, to be removed, or need to move from one worker to another. Other tasks are not stopped and restarted during the rebalance, as they would have been with the old protocol.
If a Connect worker leaves the group, intentionally or due to a failure, Connect waits for scheduled.rebalance.max.delay.ms before triggering a rebalance. This delay defaults to five minutes (300000ms) to tolerate failures or upgrades of workers without immediately redistributing the load of a departing worker. If this worker returns within the configured delay, it gets its previously assigned tasks in full. However, this means that the tasks will remain unassigned until the time specified by scheduled.rebalance.max.delay.ms elapses. If a worker does not return within that time limit, Connect will reassign those tasks among the remaining workers in the Connect cluster.
The new Connect protocol is enabled when all the workers that form the Connect cluster are configured with connect.protocol=compatible, which is also the default value when this property is missing. Therefore, upgrading to the new Connect protocol happens automatically when all the workers upgrade to 2.3.0. A rolling upgrade of the Connect cluster will activate incremental cooperative rebalancing when the last worker joins on version 2.3.0.
You can use the REST API to view the current status of a connector and its tasks, including the ID of the worker to which each was assigned. For example, the GET /connectors/file-source/status request shows the status of a connector named file-source:
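An illustrative response follows; the connector name and worker IDs are examples only:
{
  "name": "file-source",
  "connector": {
    "state": "RUNNING",
    "worker_id": "192.168.1.208:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "192.168.1.209:8083"
    }
  ]
}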
Connectors and their tasks publish status updates to a shared topic (configured with status.storage.topic) which all workers in the cluster monitor. Because the workers consume this topic asynchronously, there is typically a (short) delay before a state change is visible through the status API. The following states are possible for a connector or one of its tasks:
UNASSIGNED: The connector/task has not yet been assigned to a worker.
RUNNING: The connector/task is running.
PAUSED: The connector/task has been administratively paused.
FAILED: The connector/task has failed (usually by raising an exception, which is reported in the status output).
RESTARTING: The connector/task is either actively restarting or is expected to restart soon.
In most cases, connector and task states will match, though they may be different for short periods of time when changes are occurring or if tasks have failed. For example, when a connector is first started, there may be a noticeable delay before the connector and its tasks have all transitioned to the RUNNING state. States will also diverge when tasks fail since Connect does not automatically restart failed tasks. To restart a connector/task manually, you can use the restart APIs listed above. Note that if you try to restart a task while a rebalance is taking place, Connect will return a 409 (Conflict) status code. You can retry after the rebalance completes, but it might not be necessary since rebalances effectively restart all the connectors and tasks in the cluster.
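For example, using the same illustrative connector name as above, a connector or an individual task can be restarted with:
POST /connectors/file-source/restart
POST /connectors/file-source/tasks/0/restart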
Starting with 2.5.0, Kafka Connect uses the status.storage.topic to also store information related to the topics that each connector is using. Connect Workers use these per-connector topic status updates to respond to requests to the GET /connectors/{name}/topics REST endpoint by returning the set of topic names that a connector is using. A request to the PUT /connectors/{name}/topics/reset REST endpoint resets the set of active topics for a connector and allows a new set to be populated, based on the connector's latest pattern of topic usage. Upon connector deletion, the set of the connector's active topics is also deleted. Topic tracking is enabled by default but can be disabled by setting topic.tracking.enable=false. If you want to disallow requests to reset the active topics of connectors during runtime, set the Worker property topic.tracking.allow.reset=false.
It's sometimes useful to temporarily stop the message processing of a connector. For example, if the remote system is undergoing maintenance, it would be preferable for source connectors to stop polling it for new data instead of filling logs with exception spam. For this use case, Connect offers a pause/resume API. While a source connector is paused, Connect will stop polling it for additional records. While a sink connector is paused, Connect will stop pushing new messages to it. The pause state is persistent, so even if you restart the cluster, the connector will not begin message processing again until the task has been resumed. Note that there may be a delay before all of a connector's tasks have transitioned to the PAUSED state since it may take time for them to finish whatever processing they were in the middle of when being paused. Additionally, failed tasks will not transition to the PAUSED state until they have been restarted.
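For example, again with an illustrative connector name, a connector is paused and later resumed with:
PUT /connectors/file-source/pause
PUT /connectors/file-source/resume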
Kafka Streams is a client library for processing and analyzing data stored in Kafka. It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, exactly-once processing semantics and simple yet efficient management of application state.
Kafka Streams has a low barrier to entry: You can quickly write and run a small-scale proof-of-concept on a single machine; and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads. Kafka Streams transparently handles the load balancing of multiple instances of the same application by leveraging Kafka's parallelism model.
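As a taste of that low barrier to entry, here is a minimal sketch of a complete Streams application that copies events from one topic to another; the application ID, bootstrap server, and topic names are illustrative:
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import java.util.Properties;

public class PipeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipe-example");       // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Copy every event from the input topic to the output topic (illustrative names)
        builder.stream("streams-plaintext-input").to("streams-pipe-output");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the Streams instance cleanly on shutdown
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
Running a second instance of the same application on another machine is all it takes to scale out: Kafka Streams rebalances the topic's partitions across the instances automatically.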