This k6 extension provides the ability to load test Kafka using a producer. You can send many messages with each connection to Kafka. These messages are an array of objects containing a key and a value. There is also a consumer for testing purposes, that is, to make sure you send the correct data to Kafka, but it is not meant to be used for load testing Kafka itself. Messages can be produced and consumed in many formats using various serializers and deserializers. The extension can fetch schemas from Schema Registry and also accepts hard-coded schemas. Compression is also supported.
The real purpose of this extension is to test Apache Kafka and the system you've designed that uses Apache Kafka. So, you can test your consumers, and hence your system, by auto-generating messages and sending them to your system via Apache Kafka.
To build the source, you should have the latest version of Go installed, matching the Go version required by k6 and xk6. I recommend installing gvm to manage Go versions.
If you want to learn more about the extension, read the How to Load Test Your Kafka Producers and Consumers using k6 article on the k6 blog.
- Produce/consume messages in String, ByteArray, JSON, and Avro formats (custom schema)
- Authentication with SASL PLAIN and SCRAM
- Create and list topics
- Support for user-provided Avro key and value schemas
- Support for loading Avro schemas from Schema Registry
- Support for byte array for binary data (from binary protocols)
- Support for consumption from all partitions with a group ID
- Support Kafka message compression: Gzip, Snappy, Lz4 & Zstd
Since v0.8.0, there is an official Docker image plus binaries in the release assets. Before running your script, make sure it is available to the container by mounting a volume (a directory) or passing it via stdin.
docker run --rm -i mostafamoradian/xk6-kafka:latest run - <scripts/test_json.js
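If you prefer mounting a volume instead, a command along these lines should work (the host path is illustrative):

docker run --rm -v $(pwd)/scripts:/scripts mostafamoradian/xk6-kafka:latest run /scripts/test_json.js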
The k6 binary can be built on various platforms, and each platform has its own set of requirements. The following shows how to build k6 binary with this extension on GNU/Linux distributions.
- gvm for easier installation and management of Go versions on your machine
- Git for cloning the project
- xk6 for building k6 binary with extensions
Feel free to skip the first two steps if you already have Go installed.
- Install gvm by following its installation guide.
- Install the latest version of Go using gvm. You need Go 1.4 installed for bootstrapping into higher Go versions, as explained here.
- Install xk6:

  go install go.k6.io/xk6/cmd/xk6@latest

- Build the binary:

  xk6 build --with github.com/mostafa/xk6-kafka@latest
Note: you can always use the latest version of k6 to build the extension, but the earliest version of k6 that supports extensions via xk6 is v0.32.0. xk6 is constantly evolving, so some APIs may not be backward compatible.
There are lots of examples in the scripts directory that show how to use various features of the extension.
You can start testing your own environment right away, but developing the script takes some time, so it is better to run your script against a development environment first, and then start testing your own environment.
I recommend the fast-data-dev Docker image by Lenses.io, a Kafka setup for development that includes Kafka, Zookeeper, Schema Registry, Kafka-Connect, Landoop Tools, and 20+ connectors. It is relatively easy to set up if you have Docker installed. Monitor the Docker logs until you have a working setup before attempting to test, because the initial setup, leader election, and test data ingestion take time.
- Run the Kafka environment and expose the container ports:

  sudo docker run \
      --detach --rm \
      --name lensesio \
      -p 2181:2181 \
      -p 3030:3030 \
      -p 8081-8083:8081-8083 \
      -p 9581-9585:9581-9585 \
      -p 9092:9092 \
      -e ADV_HOST=127.0.0.1 \
      lensesio/fast-data-dev

- After running the command, visit localhost:3030 to get into the fast-data-dev environment.

- Run the following command to see the container logs:

  sudo docker logs -f -t lensesio
If you have errors running the Kafka development environment, refer to the fast-data-dev documentation.
All the exported functions are available by importing them from k6/x/kafka. They are subject to change when a new major version is released. These are the exported functions:
The JavaScript API
/**
* Create a new Writer object for writing messages to Kafka.
*
* @constructor
* @param {[string]} brokers An array of brokers.
* @param {string} topic The topic to write to.
* @param {string} auth The authentication credentials for SASL PLAIN/SCRAM.
* @param {string} compression The Compression algorithm.
* @returns {object} A Writer object.
*/
function writer(brokers: [string], topic: string, auth: string, compression: string) => object {}
/**
* Write a sequence of messages to Kafka.
*
* @function
* @param {object} writer The writer object created with the writer constructor.
* @param {[object]} messages An array of message objects containing an optional key and a value.
* @param {string} keySchema An optional Avro schema for the key.
* @param {string} valueSchema An optional Avro schema for the value.
* @returns {string} A string containing the error.
*/
function produce(writer: object, messages: [object], keySchema: string, valueSchema: string) => string {}
/**
* Write a sequence of messages to Kafka with a specific serializer/deserializer.
*
* @function
* @param {object} writer The writer object created with the writer constructor.
* @param {[object]} messages An array of message objects containing an optional key and a value.
* @param {string} configurationJson Serializer, deserializer and schemaRegistry configuration.
* @param {string} keySchema An optional Avro schema for the key.
* @param {string} valueSchema An optional Avro schema for the value.
* @returns {string} A string containing the error.
*/
function produceWithConfiguration(writer: object, messages: [object], configurationJson: string, keySchema: string, valueSchema: string) => string {}
/**
* Create a new Reader object for reading messages from Kafka.
*
* @constructor
* @param {[string]} brokers An array of brokers.
* @param {string} topic The topic to read from.
* @param {number} partition The partition.
* @param {string} groupID The group ID.
* @param {number} offset The offset to begin reading from.
* @param {string} auth Authentication credentials for SASL PLAIN/SCRAM.
* @returns {object} A Reader object.
*/
function reader(brokers: [string], topic: string, partition: number, groupID: string, offset: number, auth: string) => object {}
/**
* Read a sequence of messages from Kafka.
*
* @function
* @param {object} reader The reader object created with the reader constructor.
* @param {number} limit The number of messages to read in one go; the call blocks until they are read. Defaults to 1.
* @param {string} keySchema An optional Avro schema for the key.
* @param {string} valueSchema An optional Avro schema for the value.
* @returns {[object]} An array of message objects (empty if consumption fails).
*/
function consume(reader: object, limit: number, keySchema: string, valueSchema: string) => [object] {}
/**
* Read a sequence of messages from Kafka with a specific serializer/deserializer.
*
* @function
* @param {object} reader The reader object created with the reader constructor.
* @param {number} limit The number of messages to read in one go; the call blocks until they are read. Defaults to 1.
* @param {string} configurationJson Serializer, deserializer and schemaRegistry configuration.
* @param {string} keySchema An optional Avro schema for the key.
* @param {string} valueSchema An optional Avro schema for the value.
* @returns {[object]} An array of message objects (empty if consumption fails).
*/
function consumeWithConfiguration(reader: object, limit: number, configurationJson: string, keySchema: string, valueSchema: string) => [object] {}
/**
* Create a topic in Kafka. It does nothing if the topic exists.
*
* @function
* @param {string} address The broker address.
* @param {string} topic The topic name.
* @param {number} partitions The number of partitions.
* @param {number} replicationFactor The replication factor in a clustered setup.
* @param {string} compression The compression algorithm.
* @returns {string} A string containing the error.
*/
function createTopic(address: string, topic: string, partitions: number, replicationFactor: number, compression: string) => string {}
/**
* List all topics in Kafka.
*
* @function
* @param {string} address The broker address.
* @returns {string} A nested list of strings containing a list of topics and the error (if any).
*/
function listTopics(address: string) => [[string], string] {}
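For example, a script's init context can import any of these in a single statement:

```javascript
import {
  writer,
  produce,
  produceWithConfiguration,
  reader,
  consume,
  consumeWithConfiguration,
  createTopic,
  listTopics,
} from "k6/x/kafka";
```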
The example scripts are available as test_<format/feature>.js with more code and commented sections in the scripts directory. The scripts usually have four parts, which the sketch after the list below puts together:
- The imports at the top show the exported functions from the Go extension and k6.
- The Avro schema defines a key and a value schema that are used by both producer and consumer, according to the Avro schema specification. These are defined in the test_avro.js script.
- The message producer:
  - The writer function is used to open a connection to the bootstrap servers. The first argument is an array of strings that signifies the bootstrap server addresses, and the second is the topic you want to write to. You can reuse this writer object to produce as many messages as you want. This object is created in init code and is reused in the exported default function.
  - The produce function sends a list of messages to Kafka. The first argument is the producer object, and the second is the list of messages (with key and value). The third and fourth arguments are the key schema and value schema in Avro format, if Avro is used. The values are treated as normal strings if no schema is passed to the function for either the key or the value. Use an empty string, "", if one of the schemas is Avro and the other is a plain string. You can use the produceWithConfiguration function to pass separate serializer, deserializer, and schema registry settings, as shown in the test_avro_with_schema_registry.js script. The produce function returns an error if it fails. The check is optional, but error being undefined means that the produce function successfully sent the messages.
  - The producer.close() function closes the producer object (in teardown).
- The message consumer:
  - The reader function is used to open a connection to the bootstrap servers. The first argument is an array of strings that signifies the bootstrap server addresses, and the second is the topic you want to read from. This object is created in init code and is reused in the exported default function.
  - The consume function is used to read a list of messages from Kafka. The first argument is the consumer object, and the second is the number of messages to read in one go. The third and fourth arguments are the key schema and value schema in Avro format, if Avro is used. The values are treated as normal strings if no schema is passed to the function for either the key or the value. Use an empty string, "", if one of the schemas is Avro and the other is a plain string. You can use the consumeWithConfiguration function to pass separate serializer, deserializer, and schema registry settings, as shown in the test_avro_with_schema_registry.js script. The consume function returns an empty array if it fails. The check is optional, but it checks to see if the length of the message array is exactly 10.
  - The consumer.close() function closes the consumer object (in teardown).
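To make these parts concrete, here is a condensed JSON-based sketch modeled on the test_json.js script. The broker address and topic name are illustrative, and it assumes, as the bundled scripts do, that the optional arguments (auth, compression, partition, group ID, offset) can be omitted; for Avro, you would pass the key and value schemas as the extra arguments to produce and consume.

```javascript
import { check } from "k6";
import { writer, produce, reader, consume } from "k6/x/kafka";

const bootstrapServers = ["localhost:9092"]; // illustrative broker address
const kafkaTopic = "xk6_kafka_json_topic"; // illustrative topic name

// Created once in init code and reused in the exported default function.
const producer = writer(bootstrapServers, kafkaTopic);
const consumer = reader(bootstrapServers, kafkaTopic);

export default function () {
  // Produce a batch of messages with stringified keys and values.
  const messages = [];
  for (let i = 0; i < 10; i++) {
    messages.push({
      key: JSON.stringify({ correlationId: "test-id-" + i }),
      value: JSON.stringify({ name: "xk6-kafka", index: i }),
    });
  }
  const error = produce(producer, messages);
  check(error, {
    "is sent": (err) => err == undefined,
  });

  // Consume the messages back; the call blocks until `limit` messages
  // arrive and returns an empty array on failure.
  const rxMessages = consume(consumer, 10);
  check(rxMessages, {
    "10 messages returned": (msgs) => msgs.length == 10,
  });
}

export function teardown(data) {
  producer.close();
  consumer.close();
}
```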
You can run k6 with the Kafka extension using the following command:
./k6 run --vus 50 --duration 60s scripts/test_json.js
And here's the test result output:
          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io
execution: local
script: scripts/test_json.js
output: -
scenarios: (100.00%) 1 scenario, 50 max VUs, 1m30s max duration (incl. graceful stop):
* default: 50 looping VUs for 1m0s (gracefulStop: 30s)
running (1m00.4s), 00/50 VUs, 6554 complete and 0 interrupted iterations
default ✓ [======================================] 50 VUs 1m0s
✓ is sent
✓ 10 messages returned
checks.........................: 100.00% ✓ 661954 ✗ 0
data_received..................: 0 B 0 B/s
data_sent......................: 0 B 0 B/s
iteration_duration.............: avg=459.31ms min=188.19ms med=456.26ms max=733.67ms p(90)=543.22ms p(95)=572.76ms
iterations.....................: 6554 108.563093/s
kafka.reader.dial.count........: 6554 108.563093/s
kafka.reader.error.count.......: 0 0/s
kafka.reader.fetches.count.....: 6554 108.563093/s
kafka.reader.message.bytes.....: 6.4 MB 106 kB/s
kafka.reader.message.count.....: 77825 1289.124612/s
kafka.reader.rebalance.count...: 0 0/s
kafka.reader.timeouts.count....: 0 0/s
kafka.writer.dial.count........: 6554 108.563093/s
kafka.writer.error.count.......: 0 0/s
kafka.writer.message.bytes.....: 54 MB 890 kB/s
kafka.writer.message.count.....: 655400 10856.309293/s
kafka.writer.rebalance.count...: 6554 108.563093/s
kafka.writer.write.count.......: 655400 10856.309293/s
vus............................: 50 min=50 max=50
vus_max........................: 50 min=50 max=50
To avoid getting the following error while running the test:
Failed to write message: [5] Leader Not Available: the cluster is in the middle of a leadership election and there is currently no leader for this partition and hence it is unavailable for writes
You can use the createTopic function to create topics in Kafka. The scripts/test_topics.js script shows how to list topics on all Kafka partitions and also how to create a topic.
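A minimal sketch of both operations, assuming the optional createTopic arguments (partitions, replicationFactor, compression) can be omitted for defaults; the broker address and topic name are illustrative:

```javascript
import { createTopic, listTopics } from "k6/x/kafka";

const address = "localhost:9092"; // illustrative broker address
const kafkaTopic = "xk6_kafka_test_topic"; // illustrative topic name

export default function () {
  // No-op if the topic already exists; returns an error string on failure.
  const error = createTopic(address, kafkaTopic);
  if (error != undefined) {
    console.error(error);
  }

  // listTopics returns a nested result: the list of topics and an error (if any).
  const [topics, listError] = listTopics(address);
  if (listError == undefined) {
    topics.forEach((topic) => console.log(topic));
  }
}
```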
You always have the option to create topics using the kafka-topics command:
$ docker exec -it lensesio bash
(inside container)$ kafka-topics --create --topic xk6_kafka_avro_topic --bootstrap-server localhost:9092
(inside container)$ kafka-topics --create --topic xk6_kafka_json_topic --bootstrap-server localhost:9092
If you want to test SASL authentication, have a look at this commit message, where I describe how to run a test environment.
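The auth argument accepted by writer and reader is a string of credentials. As a rough sketch only (the exact field names here are my assumption; the commit message and the bundled scripts are authoritative), the credentials might be passed as a JSON-encoded object:

```javascript
import { writer } from "k6/x/kafka";

// Assumed credential shape: username, password, and the SASL algorithm
// ("plain" for SASL PLAIN, or a SCRAM variant such as "sha256"/"sha512").
// Verify against the referenced commit message before relying on this.
const auth = JSON.stringify({
  username: "test",
  password: "test",
  algorithm: "sha256",
});

const producer = writer(["localhost:9092"], "xk6_kafka_json_topic", auth);
```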
I'd be thrilled to receive contributions and feedback on this piece of software. You're always welcome to create an issue if you find one (or many), and I'll do my best to address it.
This project started as a proof of concept, and although it seems to be used by some companies nowadays, it is still maintained by me personally rather than supported by the k6 team, and the APIs may change in the future. USE AT YOUR OWN RISK!
This work is licensed under the Apache License 2.0.