If you’ve read the previous article describing Kafka in a Nutshell you may be itching to write an application using Kafka as a data backend. This article will get you part of the way there by describing how to deploy Kafka locally using Docker and test it using kafkacat.
Running Kafka Locally
First, if you haven’t already, download and install Docker. Once you have Docker installed, create a default virtual machine that will host your local Docker containers.
> docker-machine create --driver virtualbox default
Clone the Kafka docker repository
You could use the docker pull command here, but I find it instructive to be able to view the source files for your container.
> git clone https://github.com/wurstmeister/kafka-docker
Set a default topic
Open docker-compose-single-broker.yml and set a default topic and advertised host name. You will want to use the IP address of your default Docker machine. Copy it to the clipboard with the following command.
> docker-machine ip default | pbcopy
In docker-compose-single-broker.yml, edit KAFKA_ADVERTISED_HOST_NAME with the IP address you copied above and KAFKA_CREATE_TOPICS with the name of the default topic you would like created. The 1:1 refers to the number of partitions and the replication factor for your topic.
environment:
  KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100 # IP address copied to clipboard above
  KAFKA_CREATE_TOPICS: "test:1:1"
Run your Docker container
Running the Docker container will start an instance of Zookeeper and Kafka that you can test locally.
> docker-compose -f docker-compose-single-broker.yml up -d
You will see messages indicating that Zookeeper and Kafka are up and running.
Starting kafkadocker_zookeeper_1...
Starting kafkadocker_kafka_1...
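To double-check that both containers came up, you can ask Docker Compose for their status and peek at the broker's logs. A quick sketch; the exact container names depend on the directory you cloned into:

```shell
# Show the state of the containers defined in the compose file
docker-compose -f docker-compose-single-broker.yml ps

# Tail the broker's logs to confirm Kafka finished starting up
docker-compose -f docker-compose-single-broker.yml logs kafka | tail -n 20
```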
Interacting with Kafka
kafkacat provides a generic command line interface to Kafka for producing and consuming data. You can install kafkacat with Homebrew.
> brew install kafkacat
Once installed, interacting with Kafka is relatively simple. kafkacat provides two modes, consumer and producer. You enter producer mode with the -P option. The -b option specifies the Kafka broker to talk to and the -t option specifies the topic to produce to. Kafka runs on port 9092 at the IP address of our virtual machine. Running the following command will open stdin to receive messages; simply type each message followed by Enter to produce it to your Kafka broker.
> kafkacat -P -b $(docker-machine ip default):9092 -t test
Kafka
quick
start
guide.
^C
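kafkacat can also produce keyed messages. The -K option sets the delimiter that separates a key from its value on each input line. A short sketch, assuming the test topic created earlier and hypothetical user1/user2 keys:

```shell
# Produce two keyed messages; text before ':' is the key, text after is the value
printf 'user1:first message\nuser2:second message\n' | \
  kafkacat -P -b $(docker-machine ip default):9092 -t test -K:
```

If you later consume with -K: as well, kafkacat prints each key alongside its value.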
To consume messages, you run kafkacat with the -C option. Kafka will replay the messages you have sent as a producer.
> kafkacat -C -b $(docker-machine ip default):9092 -t test
Kafka
quick
start
guide.
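By default the consumer starts from its stored offset and then waits for new messages. Two flags are useful when scripting: -o beginning replays the topic from the start, and -e exits once the end of the topic is reached.

```shell
# Replay every message in the topic from the beginning, then exit
kafkacat -C -b $(docker-machine ip default):9092 -t test -o beginning -e
```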
Where to go from here?
Now that you have a running Kafka broker there are a number of things you can do to become more familiar with Kafka. First, I recommend reading through the kafkacat examples to become familiar with producing and consuming messages. Next, you can use one of the many Kafka clients to produce and consume messages from your application.
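kafkacat also has a metadata listing mode, -L, which prints the brokers, topics, and partitions the cluster knows about. It is a quick way to confirm that your default topic was created with the expected partition count:

```shell
# List cluster metadata: brokers, topics, and partition leaders
kafkacat -L -b $(docker-machine ip default):9092
```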