
Kafka 101: Kafka Quick Start Guide

If you’ve read the previous article describing Kafka in a Nutshell, you may be itching to write an application using Kafka as a data backend. This article gets you part of the way there by describing how to deploy Kafka locally using Docker and test it with kafkacat.

Running Kafka Locally

First, if you haven’t already, download and install Docker. Once you have Docker installed, create a default virtual machine that will host your local Docker containers.
> docker-machine create --driver virtualbox default
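If this is a freshly created machine, you may also need to point your shell’s Docker client at it before the docker-compose commands later in this guide will work:
> eval $(docker-machine env default)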

Clone the kafka-docker repository

You could use the docker pull command here, but I find it instructive to be able to view the source files for your container.
> git clone https://github.com/wurstmeister/kafka-docker
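The docker-compose commands that follow assume you are working from inside the cloned directory:
> cd kafka-docker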

Set a default topic

Open docker-compose-single-broker.yml and set a default topic and advertised host name. You will want to use the IP address of your default Docker machine. Copy it to the clipboard with the following command (pbcopy is macOS-specific; on other platforms, just note the address that docker-machine prints).
> docker-machine ip default | pbcopy
In docker-compose-single-broker.yml, set KAFKA_ADVERTISED_HOST_NAME to the IP address you copied above and KAFKA_CREATE_TOPICS to the name of the default topic you would like created. The 1:1 refers to the number of partitions and the replication factor for the topic.
environment:
  KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100 # IP address pasted to clipboard above
  KAFKA_CREATE_TOPICS: "test:1:1"
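If I recall the kafka-docker README correctly, KAFKA_CREATE_TOPICS also accepts a comma-separated list, so you can create more than one topic at startup. The second topic name here is a made-up example:
  KAFKA_CREATE_TOPICS: "test:1:1,events:2:1" # hypothetical second topic with 2 partitions and replication factor 1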

Run your Docker container

Bringing up the Docker containers will start an instance of Zookeeper and an instance of Kafka that you can test locally.
> docker-compose -f docker-compose-single-broker.yml up -d
You will see messages indicating that Zookeeper and Kafka are up and running:
Starting kafkadocker_zookeeper_1...
Starting kafkadocker_kafka_1...
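If you want to confirm that both containers are actually up, docker-compose can list them along with their current state:
> docker-compose -f docker-compose-single-broker.yml ps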

Interacting with Kafka

kafkacat provides a generic command line interface to Kafka for producing and consuming data. You can install kafkacat with Homebrew.
> brew install kafkacat
Once installed, interacting with Kafka is relatively simple. kafkacat provides two modes, consumer and producer. You enter producer mode with the -P option. The -b option specifies the Kafka broker to talk to and the -t option specifies the topic to produce to. Kafka runs on port 9092 at the IP address of your Docker virtual machine. Running the following command will open stdin to receive messages; type each message followed by Enter to produce it to your Kafka broker.
> kafkacat -P -b $(docker-machine ip default):9092 -t test
Kafka
quick
guide.
start
^C
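As an aside, kafkacat can also attach a key to each message. As a rough sketch (the -K option sets the key delimiter, and the keys shown here are made up), producing keyed messages looks something like this:
> kafkacat -P -b $(docker-machine ip default):9092 -t test -K:
user1:hello from user1
user2:hello from user2
^C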
To consume messages, you run kafkacat with the -C option. Kafka will replay the messages you have sent as a producer.
> kafkacat -C -b $(docker-machine ip default):9092 -t test
Kafka
quick
guide.
start
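A couple of extra consumer flags are worth knowing: -e exits once the end of the topic is reached, and -f prints metadata alongside each message (the format string below is just one possibility):
> kafkacat -C -b $(docker-machine ip default):9092 -t test -e -f 'Partition %p, offset %o: %s\n'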

Where to go from here?

Now that you have a running Kafka broker there are a number of things you can do to become more familiar with Kafka. First, I recommend reading through the kafkacat examples to become familiar with producing and consuming messages. Next, you can use one of the many Kafka clients to produce and consume messages from your application.
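One quick way to explore is kafkacat’s metadata mode, which lists the brokers, topics, and partitions your local cluster knows about:
> kafkacat -L -b $(docker-machine ip default):9092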
