No luck finding Kafka weaknesses
Having used and deployed Kafka in conjunction with our Profium Sense product, it’s time to refill my coffee cup and reflect on what’s missing in Kafka.
We at Profium have enjoyed the performance and fault tolerance to start with, and we’ve appreciated the durability of messages and the connectivity to various programming language environments. What we first thought was a weakness was monitoring, i.e. how IT operations would identify bottlenecks in flows to and from Kafka. This aspect has matured, as open source and commercial solutions are now available to monitor a Kafka deployment and allow us to sleep well while Kafka is running. Well, IT operations folks never sleep, right? 🙂
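For example, one common way to spot a bottleneck on the consuming side is to compare a consumer group’s committed offsets against the log end offsets. Below is a minimal sketch using Kafka’s Java AdminClient; the broker address and the group id "sense-consumers" are placeholder values for illustration, not our actual configuration.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for the consumer group we want to watch (placeholder group id)
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("sense-consumers")
                         .partitionsToOffsetAndMetadata().get();

            // Latest (log end) offsets for the same partitions
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(latestSpec).all().get();

            // Lag per partition = log end offset minus committed offset
            committed.forEach((tp, meta) -> {
                long lag = latest.get(tp).offset() - meta.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```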
Fine-tuning Kafka for a scalable production environment requires careful thought: for example, how many partitions should one configure for a topic processed by multiple consumers? Best practices for tuning the number of partitions (and other parameters) are fortunately now available online, so costly mistakes or downtime can be avoided.
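The partition count matters because, within a consumer group, each partition is read by at most one consumer, and the count is fixed at topic creation (it can later be increased but not decreased). The sketch below creates a topic with an explicit partition count via the Java AdminClient; the topic name, the 12 partitions and the replication factor of 3 are illustrative assumptions, not recommendations.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions and replication factor 3 are illustrative numbers only;
            // the right partition count depends on target throughput and on how many
            // consumers in a group need to read the topic in parallel.
            NewTopic topic = new NewTopic("sense-events", 12, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```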
Understanding security requirements for Kafka has led me to find answers for preventing man-in-the-middle access to your information, as well as for controlling ACLs that determine who gets to do what. And if you do this with ZooKeeper, you will want to secure ZooKeeper access too, with publicly available tools such as Kafka Security Manager.
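To make the ACL side concrete, here is a rough sketch of granting a principal read access to a single topic over a TLS-protected AdminClient connection. The broker address, keystore paths, topic name and the principal "User:sense" are made-up values, and keystore passwords and other SSL settings are omitted for brevity.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

import java.util.Collections;
import java.util.Properties;

public class GrantReadAcl {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9093");                  // placeholder broker
        props.put("security.protocol", "SSL");                             // TLS against eavesdropping
        props.put("ssl.truststore.location", "/etc/kafka/truststore.jks"); // placeholder paths;
        props.put("ssl.keystore.location", "/etc/kafka/keystore.jks");     // passwords omitted here

        try (AdminClient admin = AdminClient.create(props)) {
            // Allow the principal "User:sense" to read the topic "sense-events" from any host
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "sense-events", PatternType.LITERAL),
                    new AccessControlEntry("User:sense", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}
```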
However, studying Kafka KIPs indicates to me that the Kafka world is not perfect just yet. My personal wishlist includes the ability to hot-deploy User Defined Functions (UDFs) in the KSQL server without the need to recompile and restart the servers whenever functions are updated.
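For context, a KSQL UDF today is a compiled Java class whose JAR is dropped into the server’s extension directory, after which the server has to be restarted to pick it up; that restart is exactly the friction behind the wishlist item. A minimal example of such a function might look like the following, where the function name and its logic are invented purely for illustration.

```java
import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;
import io.confluent.ksql.function.udf.UdfParameter;

// A trivial custom scalar function; "normalize_title" is a made-up name for illustration.
// The compiled JAR goes into the KSQL server's extension directory, and the server
// is restarted before the function becomes callable from KSQL queries.
@UdfDescription(name = "normalize_title", description = "Trims and lower-cases a title string")
public class NormalizeTitleUdf {

    @Udf(description = "Return the title trimmed and in lower case")
    public String normalize(@UdfParameter(value = "title") final String title) {
        return title == null ? null : title.trim().toLowerCase();
    }
}
```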