Throughout the week, I read a lot of blog posts, articles, and so forth that have to do with things that interest me:
- AI/data science
- data in general
- data architecture
- distributed computing
- SQL Server
- transactions (both db and non-db)
- and other “stuff”
This blog post is the “roundup” of the things that have been most interesting to me during the week just ending.
- Understanding Materialized Views — 3 : Stream-Table Joins with CDC. In a roundup a couple of weeks ago, I linked to a post about materialized views, and I wrote how I couldn’t wait for a follow-up post. Well, here it is. In this post, the author looks at joining streams with lookup tables to create materialized views. Very cool!
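The core idea in that post — keeping a lookup table fresh from a CDC feed and joining a stream against it to maintain a materialized view incrementally — can be sketched in a few lines of plain Python. This is only a conceptual stand-in (the field names and events are made up, and real systems like ksqlDB do this over Kafka topics with fault tolerance), but it shows the mechanics:

```python
from collections import defaultdict

# Lookup table, kept up to date from a CDC (change-data-capture) feed.
customers = {}

def apply_cdc(change):
    """Apply one CDC change event to the in-memory lookup table."""
    if change["op"] == "delete":
        customers.pop(change["id"], None)
    else:  # insert or update
        customers[change["id"]] = change["row"]

# The materialized view: revenue per customer name, maintained incrementally
# as order events arrive, instead of being recomputed from scratch.
view = defaultdict(float)

def on_order(order):
    """Join one order event against the customer table and update the view."""
    customer = customers.get(order["customer_id"])
    if customer is not None:
        view[customer["name"]] += order["amount"]

# CDC events populate the table, then order events stream in.
apply_cdc({"op": "insert", "id": 1, "row": {"name": "Alice"}})
apply_cdc({"op": "insert", "id": 2, "row": {"name": "Bob"}})
on_order({"customer_id": 1, "amount": 30.0})
on_order({"customer_id": 1, "amount": 20.0})
on_order({"customer_id": 2, "amount": 10.0})
print(dict(view))  # {'Alice': 50.0, 'Bob': 10.0}
```

The key property is that each event touches only the affected row of the view — the same incremental behavior the post demonstrates with stream-table joins.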
- What’s New in Apache Kafka 3.0.0. I guess the title says it all. Apache Kafka version 3.0 has just been released, and this blog post looks at some of the new features, fixes, and improvements.
- How to Load Test Your Kafka Producers and Consumers using k6. A couple of weeks ago, I came across k6, a modern load testing framework for both developers and testers. I thought it would be cool if I could somehow load-test Kafka producers and consumers in the framework. Well, now I can, and the post I have linked to discusses the newly developed Kafka k6 extension: xk6-kafka. I cannot wait to put it through its paces.
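At its core, what a load test like that measures is throughput through a producer/consumer pair. As a rough conceptual sketch (not the xk6-kafka API — just an in-memory queue standing in for a Kafka topic), the measurement loop looks something like this:

```python
import time
from queue import Queue

def load_test(produce, consume, n_messages):
    """Push n_messages through a producer/consumer pair and report throughput."""
    start = time.perf_counter()
    for i in range(n_messages):
        produce(f"msg-{i}")
    consumed = [consume() for _ in range(n_messages)]
    elapsed = time.perf_counter() - start
    return {"messages": len(consumed), "msgs_per_sec": len(consumed) / elapsed}

# In-memory stand-in for a Kafka topic; xk6-kafka does the real thing,
# with many virtual users and against an actual broker.
topic = Queue()
result = load_test(topic.put, topic.get, 10_000)
print(result["messages"])
```

The value of k6 is that it runs loops like this concurrently across many virtual users and aggregates latency/throughput metrics for you.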
- Announcing ksqlDB 0.21.0. Above I linked to the announcement of Kafka 3.0. This post discusses the new ksqlDB 0.21.0 release and looks at some of the new features.
- Kappa Architecture is Mainstream Replacing Lambda. In this post, the author looks at the benefits the Kappa architecture provides over the Lambda architecture. One of the major, major benefits is a much simpler infrastructure.
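Why the infrastructure gets simpler: in Lambda you maintain two code paths (a batch layer and a speed layer), while in Kappa a single stream-processing path handles live events, and reprocessing is just replaying the log through that same code. A minimal sketch of that idea (the event shapes here are invented for illustration):

```python
# Append-only event log: in Kappa, this is the role the Kafka topic plays.
log = []

def process(event, state):
    """The single stream-processing path, used for live AND replayed events."""
    state[event["key"]] = state.get(event["key"], 0) + event["value"]
    return state

# Live processing: events are appended to the log and processed as they arrive.
state = {}
for event in [{"key": "a", "value": 1}, {"key": "b", "value": 2}, {"key": "a", "value": 3}]:
    log.append(event)
    state = process(event, state)

# Reprocessing (e.g. after a logic change): replay the log through the SAME
# code path, instead of maintaining a separate batch layer as Lambda does.
rebuilt = {}
for event in log:
    rebuilt = process(event, rebuilt)

print(rebuilt == state)  # True: replay reproduces the live result
```

One pipeline to write, test, and operate — that is the simplification the post is arguing for.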
WIND (What Is Niels Doing)
Right now, I am “prepping” for two conference talks this coming week:
- Analyze Billions of Rows of Data in Real-Time Using Azure Data Explorer: On Wednesday (Sept 29), I deliver this presentation, an overview of Azure Data Explorer and how it is ideal for near-real-time analytics of huge volumes of data.
- Improve Customer Lifetime Value using Azure Databricks & Delta Lake. Then on Thursday (Sept 30), I present how you can calculate and improve Customer Lifetime Value (CLV) using Azure Databricks.
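For readers unfamiliar with CLV: one common textbook formulation (not necessarily the exact model used in the talk) multiplies average order value by purchase frequency, expected customer lifespan, and margin:

```python
def customer_lifetime_value(avg_order_value, orders_per_year, lifespan_years, margin=1.0):
    """Simple CLV: value per order x orders/year x years x profit margin."""
    return avg_order_value * orders_per_year * lifespan_years * margin

# A customer averaging $50 per order, 4 orders/year, over 3 years, at 25% margin:
clv = customer_lifetime_value(50.0, 4, 3, margin=0.25)
print(clv)  # 150.0
```

At scale, the interesting part — and where Databricks and Delta Lake come in — is computing those inputs per customer from raw transaction history and keeping them fresh.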
That’s all for this week. I hope you enjoyed what I put together. Please comment on this post or ping me if you have ideas for what to cover.