Throughout the week, I read a lot of blog posts, articles, and so forth that have to do with things that interest me:
- AI/data science
- data in general
- data architecture
- streaming
- distributed computing
- SQL Server
- transactions (both db as well as non db)
- and other “stuff”
This blog post is the “roundup” of the things that have been most interesting to me for the week just ending.
Azure Data Explorer
- Continuous export for Azure Application Insights using Azure Data Explorer (Kusto). If you “live in Azure-land”, you probably use Application Insights to collect data for analysis. The analysis in App Insights is based on the Kusto engine but does not have all the bells and whistles. To do more advanced analytics of your App Insights data, you should export it to Azure Data Explorer (ADX). This post discusses a pattern for continuously streaming data from Application Insights to ADX (a small sketch of what querying that data in ADX can look like follows this list).
- Usage examples for Azure Data Explorer connector to Power Automate. I love Azure Data Explorer (heh - who would have thunk), but there is some “stuff” that frustrates me. One frustration is that it is hard (nigh impossible) to do things based on scheduled tasks, triggers, etc. Well, nigh impossible until you read this Microsoft doc. The document looks at how to “hook up” Microsoft Power Automate to ADX, and it includes several typical Power Automate connector usage examples. Very cool!
- Azure Data Explorer L300 Workshop. A couple of weeks ago, the Azure Data Explorer PM (Program Manager) team held a workshop spread over three days (around 2 - 3 hours per day). The workshop covered a lot, and the good thing was that it was recorded. The link here is to the YouTube playlist of the workshop. If you are interested in ADX, you must have a look!
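To make “more advanced analytics” on exported App Insights data a little more concrete, here is a minimal Python sketch using the azure-kusto-data client. The cluster URL, database name, and the requests table/columns are all assumptions about how your export is set up; treat it as a starting point, not the pattern from the post itself.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Assumed cluster and database - replace with your own ADX cluster and the
# database your App Insights export lands in.
cluster = "https://<your-cluster>.<region>.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Assumed table/column names (a 'requests' table exported from App Insights):
# request counts and average duration per hour for the last day.
query = """
requests
| where timestamp > ago(1d)
| summarize request_count = count(), avg_duration = avg(duration)
          by bin(timestamp, 1h)
| order by timestamp asc
"""

response = client.execute("AppInsightsExport", query)
for row in response.primary_results[0]:
    print(row["timestamp"], row["request_count"], row["avg_duration"])
```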
Streaming
- Using Spark Structured Streaming to Scale Your Analytics. This post looks at how a Databricks customer leverages Structured Streaming and the Databricks Lakehouse Platform to scale its analytics and keep its data products up to date (a minimal Structured Streaming sketch follows this list).
- How to Find, Share, and Understand Your Data Streams with Stream Catalog. If you want to get the most out of your data, it is vital to be able to discover, understand, organize and reuse your data. These requirements have led to the notion of a data catalogue. The post linked to looks at the Confluent Stream Catalog, what it is and what you can do with it.
- ML Prediction on Streaming Data Using Kafka Streams. This post looks at how you can boost the performance of your Python-trained ML models by serving them over a Kafka streaming platform in a Scala application (see the second sketch after this list).
- How streaming data and a lakehouse paradigm can help manage risk in volatile trading markets. In financial markets, managing risk is a must, and doing it well is key to success. The post linked to looks at how you can ingest and process large amounts of raw data to compute real-time portfolio valuations, risk metrics, and more using integrated technology from Confluent and Databricks. Very interesting!
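If Structured Streaming is new to you, here is a minimal PySpark sketch of the read-transform-write loop it revolves around. The Kafka broker, topic, event schema, and Delta paths are placeholders of my own, not anything taken from the post above.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import DoubleType, StringType, StructType, TimestampType

spark = SparkSession.builder.appName("streaming-analytics").getOrCreate()

# Assumed event schema - replace with whatever your producers actually send.
schema = (StructType()
          .add("device", StringType())
          .add("reading", DoubleType())
          .add("event_time", TimestampType()))

# Read a Kafka topic as an unbounded DataFrame and parse the JSON payload.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
          .option("subscribe", "device-events")               # placeholder topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Aggregate per device over 5-minute windows; the watermark lets Spark close
# old windows so append mode can emit them to the Delta table.
aggregated = (events
              .withWatermark("event_time", "10 minutes")
              .groupBy(window("event_time", "5 minutes"), "device")
              .agg(avg("reading").alias("avg_reading")))

(aggregated.writeStream
 .format("delta")
 .outputMode("append")
 .option("checkpointLocation", "/tmp/checkpoints/device-agg")  # placeholder
 .start("/tmp/delta/device-agg"))                              # placeholder path
```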
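The ML-prediction post serves the model from a Scala Kafka Streams application; since that API is Java/Scala only, here is a much simpler stand-in in Python: a plain consume-score-produce loop with confluent-kafka and a joblib-saved scikit-learn model. It shows the shape of the idea, not the Kafka Streams API, and the broker, topics, model file, and message layout are all assumptions.

```python
import json

import joblib
from confluent_kafka import Consumer, Producer

# Hypothetical setup: a scikit-learn model trained offline and saved with joblib,
# plus "features" (input) and "predictions" (output) topics.
model = joblib.load("churn_model.joblib")

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # placeholder
    "group.id": "scoring-service",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "broker:9092"})
consumer.subscribe(["features"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Score the incoming feature vector and publish the prediction downstream.
        score = float(model.predict([event["features"]])[0])
        producer.produce(
            "predictions",
            key=msg.key(),
            value=json.dumps({"id": event.get("id"), "score": score}),
        )
        producer.poll(0)  # serve delivery callbacks
finally:
    consumer.close()
    producer.flush()
```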
Finally
That’s all for this week. I hope you enjoyed what I put together. Please comment on this post or ping me if you have ideas for what to cover.