Get started integrating WarpStream with Tigris so you can have a durable multi-cloud message broker that's fast anywhere on the planet!| Tigris Object Storage Blog
Earlier this week, we launched PeerDB Streams, our latest product offering for real-time replication from Postgres to queues and message brokers such as Kafka, Redpanda, Google PubSub, Azure Event Hubs, and others. Today, we are announcing one of the...| PeerDB Blog
🚀 Today, we're excited to announce that PeerDB Cloud is officially entering public beta. If you're a data engineer or an organization looking for a fast, simple, and cost-effective way to replicate data from Postgres to data warehouses such as Snowf...| PeerDB Blog
We spent the past 7 months building a solid experience to replicate data from Postgres to Data Warehouses such as Snowflake, BigQuery, ClickHouse and Postgres. Now, we want to expand and bring a similar experience for Queues. With that spirit, we are...| PeerDB Blog
At PeerDB, we are building a fast and a cost-effective way to replicate data from Postgres to Data Warehouses and Queues. Today we are releasing our Azure Event Hubs connector. With this, you get a fast, simple, and reliable way to Change Data Captur...| PeerDB Blog
This article isn't about Franz Kafka or his great novella The Metamorphosis, where the main character one day realizes that he has transformed into a Human-size Bug.| Agile & Coding
As described in our previous articles, we are leveraging Kafka widely at Michelin. Each factory in the group hosts its own Kafka cluster, representing 60+ clusters, to be able to work in complete autonomy. Another cluster, a Confluent Cloud cluster, serves the central needs. When topics need to be replicated| Michelin IT Engineering Blog
Table of Contents: 1 Introduction · 2 Sponsorship and Community Support · 3 Notable Enhancements and New Features · 3.1 Live Consumer Management: A New Operational Model · 3.1.1 Why This Matters · 3.1.2 Partition-Level Control · 3.2 Complete Topic Lifecycle Management · 3.3 UI Customization and Branding · 3.4 Enhanced OSS Monitoring Capabilities · 3.5 Performance and Reliability Improvements · 3.5.1 Balanced Virtual Partitions Distribution · 3.5.2 Advanced Error Handling: Dynamic DLQ Strategies · 3.5.3 Enhanced Error Tracking · 3.5....| Closer to Code
Announcing Stonemq: A high-performance and efficient message queue developed in Rust| Rex Wang
This article analyzes and surveys the elastic scaling approaches of other message queues in the community, and offers recommendations for a full-link elasticity strategy for Mafka| Rex Wang
This article analyzes the scalability challenges Mafka faces in the cloud-native era, along with the challenges posed by business requirements, and lays out Mafka's long-term development plan| Rex Wang
This article introduces Mafka, Meituan's message queue product| Rex Wang
This article compares and analyzes three message queue products that are currently popular in the industry| Rex Wang
I WAS LIKE A GUILTY CHILD BEFORE YOU "After all, being brave has certain consequences. First, let me answer this: the philosopher and psychoanalyst Otto Gross is apparently not so wrong; my case, for instance, fits what he says: I spend my feelings and my strength like this, and still I do not die! And then let me answer this: I give no thought to the future, because I do not know it. What I do know is this: apart from you […] KAFKA: IF I COULD LAY MY HEAD ON YOUR KNEES, IF I COULD FEEL YOUR HAND...| Cafrande Kültür Sanat
Introduction A major concern when developing Kafka Streams applications is handling processing errors. Processing errors occur when the implemented logic in a Kafka Streams application fails to process a record correctly. Let’s illustrate what processing errors are with a concrete example from our Supply Chain domain: a DeliveryBooked event| Michelin IT Engineering Blog
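The pattern that post describes, catching per-record processing failures and routing them to a dead-letter queue instead of crashing the stream, can be sketched as a toy model. This is illustrative Python under assumed names (`process_stream`, a `DeliveryBooked`-style dict), not the actual Kafka Streams Java API:

```python
# Toy model of per-record error handling with a dead-letter queue (DLQ).
# Illustrative Python, not the Kafka Streams API.

def process_stream(records, handler):
    """Apply `handler` to each record; failures go to a DLQ list
    instead of aborting the whole stream."""
    output, dlq = [], []
    for record in records:
        try:
            output.append(handler(record))
        except Exception as exc:
            # Keep the failed record plus the error cause for later replay.
            dlq.append({"record": record, "error": str(exc)})
    return output, dlq

# Example: a DeliveryBooked-style event missing a field takes the DLQ path.
events = [{"delivery_id": 1, "qty": 5}, {"delivery_id": 2}]
ok, dead = process_stream(events, lambda e: e["delivery_id"] * e["qty"])
```

The healthy record is processed normally, while the malformed one lands in `dead` with its exception message attached, ready for inspection or replay.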
Learn the differences between Kafka and RabbitMQ. Compare which message broker is best for your project's needs in performance and scalability.| SeventhState.io
A personal blog about functional programming, category theory, chess, physics and linux topics| beuke.org
My attempt to decipher the Raft whitepaper and how KRaft implementation adheres to the raft philosophy and techniques.| Uddeshya’s Musings
Recently, my wife and I vacationed in the Czech Republic, also known as Czechia (CHEK-ee-uh). We stayed in Prague, the country’s capital, but we took two day trips, one to Terezin and the oth…| Gershon Ben-Avraham
How the components of the chat system communicate, and what are the specifications of the chat API.| iO tech_hub
In this blog, I share my experience of learning Kotlin, Kafka, and Docker while building a Spring Boot application. Join me on this journey as I explore these technologies and provide insights into my project approach, technology integration, and what I like to call the minimum business logic approach.| iO tech_hub
I had the pleasure of attending the Kafka Summit in London last week, and it was a very exciting event! I gave a presentation on streaming Kafka events into Apache Iceberg, followed by some insightful questions and follow-on discussions. There were more than 90 sessions, with topics ranging from stream processing with Kafka Streams and […]| Tabular
Short introduction into Debezium| ConSol Blog
A brief look at how Kafka Streams and TypeStream compare in the context of event-driven microservices.| Luca Pette
A brief history of TypeStream| Luca Pette
A primer to help you write your first Kafka Streams application.| Luca Pette
How to build HTTP endpoints with Kafka Streams Interactive Queries| Luca Pette
One of our large scale data infrastructure challenges here at Cloudflare is around providing HTTP traffic analytics to our customers. HTTP Analytics is available to all our customers via two options:| The Cloudflare Blog
Explore Maurice Merleau-Ponty's insightful reflections on human existence in relation to Kafka's writings in 'The Visible and the Invisible'.| The Miskatonian
Producing and processing real-time data are two sides of a coin. Imagine you run a company that continuously generates a steady stream of data that needs to be processed efficiently. Traditional solutions for handling this data using Kafka's producer and consumer APIs can be bulky because of the lines of| EverythingDevOps
Terra investigates an incident where some SQL updates caused Kafka Connect to send a larger amount of data than normal to our brokers.| Honeycomb
While applications are producing and consuming messages to and from Kafka, you'll notice that new consumers of existing topics start emerging. These new consumers (applications) might have been written by the same engineers who wrote the original producer of those messages or by people you don't know. The emergence of| EverythingDevOps
Apache Kafka offers you three key features. It's the ability to publish & subscribe to events, store them, and process them in real-time or at a later point. In this article, you'll better understand all the components that make these features possible. You'll go deeper into the internal components of the| EverythingDevOps
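One internal detail behind those features is how a producer routes each record: records with the same key always land on the same partition, which preserves per-key ordering. The sketch below uses a simple stand-in hash (the real producer uses murmur2), so treat it as a toy model:

```python
# Toy sketch of Kafka's key-based partitioning: the same key always maps
# to the same partition. (Real producers use murmur2; this is a stand-in.)

def partition_for(key: str, num_partitions: int) -> int:
    # Deterministic string hash so routing is stable across calls.
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0x7FFFFFFF
    return h % num_partitions

p1 = partition_for("order-42", 6)
p2 = partition_for("order-42", 6)
assert p1 == p2  # same key, same partition, every time
```

Because all events for `order-42` hit one partition, a consumer reading that partition sees them in the order they were produced.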
When I started to work in IT a couple decades ago, the center of the universe for a software engineer was databases, mostly relational ones. We were relying heavily on their transactional capabilities to ensure data consistency. Integrating an app with the rest of the world was quite a challenge.| Michelin IT Engineering Blog
Dive into the differences between Ruby's Oniguruma and C's POSIX regex engines, offering insights for developers in multi-language projects to ensure seamless compatibility.| Closer to Code | Blog about coding in various languages, security, and my oth...
Master Kafka consumer configuration: subscribe to multiple topics for scalable, real-time data processing.| Examples Java Code Geeks
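When group members subscribe to several topics, the group coordinator spreads every partition of every subscribed topic across the members. Kafka ships range, round-robin, and sticky assignors; the toy model below mimics only the round-robin one, with illustrative topic and consumer names:

```python
# Toy model of round-robin partition assignment in a consumer group
# subscribed to multiple topics. Not the actual Kafka assignor code.

def round_robin_assign(topics: dict, members: list):
    """topics: {topic_name: partition_count}; members: consumer ids.
    Returns {member: [(topic, partition), ...]} covering every partition."""
    partitions = sorted(
        (t, p) for t, count in topics.items() for p in range(count)
    )
    assignment = {m: [] for m in members}
    for i, tp in enumerate(partitions):
        assignment[members[i % len(members)]].append(tp)
    return assignment

a = round_robin_assign({"orders": 3, "payments": 2}, ["c1", "c2"])
# all 5 partitions are covered, spread across the two consumers
```

Each partition is owned by exactly one member, which is why adding consumers (up to the partition count) scales read throughput.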
For the past 4 years, our journey into the heart of Kafka's capabilities has been shaped by two pivotal concepts: Master Topologies and Micro Topologies. These conceptual frameworks have become the backbone of our Kafka Streams application design, offering a comprehensive and granular understanding of our end-to-end communication. We want| Michelin IT Engineering Blog
Apache Kafka provides a reliable, scalable, and fault-tolerant messaging system that enables the exchange of data streams between multiple applications and microservices. Let us delve into understanding Apache Kafka and its basics. 1. Introduction Apache Kafka is a distributed streaming platform. It is designed to handle real-time, high-throughput data feeds. Kafka provides a publish-subscribe model …| Examples Java Code Geeks
I’ve been searching for alternatives to Kafka for some time,| It’s me inside me.
I’ve been writing about Spring Batch lately and one of the questions I had in terms of fault tolerance is … Handling manager failures in Spring Batch| Arnold Galovics
I haven’t really covered the topic of batch jobs so far and it happened that I needed to work with them lately and design a quite complicated batch job setup based on Spring Batch with partitioning using Kafka.| Arnold Galovics
Here, we will be looking into how we can communicate with a Kafka Cluster using Spring Cloud Stream| RefactorFirst
In this post, we will be looking into how we can publish and subscribe to a Kafka topic using Spring Kafka| RefactorFirst
I'm very pleased to post a draft of my forthcoming essay with Professor Woodrow Hartzog (BU Law), Kafka in the Age of AI and the Futility of Privacy as| TeachPrivacy
So it’s been a while since I wrote a post about Kafka (and Azure too actually, at work we use AWS). But anyway, someone mentioned to me the other day that Azure Event Hubs come with the ability…| Sacha's Blog
So last time we looked at interactive queries. This time (the last one in this series) we will look at windowing operations. This is a fairly dry subject, and I don’t have too much to add to this one…| Sacha's Blog
So last time we looked at how to make use of the Processor API in the DSL. This time we are going to look at interactive queries. Where is the code? The code for this post is all contained h…| Sacha's Blog
Last time we looked at how we can supply our own Serdes (serializer / deserializer) to the DSL. This time we will look at how we can make use of the lower level “Processor API” in the DSL, which is w…| Sacha's Blog
Last time we looked at joining. This time we will continue to look at the streams DSL, and how we can supply our own Serdes (serializer / deserializer). Where is the code? The code for this po…| Sacha's Blog
Last time we looked at aggregating. This time we will continue to look at the streams DSL, and will look at joining. If you have ever done any standard SQL, this post will be very familiar. Whe…| Sacha's Blog
So last time we looked at a whole bunch of stateless operations. This time we will continue our journey to look at how we can start to do aggregations, and make use of state stores. Where is the co…| Sacha's Blog
Ok so our journey now continues with Kafka Streams. Last time we introduced a few key concepts, such as Props, Topologies, and how to test streams apps. This time we will continue to look at stateless ope…| Sacha's Blog
Introduction In this post we will look at some of the key objects we looked at last time, and we will also see what a typical Scala (though Kafka's libraries are mainly Java, I just prefer Scala) ap…| Sacha's Blog
So this post will be an introductory one on Kafka Streams. It is not intended to be one on Apache Kafka itself. For that there are many interesting books/posts/documents available which cover this …| Sacha's Blog
Confluent’s hosted Kafka service is a quick and cost effective way to trigger your functions by events.| OpenFaaS - Serverless Functions Made Simple
We brought a whole team to San Francisco to present and attend this year’s Data and AI Summit, and it was a blast! I would consider the event a success both in the attendance to the Scribd hosted talks and the number of talks which discussed patterns we have adopted in our own data and ML platform. The three talks I wrote about previously were well received and have since been posted to YouTube along with hundreds of other talks.| Scribd Technology
We are very excited to be presenting and attending this year’s Data and AI Summit which will be hosted virtually and physically in San Francisco from June 27th-30th. Throughout the course of 2021 we completed a number of really interesting projects built around delta-rs and the Databricks platform which we are thrilled to share with a broader audience. In addition to the presentations listed below, a number of Scribd engineers who are responsible for data and ML platform, machine learning s...| Scribd Technology
Streaming data from Apache Kafka into Delta Lake is an integral part of Scribd’s data platform, but has been challenging to manage and scale. We use Spark Structured Streaming jobs to read data from Kafka topics and write that data into Delta Lake tables. This approach gets the job done but in production our experience has convinced us that a different approach is necessary to efficiently bring data from Kafka to Delta Lake. To serve this need, we created kafka-delta-ingest.| Scribd Technology
Grab's data streaming infrastructure runs in the cloud across multiple Availability Zones for high availability and resilience, but this also incurs staggering network traffic cost. In this article, we describe how enabling our Kafka consumers to fetch from the closest replica helped significantly improve the cost efficiency of our design.| Grab Tech
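The mechanism Grab leans on is rack-aware replica selection (KIP-392): a consumer labeled with its availability zone fetches from an in-sync replica in the same zone when one exists, avoiding cross-AZ transfer charges. A minimal sketch, with illustrative broker ids and zone names:

```python
# Toy sketch of rack-aware replica selection (the idea behind KIP-392).
# Broker ids and zone names below are illustrative, not real config.

def select_replica(client_rack, leader, isr):
    """isr: list of (broker_id, rack) tuples for in-sync replicas,
    leader included. Prefer a same-rack replica; else use the leader."""
    for broker_id, rack in isr:
        if rack == client_rack:
            return broker_id  # same-zone fetch avoids cross-AZ traffic
    return leader

isr = [(1, "us-east-1a"), (2, "us-east-1b"), (3, "us-east-1c")]
assert select_replica("us-east-1b", leader=1, isr=isr) == 2
assert select_replica("eu-west-1a", leader=1, isr=isr) == 1  # fall back
```

The trade-off is freshness: a follower can lag the leader slightly, which is why this only makes sense for consumers that tolerate bounded replication lag.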
In application architecture discussions, "Kafka" comes up a lot, and for good reason: it plays an important role in many event-driven architectures. Here's how it works and why you might want to use it.| www.cockroachlabs.com
How to build an architecture that ensures your metadata is highly available, consistent, and disaster-proof.| www.cockroachlabs.com
Kafka is a messaging system. That’s it. So why all the hype? In reality messaging is a hugely important piece of infrastructure for moving data between systems. To see why, let’s look at a data pipeline without a messaging system. This system starts with Hadoop for storage and data processing. Hadoop isn’t very useful without data, so the first stage in using Hadoop is getting data in. So far, not a big deal. Unfortunately, in the real world data exists on many...| Kevin Sookocheff
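The decoupling that post argues for comes down to one structure: an append-only log where producers write and each consumer tracks its own read offset, so readers and writers never wait on each other. A minimal in-memory sketch of that idea (a toy, not the Kafka storage engine):

```python
# Minimal sketch of a Kafka-style log: producers append, and every
# consumer keeps its own offset, decoupling writers from readers.

class Log:
    def __init__(self):
        self.records = []   # the append-only log
        self.offsets = {}   # per-consumer read positions

    def produce(self, record):
        self.records.append(record)

    def consume(self, consumer_id):
        """Return everything this consumer hasn't seen, then advance
        its offset. Other consumers are unaffected."""
        pos = self.offsets.get(consumer_id, 0)
        batch = self.records[pos:]
        self.offsets[consumer_id] = len(self.records)
        return batch

log = Log()
log.produce("a"); log.produce("b")
assert log.consume("hadoop-loader") == ["a", "b"]
log.produce("c")
assert log.consume("hadoop-loader") == ["c"]         # resumes where it left off
assert log.consume("new-reader") == ["a", "b", "c"]  # independent offset
```

Because the log retains records rather than deleting them on delivery, a brand-new consumer can appear later and replay everything from the start, which is exactly what lets N producers feed M consumers without point-to-point integrations.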