Deploy and scale agents on LiveKit Cloud
Over the past two years, as more teams pushed LiveKit voice agents to production, the same questions kept coming up:
* How much CPU and memory do I allocate to my agent pools?
* How do I handle sudden traffic spikes?
* How can I instrument and optimize performance across sessions?
Our Agents…
Introducing integration with Tavus: video avatars aren't just gimmicks; they've become genuinely useful tools that developers and businesses actually want. We've been hearing this a lot from customers, especially in education, healthcare, mental wellness, and marketing. Everyone seems keen on turning their voice interactions into something more visual and engaging.
LiveKit announces $45m Series B financing and the 1.0 release of its Agents framework, designed for building voice and video AI agents.
When we originally introduced LiveKit Cloud, we also launched a realtime analytics and telemetry product to give you insights into how your users interacted with your LiveKit applications. Over the past two years, LiveKit Cloud has grown significantly, powering realtime applications ranging from AI assistants to robotic pile drivers to…
One of the hardest problems for voice AI applications is knowing when a user has finished speaking.
This ubiquitous question is what I was asking myself a couple years ago at LiveKit's first hackathon. Could I play DOOM over LiveKit? Potentially ambitious for a 24-hour competition, but I had to solve this rite of passage. So, I partnered with our resident WebRTC expert, Raja, and…
LiveKit and OpenAI are partnering to help you build your own apps using the same technology powering ChatGPT’s new Advanced Voice feature.
LiveKit Cloud’s pricing model is different from the industry norm. You only pay for the resources you use:
1. Compute: the time your users spend connected to our servers
2. Bandwidth: the data your application transfers over LiveKit’s network
For simplicity we chose to embed the cost of…
LiveKit has raised $22.5M in additional funding to build infrastructure for realtime voice and video-driven AI applications.
Agents is an open source stack for building real-time multimodal AI applications.
We explore how HLS works, why it has variable latency, and how it compares to a newer protocol: WebRTC.
With WebRTC you can live stream video from a canvas. This post is a step-by-step guide that shows you how. We use LiveKit’s WebRTC stack to build a real-time application for sending canvas video. Check out the full code. A lot of people know WebRTC as the technology that…
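As a rough sketch of the technique that post walks through (not its actual code): the browser's standard captureStream() API turns a canvas into a MediaStream, and the resulting video track can be published with the livekit-client SDK. The function name, frame rate, URL, and token handling below are illustrative assumptions.

```typescript
import { Room } from 'livekit-client';

// Minimal sketch: capture a canvas as a MediaStream and publish its video
// track to a LiveKit room. The frame rate and track name are placeholders.
async function publishCanvas(canvas: HTMLCanvasElement, url: string, token: string) {
  const stream = canvas.captureStream(30);        // standard HTMLCanvasElement API, 30 fps
  const videoTrack = stream.getVideoTracks()[0];  // whatever you draw on the canvas ends up here

  const room = new Room();
  await room.connect(url, token);                 // token comes from your own token server
  await room.localParticipant.publishTrack(videoTrack, { name: 'canvas-video' });
  return room;
}
```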
With technology built into every modern web browser, learn how you can live stream audio to other people using just a URL.
A tutorial post detailing how to use WebRTC and the Web Audio PannerNode to implement spatial audio.
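The core Web Audio wiring is small. A minimal sketch, assuming you already have a remote participant's audio MediaStreamTrack and a position for them; the function name and coordinates are illustrative, not the tutorial's code.

```typescript
// Route an audio track through a PannerNode so its perceived position
// matches where the speaker sits relative to the listener.
function spatialize(track: MediaStreamTrack, x: number, z: number): AudioContext {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(new MediaStream([track]));

  const panner = new PannerNode(ctx, {
    panningModel: 'HRTF',         // head-related transfer function for convincing placement
    distanceModel: 'exponential', // volume falls off with distance
    positionX: x,                 // left/right offset from the listener
    positionY: 0,
    positionZ: z,                 // depth: farther sources sound quieter
  });

  source.connect(panner).connect(ctx.destination);
  return ctx;
}
```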
Simple steps to set up a LiveKit Cloud project.
Which components should be decentralized, and to what extent?
A technical deep-dive into the architecture of LiveKit Cloud and how we built a system for real-time, 100K-user events.
With LiveKit Cloud, we're rethinking how a cloud-based, real-time media service should charge for its infrastructure.
LiveKit Cloud is a WebRTC platform allowing you to build 100K-person shared experiences with only 100ms of latency.
From the very first lines of code, LiveKit's progress has been accelerated by the WebRTC developer ecosystem and the open source community.
How we built a real-time interface on top of ChatGPT where you can see and speak with it over a video call.