Yohei Nakajima | yoheinakajima.com
I was running an experiment on summarizing X posts around a certain topic and found this one on Agent-to-Agent (A2A) pretty solid, so I am publishing it as a blog post, since people ask me about the topic often. This is based on 100 recent tweets about A2A (run on April 15th). A2A is […]
I was running an experiment on summarizing X posts around a certain topic and found this one on MCP pretty solid, so I am publishing it as a blog post, since people ask me about the topic often. This is based on 100 recent tweets about MCP clients (run on April 15th).
This blog post is based on a talk I recently gave at the DDVC Conference. It was originally generated by AI from the talk transcript, which I then edited.
Pippin is a whimsical AI-driven unicorn designed to interact with the digital world through a continuous cycle of dynamic activities, memory updates, and state changes. Operating 24/7, Pippin embodies a playful experiment in AI influencer development, inspired by community engagement and open-source collaboration.
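To make the "continuous cycle" concrete, here is a minimal sketch of an activity/memory/state loop of the kind described above. The activity names, data shapes, and pacing are hypothetical placeholders for illustration, not Pippin's actual code.

```python
import random
import time

# Hypothetical activity list; a real agent would pull these from configuration or an LLM.
ACTIVITIES = ["post_thought", "draw_image", "read_mentions", "nap"]

state = {"mood": "curious", "energy": 1.0}
memory = []  # running log of past activities and outcomes


def run_activity(name, state):
    # Placeholder: a real implementation would call an LLM or external APIs here.
    return f"completed {name} while feeling {state['mood']}"


while True:
    activity = random.choice(ACTIVITIES)                         # pick the next activity
    outcome = run_activity(activity, state)                      # perform it
    memory.append({"activity": activity, "outcome": outcome})    # update memory
    state["energy"] = max(0.0, state["energy"] - 0.1)            # update state
    if state["energy"] == 0.0:
        state.update({"mood": "sleepy", "energy": 1.0})          # resting resets energy
    time.sleep(60)                                               # pace the 24/7 loop
```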
For the last couple of months, we have kept coming across interesting ideas and startups at the intersection of AI and web3, and some of these ideas make a lot of sense to us.
I recently gave a talk in SF on the future of autonomous agents and then published the deck on X/Twitter here. We'll be doing an hour-long livestream with Q&A on May 16th at 9am on X/Twitter, so mark your calendars (or add your email here for reminders and a recording link)!
At Untapped, we pride ourselves on being early to identify upcoming technology trends, and we thought we'd share what we've learned recently about the intersection of knowledge graphs and LLMs. For those not familiar, knowledge graphs (Wikipedia) are a type of data representation in the form of nodes (objects) and edges (relationships). This data structure allows for […]
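As a quick illustration of that node-and-edge structure, here is a minimal sketch of a knowledge graph using networkx; the entities and relation names are made up for the example and are not from the post.

```python
import networkx as nx

# Nodes are objects, edges are relationships between them.
g = nx.DiGraph()
g.add_node("Yohei Nakajima", type="person")
g.add_node("BabyAGI", type="project")
g.add_edge("Yohei Nakajima", "BabyAGI", relation="created")

# Traverse the relationships leaving a node.
for source, target, data in g.out_edges("Yohei Nakajima", data=True):
    print(f"{source} --{data['relation']}--> {target}")
```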
First up is AI chips, since Groq made waves this past month with their LLM-optimized chips, which offer much faster inference than anything we've seen before. Etched is another player in this domain, but we haven't seen a public demo yet. AI chips aren't necessarily new (e.g., Graphcore, Cerebras), and an open question is whether specialized chips will maintain their value as model architectures evolve. […]
I recently asked the following question on Twitter:
I recently threw out a random thought on Twitter, wondering if there might be room for something I called semi-local inference. This wouldn't be on-device processing, but something like using a WiFi router to run powerful language models (LLMs). I was curious about the potential benefits in speed, cost, and privacy over using APIs to power and control smart home or office devices. Here's the tweet that started it all:
BabyAGI has been cited in 42 arXiv papers (full list). The following article summarizes those 42 papers. Yohei Nakajima's project, BabyAGI, appears to have catalyzed a wide range of innovations and research advances across several domains of artificial intelligence, particularly in the development and application of large language models (LLMs) and agent systems. The impact […]