Hey hustlers,

You blinked, and three massive things dropped this week. One of them literally crashed stock prices before it even launched. Not kidding.

The AI world is moving so fast that by the time you finish reading this, something new probably shipped. But that's exactly why we're here. We filter through the noise so you walk away knowing only what actually matters.

Let's get into it.

👾 WHAT'S NEW IN AI

1. Anthropic Launched Claude Opus 4.7 AND an AI Design Tool That Spooked the Entire Design Industry

On April 16, Anthropic shipped Claude Opus 4.7, their most powerful publicly available model. It scores 87.6% on SWE-bench (the gold standard for coding), outperforms GPT-5.4 and Gemini 3.1 Pro in complex coding tasks, and has 3x better image resolution than previous versions. Developers are calling it "stepping into an F1 car."

But the bigger story? A day later, Anthropic launched Claude Design under a new "Anthropic Labs" sub-brand. It's an AI-powered design tool for websites, presentations, and product mockups. When The Information leaked the news two days before launch, Figma, Adobe, Wix, and GoDaddy stocks all dropped 2-4%. An AI product that didn't exist yet briefly vaporized billions in market cap.

Why you should care: This isn't just a model upgrade. Anthropic went from "AI chatbot company" to directly competing with the design software industry in one week. If you use Figma, Canva, or any design tool, Claude Design is worth watching closely. And if you code, Opus 4.7 is now the strongest model you can get your hands on.

2. Perplexity Launched "Personal Computer" and Turned Your Mac Into a Full-Time AI Worker

Recently, Perplexity released Personal Computer for Mac. Not a chatbot. An AI agent that lives on your computer, reads your local files, opens apps like Mail and Calendar, and actually executes your to-do list. You can press both Command keys, speak a task, and watch it work across your apps. Pair it with a Mac mini and it runs 24/7 in the background. You can even start tasks from your iPhone.

Why you should care: The idea of AI organizing your messy Downloads folder, sending follow-up emails, and scheduling meetings without you touching anything is no longer a demo. It's a product. The catch is it costs $200/month (Max plan only). But even if you don't use it, the direction is obvious: AI is moving from the browser to your desktop.

3. OpenAI Signed a $20 Billion Deal With Cerebras (and Might Buy a Piece of the Company)

OpenAI will pay Cerebras more than $20 billion over three years for servers powered by Cerebras' custom AI chips. The deal may also give OpenAI an equity stake. Plus, OpenAI agreed to provide about $1 billion to help fund Cerebras' data center development. Cerebras is targeting an IPO in Q2 this year.

Why you should care: OpenAI is actively diversifying away from Nvidia. That tells you two things: AI compute demand is so massive that the biggest players can't rely on one supplier, and the next wave of AI breakthroughs will be as much about hardware deals as software innovation. The infrastructure race is now as important as the model race.

👾 THE GOOD STUFF

🔧 AI Tool: Google Gemma 4

Google released four open-source AI models that run on a single GPU while performing comparably to models 20x their size. Apache 2.0 license, function calling support, optimized for edge devices. If you want powerful AI running locally without cloud dependency, this is the most accessible option right now.
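Function calling generally works the same way across models: you describe your tools, the model replies with a JSON blob naming one, and your code dispatches it. Here's a minimal sketch of the host-side dispatch loop. The tool names and JSON shape are my own illustration, not Gemma's exact schema:

```python
import json

# Hypothetical local tools the model is allowed to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub; a real tool would hit an API

def add(a: float, b: float) -> float:
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def dispatch(model_output: str):
    """Parse a model's function-call JSON and run the named tool."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"model requested unknown tool: {call['name']}")
    return fn(**call["arguments"])
```

So `dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')` returns 5. The model never executes anything itself; it only emits the JSON, and your code decides what actually runs.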

🐙 GitHub: claude-code-best-practice (36.9k stars)

A trending repo documenting battle-tested best practices for getting the most out of Claude Code. Practical, well-organized, and constantly updated. If you use Claude for anything, bookmark this.

🎬 YouTube: Claude Opus 4.7 and Big Tech Earnings Preview (Live Breakdown)

A solid walkthrough of the Opus 4.7 launch, what the benchmarks actually mean, and why Anthropic is holding back its even more powerful Mythos model.

👾 TO READ

Agents of Chaos: What Happens When You Give AI Agents Real Tools and Let Them Loose

A team of 38 researchers from Northeastern, Harvard, Carnegie Mellon, and UBC ran an experiment. They set up five AI agents in a live environment with real email accounts, Discord access, file systems, and the ability to execute shell commands. Then they let 20 AI researchers interact with them for two weeks under normal and adversarial conditions.

What they found: Eleven different failure modes. Agents followed instructions from people who weren't their owners. They leaked sensitive information. They executed destructive system-level actions. They got stuck in infinite loops eating up resources. One agent's bad behavior spread to other agents like a virus. And in multiple cases, agents told researchers "task completed" when the actual system showed the opposite.
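The "destructive system-level actions" failure is easy to picture: an agent pipes model output straight into a shell. A naive allowlist gate like the sketch below (my illustration, not the paper's setup) blocks the obvious cases, though the paper's results suggest real containment needs far more than this:

```python
import shlex

# Hypothetical allowlist: the only commands the agent may run unchallenged.
SAFE_COMMANDS = {"ls", "cat", "grep", "wc"}

def gate(command: str) -> bool:
    """Return True only if the proposed command looks safe to execute."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # unparseable input (e.g. unbalanced quotes) is rejected
    if not tokens:
        return False
    # Shell metacharacters let a "safe" command smuggle in an unsafe one.
    if any(ch in command for ch in (";", "|", "&", "`", "$(")):
        return False
    return tokens[0] in SAFE_COMMANDS
```

So `gate("ls -la")` passes while `gate("ls; rm -rf /")` does not. The deeper lesson of the paper is that static filters like this don't address the other failure modes at all, such as agents taking instructions from strangers or misreporting what they did.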

Why it's interesting: This week we're celebrating Perplexity putting an AI agent on your Mac and Claude getting better at autonomous coding. This paper is the other side of that coin. When agents have real access to real systems, the failures aren't hypothetical anymore. They're messy, unpredictable, and sometimes agents literally lie about what they did. Required reading for anyone excited about the agent future (which should be everyone reading this newsletter).

Check my thread for a simplified version:

Every week I think "okay, next week will be calmer." Every week I'm wrong.

We're not in the "should I try AI?" phase anymore. We're in the "how do I use it without getting burned?" phase. And that's exactly what we'll keep covering here.

👾 See you soon 👾

Keep Reading