Hey hustlers,

Three things dropped in the last 48 hours that tell you exactly where AI is going.

Amazon decided your desktop is the next battlefield. Nvidia gave developers a free model that can see, hear, and think at the same time. And India just became the world's fastest-growing AI talent market, ahead of the US, UK, France, and Germany.

Let's take a look.

👾 WHAT'S NEW IN AI

1. Amazon Launched "Quick" and It Wants to Live on Your Desktop

AWS launched Amazon Quick, a desktop AI assistant that runs directly on your laptop and connects to your local files, calendar, email, Slack, Google Workspace, Microsoft Teams, Salesforce, Zoom, and more.

It learns from every session, surfaces meeting-relevant notes before you ask, drafts email replies based on your past conversations, and builds presentations and dashboards from a simple text prompt.
No AWS account needed. Free tier available. It also lets you build custom apps and dashboards using plain language.

Why you should care: This is Amazon's direct answer to Perplexity Personal Computer and Microsoft Copilot. Three of the biggest companies in the world are now racing to put an AI assistant on your actual desktop. Whichever one learns your work patterns best will be very hard to replace. The window to try these free before they get paywalled is now.

2. Nvidia Dropped a Free AI Model That Can See, Hear, and Think All at Once

Nvidia launched Nemotron 3 Nano Omni, a 30-billion-parameter open-source model that processes text, images, video, and audio inside a single unified system. No separate vision stack. No separate speech model. Just one model that does all four.

It runs on 25GB of RAM, meaning a decent consumer GPU can handle it. It delivers up to 9x higher throughput than other open multimodal models and tops six industry leaderboards. Available now on Hugging Face, OpenRouter, and build.nvidia.com. Free and fully open weights.

Why you should care: Most AI models are either text-only or bolt on vision as an afterthought. Nemotron 3 Nano Omni was built from the ground up to handle everything at once. If you're building anything with AI agents, this becomes the "eyes and ears" of your system. Foxconn, Palantir, and H Company are already using it in production. You can start today for free.
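If you want a feel for what "one model, four modalities" means in practice, here is a minimal sketch of a single chat request that mixes text, an image, and audio, in the OpenAI-compatible format that OpenRouter serves. The model ID and the audio content-part shape are assumptions for illustration, not confirmed API details; check the model card before relying on them.

```python
# Sketch: building one OpenAI-style chat request that carries text, an
# image, and audio together, for a unified multimodal model behind an
# OpenAI-compatible endpoint (e.g. OpenRouter). The model ID and the
# "input_audio" part shape are assumptions -- verify against the docs.
import json


def build_multimodal_request(prompt: str, image_url: str, audio_b64: str) -> dict:
    """Combine text, an image URL, and base64 audio into one payload."""
    return {
        "model": "nvidia/nemotron-3-nano-omni",  # hypothetical model ID
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {
                        "type": "input_audio",
                        "input_audio": {"data": audio_b64, "format": "wav"},
                    },
                ],
            }
        ],
    }


payload = build_multimodal_request(
    "Describe what you see and hear.",
    "https://example.com/frame.jpg",
    "UklGRg==",  # placeholder base64 audio, not a real clip
)
print(json.dumps(payload, indent=2))
```

The point of the sketch: no routing between a vision model and a speech model. Everything lands in one request to one model.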

3. India Is Now the World's Fastest-Growing AI Talent Market

According to LinkedIn's April 2026 AI Labour Market Report, India's AI engineering hiring grew 59.5% year-on-year, the highest of any country on the planet, ahead of the US, UK, France, and Germany. Bengaluru now matches San Francisco with 3% of its LinkedIn members tagged as AI engineers.
But the bigger story is what's happening outside the metros. Hyderabad is up 51%. Vijayawada, a tier-2 city, is up 45.5%. AI talent supply in manufacturing quadrupled to 2% of the workforce. The most in-demand skills are AI Agents, AI productivity tools, and Azure AI Studio.

Why you should care: If you're in India and reading this, the opportunity is real and it is here. Companies are not just hiring in Bengaluru anymore. They are actively looking for deployment-ready AI skills across every city and every industry. The report is clear: those who can show practical AI application, not just theoretical knowledge, will be the ones capturing this wave.

👾 THE GOOD STUFF

🔧 AI Tool: Amazon Quick

Free to start, no AWS account needed, works with your existing apps.
If you want to try a desktop AI assistant that actually learns your work context instead of just answering questions, this is worth downloading this week.

A curated hub of state-of-the-art multimodal models spanning text, image, audio, and video.

It organizes datasets, benchmarks, and any-to-any systems for easy exploration and comparison. Perfect starting point for building or researching multimodal agents and foundation models.

🎬 YouTube: Web Design Using the Kimi K2.5 Open AI Model

While everyone's focused on Nvidia and Amazon, this video quietly shows how Kimi K2.5 is changing what's possible for web design.
Worth watching before you sleep on it.

👾 TO READ

ADEMA: How AI Agents Can Actually Think Long-Term Without Losing Track

📎 https://arxiv.org/abs/2604.25849

One of the biggest problems with AI agents right now is memory.
They're great at short tasks but fall apart when a project spans hours, days, or dozens of steps. ADEMA is a new architecture designed to fix that.

What they built: A system called ADEMA that gives AI agents something closer to working memory across long, complex tasks. Instead of forgetting what happened three steps ago, the agent maintains a structured knowledge state throughout the entire task and updates it continuously as new information comes in.
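To make the idea concrete, here is a toy sketch of that pattern: a small structured state the agent folds each step's findings into, then summarizes for the next model call instead of replaying the whole chat history. This is loosely inspired by the description above, not ADEMA's actual algorithm; the class and method names are mine.

```python
# Illustrative sketch only: a toy "structured knowledge state" an agent
# updates every step, in the spirit of the architecture described above.
# Not ADEMA's real mechanism -- see the paper for the actual design.
from dataclasses import dataclass, field


@dataclass
class KnowledgeState:
    goal: str
    facts: dict = field(default_factory=dict)           # distilled findings
    open_questions: list = field(default_factory=list)  # what's unresolved
    step: int = 0

    def update(self, new_facts: dict, resolved=None, raised=None) -> None:
        """Fold one step's observations into the persistent state."""
        self.step += 1
        self.facts.update(new_facts)
        for q in (resolved or []):
            if q in self.open_questions:
                self.open_questions.remove(q)
        self.open_questions.extend(raised or [])

    def as_prompt_context(self) -> str:
        """Compact summary to prepend to the next LLM call,
        instead of the full raw history."""
        known = "; ".join(f"{k}={v}" for k, v in self.facts.items())
        pending = ", ".join(self.open_questions) or "none"
        return (f"Goal: {self.goal}\nStep: {self.step}\n"
                f"Known: {known}\nOpen: {pending}")


state = KnowledgeState(goal="compile a market report")
state.update({"top_growth_market": "India"}, raised=["which cities?"])
state.update({"cities": "Bengaluru, Hyderabad"}, resolved=["which cities?"])
print(state.as_prompt_context())
```

The payoff is that step 40 sees the same compact, up-to-date state as step 4, so the agent stays oriented as the task grows instead of drowning in stale transcript.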

What they found: Agents using ADEMA outperformed standard LLM agents on long-horizon tasks requiring synthesis across many knowledge sources. The architecture keeps the agent oriented even as complexity grows.

Why it's interesting: This week Amazon launched a desktop AI that learns your work patterns. Nvidia launched a model that can watch, listen, and read simultaneously. But neither is useful if the agent forgets what it was doing 20 minutes ago. ADEMA is the missing piece that makes long-running AI agents actually reliable. If you're building with agents or just curious where the research is going, this is worth 10 minutes of your time.

🧵 Thread Drop

Amazon wants to be on your desktop. Nvidia just gave developers a free model with eyes and ears.
And India became the world's fastest-growing AI talent market without anyone really noticing.

The week wasn't flashy. No billion-dollar funding rounds. No dramatic CEO feuds. Just three quiet shifts that will look very obvious in hindsight.

👾 See you soon 👾


Want to work together?
If you're building something in AI and think our audiences would vibe with it, let's talk.
