OpenAI Never Sleeps, Grok Puts Pedal To Metal, and Google AI Bridges Land and Sea


AI Highlights
My top-3 picks of AI news this week.
OpenAI
1. OpenAI Never Sleeps
This week has been packed with back-to-back releases across OpenAI’s entire ecosystem:
New models: The GPT-4.1 family (handling up to 1M tokens) and o-series reasoning models (o3 and o4-mini) that excel at complex tasks while setting new benchmarks in coding, math, and multimodal understanding.
Integrated capabilities: Full tool access for reasoning models, visual thinking that manipulates images as part of the reasoning process, and agentic behaviour where the model decides which tools to use based on the desired outcome.
Developer ecosystem: Introduced Codex CLI (a lightweight terminal coding agent; see the quick-start sketch below), launched a $1M initiative supporting developers, and made the new models available through the API.
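For the curious, here’s roughly what getting started with Codex CLI looks like, based on OpenAI’s launch materials. Treat this as a sketch rather than gospel: it assumes you have Node.js installed and an OpenAI API key to hand.

```bash
# Install the Codex CLI globally (package name per OpenAI's announcement)
npm install -g @openai/codex

# The agent reads your OpenAI API key from the environment
export OPENAI_API_KEY="your-key-here"

# Run it in a repository with a natural-language task
codex "explain this codebase to me"
```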
Alex’s take: Aside from OpenAI’s convoluted naming system for their models, they’re really pushing the envelope when it comes to new releases. Only last week, we saw memory improvements roll out—this week, it’s new reasoning models. It feels like we’re witnessing the early stages of truly agentic AI, which can plan, reason, and execute complex tasks across different modalities.
xAI
2. Grok Puts Pedal To Metal
The AI race is red hot, with OpenAI setting a blistering pace. But Grok is out here swinging with weekly updates:
Memory integration: Grok 3 now remembers conversations, builds personalised knowledge about users, and analyses thinking patterns to deliver tailored recommendations (not yet available in EU/UK due to regulations).
Grok Studio: A new canvas-like tool allowing users to create and collaborate on documents, code, reports, and browser games, with Google Drive integration for file attachment.
Grok 3 Mini: A cost-efficient model that xAI says outperforms comparable reasoning models at a fifth of the cost, excelling in graduate-level STEM, math, and coding while exposing its complete reasoning trace.
Alex’s take: Memory transforms AI from a chatbot into a useful assistant. As you build conversation history with an AI, switching costs increase exponentially. ChatGPT shipped memory first, and that’s exactly why xAI is racing to get it embedded into Grok ASAP. Once you’ve invested months or years of conversations into one AI, would you really want to start from scratch with another?
Google
3. Google AI Bridges Land and Sea
Google's AI updates this week couldn't sit at more opposite ends of the tech spectrum. And that's precisely what makes them so fascinating:
Google Sheets AI function: A new “=AI()” function brings native Gemini capabilities directly into spreadsheets, letting users generate text, analyse sentiment, categorise data, and summarise information without leaving their workflow (see the sketch after this list).
DolphinGemma model: Google has developed a specialised 400M parameter AI model that analyses and generates dolphin vocalisations, helping researchers decode patterns in dolphin communication using the Wild Dolphin Project’s 38 years of research data.
Mobile field research: The DolphinGemma system runs directly on Pixel phones, making it practical for oceanographic field research and real-time dolphin interaction through the CHAT system.
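On the Sheets side, Google’s examples suggest usage along these lines. A minimal sketch, assuming the announced =AI(prompt, optional cell reference) signature; the prompts and cell references here are hypothetical:

```
=AI("Categorise this customer feedback as positive, neutral, or negative", A2)
=AI("Write a one-sentence summary of the notes in this cell", B2)
```

Because the prompt lives in an ordinary formula, you can drag it down a column to run Gemini across an entire dataset, just as you would with any other Sheets function.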
Alex’s take: What I love about Google is how they’re applying AI across such vastly different domains—from mundane spreadsheet functions to potentially cracking the code of dolphin language. I’m particularly struck by how the same foundational LLM technology powering a spreadsheet function could help us understand what dolphins are saying to each other after decades of careful observation.
Today’s Signal is brought to you by Athyna.
Curious how global hiring gives you a competitive edge?
Discover salary insights for engineers, data scientists, product managers, and more.
Explore top-tier talent with experience at AWS, Google, PwC, and beyond.
Learn how to save up to 70% on salaries while hiring top global talent.
Content I Enjoyed
Nvidia’s Geopolitical Tightrope
When your CEO lands in Beijing days after the US government blocks sales of your AI chips to China, you know you’re walking a precarious geopolitical tightrope. I’ve been watching Nvidia’s China situation unfold closely this week.
Something that stood out to me was a revenue chart showing Nvidia’s growth trajectory: from virtually nothing in 2015 to approximately $125 billion in 2025, with China representing a significant yet increasingly constrained slice of that pie.
The Trump administration’s move to restrict even Nvidia’s China-specific H20 chips (which were designed specifically to comply with previous export controls) marks a dramatic escalation in the tech decoupling between the US and China.
Take the timing, for example: immediately after Nvidia announced plans to invest $500 billion in US-based AI infrastructure and manufacturing, building chips in Arizona and AI supercomputers in Texas, the administration quietly tightened the screws on China sales.
But there’s something else at play. This week, Chamath Palihapitiya observed that perhaps 20-30% of Nvidia’s GPUs may actually be finding their way into China through “vassal intermediary countries” despite the restrictions. The mechanism is simple: nearby Asian countries buy the GPUs, then re-export them to China.
I suspect this has been happening for years, especially given how fast China is building out its data centres.
The takeaway? Nvidia finds itself at the centre of perhaps the most consequential technological and geopolitical chess match of our time. The company powering the AI revolution is now caught between two feuding superpowers, each determined to control the future of computing and win the race to artificial general intelligence (AGI).
I don’t envy Jensen Huang’s position.
Idea I Learned
AI Is a Motion Market, Not a Moat Market
This week, OpenAI is reportedly in talks to acquire Windsurf (formerly Codeium) for $3 billion—less than four years after its founding. This signals OpenAI’s clear intent to verticalise and own the application layer, particularly in coding, positioning them to compete directly with Cursor.
What's fascinating is the discussion this sparked about the nature of the AI market itself. Despite OpenAI’s superior distribution, brand, and data compared to Windsurf, they're willing to pay billions rather than build in-house.
The reason? Time-to-market. The opportunity cost of building in-house might exceed $3 billion, given how lucrative the AI coding market is becoming.
This reinforces a compelling insight I saw on X this week: the AI market is a “motion market,” not a “moat market.” For a while, GitHub Copilot seemed to have locked up the developer tools conversation. Many hesitated to compete because it felt like Microsoft had already won.
That turned out to be an illusion. Most incumbents are approaching AI the same way they approached mobile—late, reactive, and lacking taste. They're too bloated, too slow, and too busy with executive rotations and OKRs to move quickly.
The winners in this space know how to iterate and build in public, collapse feedback loops, tell compelling stories, and design from first principles. Even when established products catch up technically, the zeitgeist has already shifted, and users (especially developers) are difficult to win back once they've moved on.
The lesson? In AI, speed and execution trump resources and existing advantages. The window for building category-defining products remains wide open as long as you can move fast enough.
This reminds me of what Peter Thiel said about “last mover advantages”. If you think deeply enough about what users really want, you will create the last great development in a specific market and enjoy years, if not decades, of monopoly profits.
Danny Postma on AI democratising expertise:
AI makes you an expert in any field. You just need to know what to ask it.
— Danny Postma (@dannypostmaa)
3:14 AM • Apr 17, 2025
AI is creating a fundamental shift in how we define expertise.
I think this neatly summarises Danny Limanseta’s observation that “broad knowledge and surface-level understanding across domains is becoming more valuable than deep expertise in a single technical skill.”
Expertise is now increasingly about knowing how to access and direct knowledge rather than personally possessing it.
We’re now witnessing the emergence of a new cognitive partnership: human curiosity, context, and critical thinking combined with AI’s vast knowledge and tool use to create and automate tasks at scale.
The most successful individuals will be those who embody critical thinking and ask deep, probing questions—challenging the AI—to evaluate, synthesise and thoughtfully apply the answers.
Source: Danny Postma on X
Question to Ponder
“Will the increased use of AI in our lives lead to a change in how we think?”
One hundred per cent.
As we explored above, you now need to turn your attention toward directing knowledge instead of memorising it.
AI is capable of writing anything you’ve ever written, as long as your prompt is good enough. That means providing the necessary context, examples, and framework to produce a great output, as sketched below.
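To make that concrete, a well-structured prompt might look something like this. It’s a hypothetical sketch of the context-examples-framework structure, not a formula from this newsletter:

```
Context: You edit a weekly AI newsletter for a general tech audience.
Task: Rewrite the draft below in a punchy, conversational tone.
Examples: [paste one or two past editions you were happy with]
Constraints: Under 200 words, British spelling, no jargon.
Draft: [paste your draft here]
```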
But the idea that AI will replace thinking (and turn us into brain-dead beings) is totally flawed.
It reminds me of the age-old adage, “garbage in, garbage out”. In the context of AI, the quality of your output is only as good as the quality of your prompt. And the quality of your prompt is only as good as the quality of your thinking: the context you supply and the questions you ask to get the answer you’re after.
Directing knowledge, critical thinking, and questioning assumptions are the core skills that have to be prioritised in the age of AI.

How was the signal this week?

See you next week,
Alex Banks