OpenAI’s Agent Acceleration, The Google Method and AI Creator Takeover


AI Highlights
My top-3 picks of AI news this week.
OpenAI
1. OpenAI's Agent Acceleration
OpenAI has unveiled a comprehensive suite of tools for developers to build sophisticated multi-agent systems with unprecedented speed and efficiency.
Agent Tools Trifecta: Three new built-in tools. Web Search retrieves up-to-date information from the internet, File Search powers advanced RAG applications, and Computer Use enables programmatic computer control.
Responses API: A unified framework that supports multiple turns and simultaneous tool calls, and will eventually replace the Assistants API (planned for 2026).
Production-Ready Agents SDK: The evolution of their experimental Swarm framework, now featuring task handoffs with context preservation, built-in monitoring, and validation guardrails (see the sketch below).
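To make the announcement concrete, here is a minimal Python sketch of both pieces: one Responses API call with the built-in web search tool, and a two-agent handoff with the Agents SDK. It assumes the `openai` and `openai-agents` packages and follows the shapes OpenAI has documented; treat the model name, tool type, and prompts as placeholders to verify against the current docs.

```python
# pip install openai openai-agents
from openai import OpenAI
from agents import Agent, Runner

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Responses API: a single call with the built-in web search tool enabled
response = client.responses.create(
    model="gpt-4o",  # placeholder model name; check current availability
    tools=[{"type": "web_search_preview"}],
    input="Summarise this week's changes to OpenAI's agent tooling.",
)
print(response.output_text)

# 2) Agents SDK: a triage agent that hands the task off to a research agent
researcher = Agent(
    name="Researcher",
    instructions="Answer questions thoroughly, citing sources where possible.",
)
triage = Agent(
    name="Triage",
    instructions="If the request needs research, hand it off to the Researcher.",
    handoffs=[researcher],  # context is preserved across the handoff
)

result = Runner.run_sync(triage, "What did OpenAI ship for agent builders this week?")
print(result.final_output)
```

The appeal is that handoffs, tracing, and guardrails now come out of the box rather than being something every agent team re-implements from scratch.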
Alex’s take: I’ve seen a lot of talk that this will kill agent startups. I disagree. Agent startups will need to find deeper competitive advantages beyond what’s now readily available through OpenAI’s SDK. Competition is driving better tools for everyone.
Google
2. The Google Method
Google has launched a series of AI updates across their product lines, making their AI more accessible, powerful, and personalised than ever before.
Gemma 3: A new family of open models (1B-27B parameters) built on Gemini 2.0 technology, designed to run on a single GPU or TPU, with multimodal capabilities, a 128K-token context window, and support for over 140 languages (see the sketch after this list).
Gemini 2.0 Flash upgrades: Enhanced with file upload support, a 1M token context window for Advanced users, and native image generation and editing capabilities.
Deep Research: Now powered by Gemini 2.0 Flash Thinking Experimental and available to all users in 45+ languages, enabling more detailed and insightful multi-page reports.
Image-to-video with Veo 2: Google's Veo 2 technology, first integrated in Freepik AI Suite, now transforms static images into short, high-quality videos with natural motion.
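For readers who want to try Gemma 3 locally, here is a minimal sketch using Hugging Face Transformers. The checkpoint id and generation settings are assumptions; check the Gemma 3 model cards on Hugging Face for the exact ids and make sure you are on a recent `transformers` release.

```python
# pip install -U transformers accelerate
from transformers import pipeline

# Assumed checkpoint id for the 1B instruction-tuned model; verify on Hugging Face.
generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    device_map="auto",  # places the model on a single GPU if one is available
)

messages = [
    {"role": "user", "content": "Explain in two sentences what a context window is."}
]
result = generator(messages, max_new_tokens=128)

# The pipeline returns the chat history with the model's reply appended.
print(result[0]["generated_text"][-1]["content"])
```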
Alex’s take: Google is seriously pushing the envelope. They were a sleeping giant in the first GenAI wave of 2022-2024. Then 2025 came along and they woke up. They’re drip-feeding consistent updates each week versus the “big drops” of AI-first companies like OpenAI. I can’t help but feel that Google’s distribution advantage will make them a long-term winner.
Captions
3. AI Creator Takeover
Captions has launched “Mirage,” the world's first foundation model specifically designed for generating user-generated content (UGC).
Complete video generation: Creates full videos from just audio files or scripts, eliminating traditional production requirements.
Original AI actors: Generates realistic digital personalities with natural expressions that can deliver your message.
Licensing freedom: Produces content entirely free from rights management and licensing restrictions.
Scene creation: Builds complete backgrounds and environments from simple text prompts.
Full customisation: Allows for personalisation of actor appearance, voice, and presentation style.
Alex’s take: We’re witnessing the democratisation of video production at an unprecedented scale. Just as AI writing tools transformed content creation, Mirage is one of the first models positioned to ride the explosion of AI-generated video flooding our feeds. As AI content becomes ubiquitous, I suspect we’ll see authentic human creators become more valuable precisely because of their uniqueness.
Today’s Signal is brought to you by GrowHub.
Want to know why most people fail on LinkedIn?
They spend hours writing mediocre posts that get zero engagement.
I know because I used to be one of them.
I built an audience of 125,000+ followers by turning my ideas into viral content in minutes (not hours).
The secret? GrowHub.
Drop in any:
YouTube video
Blog post
Image
And watch GrowHub's AI (trained on thousands of viral posts) transform it into engaging content.
You can now create tweet-style images for your LinkedIn posts with a single click.
Content I Enjoyed
The Future of U.S. AI Leadership with CEO of Anthropic Dario Amodei
This week Dario Amodei, CEO of Anthropic, sat down with the Council on Foreign Relations to discuss AI’s trajectory, risks, and opportunities.
Something stood out to me during the interview.
Amodei made a bold prediction about the future of software development:
“I think we'll be there in three to six months—where AI is writing 90% of the code. And then in twelve months, we may be in a world where AI is writing essentially all of the code.”
Having spent months using AI coding tools like Cursor, Lovable and V0, I’ve seen my productivity skyrocket—but in my opinion we’re nowhere near 90% replacement yet.
Today, building complex systems requires a much deeper understanding than what AI currently grasps, especially when business requirements are rarely clear enough for AI to interpret correctly.
Even when you get into the weeds, testing, debugging and back-end development all require human expertise. Security is another area that demands human oversight.
I believe the true value of software engineers lies in orchestrating complex systems and translating business needs. The role will evolve rather than disappear.
This pattern is emerging across industries: not wholesale replacement, but a fundamental shift in how we work alongside increasingly capable AI systems.
The best developers will be those who ask the best questions. Human language is becoming the new programming language.
Idea I Learned
Manus - “It’s a Claude wrapper”
This week, Chinese company “Manus AI” released a fully autonomous AI agent.
It thinks, plans, and executes tasks without human oversight.
And has sparked a polarising debate online.
When I posted this update on LinkedIn, one of the most-liked comments said, words to the effect of:
“But Manus is only a Claude wrapper.”
The comment was absolutely right: it is a Claude wrapper.
As “Peak”, a Co-Founder of Manus stated on X this week:
“We use Claude and different Qwen-finetunes” alongside a variety of tools and open-source packages like Browser Use.
Turns out, adding unique experiences, functions, tools and techniques together means you are in fact wrapping up and riding on top of other technologies.
That’s how accretive value is built.
Yet comments like this always read to me like a “gotcha”.
They let the commenter feel good by diminishing the technology that has actually been built.
All we have to do is extend this idea for some context:
Salesforce is an Oracle database wrapper valued at $320 billion.
Stripe is a Mastercard wrapper valued at $70 billion.
AWS is an HPC primitives wrapper valued at $3 trillion.
h/t to @sahilypatel on X for this.
Turns out everything in tech is a wrapper.
But things get interesting in the domain of AI “wrappers”:
We thought the application layer would be commoditised and the value was to be found in the foundation models.
I disagree.
The models themselves are becoming commodities and are being open-sourced, so everyone can use them and build at the application layer, as Manus does.
As the cost of intelligence trends to near-zero, the real opportunity lies in infusing unique insights with deep proprietary data sets whilst combining the best LLMs and tools to make magical applications.
These are the areas where I see a real sustaining advantage.
So yes, Manus is a Claude wrapper.
But so is every other technology company.
Figure’s introduction of BotQ:
Today we're introducing: BotQ
BotQ is our high-volume robot manufacturing facility
Initially designed to produce 12,000 robots per year, it will scale up to support a fleet of 100,000
— Figure (@Figure_robot), Mar 15, 2025
Figure has spent the last 8 months building the BotQ manufacturing process to scale up to 12,000 humanoids per year.
What I love about the company is that they are vertically integrating and in-housing their entire manufacturing process.
They also intend to use their humanoid robots in their manufacturing process to build other humanoid robots—expected to take place this year.
I personally can’t wait to see what’s in store.
Source: Figure website
Question to Ponder
“What is your go-to AI when you have deep research questions to ask?”
ChatGPT was my constant companion for months—but recently I've found myself gravitating toward Grok more and more.
I like to use “DeepSearch” mode for researching answers to questions. Grok combs the web for the best sources to inform its answer. Once I have a robust plan and information, I then engage “Think” mode to reason through the steps to get an answer to my question.
I find the combination of both modes really helpful, and it actually outperforms even GPT-4.5, which is costing me $20/mo.
What strikes me most about this shift is how little brand loyalty seems to matter in the AI space. We simply go where the best experience is. If Grok gives better answers faster, that's where we’ll go—regardless of which company built it.

How was the signal this week?
See you next week, Alex Banks