Nvidia’s GTC Bombshells, Adobe’s Agent Orchestrator, and Google’s Watermark Woes


AI Highlights
My top-3 picks of AI news this week.
Nvidia
1. Biggest updates from Nvidia’s GTC 2025
Nvidia CEO Jensen Huang’s GTC 2025 keynote unveiled a series of major announcements spanning AI acceleration, robotics, and autonomous systems.
Blackwell Ultra: Coming in H2 2025, offering the same 20 petaflops of AI performance as standard Blackwell but with 288GB of memory (up from 192GB), enhancing AI inference capabilities.
Future GPU Roadmap: Vera Rubin architecture for late 2026 with 50 petaflops for inference (2.5x Blackwell), followed by Rubin Ultra in 2027 (100 petaflops), and eventually Feynman GPUs in 2028.
Personal AI Supercomputers: DGX Spark and DGX Station built on the Grace Blackwell chip bring supercomputing power to the desktop for high-end enterprise and research applications.
Isaac GR00T N1: An open-source foundation model for humanoid robots that enables collaborative work, independent learning, and optimisation across various environments.
GM Partnership: Nvidia is teaming up with General Motors to bring AI into manufacturing, vehicle design, and autonomous driving systems.
Alex’s take: I think it’s fascinating to see the progression from the GenAI boom of 2022 through to now being on the cusp of agentic AI and physical AI in 2025. Nvidia is building the infrastructure to power it all. I also think it’s great to see them open-sourcing models like GR00T N1 to accelerate physical AI, similar to how open LLMs transformed generative AI.
Adobe
2. Adobe’s Agent Orchestrator
Adobe has unveiled its ambitious AI strategy with the launch of Adobe Experience Platform Agent Orchestrator, bringing agentic AI to enterprise workflows.
Suite of 10 purpose-built agents: From Site Optimisation to Content Production, these specialised agents handle everything from creating personalised audience segments to detecting and fixing website issues.
Brand Concierge application: Creates personalised, immersive experiences using a company’s unique brand attributes and customer data.
Multi-agent ecosystem: Strategic partnerships with major players including Microsoft, ServiceNow, and AWS enable seamless execution across different AI agent environments.
Alex’s take: Adobe has taken ten common business roles, including the marketer, the account manager, and the data engineer, and turned their typical workflows into agentic systems. This is especially powerful for data-rich functions like marketing, where you can draw on past campaign performance to inform future ones.
Google
3. Gemini’s Watermark Woes
Google’s new Gemini 2.0 Flash model has sparked controversy with its ability to remove watermarks from images, including those from Getty Images and other stock media companies.
Powerful editing: The model can not only remove watermarks but also intelligently fill in gaps created by watermark deletion.
Experimental status: Currently labelled as “experimental” and “not for production use,” available only in developer-facing tools like AI Studio.
Legal concerns: Removing watermarks without the original owner's consent is generally considered illegal under U.S. copyright law.
Alex’s take: Google has stated that using its tools to infringe copyright “violates their terms of service.” However, it’s interesting that a different image generation/manipulation model like OpenAI’s DALL·E 3 outright refuses requests like watermark removal. I expect we’ll see more robust protections implemented soon to give greater certainty to creators.
Today’s Signal is brought to you by Artisan.
Hire an AI BDR & Get Qualified Meetings On Autopilot
Outbound requires hours of manual work.
Hire Ava, who automates your entire outbound demand generation process, including:
Intent-Driven Lead Discovery Across Dozens of Sources
High Quality Emails with Human-Level Personalization
Follow-Up Management
Email Deliverability Management
Content I Enjoyed
The Cute Robot That Captivated GTC 2025
Among all the impressive announcements at Nvidia’s GTC event this year, there was one that truly sparked wonder inside of me.
It was the result of a collaboration between Nvidia, Disney Research, and Google DeepMind.
The result: “Blue”, an incredibly cute Star Wars-inspired BDX droid.
The Newton physics engine powering Blue is an open-source simulator designed specifically for robotics that allows artists to animate robots exactly as they envision.
The AI then handles the complex physics to make those movements work in unpredictable environments.
By creating virtual worlds with incredibly realistic physics, they can have thousands of digital robots trip, fall, and crash without damaging expensive hardware in the real world.
The engine is highly customisable too, allowing developers to program robotic interactions with food, cloth, sand, and other deformable objects—things that have traditionally been a nightmare for robotics.
Disney plans to bring robots like Blue into their theme parks starting next year. The technology will also be available as an open-source release later this year.
I think there’s something refreshingly appealing and playful about a robot that feels more like R2-D2 than the Terminator.
Idea I Learned
Coca-Cola CEO James Quincey on the future of AI advertising
This week, I attended the Adobe Summit in Las Vegas.
James Quincey, CEO of Coca-Cola, gave a fascinating talk on day one of the event.
Back in December, Coke released its Christmas advert, which faced a lot of backlash for a simple reason: it was 100% AI-generated.
“It was cheaper to make than historical regular ads, it was much quicker to make so it was a huge productivity opportunity.”
Yet, on reflection, James noted:
“It cannot produce a resolution on people's faces that humans will buy into… [GenAI] is not yet at the stage when it can make all our ads because we want people in them and humans are incredibly effective at noticing AI.”
As video models get better in the future, James sees two opposing forces taking place:
On the one hand, the cheaper it becomes to make an AI-generated advert, the easier it is to put consumers themselves into the ads they watch on their phones.
If a viewer doesn’t click “buy”, the AI can A/B test the ad and keep re-showing variants until they do: perpetual optimisation, chasing the consumer until they purchase your product.
On the other hand, you now have a bunch of annoyed consumers who will block your product.
Where does the advertiser now spend its money?
Live experiences are the only thing that can’t be avoided.
So there’s a line to dance on here between the cost of creativity and people’s level of annoyance from hyper-personalisation.
James feels the thing that will set you apart and prevail as this becomes a reality is the quality of the creativity. “I don’t want to be spammed with rubbish.”
It’s clear creativity has a premium—even more so in the age of GenAI, where a mountain of content slop is beginning to pile up.
The only thing that now towers over that is authentic and thoughtful content that evokes a deep emotional response.
I’m confident this will be where the enduring advantage lies between humans and AI.
Ethan Mollick on AI passing the Turing Test for memes:
I regret to announce that the meme Turing Test has been passed.
LLMs produce funnier memes than the average human, as judged by humans. Humans working with AI get no boost (a finding that is coming up often in AI-creativity work) The best human memers still beat AI, however.
— Ethan Mollick (@emollick)
12:33 AM • Mar 17, 2025
A recent study from KTH Royal Institute of Technology (Sweden’s largest technical university) demonstrated that AI has now surpassed average humans in creating humour—at least when it comes to memes.
The research compared three groups: humans alone, humans with AI assistance, and AI working independently.
AI-generated memes scored higher on humour, creativity, and shareability than both human-only and human-AI collaborations. However, when looking at the very best examples, humans still created the funniest memes, while human-AI teams excelled at creativity and shareability.
Perhaps this study reveals something many of us already suspected: the average human isn't particularly funny, and most people have fairly predictable tastes in memes.
On the other hand, AI now excels at pattern recognition and generating content that appeals to the broadest possible audience. It essentially creates humour by averaging preferences across millions of examples.
While the unique spark of exceptional human creativity remains persistent, we’re now entering a new phase where AI can generate genuinely entertaining content for the masses.
Source: Ethan Mollick on X
Question to Ponder
“What does it mean for us when AI becomes smarter than humans at pretty much everything? How do we find our place in that new world?”
This question reminds me of a recent episode of The Joe Rogan Experience with chess champion Josh Waitzkin.
Within 3 hours of experimentation, AlphaZero was stronger than any human or computer in the history of chess.
Stronger than Magnus Carlsen, Garry Kasparov, and Bobby Fischer.
AlphaZero has an Elo rating of 3700. The strongest players today have an Elo of ~2800. When Josh was 9, he had an Elo of 1900.
To put this gap into perspective:
The Elo rating difference between the strongest AI chess engines and the human world champion is the same as the gap between the world champion and a talented 9-year-old.
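That equivalence falls straight out of the standard Elo expected-score formula, which depends only on the rating difference between two players, so both 900-point gaps predict the same lopsided result. A quick sketch, using the ratings quoted above:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# The two 900-point gaps: engine vs world champion, champion vs 9-year-old Josh.
engine_vs_champion = expected_score(3700, 2800)
champion_vs_kid = expected_score(2800, 1900)

print(f"engine vs champion:  {engine_vs_champion:.4f}")  # ~0.9944
print(f"champion vs kid:     {champion_vs_kid:.4f}")     # ~0.9944
```

An expected score of ~0.99 means the stronger side is predicted to take roughly 99% of the available points over many games, which is why the comparison lands so hard: the world champion is to AlphaZero what a talented child is to the world champion.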
Humans naturally have a hard time adapting to paradigm shifts. We’re inherently resistant. We fight “tooth and nail” to maintain our conceptual schemes, especially when our identity and life’s work are tied to them.
But what if we embraced the humility of, as Josh puts it, being the “ant relative to the human”?
What if we approached this new relationship with the curiosity and playfulness of children who aren't afraid of failure or looking bad?
I agree that this childlike openness to learning without ego might be our greatest asset in a world where we’re no longer the apex intelligence on the planet.

How was the Signal this week?

See you next week, Alex Banks