
Google’s AI Dominance, ChatGPT’s Memory Upgrade, and Amazon's Double Drop

AI Highlights

My top-3 picks of AI news this week.

Sundar Pichai, CEO of Google / The Verge

Google
1. Google's AI Dominance

Google has unveiled a series of significant AI advancements this week at Google Cloud Next 25, as they continue to build the future of AI infrastructure and applications.

  • Ironwood TPU: Their seventh-generation Tensor Processing Unit (TPU), designed specifically for inference. It delivers 5x the peak compute capacity of the previous generation and packs 42.5 exaflops of power (24x more than the world’s largest supercomputer).

  • Agent2Agent (A2A) Protocol: A new open standard for AI agent interoperability that enables secure communication, task coordination, and context sharing among AI agents across different platforms and vendors, supported by over 50 technology partners (see the conceptual sketch after this list).

  • Enhanced Gemini Capabilities: Deep Research is now powered by Gemini 2.5 Pro, bringing improved information synthesis and analytical reasoning, while Gemini Live now allows users to share their screen or camera to brainstorm and troubleshoot in real time.
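
To make A2A a little more concrete, here’s a conceptual Python sketch of the two artefacts the protocol coordinates: an “agent card” that one agent publishes so others can discover its skills, and a task request a client agent would send. The field names, endpoint, and schema below are illustrative assumptions, not copied from the actual spec.

```python
# Conceptual sketch of A2A-style agent discovery and task hand-off.
# Field names and the endpoint are illustrative; consult the A2A spec
# for the real schema.
import json

# An "agent card": metadata an agent publishes so other agents can
# find it and understand what tasks it can take on.
agent_card = {
    "name": "invoice-processor",
    "description": "Extracts and validates fields from invoice PDFs",
    "url": "https://agents.example.com/invoice-processor",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "extract_fields", "description": "Pull totals, dates, vendors"}
    ],
}

# A client agent fetches the card, picks a skill, then sends a task
# request, sharing only the context that task needs.
task_request = {
    "skill": "extract_fields",
    "input": {"document_url": "https://example.com/invoice-042.pdf"},
}

print(json.dumps(task_request, indent=2))
```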

Alex’s take: Google’s woken up. They have been sitting on a distribution and hardware advantage all this time. They’re now starting to lean into that advantage, and I think it will become even more apparent over the coming decade, especially as they build increasingly efficient, bespoke hardware like Ironwood in-house to run their AI models. This could mean they beat the competition on cost; there’s a serious advantage in owning the vertical from chip to chatbot.

OpenAI
2. ChatGPT's Memory Upgrade

OpenAI has rolled out memory improvements to ChatGPT, enabling more personalised interactions by referencing past conversations.

  • Comprehensive recall: ChatGPT can now draw from your entire chat history to provide contextually relevant responses based on your preferences and interests.

  • Two-tiered memory system: Features both "Saved Memories" (explicitly requested information) and "Chat history" references that work together to create more tailored experiences (a toy sketch follows this list).

  • User control: Complete management of what ChatGPT remembers with options to delete individual memories, clear all saved memories, or turn off memory features entirely through settings.
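
OpenAI hasn’t published how the two tiers mesh, but the idea is easy to picture. The toy sketch below is entirely my own mock-up, not OpenAI’s implementation, showing how explicit saved memories and ambient chat history might combine into context for the next reply:

```python
# Toy illustration of a two-tiered memory system with user control.
# This is a mock-up of the concept, not OpenAI's implementation.
from dataclasses import dataclass, field


@dataclass
class Memory:
    saved: dict = field(default_factory=dict)         # tier 1: explicit facts
    chat_history: list = field(default_factory=list)  # tier 2: ambient history

    def remember(self, key: str, value: str) -> None:
        """Tier 1: facts the user explicitly asked the assistant to save."""
        self.saved[key] = value

    def log(self, snippet: str) -> None:
        """Tier 2: past conversation snippets the assistant may reference."""
        self.chat_history.append(snippet)

    def forget(self, key: str) -> None:
        """User control: delete an individual saved memory."""
        self.saved.pop(key, None)

    def build_context(self, last_n: int = 3) -> str:
        """Combine both tiers into context for the next response."""
        facts = "; ".join(f"{k}: {v}" for k, v in self.saved.items())
        recent = " | ".join(self.chat_history[-last_n:])
        return f"Saved memories: {facts}\nRecent chats: {recent}"


memory = Memory()
memory.remember("dietary_preference", "vegetarian")
memory.log("User asked for a 30-minute dinner recipe.")
print(memory.build_context())
```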

Alex’s take: What I like about this move is that it solves the annoying “memory reset” issue we all experience when working alongside LLMs. We are now one step closer to truly personalised AI relationships rather than isolated conversations. However, the feature is absent in European markets (including the UK) due to much stricter data protection regulations.

AWS
3. Amazon's Double Drop

AWS has released two AI models focused on enhanced video and speech capabilities, both of which are now available on Amazon Bedrock.

  • Nova Sonic: A new speech-to-speech model designed to capture not just what people say but how they say it—including tone, style, pauses, and interruptions.

  • Nova Reel 1.1: A video-generation model that produces multi-shot videos up to two minutes long with consistent styling across shots, elevating the quality and coherence of AI-generated video content.

  • Unified approach: Nova Sonic combines speech understanding and generation in a single model, preserving context and nuance for more natural conversations.
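
Since both models ship on Bedrock, invoking them follows Bedrock’s standard runtime patterns. Here’s a rough sketch of kicking off a Nova Reel job with boto3; video renders are long-running, so the call is asynchronous and writes to S3. The model ID and payload shape are my assumptions, so check the Bedrock docs before relying on them. (Nova Sonic, being real-time speech-to-speech, uses a bidirectional streaming API instead.)

```python
# Rough sketch of invoking Nova Reel via Amazon Bedrock with boto3.
# The model ID and payload shape are assumptions based on Bedrock's
# async-invoke pattern; verify against the current Bedrock docs.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.start_async_invoke(
    modelId="amazon.nova-reel-v1:1",  # assumed identifier for Nova Reel 1.1
    modelInput={
        "taskType": "TEXT_VIDEO",
        "textToVideoParams": {"text": "A drone shot over a foggy coastline"},
    },
    # Long-running video jobs write their output to S3 rather than
    # returning it inline.
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/nova-reel-output/"}
    },
)

print(response["invocationArn"])  # poll this job until the render completes
```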

Alex’s take: I’m a big believer that speech will be the predominant interface for generative AI. It’s great to see AWS shipping advancements across both the visual and audio domains. Given their size and position, they have a serious distribution advantage with teams already embedded in the AWS ecosystem. For now, I’m eagerly waiting for pricing to be released to see how it stacks up against the likes of Google’s Veo 2 at $0.50 per second.
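
For a rough sense of scale: at $0.50 per second, a two-minute clip (the maximum length Nova Reel 1.1 supports) would cost $60 per generation on Veo 2, so even a modest undercut from AWS would matter for anyone producing video in volume.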

Today’s Signal is brought to you by GrowHub.

Want to know why most people fail on LinkedIn?

They spend hours writing mediocre posts that get zero engagement.

I know because I used to be one of them.

I built an audience of 130,000+ followers by turning my ideas into viral content in minutes (not hours).

The secret? GrowHub.

Drop in any:

  • YouTube video

  • Blog post

  • Image

And watch GrowHub's AI (trained on thousands of viral posts) repurpose it into engaging content.

Content I Enjoyed

ElevenLabs MCP server

Voice Is Becoming the New Interface for AI

This week, ElevenLabs released its MCP server. This means you can build a voice agent that lets you do things like order pizza, transcribe speech, and generate audio using simple API calls.

For those wondering what the Model Context Protocol (MCP) is, you can think of it as a USB-C port for AI applications: it provides a standardised way to connect AI models to different data sources and tools.

The ElevenLabs MCP server sits on top of the ElevenLabs API, giving AI models a standardised way to tap into the ElevenLabs audio platform.
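
To see how little glue this takes, here’s a minimal server sketch using the official Python MCP SDK (`pip install mcp`). The text-to-speech tool body is a stand-in of my own; the ElevenLabs endpoint and parameters shown are assumptions rather than a copy of their documented API.

```python
# Minimal MCP server sketch using the official Python SDK.
# The ElevenLabs call inside the tool is illustrative; check their API
# docs for the exact endpoint, voice IDs, and parameters.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("audio-tools")


@mcp.tool()
def text_to_speech(text: str, voice_id: str = "Rachel") -> str:
    """Convert text to speech and save the audio to a local file."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",  # assumed endpoint
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    path = "output.mp3"
    with open(path, "wb") as f:
        f.write(resp.content)
    return path


if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so hosts like Claude Desktop can connect
```

Once a host registers this server, the model can call `text_to_speech` as naturally as it writes prose, which is exactly the “USB-C” promise.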

I think this is a really interesting step towards voice and audio becoming the primary interfaces of the AI economy.

For example, when I ask Claude, Grok, ChatGPT, or any other LLM provider a prompt on my computer, I no longer use my keyboard. I use my voice instead, leveraging a favourite tool of mine called Wispr Flow.

But when it comes to performing outbound calls, following up with leads, or even booking appointments, the ElevenLabs MCP server seems to be scratching the surface of something extraordinary.

My research also led me to find this demo where Luke Harries, Growth Lead at ElevenLabs, combined the WhatsApp MCP server with their own MCP server to transcribe voice notes and send audio messages using ElevenLabs’ library of 3,000+ voices. Wild.

I believe MCP servers are turning out to be something of an app store for AI.

Idea I Learned

Custom dashboard / shadcn on X

The Composability of AI Tools

You write seven lines of code and get a gorgeous dashboard.

When would that ever have been possible before?

What I love about generative AI is that it has fundamentally removed the barrier to being “technical”.

English is the new programming language. There is no longer any requirement to memorise complex functions and formulas. If you get stuck, you can simply ask an LLM (with search capabilities for up-to-date information), “How do I do this?”

It is a frictionless facilitator that lets anyone explore their curiosity and go down deep rabbit holes without having to pause, hunt down adjacent materials, and then pick up where they left off. You can now get any answer you could possibly want, on demand, as long as you ask the right question.

As shadcn on X demonstrates, you can now build gorgeous UI by combining building blocks from across the web. It’s almost like stacking Lego bricks: everything is becoming composable, with each piece building on the others.

Then you can combine deep research tools like Grok 3 with AI-powered development tools like Cursor, v0, and Lovable, and you’re cooking with gas, bringing your idea to life.

If you’re interested in my AI-powered workflow using tools like this for non-technical folk, let me know by responding to this email.

Quote to Share

Elon Musk on Sam Altman:

Earlier this week, OpenAI Newsroom put out a post on X directly addressing Elon Musk’s actions against OpenAI. This legal back-and-forth was born out of Musk's lawsuit to block OpenAI's conversion to a for-profit structure back in 2024.

But things have recently turned vocal on X, with Musk replying to a screenshot of OpenAI’s statement with, “Scam Altman is at it again.” The jab mainly refers to Sam Altman’s statement, made while testifying before Congress in 2023 about the dangers of AI, that “I’m doing this because I love it”, not for the money. It was only later reported that he could receive approximately $10 billion in equity if OpenAI successfully converts to a for-profit model.

OpenAI’s original mission was to build a non-profit, open-source AI company. It is now closed-source and well on its way to becoming for-profit: a complete 180 from the mission Musk signed up for when he helped found the company in 2015.

Question to Ponder

“With AI progress accelerating, are we approaching an ‘end of work’ scenario where humans become economically unnecessary under our current model?”

Some AI researchers predict full automation of all human jobs by 2100, though it could happen much sooner.

Our economic system was built around human labour as the primary means of distributing resources. When that foundation shifts, everything must adapt.

If AI eventually drives the cost of intelligence, and with it the cost of labour, towards zero, we need to think about how to navigate that kind of society.

This trajectory raises a question that is increasingly difficult to ignore: will we need a basic income? Workers competing with automation would need that kind of security.

Currently, routine, low-creativity jobs like trucking or data entry are prime candidates for disruption. Jobs needing creativity, emotional depth, or social finesse—like artists, therapists, or nurse anesthetists—are far more difficult for AI to compete against as it ultimately can’t (yet) “feel”.

However, the World Economic Forum expects 39% of core skills to shift by 2030. Focusing on upskilling and retraining will therefore be paramount.

Much like the Industrial Revolution transformed work rather than abolished it, I think we’ll see the same with the rise of AI. New roles will be created, old ones will fade, and our definition of “work” will shift over time.

How was the Signal this week?


💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?