
Google Talks the Talk, OpenAI Goes on the Record, and ElevenLabs Puts Agents to Work

AI Highlights

My top three picks of AI news this week.

Google launches Search Live / Search Engine Land

Google
1. Google Talks the Talk

Google has launched Search Live with voice input, a new way to interact with search through free-flowing, back-and-forth voice conversations in the Google app for Android and iOS.

  • Voice-first experience: Users can now engage in natural conversations with Search by tapping the "Live" icon and asking questions verbally, receiving AI-generated audio responses in real-time.

  • Multitasking capabilities: Search Live works in the background, allowing users to continue conversations while using other apps.

  • Enhanced exploration: The feature provides easy-to-access web links on screen and includes a transcript button to switch between voice and text interactions, with conversation history saved in AI Mode.

Alex’s take: This feels like a significant step toward making search more natural and conversational. I'm particularly excited about the upcoming camera integration, which will let us show Search what we’re seeing in real time. I’d definitely use this to get help fixing a home appliance, and a student solving a homework problem could get live feedback without having to ask a teacher.

OpenAI
2. OpenAI Goes on the Record

OpenAI has launched ChatGPT Record, a new feature that transforms how we capture and process spoken information through real-time transcription and summarisation.

  • Live transcription & summarisation: Records up to 120 minutes of meetings, brainstorms, or voice notes, then automatically generates structured summaries saved as interactive canvases.

  • Multi-format: Canvases can be instantly converted into project plans, emails, code scaffolds, or any other format you need.

  • Cross-conversation memory: References past recording transcripts and canvases to provide contextual responses across new conversations, enabling questions like "What did we decide in Monday's roadmap sync?"

  • Limited rollout: Currently exclusive to Enterprise, Edu, Team, and Pro workspaces, and only available through the macOS desktop app.

Alex’s take: Whilst this feature is limited to a set of users on macOS, I feel it’s one of many initiatives OpenAI will roll out over the coming months that go beyond “just an LLM”. OpenAI are reaching up to the application layer and eating the stack from the ground up. Granola, my current favourite AI meeting-notetaking tool, came to mind when ChatGPT Record was released. Let’s see how strong platform lock-in and brand become as we move toward a world where AI is a companion rather than a tool used in isolation.

ElevenLabs
3. ElevenLabs Puts Agents to Work

ElevenLabs has introduced Model Context Protocol (MCP) support for their conversational agents, enabling integration with external tools and data sources.

  • Universal connector: MCP acts as a standard interface allowing AI agents to access data sources and specialised tools through external servers, significantly extending agent capabilities.

  • Flexible security controls: Three approval modes: “Always Ask” for maximum security; “Fine-Grained Tool Approval”, where users pre-select which tools run automatically and which require permission; and “No Approval”, where the assistant can use any tool without asking.

  • Growing ecosystem: Integration with services like Zapier MCP, which connects to hundreds of tools and platforms.
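To make the three approval modes concrete, here is a minimal Python sketch of the decision logic they describe. The mode names, function, and tool identifiers are illustrative assumptions for this sketch, not ElevenLabs' actual API.

```python
def needs_user_approval(mode: str, tool: str, pre_approved: set[str]) -> bool:
    """Return True when the agent must pause and ask before running `tool`."""
    if mode == "always_ask":      # maximum security: every tool call is confirmed
        return True
    if mode == "fine_grained":    # only tools outside the pre-approved list pause
        return tool not in pre_approved
    if mode == "no_approval":     # assistant runs any tool automatically
        return False
    raise ValueError(f"unknown approval mode: {mode}")

# Example: a fine-grained policy that lets a calendar lookup run freely
# but requires confirmation before sending email (hypothetical tool names).
allowed = {"calendar.lookup"}
print(needs_user_approval("fine_grained", "calendar.lookup", allowed))  # False
print(needs_user_approval("fine_grained", "email.send", allowed))       # True
```

The trade-off is the usual one between safety and friction: “Always Ask” interrupts every action, “No Approval” trusts the agent completely, and the fine-grained allow-list sits in between.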

Alex’s take: I really enjoyed watching Angelo’s demo on X, bringing this feature to life. Isolation is an AI agent’s Achilles’ heel. ElevenLabs is tackling this head-on by making their agents truly interoperable. MCP will, in turn, become the plumbing that transforms AI into genuinely useful business tools that can actually do things in your existing workflow.

Today’s Signal is brought to you by INBOUND.

INBOUND 2025 makes its West Coast debut this year in San Francisco, Sept. 3–5. Connect with visionaries including Anthropic's Dario Amodei, Synthesia's Victor Riparbelli, and AI thought leader Dwarkesh Patel. Discover cutting-edge strategies and game-changing insights at the heart of AI innovation.

Content I Enjoyed

Elon Musk at YC AI SUS 2025 / AttentionX YouTube

Elon Musk on Digital Superintelligence

This week, Elon Musk gave a fireside chat at Y Combinator’s AI Startup School in San Francisco. Now that he’s back from his politics “side quest”, he seems to be once again locked into the acceleration of intelligence. With timelines compressing across the board, I’ve pulled out my three favourite highlights that relate to AI or AI-adjacent topics.

Firstly, Musk believes digital superintelligence is arriving faster than expected, possibly within the next one to two years. We’re at the very early stage of the “intelligence big bang”. It’s important to note that his definition of “digital superintelligence” here is a system that’s smarter than any human at anything. Rather than a single runaway system, Musk foresees five to ten competing deep intelligences.

Secondly, Neuralink is moving from restoration to superhuman augmentation. Beyond the five humans already controlling computers with their thoughts, Musk revealed that vision restoration trials will begin in 6-12 months. They'll write directly to the visual cortex of blind patients. But the long-term vision is wild: multispectral vision including infrared, ultraviolet, and radar wavelengths.

Thirdly, the robot revolution will be humanoid-first. His prediction that humanoid robots will outnumber all other robot types "by an order of magnitude" means Musk eventually expects five to ten humanoid robots for every other kind of robot. Whilst he was initially hesitant about humanoid robotics due to “Terminator concerns”, he realised “it's happening whether I do it or not”. I guess if you can’t beat them, you’ve got to join them.

For civilisation as a whole, human intelligence will be “less than 1% of all intelligence” at some point. But if he's right about the timeline, we’re about to witness the most significant transformation in human history. The intelligence big bang, indeed.

Idea I Learned

Self-driving car / Volvo

When Driving Becomes a Hobby, Not a Necessity

New data has surfaced comparing accident rates across different driving scenarios.

Waymo's autonomous vehicles clock in at 1.16 accidents per million miles, while the US average sits at 3.90. That's 70% safer than human drivers. But here’s where it gets interesting: Tesla's data shows 0.15 accidents per million miles—though this includes supervised human drivers using Tesla's systems, so the data isn’t quite apples to apples…
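The 70% figure follows directly from the two quoted rates; a quick check of the arithmetic:

```python
# Relative accident reduction of Waymo vs. the US human-driver average,
# using the per-million-mile rates quoted above.
waymo_rate = 1.16   # accidents per million miles (Waymo)
us_average = 3.90   # accidents per million miles (US average)

reduction = (us_average - waymo_rate) / us_average
print(f"{reduction:.0%}")  # roughly 70% fewer accidents per mile
```

The same calculation can't fairly be applied to the Tesla number, since its denominator mixes in supervised human driving.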

I think the overall trend highlights an interesting shift for the next generation. It seems we’re heading toward a future where manual driving becomes what horse riding is today: a recreational activity rather than daily transportation.

Waymo is already making over 250,000 paid journeys weekly across US cities. The implications are massive. Imagine reclaiming the hours currently spent focused on the road. Reading, working, or simply relaxing during your commute could become the norm rather than the exception.

For parents, this data raises important questions about when and how to introduce driving skills to teenagers in an increasingly autonomous world. Chamath recently shared that while his 16-year-old son must learn to drive manually, once licensed, he'll only drive their Tesla with Full Self-Driving enabled.

The transition won't happen overnight, but these numbers suggest we're closer than many realise to fundamentally changing how we think about getting from point A to point B.

Quote to Share

George Hotz on AI’s value hierarchy:

I thought this was some interesting commentary by George Hotz on where value will ultimately accrue in the AI stack.

His framework suggests that infrastructure layers (data centres, chip fabs, hardware) will capture most economic value, while application companies and even foundation model makers will struggle to maintain margins.

This echoes historical tech patterns where infrastructure providers often outperform application companies long-term. Think about how cloud providers like AWS captured more value than many of the applications built on top of them.

But his dismissal of tier 4 companies seems a bit premature. OpenAI and Anthropic are no longer just providers of foundational LLMs; they’re becoming platforms that could own entire workflows (think tasks, reminders, workspace integrations, tool use). I believe the question goes beyond technical capability to distribution as a moat, with ecosystem lock-in.

As for tier 5 "worthless" companies like Cursor (the AI-powered IDE): while they face commoditisation risk, there's still enormous value in solving specific use cases better than general-purpose models can. An interesting remark about a company that has grown to over $500 million in ARR and is used by over half of the Fortune 500.

The real test will be whether model companies can maintain differentiation as compute becomes commoditised.

Source: TBPN on X

Question to Ponder

“What is your go-to AI when you have deep research questions to ask?”

One month ago, if you asked me, I would have said Grok 3 “DeepSearch”. However, more recently, I've actually changed to ChatGPT's o3 model with search. I find the length of the output to be far more digestible as it really does hit the nail on the head without any extra waffle.

Even though OpenAI announced their o3-pro model last week as part of their $200/month plan, it offers only a 3-5% improvement over standard o3, so there is little incentive for me to upgrade: the gain in output quality is marginal.

What strikes me most about this shift is how little brand loyalty seems to matter in the AI space. We simply go where the best experience is. If Grok or Google gives better answers faster next month, that's where I’ll probably go, regardless of which company built it.

In traditional software, switching costs kept us locked in. Email migration, file compatibility, and learning curves all create friction, which we as consumers tend to shy away from. But with AI, that friction barely exists. We copy-paste a prompt from one chat to another, and we're done.

The best tool wins, not the most familiar one.

This rapid migration pattern tells us something important about the future. AI companies can’t rely on habit. They have to build beyond the LLM and stay ahead every single month, or users will simply drift away to whoever’s performing better.

We're witnessing the death of LLM provider brand loyalty in real time.

How was the signal this week?

💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?