The Beatles get by with a little help from AI

AI Highlights

My top three picks of AI news this week.

The Beatles by dov makabaw sundry / Alamy

AI Music
1. The Beatles' AI song makes Grammy history

The Beatles' final song, “Now and Then”, has been nominated for two Grammy Awards, marking a historic moment where AI-restored music competes with contemporary artists.

  • AI restoration: Machine learning technology was used to separate John Lennon's vocals from a 1970s demo tape, making it possible to complete the unfinished track.

  • Grammy nominations: The song is nominated for Record of the Year and Best Rock Performance, competing against modern artists like Beyoncé, Taylor Swift, and Billie Eilish.

  • Historical significance: This marks The Beatles' first Grammy nomination in nearly 50 years, a neat bridge between past and present built with AI.

Alex’s take: While some worry about AI replacing human creativity, I find it poetic that AI has instead helped preserve and complete a piece of musical history. The technology didn’t create anything new; it simply made it possible for Paul and Ringo to fulfil John’s original artistic vision. Given the current state of chart music, it would actually be a breath of fresh air to see The Beatles win with their timeless sound.
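
For the curious, the core technique here, music source separation (“de-mixing”), is within anyone’s reach today. The Beatles’ team reportedly used a custom model built by Peter Jackson’s audio engineers; below is a minimal sketch of the same idea using the open-source Demucs library (the file name and model choice are illustrative):

```python
# pip install demucs  (Meta's open-source music source separation tool)
import demucs.separate

# Split a hypothetical "demo.mp3" into two stems: vocals and everything else.
# Outputs land in ./separated/htdemucs/demo/ as vocals.mp3 and no_vocals.mp3.
demucs.separate.main(["--mp3", "--two-stems", "vocals", "-n", "htdemucs", "demo.mp3"])
```

A general-purpose model like this won’t match one tuned to a specific 1970s cassette, but it’s the same family of technique that lifted John’s vocal out of the original demo.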

Perplexity AI
2. Perplexity's sponsored search

Perplexity AI has introduced advertising to its AI-powered search platform, marking a significant shift in its monetisation strategy.

  • Novel ad format: Introduces “sponsored follow-up questions” positioned alongside search results, with partners including Indeed, Whole Foods, and Universal McCann.

  • Publisher focus: Aims to generate sustainable revenue for content sharing with publishing partners, addressing limitations of their subscription-only model.

  • Competitive positioning: Markets itself as a premium alternative to Google, targeting educated, high-income consumers while promising unbiased, AI-generated answers.

Alex’s take: There’s an undeniable tension between providing unbiased AI search results and monetisation. While Perplexity claims sponsored questions won’t affect answer objectivity, I have concerns about how paid content might shift the “signal-to-noise ratio” of search results. With ChatGPT’s ad-free (for now) search having just launched, the real test will be whether users accept ads as a commercial necessity or flock to alternatives.

DeepL
3. DeepL finds its voice

DeepL, the German translation company valued at $2 billion, has launched DeepL Voice, bringing real-time voice translation capabilities to its platform.

  • Real-time translation: Supports live voice-to-text translation across 13 languages, with caption support for all 33 languages in DeepL Translator.

  • Meeting-focused design: Offers “mirror” display for in-person meetings and subtitle integration for Microsoft Teams.

  • Privacy-first approach: While voice processing occurs on servers, DeepL has a strict no-retention policy.

Alex’s take: I like DeepL’s patience in building from the ground up. In a landscape where many AI companies race to market by riding on top of existing LLMs, DeepL spent years developing their own translation-optimised model. I’m a big believer that purpose-built AI, rather than general-purpose models, will be what ultimately wins in specialised domains like language.
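
DeepL Voice doesn’t appear to have a public API at launch, but DeepL’s existing text API shows the translation core it builds on. Here’s a minimal sketch using the official deepl Python package (the auth key placeholder and sample sentence are mine):

```python
# pip install deepl  (DeepL's official Python client)
import deepl

# Auth key comes from a DeepL API account; placeholder shown here.
translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

# Translate a transcribed snippet of meeting audio (speech-to-text not shown).
result = translator.translate_text(
    "Können wir den Termin auf Donnerstag verschieben?",
    target_lang="EN-GB",
)
print(result.text)  # e.g. "Can we move the appointment to Thursday?"
```

Voice, presumably, adds streaming speech recognition in front of this translation step.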

Today’s Signal is brought to you by Fyxer AI.

You know what's killing your productivity?

Those 2-hour blocks you waste every day writing “quick responses” to emails.

I know because I was there. Then I found Fyxer AI.

Here's what happened when I gave Fyxer AI access to my inbox:

  • It learned my writing style

  • Started drafting responses so good I hit send 80% of the time without editing

  • Began summarising my endless Zoom calls into actual action items

I can't help but smile every time I open my inbox and see it’s already organised and drafted the perfect responses for me.

Those mind-numbing hours spent staring at emails, trying to figure out how to respond?

Finally over.

Test it free for 14 days. The idea of manually writing emails now seems as outdated as using a flip phone.

Content I Enjoyed

Dario Amodei / Lex Fridman Podcast

Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity

Lex Fridman sat down with three key figures from Anthropic in what might be the most comprehensive conversation yet about the company's approach to AI development. I needed two coffee breaks to finish this one, as the episode is over 5 hours long!

Key takeaways from Dario Amodei's segment:

  • He believes we'll likely achieve AGI (or, as he prefers to call it, “powerful AI”) between 2026 and 2027, barring major obstacles. While he acknowledges potential delays, he says we're “rapidly running out of truly convincing blockers.”

  • Anthropic's “race to the top” strategy focuses on setting positive examples for the industry rather than competing directly. When one company adopts better safety practices, others often follow suit.

  • He predicts AI clusters will soon be capable of running millions of model instances, highlighting both the incredible potential and risks ahead.

  • The company's Responsible Scaling Policy defines AI Safety Levels (ASL) from 1 to 5, with current models at ASL-2. They're preparing for ASL-3, potentially as soon as next year.

Special mentions:

  • Amanda Askell provided fascinating insights into Claude's character development, revealing how they balance helpfulness with ethical behaviour.

  • Chris Olah explained their work on mechanistic interpretability—think of it as “neural network anatomy”—which could be crucial for understanding AI safety as systems become more powerful.

It has become abundantly clear that Anthropic is approaching AI development in a thoughtful, meticulous way.

They do this whilst still prioritising shipping speed and getting useful products to market fast.

Let’s see if the 2026-2027 AGI estimate proves accurate.

Idea I Learned

Emily Ryan, Alex Banks, and Jimmy Wai

Why hardware is the unsung hero in AI security

Recently, I had a conversation with Intel about AI security that changed my perspective on protection against AI threats.

While we often focus on software solutions, I learned that the real foundation of AI security starts at the hardware level—especially in this new world of generative AI.

Here's what caught my attention: When a Ferrari executive received a call from what seemed to be their CEO (but was actually an AI voice clone), they caught the impersonator with a simple question: “What book did you just talk about?” But what if the AI had been more sophisticated?

This is where hardware-level security becomes crucial. Intel showed me how their neural processing unit (NPU), which powers many of today's AI-enabled PCs, can detect threats locally, right on your device, before they even have a chance to cause damage.

It’s essentially using AI to fight AI. The same technology that creates threats can be harnessed in hardware to detect and prevent attacks in real time.

While we're all rushing to implement the latest security solutions, we might actually be overlooking the foundation that makes it all possible: the hardware itself.

It's a bit like building a house: you need a solid foundation before adding security features. Hardware with built-in security capabilities can, I feel, provide that essential foundation for protecting against AI threats.

Going forward, this will be a key step toward a safe future for us all in the era of AI.

For more info, check out my conversation with the Intel team.

Quote to Share

Jensen Huang on AI's impact on work:

“Instead of thinking about AI as replacing the work of 50% of the people, you should think that AI will do 50% of the work for 100% of the people... AI will not take your job. AI used by somebody else will take your job.”

This is a powerful reframe of the AI productivity narrative.

Rather than viewing AI through the lens of job displacement, he positions it as a universal productivity multiplier.

This aligns with Nvidia’s broader vision of AI as an industrial revolution, where every company must become an AI company. It's particularly relevant given their announcement with SoftBank to build Japan's largest AI factory—a 25-exaflop system that will serve as the foundation for Japan's AI infrastructure.

The message is clear: AI adoption isn't optional, it's imperative. As Jensen puts it, “How can any company afford not to produce intelligence? How can any country afford not to produce intelligence?”

Question to Ponder

“If robots have feelings, do they need rights?”

If AI becomes conscious, what moral dilemmas would present themselves?

Does this mean robots should have rights, too?

We can start by understanding where we are right now.

If an AI says it's sad today, it’s just words. It’s a neural network predicting the next word in a sequence based on the previous words.
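
To make that concrete, here’s a toy demonstration with a small open model (GPT-2 via Hugging Face’s transformers library, my choice for illustration): when it continues a sentence like “I am feeling”, it’s scoring likely next tokens, not reporting an inner state.

```python
# pip install transformers torch
from transformers import pipeline

# Load a small pretrained language model for next-word prediction.
generator = pipeline("text-generation", model="gpt2")

# The model simply extends the prompt with statistically likely tokens.
print(generator("I am feeling", max_new_tokens=5)[0]["generated_text"])
```

The output can sound heartfelt, but the mechanism is statistics.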

AI doesn’t have a heart.

It can’t feel real, genuine human emotion.

It doesn’t have the idiosyncrasies and quirks that make humans human.

However, we can look to industry and see this narrative developing.

Only 3 months ago, Anthropic quietly hired its first “AI welfare” researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection.

In the future, I see this debate boiling down to whether a robot feels more personal because it looks like us.

I believe humans will treat life-like humanoids differently from other robots because of our tendency to anthropomorphise.

We're likely to treat a humanoid robot that looks and acts like us very differently than we would a faceless AI system, even if their underlying architecture is similar. It's those life-like features—those visual cues that mirror humanity—that could gradually shape our moral intuitions about robot rights.

Whilst robots and AI don’t have feelings today, if they do tomorrow, we need to be ready.

I recommend three different resources for those who want to dive deeper into this topic:

  1. This article from The Times that inspired this question

  2. This blog post by Shakeel Hashim highlighting Anthropic’s “AI welfare” hire

  3. This piece by Benj Edwards on AI welfare becoming the new frontier in ethics

How was the Signal this week?

💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 40,000 dedicated AI readers?