Google's Search War, Anthropic's $3.5B Boost, and Musk's AI Megacampus

AI Highlights

My top-3 picks of AI news this week.

Google AI Mode (image courtesy of Google / Inc)

Google
1. Google’s AI is After Perplexity

Google is rolling out significant AI enhancements, including “AI Mode” and a new embedding model, with a focus on making AI more accessible and practical for everyday use.

  • AI Mode in Search: A new experiment that expands on AI Overviews with advanced reasoning capabilities, allowing users to ask anything and engage in follow-up questions with helpful web links.

  • Availability: Currently available in English to Google One AI Premium subscribers in the US who are 18+ and opt-in via Labs, with plans to expand to more countries and users after initial testing.

  • Gemini Embedding: A new text embedding model that translates words and phrases into numerical representations, supports over 100 languages, and accepts larger chunks of text and code. A quick sketch of how these representations get used follows below.
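
Embeddings make more sense with a concrete example. Below is a minimal sketch: the `embed` function is a stand-in that returns hand-made toy vectors rather than calling the actual Gemini Embedding API, but the cosine-similarity comparison is the standard way these numerical representations are used to judge whether two pieces of text mean similar things.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g. Gemini Embedding).
    A real model maps text to a high-dimensional vector; these tiny
    hand-made vectors exist only so the example runs."""
    toy_vectors = {
        "How do I reset my password?":   np.array([0.9, 0.1, 0.0]),
        "I forgot my login credentials": np.array([0.8, 0.2, 0.1]),
        "Best pizza toppings":           np.array([0.0, 0.1, 0.9]),
    }
    return toy_vectors[text]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Texts with similar meaning get vectors pointing in similar
    # directions, so their cosine similarity lands near 1.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embed("How do I reset my password?")
print(cosine_similarity(query, embed("I forgot my login credentials")))  # ~0.98, similar meaning
print(cosine_similarity(query, embed("Best pizza toppings")))            # ~0.01, unrelated
```

This is the pattern behind semantic search and retrieval: embed everything once, then compare vectors instead of raw text.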

Alex’s take: It seems like Google has now made a carbon copy of Perplexity’s core product offering—providing AI summary answers with citations. Google’s 90% share in traditional search far exceeds Perplexity’s estimated 0.1%–0.2% of the overall market today. It was only a matter of time before Google’s distribution advantage caught up with Perplexity’s business model.

Anthropic
2. Anthropic's $3.5B Boost

AI startup Anthropic has secured a massive $3.5 billion in Series E funding at a $61.5 billion post-money valuation, led by Lightspeed Venture Partners.

  • Strategic focus: Anthropic will use this investment to develop next-generation AI systems, expand compute capacity, deepen research in interpretability and alignment, and accelerate international expansion.

  • New model release: The funding follows the recent launch of Claude 3.7 Sonnet, a “hybrid reasoning” model designed to more carefully consider queries before answering.

  • Growing business: Anthropic's annual revenue run rate has increased by 30% so far in 2025, from its $1B figure in 2024, though the company expects to burn $3 billion this year on development costs.

Alex’s take: I’ve been a long-time user of Anthropic’s products since the release of Claude in 2023. There’s just something about interacting with the model: it has more personality and feels more vibrant and fun compared with the rather sterile GPT series from OpenAI. Model characteristics will play an essential part in determining who comes out on top in the long run.

xAI
3. Musk's AI Megacampus

Elon Musk's AI company, xAI, is dramatically expanding its infrastructure with a massive 1 million-square-foot property acquisition in Southwest Memphis.

  • Ambitious scaling: The new property will complement xAI's existing Memphis data centre “Colossus” housing 200K GPUs, creating one of the largest AI computing hubs in the world.

  • Power hunger: The current facility draws 250 MW of power. The new data centre will require 1.21 GW (roughly five times more) as it scales to 1M GPUs, the same amount of power required by the plutonium-powered flux capacitor to enable time travel in Back to the Future.

  • Hardware capacity: xAI is aggressively investing in AI hardware, recently setting up a $700M data centre in Atlanta and signing a $5 billion server deal with Dell.

Alex’s take: This goes to show we have a way to go before we hit the asymptote of scaling compute capacity to improve model performance. It makes me think about what will happen when we top out with the current transformer architecture of LLMs—3M GPUs, 10M GPUs? It excites me to think what a new future architecture might look like as humanity continues scaling intelligence.

Today’s Signal is brought to you by Athyna.


Content I Enjoyed

Hao AI Lab / X

Super Mario: The New AI Benchmark

Is Super Mario the new eval for LLMs?

Hao AI Lab at UC San Diego threw various AI models into live Super Mario Bros games last week. While Claude-3.7 had already been tested on turn-based games like Pokémon Red, seeing AI tackle real-time games presented an entirely different challenge.

Claude-3.7 came out on top, while Claude-3.5 showed strength but struggled with complex manoeuvres. Meanwhile, Gemini-1.5-pro and GPT-4o performed less well in the Mario environment.

The game ran in an emulator integrated with their in-house framework, GamingAgent, which fed the AI basic instructions like “If an obstacle or enemy is near, move/jump left to dodge” alongside game screenshots. The AI then generated Python code to control Mario.

Interestingly, the researchers found that reasoning models like OpenAI's o1 performed worse than “non-reasoning” models, despite being generally stronger on most benchmarks. The main reason? In Super Mario Bros., timing is everything—and reasoning models take seconds to decide on actions when a split-second can mean the difference between success and failure.
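
To make that concrete, here’s a rough sketch of what a real-time game-agent loop looks like. This is not the actual GamingAgent code; every function is a labelled stand-in, and the half-second `time.sleep` simply mimics model latency to show why a model that deliberates for seconds falls hopelessly behind a game measured in frames.

```python
import time

FRAME_BUDGET = 1 / 30  # a generous per-decision budget; the game itself runs at ~60 fps

def capture_frame():
    """Stand-in for grabbing a screenshot from the emulator."""
    return "<frame pixels>"

def query_model(frame, instructions):
    """Stand-in for the vision-language model call. A fast 'non-reasoning'
    model might answer in ~0.5s; a reasoning model can take several seconds."""
    time.sleep(0.5)  # simulated model latency
    return "press('right'); press('A')"  # the model replies with control code as text

def execute(action_code):
    """Stand-in for turning the model's output into emulator inputs."""
    print(f"executing: {action_code}")

instructions = "If an obstacle or enemy is near, move/jump left to dodge."

for _ in range(3):  # a few decision cycles
    start = time.time()
    action = query_model(capture_frame(), instructions)
    execute(action)
    latency = time.time() - start
    # By the time a slow model answers, the frame it reasoned about is long
    # gone -- which is why lower-latency models dominate this benchmark.
    print(f"decision took {latency:.2f}s (~{int(latency / FRAME_BUDGET)} frames behind)")
```

Swap the stand-ins for a real emulator hook and a real model call and the structure stays the same: the whole loop has to finish in a fraction of a second to keep up.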

This inspired me to create my own Space Invaders-themed game using Grok 3 + Claude 3.7. As someone who taught themselves to code using AI, I find it incredible to see what can now be achieved with these tools. The democratisation of creative coding means a technical background is no longer a prerequisite. English is the new programming language.

Check out my game here.

Idea I Learned

Joaquin Phoenix / “Her” (2013)

The Hidden Tension Between AI and Social Health

This week I listened to Kasley Killam's SXSW talk on social health.

The idea behind the talk was that the more we turn to AI for connection, the more we may be undermining the very relationships that keep us alive.

Killam noted that “hundreds of millions of people around the world are turning to AI for companionship and for love.” In her research, she created her own AI friend and observed how people describe these tools as companions, friends, lovers, and even spouses.

On one hand, she's “concerned that we have created a culture where people feel like they need to turn to AI for companionship.” On the other hand, she acknowledges AI can be valuable when used as just one of many connection sources.

Taken to its extreme, this future looks a bit like the movie “Her”, with Theodore, played by Joaquin Phoenix, falling in love with his AI companion Samantha (voiced by Scarlett Johansson).

AI has the potential to become far more capable as we extend beyond real-time text generation to voice, and eventually video, creating truly immersive experiences.

But right now, it’s important we educate ourselves on how AI actually works today: a model trained on a huge corpus of text, simply trying to predict the next word it will say. It’s this stark contrast with real human connection that gives me hope we’ll live in a future of abundant human connection, whilst intelligent AI supports our relationships. Not the other way around.
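
For intuition, here’s a toy version of that “predict the next word” idea. It is nothing like a real LLM (no neural network, just word counts over a few sentences), but the core loop is the same: given the words so far, pick a likely next word and repeat.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- a real model is trained on trillions of words.
corpus = "i love my dog . i love my friends . my friends love me .".split()

# Count which word tends to follow which (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

# Generate a short continuation starting from "i".
word, output = "i", ["i"]
for _ in range(4):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # -> "i love my friends ."
```

A real model replaces the word counts with billions of learned parameters and a far richer notion of context, but at bottom it is still choosing a plausible next word.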

Quote to Share

Brett Adcock on Figure AI becoming the 6th most sought-after private company:

Figure AI now sits ahead of OpenAI in the rankings for the most sought-after venture companies in the secondary market.

This comes off the back of their announcement of Helix, Figure’s Vision-Language-Action (VLA) model that makes robots think like humans.

Another interesting note—4 out of the top 16 companies have been founded by Elon Musk: SpaceX, xAI, OpenAI, and Neuralink.

Question to Ponder

“Isn't job replacement the point of AI?”

I see both profound vision and legitimate concern in this question.

Our economic systems were built around human labour as the primary means of distributing resources. When that foundation shifts, everything must adapt.

With AI eventually driving the cost of intelligence and the cost of labour to near-zero, we need to think about how we navigate a society like this.

Without planning for this transition, we could face a period where jobs disappear faster than new opportunities emerge. Who “controls the robots”, how the benefits will be distributed, and whether our political systems can adapt quickly enough are questions we as a society need to answer soon.

Perhaps the question isn’t whether AI will replace jobs, but what comes next: whether we’ll use that replacement to create abundance for all or to concentrate power and resources even further. I think the latter is the most dangerous outcome, and I believe society needs to focus on (1) broad baseline education in using these tools, and (2) open-sourcing as many of these developments as possible so that everyone can benefit from and build on the productivity gains.

Right now, a $200/mo subscription to ChatGPT Pro is out of reach for many—especially with the introduction of new (and potentially even more expensive) agentic capabilities being layered on top in the coming months.

How was the signal this week?


💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 45,000 dedicated AI readers?