xAI unveils Grok 3, Google’s Veo 2, and Microsoft’s Majorana 1 chip


AI Highlights
My top-3 picks of AI news this week.
xAI
1. xAI Unleashes Grok 3
xAI has launched Grok 3, marking a significant advancement in AI capabilities with impressive technical specifications and features:
Massive compute: Leveraged 200,000 GPUs for training, representing a 10x increase from Grok 2.
Performance: Highest-ever Elo score (1400) in the LMSYS Chatbot Arena, whilst outperforming GPT-4o and Gemini 2.0 Pro on benchmarks (see the quick note below on what an Elo gap actually means).
Advanced features: Introduces “Big Brain” mode for complex reasoning and “DeepSearch” for enhanced information retrieval.
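A quick aside on that Arena figure: Elo ratings translate directly into expected head-to-head win rates, so the gap between models matters more than the raw number. The formula below is the standard Elo expectation; the 1360 rival rating is purely illustrative, not a real model's score.

```python
# Standard Elo expectation: a model rated r_a beats a model rated r_b
# in a head-to-head vote with probability 1 / (1 + 10**((r_b - r_a) / 400)).
def elo_win_probability(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Illustrative only: a 1400-rated model vs a hypothetical 1360-rated rival.
print(f"{elo_win_probability(1400, 1360):.1%}")  # ~55.7%
```

In other words, a 40-point Arena lead means winning roughly 56% of blind head-to-head votes: a real edge, but not a landslide.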
Alex’s take: xAI intends to open-source Grok 2 once Grok 3 stabilises, while also integrating with Palantir's Artificial Intelligence Platform. I think we’ll see performance continue to improve as data centre GPU count increases and the GPUs are upgraded from Nvidia H100s to B200s. It’s also important to remember that Grok 3 was built in months, not years. Timelines are compressing, and new model development is accelerating.
Google
2. Google’s Veo 2 Takes The Stage
Google has launched Veo 2, their most advanced AI video model to date, making its debut exclusively through Freepik's AI Suite.
Unprecedented realism: The model excels at generating high-fidelity textures for everything from skin and fur to liquids, making it particularly effective for macro shots.
Physics-driven motion: Advanced understanding of real-world physics enables superior motion sequencing and detailed animations that closely mirror natural movement.
Complete creative toolkit: Integrated features include image-to-video conversion, audio generation, and multilingual lip-sync capabilities through Freepik's platform.
Alex’s take: While text-to-video has been around for a while, the gap between AI-generated and real footage has remained noticeable. What excites me about Veo 2 is how it’s tackling the hardest problems in video generation—physics, textures, and temporal consistency. I think we’re a step closer to AI-generated video becoming truly viable for professional content creation, not just experiments.
Microsoft
3. Microsoft's Quantum Leap
Microsoft has announced a breakthrough in quantum computing with their new Majorana 1 chip, utilising a new class of materials called “topoconductors” to create a topological state of matter that supports stable qubits.
Stable qubits: The chip leverages topological properties to create more stable quantum bits, addressing one of quantum computing's biggest challenges.
Unprecedented scale: The technology could potentially support up to 1 million qubits on a single chip, with individual qubits measuring just 1/100th of a millimetre.
Accelerated timeline: Microsoft CEO Satya Nadella suggests this breakthrough could bring meaningful quantum computing within years rather than decades.
Alex’s take: Current AI models are limited by classical computing power—just training GPT-4 reportedly used over 25,000 NVIDIA GPUs. Quantum computers could theoretically process these complex AI calculations exponentially faster, potentially allowing us to train models in hours instead of months. I’m keeping my fingers crossed for the first practical quantum computing use case over the next few years.
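To make “exponentially” concrete: merely simulating n qubits on a classical machine means tracking 2^n complex amplitudes, which is why qubit counts in the hundreds leave classical hardware behind entirely. A rough back-of-the-envelope sketch, assuming 16 bytes per amplitude (the figures are illustrative):

```python
# Classical state-vector simulation of n qubits stores 2**n complex
# amplitudes. At ~16 bytes per amplitude, memory grows exponentially.
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    print(f"{n:>3} qubits -> {amplitudes:.2e} amplitudes, "
          f"~{amplitudes * 16 / 2**30:.2e} GiB")
```

At 50 qubits you're already past the memory of any single machine; at 300, past the number of atoms in the observable universe. That's the scale of headroom a million-qubit chip would represent.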
Today’s Signal is brought to you by GrowHub.
Want to know why most people fail on LinkedIn?
They spend hours writing mediocre posts that get zero engagement.
I know because I used to be one of them.
I built an audience of 120K+ followers by turning my ideas into viral content in minutes (not hours).
The secret? GrowHub.
Built specifically for busy professionals who want to scale their personal brand, clients and authority on LinkedIn.
Drop in any:
YouTube video
Blog post
Image
And watch GrowHub's AI (trained on thousands of viral posts) transform it into engaging content.
Content I Enjoyed
Figure's Helix: A Leap Towards Human-Like Robot Intelligence
This week Figure AI announced “Helix”, a Vision-Language-Action (VLA) model designed to give robots more human-like reasoning and control.
It consists of a single neural network that learns all behaviours: it’s the first VLA model to control a humanoid’s entire upper body, and the first to operate two robots simultaneously.
What particularly grabbed my attention was their two-system model for Helix.
System 1 vs System 2 thinking was popularised by the book “Thinking, Fast and Slow” by Daniel Kahneman. System 1 is quick, instinctive, and automatic, whereas System 2 is rational, slower, and conscious.
Most of today’s AI is only capable of System 1-style thinking: it imitates human responses, which makes it hard to exceed human-level accuracy when a model is trained only on human data.
Figure’s Helix incorporates both systems: System 2 “thinks slow” about high-level goals. This involves scene understanding and language comprehension.
System 1 “thinks fast” to execute and adjust actions in real time, translating System 2’s plans into continuous robot actions. (A rough sketch of this slow/fast split follows below.)
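Here’s a minimal sketch of how a slow planner and a fast controller can be composed, purely to illustrate the idea. The class names, loop rates, and latent-vector interface are my assumptions, not Figure’s actual implementation:

```python
import time

class SlowSystem2:
    """'Thinks slow': scene understanding and language comprehension,
    run at a low frequency to produce a latent goal vector."""
    def plan(self, frames, instruction):
        return [0.0] * 512  # placeholder latent; a VLM would produce this

class FastSystem1:
    """'Thinks fast': a high-frequency visuomotor policy that turns the
    latest latent goal into continuous robot actions."""
    def act(self, frames, latent_goal):
        return {"joint_targets": [0.0] * 35}  # placeholder action

def control_loop(instruction, get_frames, send_action,
                 slow_hz=8, fast_hz=200, duration_s=5.0):
    s2, s1 = SlowSystem2(), FastSystem1()
    latent = s2.plan(get_frames(), instruction)
    last_plan = time.monotonic()
    deadline = last_plan + duration_s
    while time.monotonic() < deadline:
        now = time.monotonic()
        if now - last_plan >= 1.0 / slow_hz:       # refresh the plan slowly
            latent = s2.plan(get_frames(), instruction)
            last_plan = now
        send_action(s1.act(get_frames(), latent))  # act fast on latest plan
        time.sleep(1.0 / fast_hz)
```

The design point is that the controller never waits on the planner: it always acts on the freshest latent available, so the robot stays responsive even while the slower model is still thinking.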
It seems like we may only be a couple of years away from humanoids becoming prevalent in the home.
What’s more, Figure AI is now in talks to raise $1.5 billion in Series C funding which would value the company at $39.5 billion.
It’s clear there’s tremendous amounts of capital and talent flowing into this sector. I personally can’t wait for humanoids to walk into the home and help with the dishes, do the washing up, or even fold the laundry. That is to say, as long as they remember to close the fridge door first.
Idea I Learned
The Chained Robot Dog
I can't stop thinking about this art exhibit in Japan.
Japanese artist Takayuki Todo created an installation of a chained robot dog programmed to attack visitors. It's designed to highlight our need for AI safety.
The timing is rather apt: we’re already seeing drone warfare, and US Marines are testing armed robot dogs. Military AI development is accelerating globally.
However, as Palantir's CEO Alex Karp recently noted, “We must not shy away from building sharp tools for fear they may be turned against us.”
What makes this exhibit so powerful is it forces us to confront an uncomfortable truth: the chain is the only thing holding the robot back. Its objectives can be changed at will, and it follows them without emotion or empathy.
As we rush to deploy increasingly powerful AI systems, perhaps the most crucial safeguard isn’t technical constraints but careful consideration of who controls these technologies and why.
After all, the real danger isn't in the machines we're building but in who holds the remote.
This brought me back to Isaac Asimov’s Three Laws of Robotics, which set out guidelines for how robots should behave. I thought they’d be worth highlighting here:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Even though these rules were introduced in his 1942 short story “Runaround”, I think they’re more important today than ever before, as robotic applications continue to multiply in both constructive and increasingly destructive ways.
It is vital we build a baseline level of awareness through global discourse and cooperation to impose the necessary safeguards before we’re too far down the line.
This makes me wonder if tech companies will collaborate on standardised safety protocols, or if we'll end up with fragmented approaches.
If you happen to be in Tokyo, the display is held at Toda Hall & Conference until February 24th.
OpenAI expands Operator's global reach:
Operator is now rolling out to Pro users in Australia, Brazil, Canada, India, Japan, Singapore, South Korea, the UK, and most places ChatGPT is available.
Still working on making Operator available in the EU, Switzerland, Norway, Liechtenstein & Iceland—we’ll keep you updated!
— OpenAI (@OpenAI)
7:02 AM • Feb 21, 2025
“Operator” is an AI agent designed to handle everyday tasks through web browsing and direct integration with major service providers.
It’s capable of handling complex tasks like flight bookings, concert ticket purchases, grocery ordering, and restaurant reservations.
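Agents like this typically follow an observe-reason-act loop over a browser. The sketch below is entirely hypothetical (none of these names are OpenAI’s API, and “browser” and “model” stand in for whatever automation and LLM backend Operator actually uses); it’s only meant to show the shape of such a loop:

```python
# Hypothetical observe-reason-act loop for a browser agent.
# Illustrative only: not OpenAI's implementation or API.
def run_agent(goal, browser, model, max_steps=20):
    for _ in range(max_steps):
        state = browser.observe()                 # e.g. screenshot + page text
        action = model.next_action(goal, state)   # e.g. click, type, or done
        if action["type"] == "done":
            return action["summary"]
        browser.execute(action)                   # carry it out on the page
    return "stopped: step budget exhausted"
```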
However, it’s currently reserved for those paying $200/mo for ChatGPT’s Pro plan.
I personally can’t justify the cost for the productivity gains, so I’ll be waiting to see Operator arrive on the “Plus” plan ($20/mo). Needless to say, I’m excited about the future of agentic AI and hope the entire LLM ecosystem keeps shipping meaningful developments that aren’t stuck behind hefty paywalls.
Source: OpenAI on X
Question to Ponder
“What Large Language Model do you use the most and why?”
If you had asked me this question in mid-2023, I would have answered “ChatGPT”. At that time, GPT-4 had recently taken the world by storm, and I’d been using ChatGPT since its release in November 2022. It seemed like a one-horse race.
However, in February 2025, my answer has changed dramatically.
We now have Grok, Claude, Gemini, Llama, and ChatGPT battling it out in the frontier model arena.
And I personally use all of them.
Grok is great for fact-checking statistics. Claude is great for writing and personality. Gemini is great for deep research. ChatGPT is great for web search and everyday use.
Large Language Models (LLMs) are much like the building blocks of the AI age: one size won’t fit all, yet each is useful in its own way for a particular task.
What’s more, increased competition means lower usage costs and more powerful tools for everyone. This is something I’m hugely excited about.

How was the signal this week?
See you next week, Alex Banks