Gemini's memory milestone, Perplexity rewrites retail, and Mistral's visual leap
AI Highlights
My top-3 picks of AI news this week.
Google
1. Gemini's memory milestone
Google has announced a significant upgrade to Gemini, introducing persistent memory for premium users and marking a shift towards more personalised AI interactions.
Conversation memory: Gemini can now remember user preferences, interests, and work details across multiple conversations, eliminating the need for repetitive context-setting.
Premium feature: Available exclusively for Google One AI Premium subscribers at $20/month, initially launching on web platforms with mobile support coming soon.
Privacy focus: Google emphasises that saved information remains private and isn't used for model training, addressing potential security concerns.
Current scope: The feature is currently limited to English language users on the web client, with manual memory deletion options available.
Alex’s take: Microsoft AI CEO Mustafa Suleyman recently said that memory is the “critical piece”. Unlocking it removes the frustration of re-stating ideas and context every time you start a new chat. Long-term memory compounds over time, so I see this as a serious inflection point in the transition from chatbots to genuinely useful assistants.
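For the technically curious: Gemini's memory lives inside the Gemini app rather than the public API, but the underlying pattern is easy to sketch. Below is a rough illustration, assuming the google-generativeai Python SDK, of persisting user facts locally and feeding them back as system context in the next session. The JSON store, model name, and example facts are my own placeholders, not Google's implementation.

```python
# Conceptual sketch only: Gemini's memory feature is part of the Gemini app,
# not the public API. This shows the general pattern of persisting user facts
# between sessions and injecting them as context on the next call.
# Assumes the google-generativeai Python SDK; file path and model name are illustrative.
import json
from pathlib import Path

import google.generativeai as genai

MEMORY_FILE = Path("user_memory.json")  # hypothetical local store


def load_memories() -> list[str]:
    """Read previously saved facts about the user, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(fact: str) -> None:
    """Append a new fact (e.g. 'prefers concise answers') to the store."""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))


def chat_with_memory(prompt: str) -> str:
    """Answer a prompt with remembered facts supplied as system context."""
    genai.configure(api_key="YOUR_API_KEY")
    system = "Known facts about the user:\n" + "\n".join(load_memories())
    model = genai.GenerativeModel(
        model_name="gemini-1.5-pro",  # assumption: any chat-capable Gemini model works here
        system_instruction=system,
    )
    return model.generate_content(prompt).text


save_memory("Works in marketing and prefers bullet-point summaries.")
print(chat_with_memory("Draft a short update about our Q4 campaign."))
```

The compounding Suleyman describes comes from that loop: every saved fact means the next conversation starts further ahead.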
Perplexity
2. Perplexity rewrites retail rules
Perplexity has launched a new shopping feature for Pro users in the US, marking their entry into the e-commerce space.
Visual search cards: Presents product details, pricing, seller information, and pros/cons in an easy-to-digest format.
One-click checkout: Users can store payment details and addresses, with free shipping for Pro subscribers.
Merchant program: Allows sellers to improve their product visibility and leverage Perplexity's search API on their own websites.
Alex’s take: Perplexity has been hogging the AI limelight lately. Only last week, we talked about the introduction of advertising into its AI-powered search platform. Perplexity has highlighted its commitment to “unbiased” shopping recommendations without sponsored slots (separate from the ads it's running in search). In a space dominated by paid placements, this could be the novel approach we need, though maintaining that stance as they scale will be the true test.
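A quick technical aside on the merchant program above: Perplexity exposes an OpenAI-compatible chat completions endpoint, so a seller could wire its search into their own backend with a few lines of Python. Treat this as a rough sketch; the model name is my assumption and worth checking against Perplexity's current docs.

```python
# Minimal sketch: querying Perplexity's OpenAI-compatible API from a merchant's
# own backend. The model name is an assumption; check the current docs for the
# available web-connected ("online"/"sonar") models.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",
    base_url="https://api.perplexity.ai",  # Perplexity's API endpoint
)

response = client.chat.completions.create(
    model="llama-3.1-sonar-small-128k-online",  # assumption: web-connected model name
    messages=[
        {"role": "system", "content": "Answer with up-to-date product information."},
        {"role": "user", "content": "What are shoppers saying about ergonomic office chairs under $300?"},
    ],
)
print(response.choices[0].message.content)
```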
Mistral AI
3. Mistral's eye for AI
Mistral AI introduced Pixtral Large, a groundbreaking 124B-parameter multimodal model that sets new benchmarks in visual and textual understanding.
Performance leader: Achieves state-of-the-art results on MathVista (69.4%), outperforming competitors like GPT-4 and Gemini-1.5 Pro on complex mathematical reasoning over visual data.
Specifications: Features a 123B-parameter multimodal decoder and a 1B-parameter vision encoder, with a 128K-token context window capable of processing 30+ high-resolution images.
Real-world capabilities: Demonstrates advanced multilingual OCR, chart comprehension, and document understanding while maintaining Mistral Large 2's leading text capabilities.
Alex’s take: What fascinates me about Pixtral Large is how it reflects the rapid evolution of multimodal AI. Just one year ago, achieving this level of visual understanding seemed distant. Now, we're seeing models that can not only “see” but truly comprehend complex visual information across languages and formats. AI's understanding of the visual world is becoming remarkably human-like.
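If you want to try Pixtral Large yourself, here's a minimal sketch using Mistral's Python SDK. The model alias and image URL are placeholders of mine; double-check the current names in Mistral's documentation.

```python
# Minimal sketch: asking Pixtral Large to read a chart via Mistral's Python SDK
# (pip install mistralai). The model alias and image URL are assumptions;
# confirm both against Mistral's current documentation.
from mistralai import Mistral

client = Mistral(api_key="YOUR_MISTRAL_API_KEY")

response = client.chat.complete(
    model="pixtral-large-latest",  # assumption: alias for the 124B Pixtral Large model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarise the key trend in this chart."},
                {"type": "image_url", "image_url": "https://example.com/revenue-chart.png"},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The same message format handles documents and multilingual OCR-style prompts: you just swap the image and the instruction.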
Today’s Signal is brought to you by The Daily Upside.
Savvy Investors Know Where to Get Their News—Do You?
Here’s the truth: there is no magic formula when it comes to building wealth.
Much of the mainstream financial media is designed to drive traffic, not good decision-making. Whether it’s disingenuous headlines or relentless scare tactics used to generate clicks, modern business news was not built to serve individual investors.
Luckily, we have The Daily Upside. Created by Wall Street insiders and bankers, this fresh, insightful newsletter delivers valuable insights that go beyond the headlines.
And the best part? It’s completely free. Join 1M+ readers and subscribe today.
Content I Enjoyed
The Dawn of the AI PC Era
After visiting London's new AI Experience store this week, I couldn't help but reflect on how our interaction with computers is about to change.
Something that stood out to me was how the industry is quietly shifting from “computers that can run AI” to “AI that happens to be in a computer.” It's a subtle but profound difference that reminds me of the birth of the smartphone—when “phones with internet” became just “phones”.
Intel is building the foundations of the AI PC. Take deepfake detection: just a year ago, it required cloud processing and specialised software. Intel showed me how it worked with McAfee to optimise the experience.
Now, it's becoming a standard feature that runs locally on new devices, much like how spell-check evolved from a separate program to an integrated feature we barely think about.
The battery life improvements in their new generation of AI PCs are particularly telling. We're seeing devices that can run complex AI workloads for 15+ hours. I see AI becoming an always-on utility, much like WiFi or Bluetooth.
While we often focus on flashy AI applications like ChatGPT and Midjourney, the real revolution might be happening in silicon. Today, the Neural Processing Unit (NPU) feels much like the Graphics Processing Unit (GPU) of the early 2000s. Soon, we won't remember a time when our computers didn't have one.
I can't help but feel this is the beginning of a fundamental shift in how we interact with technology in the years to come.
I really enjoyed visiting Intel’s AI Experience store and learning about this first-hand. If you want to dive in further, check out my video here.
Idea I Learned
AI Avatars: The ultimate work-life balance hack or a step too far?
This week, I discovered Pickle—a startup that's pushing the boundaries of what's possible with AI avatars.
Their proposition is fascinating: submit a 5-minute video of yourself, and 24 hours later, you have a digital clone ready to sit through those marathon Zoom calls while you... well, do whatever you want.
The technology works with major platforms like Zoom, Google Meet, and Teams (though currently only on macOS). At $300-$1,150 per year, it's not cheap. But what price do you put on digital freedom?
The ethical implications are vast. Are we entering an era where “being present” in our ever-evolving virtual world loses its meaning? How can we tell whether we're talking to a real person or their AI stand-in?
While I appreciate the innovation, I can't help but wonder if we're solving the wrong problem. Instead of creating AI avatars to attend unnecessary meetings, perhaps we should be questioning why we have so many meetings in the first place.
As Elon Musk stated in an email to Tesla employees, “Excessive meetings are the blight of big companies and almost always get worse over time.”
That said, I do feel it’s a fascinating glimpse into how AI could reshape our work culture. Don't blame me if your boss isn't thrilled about your digital avatar taking notes while you're actually riding waves at the beach.
USCC Commissioner Jacob Helberg on the AI development race:
“We've seen throughout history that countries that are first to exploit periods of rapid technological change can often cause shifts in the global balance of power. China is racing towards AGI... It's critical that we take them extremely seriously.”
This statement comes as the U.S.-China Economic and Security Review Commission (USCC) proposes a Manhattan Project-style initiative for AI development—a significant parallel to the WWII-era collaboration that led to transformative technological advancement.
The proposal emphasises three key points:
Public-private partnerships are crucial for advancing artificial general intelligence
Energy infrastructure and data centre development need streamlined processes
The US needs to match China's pace in AI development to maintain technological leadership
This aligns with recent calls from major AI companies like OpenAI for increased government funding in artificial intelligence development. The historical parallel to the Manhattan Project suggests just how seriously U.S. policymakers are taking the AI race with China.
Source: Reuters
Question to Ponder
“As AI-generated content becomes more prevalent, how do we strike the right balance between embracing innovation and maintaining authentic human connection?”
I've been thinking about this a lot since seeing the new Coca-Cola Christmas ad.
The wheels that don't spin properly, the odd proportions, the ultra-short shots.
It all feels like AI for AI's sake rather than AI for creativity's sake.
What fascinates me most is how this campaign has challenged our deep emotional connection with their original 1995 “Holidays are Coming” ad.
That ad wasn't just about a red truck with twinkling lights. It was about the warmth, joy, and genuine human connection that defines the holiday season.
When billion-dollar companies like Coca-Cola choose AI exclusively over traditional production methods (or a human + AI combo), it raises an important question: Are we sacrificing emotional resonance for technological novelty?
But maybe we're looking at this all wrong. Perhaps Coca-Cola has achieved exactly what it wanted: people talking about their brand during the holiday season. The controversy itself has become the marketing win.
Still, I can't help but think of what Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI, said:
“As artificial intelligence evolves, we must remember that its power lies not in replacing human intelligence, but in augmenting it. The true potential of AI lies in its ability to amplify human creativity and ingenuity.”
AI should be the backup singer, not the lead vocalist. The moment brands forget this, they lose their soul.
Consumers can smell inauthenticity from a mile away, whether it's AI-generated or not.
Let me know what you think of the ad.
How was the signal this week?
See you next week, Alex Banks