Adobe's AI steals the show, Google goes nuclear, and when less is Mistrally more
AI Highlights
My top-3 picks of AI news this week.
Adobe MAX / Alex Banks
Adobe
1. Adobe’s AI steals the show
Adobe has taken a giant leap in AI-driven creativity with a series of groundbreaking announcements at their Adobe MAX conference in Miami. Here's the signal:
Firefly Video Model: Adobe’s first publicly available video model designed to be commercially safe, having been trained only on content they have the rights to use.
Generative Extend: A Firefly-powered video editing tool that extends clips and fills gaps in footage, working with both video and audio.
Distraction Removal: Removes unwanted objects in your image, like people, wires and cables, whilst preserving the integrity of the background.
Alex’s take: The demand for content has never been greater, and Adobe is clearly passionate about equipping creators with the tools to meet this demand.
I had a fantastic time with Adobe at the conference; they are empowering creators to push the boundaries of what's possible. Whilst there are too many announcements to mention here, I see huge potential in these top three alone.
You can sign up for the Firefly waitlist here.
Google
2. Google goes nuclear
Google is partnering with nuclear startup Kairos Power to accelerate the clean energy transition across the US to support AI technologies.
First-of-its-kind deal: Signed the world’s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) developed by Kairos Power.
Supporting AI growth: The agreement aims to meet the growing energy demands of AI technologies, providing clean and reliable power for data centres.
Economic impact: Kairos Power’s reactors will produce 500 MW of new 24/7 carbon-free power to US electricity grids by 2030, creating high-paying, long-term jobs and decarbonising the grid.
Alex’s take: I often refer back to Josh Wolfe’s (co-founder of Lux Capital) description of nuclear power as “elemental energy”. I think there are too many negative connotations attached to the word “nuclear”, given our history. I expect we’ll see more and more investment in elemental energy over the next three years to meet our growing demand for training foundational AI models.
Mistral AI
3. When less is Mistrally more
Mistral AI has introduced two new state-of-the-art models for on-device computing and edge use cases: Ministral 3B and Ministral 8B, collectively known as “les Ministraux.”
Advanced capabilities: These models set a new frontier in efficiency and performance in the sub-10B parameter category, outperforming competitor models from Google and Meta.
Large context length: Both models support up to a 128K context length, allowing for more extensive data processing.
Use cases: These models are ideal for on-device translation, offline smart assistants, local analytics, autonomous robotics, and orchestrating agentic workflows.
Alex’s take: It’s only been one year since Mistral released their 7B model. Today, their smaller 3B-parameter model outperforms it across most benchmarks. I think it’s remarkable how quickly intelligence is scaling: models are becoming faster and more energy-efficient with fewer parameters.
Today’s Signal is brought to you by Writer.
The fastest way to build AI apps
Writer is the full-stack generative AI platform for enterprises. Quickly and easily build and deploy AI apps with Writer AI Studio, a suite of developer tools fully integrated with our LLMs, graph-based RAG, AI guardrails, and more.
Use Writer Framework to build Python AI apps with drag-and-drop UI creation, our API and SDKs to integrate AI into your existing codebase, or intuitive no-code tools for business users.
Content I Enjoyed
Adobe Sneaks
While attending Adobe MAX this week in Miami, I had the chance to experience the MAX Sneaks session.
This is where Adobe engineers unveil early-stage projects showcasing potential future technologies.
You can watch Sneaks on YouTube—they’re super impressive and entertaining to watch (I feel some of them should do standup on the side).
Here are some standout projects:

Rotating 2D art in 3D
Purpose: Rotates 2D vector art in a 3D space without losing its 2D appearance.
Features: Simple slider-based rotation, mimicking the manipulation of 3D objects.
Benefit: Adds a third dimension to 2D artwork with minimal effort.

Sketch-to-design generation
Purpose: Converts sketches into finished designs using generative AI.
Features: Personalises creative assistance to maintain your style across different canvases.
Benefit: Makes the transition from initial sketches to editable graphics effortless.

Seamless object insertion
Purpose: Adds people and objects seamlessly into images.
Features: Adjusts colour, lighting, and shadows for natural blending using advanced masking technology.
Benefit: Makes it easy to enrich images with new elements while maintaining realism.
Just to be clear, none of this is sponsored—I actually had a blast, and now I'm officially a fan. Adobe’s been taking their time, perfecting their craft and doing it right.
Being there in person, I loved the energy of the audience. Even if a demo didn’t necessarily go to plan, it was met with cheers and support from the crowd.
I think it’s a testament to their community that people actually want to use these tools and support their favourites, even when one happens to be a bit rough around the edges.
You can’t beat live feedback, and I honestly think a lot of other big tech companies could follow this “startup” mentality.
Idea I Learned
Perplexity / perplexity.ai
The future of research with AI
This week, The New York Times slapped Perplexity with a cease and desist, asking them to stop using the newspaper's content without permission.
News outlets are getting increasingly nervous—and I wouldn’t blame them. Lost traffic means lost revenue.
Despite the controversy, Perplexity remains one of my favourite tools for research. Here's how you can get started with it:
Visit perplexity.ai
Experiment with Quick Search to get familiar with the basic features.
Explore different queries and see how it references source material across mediums.
To help you get started, I've created a short video tutorial that walks you through the basics. Be sure to check it out before diving in.
Quote of the Week
Yann LeCun on the timeline for Human-Level AI:
“Reaching human-level AI will take several years if not a decade.”
Yann LeCun's perspective aligns closely with Sam Altman's estimate of “several thousand days” (6 to 9 years) for achieving Artificial General Intelligence (AGI). While both agree that AGI is not imminent, LeCun believes the timeline could extend even further due to the complex and often unpredictable nature of AI development.
We may take longer than the optimistic projections from other industry leaders like Anthropic CEO Dario Amodei, who has hinted at AGI as early as 2026.
Source: Yann LeCun on X
Question to Ponder
“With Tesla’s recent progress on humanoid robots, are we headed towards a single dominant player in the humanoid robotics industry, or will we see multiple winners in different sectors?”
I saw a great answer to this recently on X by Robert Scoble.
Tesla’s Optimus humanoid demo at their “We, Robot” event certainly turned heads, but does this mean Tesla will dominate the humanoid robotics space? Not necessarily.
Tesla is focusing on building humanoids for manufacturing and consumer applications.
I see serious opportunities in niche markets that aren’t Tesla’s focus, such as medicine and healthcare.
Whilst the humanoid market is vast, with companies like Boston Dynamics, 1X Technologies, and Figure all pushing forward, I can’t help but love Brett Adcock’s take on Robert’s opinion.
"I really hate these 'the market is so big, there are lots of winners' predictions. You have to be the least competitive person in the world to be fired up about this. If Figure doesn’t win, please obliterate us."
It seems the competition might be fiercer than anticipated.
How was the signal this week?

See you next week,
Alex Banks