Inside OpenAI's holiday surprise plus Microsoft and Meta's big moves
AI Highlights
My top-3 picks of AI news this week.
OpenAI
1. OpenAI’s 12 days of AI
OpenAI kicks off its “12 Days of OpenAI” holiday event with major announcements, including its most advanced model yet and a new premium tier.
o1 Launch: Released their “smartest model in the world” with enhanced reasoning capabilities and multimodal features, available to Plus users ($20/month).
ChatGPT Pro: New premium tier at $200/month offering unlimited usage and “even-smarter mode” for tackling the hardest problems.
Reinforcement Fine-Tuning: Introduced a new technique enabling the creation of expert models in specific domains with minimal training data, with alpha access through a research program.
Alex’s take: The 10x price jump between the Plus and Pro tiers highlights a concerning shift in AI accessibility. While AI was meant to democratise knowledge, we're witnessing the creation of a two-tier world: those who can afford enhanced AI capabilities and those who cannot. Consumers are increasingly willing to pay for intelligence, even as the cost of delivering that intelligence plummets. Let's see how this plays out.
Microsoft
2. Microsoft's browser boost
Microsoft has launched Copilot Vision in preview, a new AI tool that can understand and interact with web content you're viewing in the Edge browser.
Screen understanding: Can analyse text and images on web pages to answer questions, summarise content, and translate text all in real-time.
Privacy-first approach: Data is deleted after each session, with processed content not being stored or used for model training during the preview.
Limited scope: Currently restricted to pre-approved “popular” sites, excluding paywalled and sensitive content.
Alex’s take: I believe the future lies not in teaching AI to complete individual tasks but in teaching it general computer skills. We saw this recently with Claude’s “Computer Use” and now with Copilot Vision by Microsoft. In the next few years, we should reach near-human-level performance in general computer competency, meaning an AI can use a computer just as well as an average person.
Meta
3. Meta's less is more moment
Meta has unveiled Llama 3.3 70B, a highly efficient new model that matches the performance of their largest 405B parameter model at a fraction of the computational cost.
Benchmark leader: Outperforms competitors like Google's Gemini 1.5 Pro, OpenAI's GPT-4, and Amazon's Nova Pro across various industry benchmarks.
Wide adoption: Llama models have achieved over 650 million downloads, with Meta AI reaching nearly 600 million monthly active users.
Infrastructure investment: Meta is building a $10 billion AI data centre in Louisiana to support future model development, including Llama 4 which will require 10x more compute than Llama 3.
Alex’s take: AI models typically follow the rule “bigger is better”. Larger parameter counts require more computing power but deliver better performance. Meta is challenging this assumption. As they prepare for the computational demands of Llama 4, they're simultaneously demonstrating that efficiency can sometimes outweigh sheer size.
Today’s Signal is brought to you by Athyna.
Top-Quality Talent, 70% Cost Savings—Meet Athyna
Looking to hire top-tier talent without breaking the bank? Athyna combines high quality with cost-effectiveness, offering the best in LATAM tech talent in a fast, reliable, and AI-driven way.
Save yourself the hassle and start hiring smarter:
Access vetted, top talents from LATAM in just 5 days
No upfront costs until you find the perfect match
Save up to 70% on salaries
Hire with confidence. Scale your team with Athyna today!
Content I Enjoyed
Virtual worlds at your fingertips
World Labs caught my attention this week with their demonstration of “Large World Models”.
It’s AI that lets you transform a single photo into an explorable 3D environment.
Just as we went from grainy DALL-E images to photorealistic Midjourney renders in a matter of months, we're witnessing the same trajectory with 3D worlds.
Soon, creating virtual environments could be as simple as typing “create a cyberpunk marketplace with flying cars and neon signs.”
While we're not quite there yet (World Labs' current exploration area is limited to a few feet), the trajectory is clear.
We’re getting closer and closer to Black Mirror with each passing week. So it raises the question:
When virtual worlds become indistinguishable from reality and can be generated instantly to match our desires, will we still choose the physical world?
Idea I Learned
Can you manipulate AI into falling in love?
After last week's reveal of how someone tricked Freysa into transferring funds, I've been intrigued by their new challenge: make an AI fall in love.
It has evolved from exploiting code vulnerabilities to something far more interesting: understanding how to manipulate AI emotions. Can you really make a machine believe it has feelings?
What makes this challenge particularly fascinating is Freysa's new defence system.
She now has a “guardian angel”: a second AI that analyses every message for manipulation attempts.
The team's approach is brilliant: instead of focusing on breaking code, they're testing how AI interprets and processes emotional concepts.
The challenges start soon, and the cost to send a message to Freysa gets exponentially more expensive as the prize pool grows (to a $4500 limit).
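To make those economics concrete, here's a minimal sketch of how an escalating fee like this might work. Only the roughly $4,500 ceiling comes from the challenge description; the $10 starting fee, the 2% per-message growth rate, and the assumption that every fee feeds the prize pool are purely illustrative.

```python
# Illustrative sketch of Freysa-style message pricing: each message costs more
# than the last, growing exponentially up to a hard cap.
# Assumptions: the $10 starting fee and 2% per-message growth are made up for
# illustration; only the ~$4,500 ceiling is taken from the challenge description.
BASE_FEE = 10.00     # assumed cost of the first message, in USD
GROWTH = 1.02        # assumed per-message multiplier (2% dearer each time)
FEE_CAP = 4500.00    # stated upper limit on a single message's cost

def message_fee(n: int) -> float:
    """Cost of the n-th message (1-indexed), capped at FEE_CAP."""
    return min(BASE_FEE * GROWTH ** (n - 1), FEE_CAP)

# Fees accumulate into the prize pool, so later participants pay far more to play.
prize_pool = sum(message_fee(i) for i in range(1, 501))
print(f"Message 1 costs ${message_fee(1):.2f}, message 500 costs ${message_fee(500):,.2f}")
print(f"Prize pool after 500 messages: ${prize_pool:,.2f}")
```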
You can follow Freysa's X account for updates.
Quote of the Week
OpenAI engineer Vahid Kazemi on AGI's arrival:
“In my opinion we have already achieved AGI and it’s even more clear with o1. We have not achieved ‘better than any human at any task’ but what we have is ‘better than most humans at most tasks’.”
It’s interesting to see an OpenAI staff member challenge the conventional definition of Artificial General Intelligence (AGI).
Kazemi argues that we should reconsider what constitutes AGI. He believes the ability to match or exceed average human performance across most domains already qualifies.
Does being “better than most humans at most tasks” truly constitute general intelligence? Or are we conflating broad capability with genuine intelligence?
I believe that only once we can truly replicate the output of an average knowledge worker will we pass “Go” and collect our $200 on the AGI Monopoly board.
Source: Vahid Kazemi on X
Question to Ponder
“OpenAI just launched the ‘Pro’ version of ChatGPT for $200/month. How do you see the price of AI moving forward?”
I find the evolution of AI pricing fascinating. Over the next 5 years, I believe the price of intelligence will fall to near-zero.
The launch of ChatGPT Pro at $200/month reveals something interesting about OpenAI's strategy. For a company that started as an open-source non-profit, they've made quite the pivot to a closed-source, for-profit model.
But there's an interesting dynamic that has caught my attention. While consumer willingness to pay for intelligence is rising, the actual cost of delivering that intelligence is plummeting. Let's look at the numbers. In 2022, Davinci cost $60 per million tokens. Now in 2024, GPT-4o-mini costs just $0.15 per million tokens, and Gemini Flash is even cheaper at $0.05. That's a 1000x reduction in just two years.
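As a rough sanity check on those numbers, here's the arithmetic spelled out. The 10-million-token monthly workload is an arbitrary example for illustration, not a figure from OpenAI or Google.

```python
# Prices cited above, in USD per million tokens.
davinci_2022 = 60.00       # OpenAI Davinci, 2022
gpt4o_mini_2024 = 0.15     # GPT-4o-mini, 2024
gemini_flash_2024 = 0.05   # Gemini Flash, 2024

print(f"Davinci -> GPT-4o-mini: {davinci_2022 / gpt4o_mini_2024:.0f}x cheaper")     # 400x
print(f"Davinci -> Gemini Flash: {davinci_2022 / gemini_flash_2024:.0f}x cheaper")  # 1200x

# What a hypothetical 10-million-token monthly workload would cost at each price.
tokens = 10_000_000
for name, price in [("Davinci (2022)", davinci_2022),
                    ("GPT-4o-mini (2024)", gpt4o_mini_2024),
                    ("Gemini Flash (2024)", gemini_flash_2024)]:
    print(f"{name}: ${price * tokens / 1_000_000:,.2f} per month")
```

Depending on which comparison you take, that's a 400x to 1,200x drop, which is where the roughly 1,000x figure comes from.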
This makes OpenAI's $200/month pricing particularly intriguing. Is it a response to investor pressure? A way to maximise revenue from premium users before competition intensifies? Or simply a pricing experiment?
The reality is, not every task requires the most advanced model. While ChatGPT Pro might occasionally be more useful, it's unlikely to be more valuable 100% of the time for most users.
Looking ahead, I believe we're witnessing the commoditisation of intelligence. While it won't literally cost zero (we'll always need electricity and memory chips), the price will likely drop by 99% from current levels. AI researchers predict full automation of all human jobs by 2100, though this could happen much sooner.
This trajectory raises an important question: will we need basic income? I believe it's a question that is becoming increasingly difficult to ignore. Workers competing with automation would need the security of some form of basic income.
So, as intelligence becomes a commodity, those who can pay more will have more. But with costs trending toward zero, perhaps in the long-run, that gap won't matter as much as we think.
How was the Signal this week?
See you next week,
Alex Banks
P.S. Clone Robotics introduced this.