
The Signal: There’s something about Mark, ChatGPT voice mode, and Google lets AI take the wheel

AI Highlights

Mark Zuckerberg / Bloomberg

At Meta's recent Connect conference, Mark Zuckerberg unveiled some serious innovations that hint at the future of AI and augmented reality. Here are the key highlights:

  • Llama 3.2 Models: Meta released its latest multimodal AI models, capable of understanding both images and text.

  • Meta AI Expansion: New features include voice interactions across Facebook, Messenger, WhatsApp, and Instagram, as well as the ability to answer questions about uploaded photos.

  • Meta Quest 3S: Introduced an affordable mixed-reality headset priced at $299.99. It features high-resolution full-color mixed reality, hand tracking, and access to a wide range of apps on Meta Horizon OS.

  • Ray-Ban Meta Glasses Updates: The popular AI-powered wearable is getting new features, including visual reminders, continuous Meta AI responses, live translation, and more natural interactions.

  • Orion AR Glasses Prototype: Unveiled Orion, Meta's first augmented reality glasses prototype that looks like regular eyewear but offers seamless AR experiences with voice control, hand and eye tracking, and a wristband for discreet input.

Alex's take: Meta is making some bold moves to bring AI and AR into everyday life. They’re pushing forward across models (Llama 3.2), applications (Facebook, Messenger, WhatsApp and Instagram) and wearables (Quest 3S and glasses). Covering this much of the stack is impressive. Sam Altman has a similar vision, partnering with ex-Apple designer Jony Ive on a new AI device. I wonder who will come out on top.

OpenAI is rolling out Advanced Voice Mode to all ChatGPT Plus and Team users this week. This feature allows you to have natural, voice-based conversations with ChatGPT on your smartphone, making interactions feel more human than ever. They've added Custom Instructions, Memory, five new voices, and improved accent recognition. It can even say "Sorry I'm late" in over 50 languages.

However, the launch isn't happening in the EU, likely due to regulatory concerns under the EU AI Act, which restricts AI systems from inferring emotions in certain contexts.

Alex's take: This is a serious step forward for conversational AI. Being able to chat with ChatGPT using voice makes the experience more seamless and personal. I do hope that a balance can be struck between acceleration and legislation so everyone can benefit.

Google DeepMind has introduced AlphaChip, an AI system that designs computer chips using reinforcement learning. By treating chip layouts like a game, AlphaChip can produce optimised chip designs in hours instead of the months it typically takes human engineers. It's already been used to create layouts for the last three generations of Google's Tensor Processing Units (TPUs), which power many of their advanced AI services.
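To make the “chip layout as a game” framing concrete, here’s a minimal, hypothetical sketch (not Google’s actual AlphaChip code): blocks are placed on a grid one move at a time, and the reward is the negative total wirelength of the connections between them. The block names, nets, grid size, and the use of random search in place of a trained reinforcement-learning policy are all invented for illustration.

```python
import itertools
import random

# Toy illustration only: chip placement framed as a sequential "game".
# An agent places blocks on a small grid one at a time; the reward is the
# negative total wirelength of the nets connecting them. A real RL agent
# would learn a placement policy from this reward; here we just sample
# random placements and keep the best one.

GRID = 4                                              # 4x4 placement grid
BLOCKS = ["cpu", "cache", "dma", "io"]                # hypothetical macro blocks
NETS = [("cpu", "cache"), ("cpu", "dma"), ("dma", "io")]  # hypothetical connections

def wirelength(placement):
    """Sum of Manhattan distances over all nets (lower is better)."""
    total = 0
    for a, b in NETS:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def random_episode():
    """Play one 'game': place each block on a free grid cell in turn."""
    free_cells = list(itertools.product(range(GRID), repeat=2))
    placement = {}
    for block in BLOCKS:
        cell = random.choice(free_cells)   # a trained policy would choose here
        free_cells.remove(cell)
        placement[block] = cell
    return placement, -wirelength(placement)  # reward = negative wirelength

best_placement, best_reward = max(
    (random_episode() for _ in range(1000)), key=lambda pr: pr[1]
)
print("best reward:", best_reward)
print("best placement:", best_placement)
```

The appeal of the game framing is that the same reward signal (wirelength, congestion, power) can drive a learned policy across many different chips, rather than requiring engineers to hand-tune each layout.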

Alex's take: I like how Google has open-sourced AlphaChip—it opens the door for collaboration across the community. We're witnessing a self-improving cycle where AI helps develop hardware, which in turn propels AI capabilities even further. Exciting times ahead.

Today’s Signal is brought to you by Insta360.

My Ride-or-Die Camera for Video Calls: Insta360 Link 2

If you’re serious about elevating your video calls, you need the Insta360 Link 2 or Link 2C.

It’s the ultimate tool for pro AI tracking, intuitive gesture controls, stunning 4K clarity and professional-looking Natural Bokeh.

Whether you're presenting to a team or catching up with friends, this camera has you covered. It's smart, intuitive, and—honestly—it’s the one I rely on every single day to look and sound my best.

Content I Enjoyed

Dwarkesh Patel / YouTube

Fifteen years ago, Shane Legg, co-founder of Google DeepMind, made a bold prediction: we'd reach human-level Artificial General Intelligence (AGI) by 2025.

In his 2009 blog post titled "Tick, tock, tick, tock…", Legg discussed the rapid advancements in computing power, highlighting IBM's Sequoia supercomputer and its potential to simulate a human cerebral cortex.

What strikes me is how consistent Legg has been with his timelines. Even after acknowledging his own optimism bias and adjusting his expected date to 2033, he still maintains a mode of 2025 and a mean of 2028 for achieving AGI. He based this on the belief that algorithmic breakthroughs were on the horizon and that hardware limitations would soon be a thing of the past.

My takeaway? With the exponential growth in AI capabilities we've witnessed recently, perhaps his forecast isn't as far-fetched as it once seemed.

Idea I Learned

Cursor landing page / Cursor

I've been diving into Cursor this week, and honestly, it's a revelation—even for someone with little to no coding experience like me.

It's an AI-powered coding assistant that writes code for you using simple plain English instructions.

To help you get started, I’ve made a video of myself learning the tool, which you can use as a guide.

Check it out before diving in.

To get started:

  • Download and install Cursor

  • Start your two-week free trial

  • Begin with this video to get familiar with the interface
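For a flavour of what “plain English to code” looks like, here’s a hypothetical example: the kind of instruction you might type into Cursor, and the sort of Python it could generate in response. The prompt, filenames, and folder name are all made up for illustration.

```python
# Hypothetical prompt to Cursor:
#   "Write a script that renames every .jpg in a folder so the
#    filename starts with today's date."
# The kind of Python it might generate:

from datetime import date
from pathlib import Path

def rename_jpgs(folder: str) -> None:
    """Prefix every .jpg filename in `folder` with today's date (YYYY-MM-DD)."""
    prefix = date.today().isoformat()
    for jpg in sorted(Path(folder).glob("*.jpg")):
        jpg.rename(jpg.with_name(f"{prefix}_{jpg.name}"))

if __name__ == "__main__":
    rename_jpgs("photos")  # assumes a local folder called "photos"
```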

Quote to Share

Derya Unutmaz on OpenAI's o1 model:

“In the past few days, I’ve been testing OpenAI o1 models, mostly o1-mini, for developing PhD or postdoc level projects. I can confidently claim that the o1 model is comparable to an outstanding PhD student in biomedical sciences! I’d rate it among the best PhDs I’ve trained!”

He adds:

“Also, in my experience in my field, o1 rated better than Terence Tao's, who rated it as a mediocre PhD student, likely because math is a tougher field for conceptualization and, not to mention, he is one of the best, if not the best, mathematicians in the world.”

My thoughts:

It's fascinating to see how experts across different fields are evaluating AI models like OpenAI's o1. While Terence Tao, one of the world's leading mathematicians, found the model to be comparable to a mediocre PhD student in mathematics, immunologist Derya Unutmaz rates it among the best PhD students he's trained in biomedical sciences.

I think it highlights how AI excels in some disciplines more than others. A model’s capability is only as good as its training data and its ability to use the tools it’s given. I’m excited to see how AI’s application in specialised areas of research continues to develop.

Question to Ponder

“Will generative AI in our social media feeds create a scenario leaving us more in our echo chambers or less?”

If we don't introduce novel ideas, there's a risk that generative AI could reinforce echo chambers by amplifying existing biases and preferences.

However, when we blend unique (and very human) ideas and use generative AI to elevate them, we can create unmatched experiences.

I’ve seen it all over X and LinkedIn: some people churn out entirely AI-generated content that doesn’t add much value, while others bring fresh perspectives to our timelines, using these tools to create something marvellous.

It's up to us as individuals to use generative AI to amplify our creativity—not just replace it—and break free from these bubbles.

💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 40,000 dedicated AI readers?