
Inside Sam Altman’s AI opera, Meta’s Movie Gen, and Google’s new way to search

AI Highlights

My top-3 picks of AI news this week.

OpenAI logo / TechCrunch

OpenAI
1. Inside Sam Altman’s AI opera

OpenAI CEO Sam Altman has a lot on his plate. From leadership shakeups to record funding, the company still managed to pull it out of the bag and ship a host of new features at DevDay. Here’s the signal:

  • Durk Kingma, one of the lesser-known co-founders of OpenAI, has left to join Anthropic.

  • OpenAI raised $6.6B in new funding at a $157B post-money valuation to accelerate their mission of building AGI.

  • Launched a slate of new features, including a Realtime API for speech-to-speech applications, fine-tuning GPT-4o with both images and text, and automatic prompt generation in the Playground.
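
For the fine-tuning feature, training data follows OpenAI’s chat-format JSONL convention, with image inputs supplied as `image_url` content parts. Here’s a minimal sketch of what one vision training example might look like; the URL, prompt, and label are placeholders, not real data:

```python
import json

# One training example in OpenAI's chat-format JSONL convention for
# vision fine-tuning of GPT-4o. The image URL and text are placeholders.
example = {
    "messages": [
        {"role": "system", "content": "You identify dog breeds from photos."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What breed is this dog?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/dog.jpg"}},
            ],
        },
        {"role": "assistant", "content": "This is a poodle."},
    ]
}

# Each line of the uploaded .jsonl training file is one serialized example.
jsonl_line = json.dumps(example)
print(jsonl_line)
```

You’d collect many lines like this into a `.jsonl` file, upload it, and start a fine-tuning job against a GPT-4o snapshot.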

Alex’s take: OpenAI have been through a storm this last year. Another co-founder, John Schulman, left back in August, and superalignment lead Jan Leike departed back in May, both poached by none other than Anthropic. You can really feel the competition intensify as we accelerate towards AGI.

Meta
2. Meta releases Movie Gen

Meta announced their most advanced media foundation model to date. You can enter simple text prompts to create custom video and sound, and even animate a personal image of yourself.

  • Generate video from text: 1080p HD videos up to 16 seconds in duration across different aspect ratios (1:1, 9:16, 16:9).

  • Edit video with text: Transform existing videos with text inputs from styles and transitions to fine-tuned edits e.g., “Make the poodle wear a pink onesie with ears”.

  • Produce personalised videos: Upload an image of yourself and turn it into a personalised video.

  • Create sound effects and soundtracks: Use video and text inputs to create audio for videos. This includes SFX, background music and soundtracks.

Alex’s take: This thing is fantastic—in my honest opinion the best video model I’ve seen to date. It looks like Meta released Sora before OpenAI.

Google
3. Google unveils a new way to search with AI

Google has released some serious search features using AI to find exactly what’s in front of you.

  • Video understanding in Lens: Search by taking a video and asking questions about what you see.

  • Voice questions in Lens: Point your camera, press the shutter button and ask what’s on your mind.

  • Shop what you see: Take a photo of an item and identify price info across retailers and where to buy, using Google’s Shopping Graph which has information on over 45 billion products.

Alex’s take: AI is changing the way we search across text, video, images and voice. It’s also interesting to see Lens queries are now one of the fastest growing query types on search. Needless to say, I’m sticking with my iPhone for now.

Today’s Signal is brought to you by Athyna.

  • ~67% savings compared to hiring in U.S./Europe.

  • From product and engineering, to business and ops.

  • Pay nothing until you find the perfect match.

  • Get $1,000 OFF on your next offshore employee by mentioning us.

Content I Enjoyed

Brian Greene and Daphne Koller / YouTube

Code to Cure: AI and the Future of Health.

Computer scientist and Co-Founder of Coursera, Daphne Koller, sat down with Brian Greene, co-founder of the World Science Festival, to talk about how AI is shaping drug discovery and development.

Something that stood out to me was the analogy that AI is changing biology much like calculus did for physics.

AI's ability to analyse tremendously large amounts of complex biological data means we now have the predictive power to understand living systems.

Take, for example, AlphaFold, which predicted the structures of all 200 million proteins known to science in less than a year.

We’re at an exciting frontier that could reshape our understanding of life itself. This is well worth a listen.

Idea I Learned

Canvas / OpenAI

A new way of working with ChatGPT: Canvas.

OpenAI's new feature ‘Canvas’ was released this week.

It is a native editor that integrates seamlessly with ChatGPT, allowing you to collaborate with the AI directly within the chat interface.

It’s essentially OpenAI’s response to Claude’s Artifacts. Unlike Claude, however, everything in ChatGPT’s sidebar is editable and updates instantly.

No more constant switching between applications or copying and pasting—everything you need is right there.

To help you get started, I've put together a short video tutorial walking you through the basics. Check it out before diving in.

To get started:

  • Navigate to ChatGPT

  • Select “ChatGPT 4o with canvas”

  • Try out the writing shortcuts

I feel this is a really neat and natural way to refine ideas. Give it a try, and let me know what you think!

Quote to Share

Sam Altman’s predictions for 2025:

“Predictions for the three most important technological developments that will happen by 2025:

  1. We will get net-gain nuclear fusion working at prototype scale

  2. AGI will feel within reach to many people in the industry

  3. Gene editing will have cured at least one major disease”

This prediction was made in 2019. Fusion looks likely by the end of the decade, we’re getting closer to AGI as models transition from reasoners to agents, and gene editing has delivered a cure for sickle cell disease. It’s wild to see how far we’ve come.

Question to Ponder

“I see a lot of AI tools to help with coding but I’m still struggling to start and feel a bit overwhelmed. Where do I begin?”

It’s totally normal to feel like this.

I spent way too many years worrying about what technical requirements were necessary to start coding.

  • Is there one programming language that I must learn first?

  • Are there books I need to read?

  • Which courses must I take?

Only recently has this requirement to “be technical” been removed.

We no longer have to memorise complex formulas and functions.

The only thing we need to use is something innate to us all: natural language.

Now, tools like Cursor and Replit Agents mean you can use plain English to direct AI to build something for you.

So don’t get caught up on the initial friction of what a “programmer” looks like and the traditional “requirements” that come with it. That friction no longer exists.

AI has advanced so much that when we now have an idea, we can turn it into a reality within minutes.

Start with something that interests you, ask AI how to build it, and watch the adventure unfold.

💡 If you enjoyed this issue, share it with a friend.

See you next week,

Alex Banks

Do you have a product, service, idea, or company that you’d love to share with over 40,000 dedicated AI readers?