onStrategy

OpenAI’s new market: text-to-video

February 23, 2024
2 minutes

Strategy | Business Models | Tech

Photo source: OpenAI

Prompt: Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid-afternoon light with wispy clouds and sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.


OpenAI launched its new text-to-video model, “Sora”, this week. We don’t yet know the cost of using these models, but Ben Thompson wrote a good article about it (building on Matthew Ball’s theory of the metaverse). Here are three takeaways from his article:

1/ Intro: Metaverse is the preferred term for future digital realities.

Matthew Ball’s essay argues that the Metaverse is the most fitting descriptor for evolving digital realities, emphasizing its capacity to function as a 3D version of the Internet. This comparison highlights the Metaverse’s potential as a vast, interconnected network of real-time 3D experiences, suggesting a shift beyond traditional computing and the need for significant advances in networking and computing infrastructure to realize this vision.

2/ Sora’s role in advancing spatial computing and the metaverse.

OpenAI’s development of Sora marks a significant step forward in the realm of spatial computing, allowing for the generation of complex scenes and characters with nuanced emotions and interactions. Despite its current limitations in simulating physics accurately or understanding intricate cause-effect relationships, Sora’s capabilities in creating immersive, dynamic video content hint at its foundational role in building the metaverse. This suggests a future where AI-generated content could become increasingly sophisticated, making the metaverse more accessible and interactive.

3/ Technological convergence driving virtual reality’s evolution.

The discussion around Sora, alongside developments in computing hardware like Groq’s deterministic processing units, points to a broader technological convergence shaping the future of virtual reality (VR). These advancements suggest we are moving closer to real-time, interactive virtual environments that could significantly enhance user experiences in the Metaverse. The synergy between AI-generated content, advanced hardware, and VR technologies implies a nearing inflection point where virtual experiences could rival or augment reality, potentially transforming how we interact with digital spaces.

What interesting times we are living in!
