Techs Reviews

Unleash the Power of GenAI: Dive into the Next Level with These 5 Exciting Sora Videos

"Embark on a Whimsical Journey from Bubble Dragons to Mythical Tea: Discover the Practical Magic Within!"

OpenAI recently unveiled a fresh batch of videos generated by its Sora AI model, showcasing whimsical creations such as a horse on roller skates, a bubble dragon, and mythical tea. Originally introduced to the public in February, Sora has sparked much anticipation about its eventual availability.

However, in a recent interview on Marques Brownlee’s WVFRM podcast, the Sora team hinted that a public release might not be imminent. They cited the necessity for additional safety research and the fact that video creation with Sora takes minutes, not seconds.

While the public eagerly awaits a hands-on experience, the Sora team continues to share captivating videos on platforms like TikTok, often responding to prompt suggestions from social media users. For instance, one request led to the creation of a video featuring a “cute rabbit family eating dinner in their burrow.”

How is Sora different from other AI video models?

Currently, the market boasts various AI video models and tools, such as Runway, which is approaching the one-year anniversary of its public launch, and Pika Labs, now delving into sound effects and lip-synced dialogue in collaboration with ElevenLabs.

Despite the array of options, none, including the highly realistic Stable Video Diffusion clips, appear to match the capabilities demonstrated by Sora. One likely factor is generation time: the Sora team told Brownlee that producing a single video takes long enough for them to step away, make a coffee, and return before it finishes.

@openai

Introducing Sora, our first AI model that creates videos from text captions. This video was generated from the following prompt: “this close-up shot of a victoria crowned pigeon showcases its striking blue plumage and red chest. its crest is made of delicate, lacy feathers, while its eye is a striking red color. the bird’s head is tilted slightly to the side, giving the impression of it looking regal and majestic. the background is blurred, drawing attention to the bird’s striking appearance.”
What would you like to see Sora make next? Let us know in the comments.  #madewithSora #Sora #OpenAI

♬ original sound – OpenAI

Leveraging the extensive GPU resources at OpenAI’s disposal, the Sora team employed a novel architecture that integrates techniques from models like GPT-4 and DALL-E during the training process. Additionally, Sora benefits from a diverse training dataset encompassing various sizes, lengths, and resolutions.

Among the standout videos in this latest series is a mesmerizing depiction of a dragon crafted from bubbles, exhaling fire in bubble form. The seamless motion, exceptional quality, and realistic physics showcased in this video underscore the impressive capabilities of Sora.

It all comes from a single prompt

As of now, the team has limited control over the output, primarily relying on text prompts, often consisting of relatively short one-sentence instructions. However, it’s anticipated that this dynamic will evolve once Sora is made accessible to the public. The team is actively developing more intricate controls, aiming to provide users with the ability to manipulate finer details, including lighting, camera motion, and orientation. These additional features align with functionalities already offered by other platforms, such as Pika and Runway.


Sora’s remarkable capacity to generate impressive visuals from concise prompts is truly noteworthy. A notable example is a teapot pouring water into a cup, where the cup is filled with a mesmerizing swirling vortex of colors and movement.

Moreover, the team is showcasing many of these new videos on TikTok, presenting them in a vertical format. This highlights Sora’s capability to create engaging vertical videos solely based on a text prompt.

What is holding up Sora’s release date?


The desire to experiment with Sora is widespread, given its impressive capabilities applicable to various sectors such as video production, marketing, and architecture. A recently unveiled video even provides a walkthrough of an unconventional kitchen with a bed situated to one side.

However, according to the Sora team in their discussion with Brownlee, there is ongoing work required to refine Sora before it becomes a fully realized product or is integrated into platforms like ChatGPT.

Tim Brooks, research lead at Sora, explained, “The motivation for releasing Sora in its current state, even before it’s fully ready, is to explore its possibilities and identify the necessary safety research.”

Sora’s video of a man eating a burger.

“We aim to demonstrate that this technology is on the horizon and invite input from individuals on its potential applications,” Brooks continued. The team also seeks feedback from safety researchers to assess and address any potential risks associated with Sora.

Brooks emphasized that Sora is not currently a product, and there isn’t a defined timeline for when it might transition into one. Therefore, users should not anticipate being able to use it within the current year.
