Adobe previewed the Adobe Firefly Video Model on Wednesday. The software giant first announced the under-development video model in April and has now shared more details about it. The AI video model will be able to generate videos from text prompts as well as image inputs, and users will also be able to control camera angles, styles, and effects. The company also stated that the video model will be available in beta later this year.
Adobe Firefly Video Model previewed
In a newsroom post, the company detailed the capabilities of the native AI video model and shared a YouTube video showcasing its features. Once Adobe launches it, the Firefly Video Model will join Adobe’s existing generative models, which include the Image Model, Vector Model, and Design Model. Based on the YouTube video, the Adobe Firefly Video Model can generate videos from both text and image-based inputs, meaning users will be able to write a detailed prompt or share an image as the reference for the output video.
Users will also be able to make complex requests involving multiple camera angles, lighting conditions, styles, zooms, and motions. Notably, the AI-generated videos appear to be on par with what OpenAI teased with its Sora model. Additionally, the company demonstrated the Generative Extend feature, which Adobe first revealed (but did not showcase) in April. The feature essentially allows users to extend the duration of a shot by adding extra frames, which are generated using AI with reference to the preceding and following frames. This gives editors the option to lengthen a video or let the camera pan over a shot a couple of seconds longer.