
Meta Unveils AI-Powered Emu Models to Revolutionize Content Editing and Generation

Algoine News
Summary:
Meta, the internet titan, has unveiled two new artificial intelligence models, Emu Video and Emu Edit, aimed at enhancing content creation and editing. Emu Video generates video clips from text and image inputs, while Emu Edit manipulates images with improved precision. Both models are still in the research stage but could offer significant benefits to creators, artists, and animators. Meta is releasing and deploying these tools cautiously amid stringent regulatory scrutiny.
In a recent blog post, Meta unveiled two new artificial intelligence models intended to enhance content production and editing: Emu Video and Emu Edit. Emu Video generates video clips from text and image inputs, building on Meta's earlier Emu model, while Emu Edit manipulates images with heightened precision. Though both models are still undergoing research, Meta envisions creators, artists, and animators finding practical applications for them.

Meta described a "factorized" approach to training Emu Video that divides generation into two stages, making the model responsive to a variety of inputs: the first stage creates an image from a text prompt, and the second stage produces a video conditioned on both that image and the text (a schematic sketch of this pipeline appears below). This method enables efficient training of video generation models, and it also lets Emu Video bring existing images to life. Instead of relying on a complex cascade of models, it uses just two diffusion models to create 512x512, four-second videos at 16 frames per second.

Emu Edit focuses on precision in image manipulation, supporting alterations such as color and geometry transformations, adding and removing backgrounds, and editing on both a local and a global scale. Meta's aim is for the model to alter only the pixels relevant to the edit request and nothing more: when the text "Aloha!" is added to a baseball cap, for instance, the cap itself should remain unchanged (a sketch of this idea also follows below). To train Emu Edit, Meta used 10 million synthesized samples, each pairing an input image and a task with the desired outcome; according to Meta, this is the largest dataset of its kind available. Separately, Meta collated 1.1 billion pieces of data, including user-shared photos and captions from Facebook and Instagram, to train the underlying Emu model.

However, the deployment of Meta's AI tools has been cautious due to intense scrutiny from regulators worldwide. Meta recently declared that political campaigns and advertisers may not use its AI tools to create ads on Facebook and Instagram.
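The factorized two-stage pipeline described above can be pictured as follows. This is a minimal illustrative sketch, not Meta's actual code or API: the function names are hypothetical, and the two diffusion models are replaced with stubs so the example runs on its own.

```python
# Hypothetical sketch of a factorized two-stage text-to-video pipeline,
# as described for Emu Video. All names are illustrative stand-ins.
import numpy as np

FRAME_SIZE = 512   # Emu Video outputs 512x512 frames
FPS = 16           # 16 frames per second
DURATION_S = 4     # four-second clips

def text_to_image(prompt: str) -> np.ndarray:
    """Stage 1: a diffusion model generates one image from the text prompt.
    Stubbed here with random pixels."""
    return np.random.rand(FRAME_SIZE, FRAME_SIZE, 3)

def image_and_text_to_video(image: np.ndarray, prompt: str) -> np.ndarray:
    """Stage 2: a second diffusion model generates the full clip,
    conditioned on both the stage-1 image and the original text.
    Stubbed as a stack of identical frames."""
    n_frames = FPS * DURATION_S          # 64 frames total
    return np.stack([image] * n_frames)  # placeholder: repeat the image

def generate_video(prompt: str) -> np.ndarray:
    first_frame = text_to_image(prompt)
    return image_and_text_to_video(first_frame, prompt)

clip = generate_video("a corgi surfing a wave at sunset")
print(clip.shape)  # (64, 512, 512, 3)
```

Because stage 2 accepts any image plus a text prompt, the same pipeline can animate a user-supplied picture instead of a generated one, which is how the model "brings images to life."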
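Likewise, the (input image, instruction, target image) training triplets and the "touch only relevant pixels" goal described for Emu Edit can be sketched as below. The data structure and the pixel-difference measure are hypothetical illustrations, not Meta's method.

```python
# Hypothetical illustration of instruction-based editing data and a crude
# proxy for edit locality. Names are illustrative, not Meta's API.
from dataclasses import dataclass
import numpy as np

@dataclass
class EditSample:
    input_image: np.ndarray   # original image
    instruction: str          # the requested edit, as text
    target_image: np.ndarray  # desired edited result

def changed_pixel_fraction(before: np.ndarray, after: np.ndarray) -> float:
    """Share of pixels that differ between two images. A precise editor
    should keep this small for a local edit."""
    return float(np.mean(np.any(before != after, axis=-1)))

before = np.zeros((512, 512, 3))
after = before.copy()
after[200:230, 180:330] = 1.0  # a local edit touching only the cap region

sample = EditSample(before, 'add the text "Aloha!" to the cap', after)
print(f"{changed_pixel_fraction(before, after):.3%} of pixels changed")
```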

Published At

11/16/2023 8:00:00 PM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.

