{"id":1209,"date":"2024-02-16T08:02:49","date_gmt":"2024-02-16T08:02:49","guid":{"rendered":"https:\/\/chezaspin.com\/blog\/index.php\/2024\/02\/16\/openai-introduces-ai-model-sora-that-turns-text-into-video\/"},"modified":"2024-02-16T08:02:49","modified_gmt":"2024-02-16T08:02:49","slug":"openai-introduces-ai-model-sora-that-turns-text-into-video","status":"publish","type":"post","link":"https:\/\/chezaspin.com\/blog\/openai-introduces-ai-model-sora-that-turns-text-into-video\/","title":{"rendered":"OpenAI introduces AI model \u2018Sora\u2019 that turns text into video"},"content":{"rendered":"<p><strong>Microsoft-backed OpenAI is developing software capable of generating minute-long videos based on text prompts, the company announced on Thursday.<\/strong><\/p>\n<p>The software, named \u201cSora\u201d after the Japanese word for \u201csky,\u201d is currently available for red teaming, which helps identify flaws in the AI system. Additionally, it is intended for use by visual artists, designers, and filmmakers to provide feedback on the model, the company stated.<\/p>\n<p>\u201cSora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,\u201d the statement said, adding that it can create multiple shots within a single video.<\/p>\n<p>In addition to generating videos from text prompts, Sora can also animate a still image, as mentioned in a blog post by the company.<\/p>\n<p>The video generation software follows OpenAI\u2019s ChatGPT chatbot, which was released in late 2022 and created a buzz around generative AI with its ability to compose emails and write code and poems.<\/p>\n<p>Social media giant Meta Platforms beefed up its image generation model Emu last year to add two AI-based features that can edit and generate videos from text prompts. 
The Facebook parent company is also looking to compete with Microsoft, Alphabet\u2019s Google and Amazon in the rapidly transforming generative AI universe.<\/p>\n<p>Sora is still a work in progress, with the company acknowledging that the model may sometimes struggle with spatial details in a prompt and encounter difficulties in following a specific camera trajectory.<\/p>\n<p>OpenAI also said it is developing tools to determine whether a video was generated by Sora.<\/p>\n<p>The new tool is not yet publicly available, and OpenAI has disclosed limited information about its development process. The company, which has faced lawsuits from some authors and The New York Times over its use of copyrighted works to train ChatGPT, has not revealed the imagery and video sources used to train Sora.<\/p>\n<p>OpenAI mentioned in a blog post that it is consulting with artists, policymakers, and other stakeholders before releasing the new tool to the public.<\/p>\n<p>\u201cWe are working with red teamers \u2013 domain experts in areas like misinformation, hateful content, and bias \u2013 who will be adversarially testing the model,\u201d the company said. \u201cWe\u2019re also building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora.\u201d<\/p>\n<p>The post <a href=\"https:\/\/www.kbc.co.ke\/openai-introduces-ai-model-sora-that-turns-text-into-video\/\">OpenAI introduces AI model \u2018Sora\u2019 that turns text into video<\/a> appeared first on <a href=\"https:\/\/www.kbc.co.ke\/\">KBC<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Microsoft-backed OpenAI is developing software capable of generating minute-long videos based on text prompts, the company announced on Thursday. The software, named \u201cSora\u201d after the Japanese word for \u201csky,\u201d is currently available for red teaming, which helps identify flaws in the AI system. 
Additionally, it is intended for use by visual artists, designers, and filmmakers [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1209","post","type-post","status-publish","format-standard","hentry","category-uncategorized","entry"],"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/posts\/1209","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/comments?post=1209"}],"version-history":[{"count":0,"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/posts\/1209\/revisions"}],"wp:attachment":[{"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/media?parent=1209"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/categories?post=1209"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/chezaspin.com\/blog\/wp-json\/wp\/v2\/tags?post=1209"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}