OpenAI’s latest product, Sora 2, has drawn widespread criticism for its potential to spread misinformation and deepfakes. The AI-generated video social network lets users upload their own likenesses and generate realistic clips, but it also enables convincing fake videos with little oversight, raising concerns about copyright infringement, deepfakes, and the broader impact on society.
Where OpenAI’s original mission statement promised to benefit all humanity, Sora 2 appears to prioritize profit over responsibility. The company’s CEO, Sam Altman, has praised Sora 2 as a “ChatGPT for creativity” moment, but critics argue it will flood the internet with low-quality content and overshadow the positive work being done in AI.
As OpenAI reaps the benefits of its product, other companies are pursuing more responsible AI projects. Periodic Labs, founded by former employees of major AI companies, aims to accelerate scientific discovery using artificial intelligence. Market forces, however, seem likely to drive OpenAI’s decisions, leaving it to users to push back against the spread of misinformation and low-quality content.
For now, Sora 2 sits at the top of the iPhone app chart, with millions of users engaging with its addictive and often misleading content. As the debate continues, one thing is clear: OpenAI’s actions will have far-reaching consequences for society, and it’s up to us to stand against the spread of AI “slop.”
Source: https://www.vox.com/future-perfect/463596/openai-sora2-reels-videos-tiktok-chatgpt-deepfakes