Can AI Defend Truth Against Its Own Tools?

The author sat down with ChatGPT, OpenAI’s conversational AI, to test its understanding of truth and to challenge its arguments about releasing new tools like Sora 2, a video generation tool that can create realistic footage. The conversation revealed ChatGPT’s evasions and its inability to defend the company’s choices around truth without contradicting itself.

The author pointed out that Google Search shows its sources, while ChatGPT argued that AI-generated answers collapse the distinction between fact and fiction. When asked about Sora 2’s impact on users, ChatGPT admitted the tool was “dangerous” but couldn’t explain why OpenAI released it despite its own warnings.

ChatGPT eventually agreed with the author that the question is whether those in power are willing to say no to tools like Sora 2. The conversation highlights the importance of treating truth as infrastructure and constraining powerful tools accordingly, even if it slows progress or costs status.

The real danger lies not in the tool itself but in its erosion of trust in a shared reality, making “is this real?” the default question for every piece of media.

Source: https://www.mediapost.com/publications/article/412829/i-had-a-discussion-about-truth-with-chatgpt-it-d.html?edition=141674