Large language models are prone to confidently producing factually incorrect answers, and OpenAI attributes this to how the models are built and evaluated rather than to a mysterious glitch: during training and benchmarking, models are incentivized to guess rather than admit uncertainty. To address this, OpenAI suggests penalizing confident errors more heavily than expressions of uncertainty and awarding partial credit when a model acknowledges it does not know. This could lead to better outcomes, but its real-world impact remains to be seen, and the AI industry is still grappling with high capital expenditures and environmental concerns.
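To make the proposed incentive change concrete, here is a minimal sketch of a scoring rule in which a confident wrong answer costs more than an abstention, and an abstention earns partial credit. The function name and the numeric weights are illustrative assumptions, not values taken from OpenAI's work.

```python
def score_response(is_correct: bool, abstained: bool,
                   correct_reward: float = 1.0,
                   abstain_credit: float = 0.3,
                   wrong_penalty: float = -1.0) -> float:
    """Score one benchmark answer under the proposed incentive scheme.

    Hypothetical weights: a confident error (-1.0) is penalized more than
    an honest "I don't know" (+0.3), which receives partial credit.
    """
    if abstained:
        return abstain_credit
    return correct_reward if is_correct else wrong_penalty


if __name__ == "__main__":
    print(score_response(is_correct=True, abstained=False))   # 1.0
    print(score_response(is_correct=False, abstained=True))   # 0.3
    print(score_response(is_correct=False, abstained=False))  # -1.0
```

Under such a scheme, guessing only pays off when the model is likely to be right, which removes the blanket incentive to bluff on every question.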
Source: https://futurism.com/openai-mistake-hallucinations