Current AI models still suffer from “hallucinations”: confidently stated answers that are factually false. Researchers found that the way we evaluate model output incentivizes guessing over admitting uncertainty, since a wrong guess and an “I don’t know” both score zero while a lucky guess scores full marks. That incentive is especially dangerous when models are asked for high-stakes advice.
One proposed fix is to tweak evaluations so that confident errors are penalized more heavily than admissions of uncertainty, but this might not be economically viable: estimating uncertainty reliably adds computational cost, and if models are forced to express uncertainty more often, users may lose confidence in the system and abandon it.
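A toy expected-score calculation makes the incentive concrete. This is an illustrative sketch only; the scoring functions and the penalty value are assumptions for illustration, not the researchers’ actual benchmark code.

```python
# Toy illustration of the scoring incentive; the penalty scheme below is
# an assumption for illustration, not the researchers' actual evaluation.

def expected_score_binary(p_correct: float) -> float:
    """Standard accuracy grading: 1 point if right, 0 if wrong or if the
    model abstains. Any nonzero chance of being right makes guessing
    strictly better than abstaining."""
    return p_correct * 1.0

def expected_score_penalized(p_correct: float, penalty: float) -> float:
    """Grading that penalizes confident errors: 1 point if right,
    -penalty if wrong, 0 for abstaining ("I don't know")."""
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

p = 0.25  # suppose the model is only 25% sure of its best guess
print(expected_score_binary(p))          # 0.25 > 0: guessing beats abstaining
print(expected_score_penalized(p, 1.0))  # -0.5 < 0: abstaining scores higher
# Under the penalized scheme, guessing only pays off when
# p_correct > penalty / (1 + penalty), i.e. above 0.5 for penalty = 1.
```

The catch is that acting on such a rule requires the model to estimate its own probability of being correct in the first place, which is exactly the extra computation discussed below.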
Experts argue that these increased operational costs could be disastrous for AI companies, which have already invested heavily in infrastructure to run power-hungry models. Still, especially where errors carry real consequences, the cost of hallucinations may outweigh the expense of getting models to assess whether they’re uncertain.
Source: https://futurism.com/fixing-hallucinations-destroy-chatgpt