Apple’s generative AI has failed to deliver accurate news notifications, blurring the line between fact and fiction. Just days after Apple Intelligence launched in the UK, the feature incorrectly summarized a BBC report about a murder arrest, falsely claiming that the suspect, Luigi Mangione, had shot himself. The feature has also botched headlines for other major news stories.
The BBC has complained to Apple and urged the company to fix the problem. “We need our audiences to trust any information or journalism published in our name and that includes notifications,” a spokesperson said. The incident underscores concerns about AI acting as a mediator of information: when its summaries are wrong, it can spread misinformation and undermine journalistic credibility.
Experts warn that generative AI is prone to hallucinating facts: these models generate text by statistically predicting likely word sequences learned from human writing, with no built-in check on whether the output is true. That adds another layer of risk to news summarization, which already involves subjective editorial judgment about how to condense events into a headline. Inserting an automated intermediary compounds the problem, because the system can only approximate a correct statement rather than verify one.
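To make the “statistical prediction” point concrete, here is a deliberately tiny, hypothetical sketch, not Apple’s system and far simpler than any real model: a word-level bigram model trained on an invented three-sentence corpus, which always continues a prompt with the most frequent next word. Nothing in the procedure consults facts about the world, which is essentially why far larger models can produce fluent but false summaries.

```python
# Toy illustration only: a word-level bigram "language model".
# It shows that purely statistical next-word prediction picks whatever
# continuation was most common in its training text, with no notion of
# whether the resulting sentence is true.

from collections import Counter, defaultdict

# Hypothetical mini-corpus, invented for this sketch.
corpus = [
    "the suspect was arrested on murder charges",
    "the suspect shoots himself after the arrest",
    "police say the suspect was arrested in pennsylvania",
]

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def most_likely_continuation(word, length=6):
    """Greedily extend a prompt by always taking the most frequent next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# The output is driven solely by word frequencies in the training text;
# no step checks the claim against reality.
print(most_likely_continuation("suspect"))
```

Whether the generated sentence happens to be accurate depends entirely on which phrasing dominated the training data, which is the core of the concern experts raise about AI-written headlines.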
Source: https://futurism.com/the-byte/apple-ai-luigi-mangione-false