Expert Slams Apple’s AI Hopes, Blames Hardware Limitations

Professor Seok Joon Kwon of Sungkyunkwan University has raised concerns about Apple’s recent research paper on large reasoning models (LRMs) and large language models (LLMs). He argues that the paper’s findings are skewed by Apple’s lack of high-performance AI hardware.

Kwon contends that the paper’s central claim — that LLMs and LRMs fail to make sound judgments as task complexity increases — is contradicted by well-established language model scaling laws. He points out that hundreds of studies have consistently shown that performance improves in a power-law fashion as the number of parameters grows, rather than degrading.
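The power-law behavior Kwon refers to can be sketched as a simple loss curve of the form L(N) = (N_c / N)^alpha. The constants below are illustrative placeholders loosely shaped like published scaling-law fits, not values from the source or from Apple’s paper:

```python
# Illustrative sketch of a parameter-count scaling law, L(N) = (N_c / N) ** alpha.
# N_C and ALPHA are hypothetical constants chosen for illustration only.
N_C = 8.8e13   # illustrative "critical" parameter count
ALPHA = 0.076  # illustrative power-law exponent

def loss(n_params: float) -> float:
    """Predicted cross-entropy loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

# Loss falls monotonically as the parameter count grows -- it does not reverse.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss(n):.3f}")
```

The point of the sketch is the monotonic trend: under a power law, adding parameters always lowers predicted loss, which is the shape Kwon says Apple’s conclusions run against.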

Kwon also notes that Apple’s focus on on-device processing limits its ability to train LLMs and LRMs, which require substantial compute and user data to be competitive. The company has instead adopted a hybrid approach, relying on external models such as OpenAI’s ChatGPT (powered by GPT-4o) — an unusual move for Apple.

The professor believes Apple’s AI struggles stem from fundamental hardware limitations. He says its M-series processors lack support for the FP16 format widely used in AI training and rely on LPDDR5 memory rather than high-bandwidth HBM3E. To catch up with rivals, he argues, Apple needs to develop dedicated server-grade processors with advanced memory subsystems and robust AI training and inference capabilities.

Kwon’s comments come after Apple showed little significant AI progress at its recent WWDC conference, prompting criticism that the company may be falling behind in the global AI race.

Source: https://www.tomshardware.com/tech-industry/artificial-intelligence/expert-pours-cold-water-on-apples-downbeat-ai-outlook-says-lack-of-high-powered-hardware-could-be-to-blame