Apple Introduces Differential Privacy for Enhanced AI Training

Apple has developed a method to improve its AI models without collecting or copying user data from devices like iPhones and Macs. The approach compares synthetic datasets against real-world samples, such as recent emails or messages, on the devices of users who have opted into the Device Analytics program.

Under this plan, Apple’s devices will determine which synthetic inputs most closely match real samples and send back only a signal indicating which candidates were closest. Because the actual content never leaves the device, user data remains private and secure.
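The on-device selection step might look something like the sketch below. This is an illustrative guess, not Apple's implementation: the `embed` function is a toy letter-frequency stand-in for whatever embedding model Apple actually uses, and the candidate/sample names are hypothetical. The key property shown is that only an index, never the sample text, is produced for reporting.

```python
import math

def embed(text):
    # Toy stand-in for a real on-device embedding model (assumption:
    # Apple's actual embedding is not public). Maps text to a normalized
    # letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def closest_synthetic(local_samples, synthetic_candidates):
    """Return the index of the synthetic candidate nearest to any
    on-device sample. Only this index -- never the sample text --
    would ever leave the device."""
    best_idx, best_sim = 0, -1.0
    for i, cand in enumerate(synthetic_candidates):
        cand_vec = embed(cand)
        for sample in local_samples:
            sim = cosine(embed(sample), cand_vec)
            if sim > best_sim:
                best_idx, best_sim = i, sim
    return best_idx
```

For example, `closest_synthetic(["meeting at noon"], ["meeting at ten", "zzz zzz"])` returns `0`, since the first candidate's text is far more similar to the local message.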

Currently, Apple trains these AI models on synthetic data alone, which can lead to less helpful responses. With the new system, the company aims to improve its AI text outputs, such as email summaries, by training on the synthetic samples most frequently flagged as closest to real data.

Apple’s new method is built around differential privacy, a technique the company has used since 2016 to protect user data. By injecting randomized noise into the signals it aggregates, Apple says it cannot link any contribution back to an individual person. The system will be introduced in beta versions of iOS and iPadOS 18.5 and macOS 15.5, marking a significant step towards turning around the company’s AI development challenges.
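One classic way to add that randomized noise is local differential privacy via randomized response: each device sometimes reports its true nearest-candidate index and sometimes a random one, and the server statistically removes the noise in aggregate. The sketch below is a minimal textbook version of that idea, not Apple's production mechanism; the function names and the `epsilon` privacy parameter are assumptions for illustration.

```python
import math
import random
from collections import Counter

def randomized_report(true_index, num_candidates, epsilon=1.0):
    """Randomized response: report the true index with probability
    p = e^eps / (e^eps + k - 1), otherwise a random other index.
    No single report reveals what was actually on the device."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + num_candidates - 1)
    if random.random() < p_true:
        return true_index
    others = [i for i in range(num_candidates) if i != true_index]
    return random.choice(others)

def estimate_counts(reports, num_candidates, epsilon=1.0):
    """Server-side debiasing: recover approximate per-candidate
    popularity from the noisy reports."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + num_candidates - 1)
    q = (1.0 - p) / (num_candidates - 1)
    counts = Counter(reports)
    return {i: (counts.get(i, 0) - n * q) / (p - q)
            for i in range(num_candidates)}
```

With many devices reporting, the debiased estimates reliably surface which synthetic candidates were most often closest to real data, while any individual report stays plausibly deniable.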

Source: https://www.theverge.com/news/648496/apple-improve-ai-models-differential-privacy