Apple plans to draw on user data to improve its large language model (LLM) software without compromising users’ privacy. In a blog post published on Monday (April 14), Apple explained that synthetic data has been used to train its AI, but that this approach on its own has proved inadequate.
Going forward, synthetic data will still serve as a base, but the generated outputs will be compared against a sample of emails from consenting users to determine which synthetic messages most closely resemble real ones.
“And only those users that opted to send Device Analytics to Apple will take part,” the company wrote in the post. “The content of the sampled emails remains on the device and is never sent to Apple. Participating devices will only send a signal indicating which variant is most similar to the sampled data, thus allowing Apple to learn which synthetic emails are being picked most often across all devices.”
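To illustrate the mechanism Apple describes, the sketch below shows roughly how an on-device comparison of that kind could work: several synthetic email variants are scored against locally sampled emails, and only the identifier of the best-matching variant would ever leave the device. The struct names, embedding vectors and helper functions here are hypothetical and are not taken from Apple's actual implementation.

```swift
import Foundation

// Hypothetical sketch: compare synthetic email variants against emails
// sampled on the device, and report only which variant matched best.

struct EmailEmbedding {
    let id: String          // identifier of the synthetic variant
    let vector: [Double]    // embedding of the email text (assumed precomputed)
}

// Cosine similarity between two equal-length embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let normA = sqrt(a.reduce(0) { $0 + $1 * $1 })
    let normB = sqrt(b.reduce(0) { $0 + $1 * $1 })
    guard normA > 0, normB > 0 else { return 0 }
    return dot / (normA * normB)
}

// Returns only the id of the synthetic variant closest to any sampled
// on-device email; the sampled emails themselves are never transmitted.
func closestSyntheticVariant(syntheticVariants: [EmailEmbedding],
                             sampledLocalEmails: [[Double]]) -> String? {
    var bestID: String?
    var bestScore = -Double.infinity
    for variant in syntheticVariants {
        for localEmbedding in sampledLocalEmails {
            let score = cosineSimilarity(variant.vector, localEmbedding)
            if score > bestScore {
                bestScore = score
                bestID = variant.id
            }
        }
    }
    return bestID
}

// Toy example with made-up 3-dimensional embeddings (illustrative only).
let variants = [
    EmailEmbedding(id: "variant-A", vector: [0.9, 0.1, 0.0]),
    EmailEmbedding(id: "variant-B", vector: [0.1, 0.8, 0.3]),
]
let localSamples: [[Double]] = [[0.2, 0.7, 0.4]]

if let signal = closestSyntheticVariant(syntheticVariants: variants,
                                        sampledLocalEmails: localSamples) {
    // Only this small signal (e.g. "variant-B") would be reported back.
    print("Closest synthetic variant: \(signal)")
}
```

In practice, Apple says such signals are aggregated across many devices, so the company learns only which synthetic variants are most representative overall, never the contents of any individual user's emails.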
This new approach should allow Apple to refine several text-related features in the Apple Intelligence platform, such as notification summaries, synthesized thoughts in Writing Tools, and message summarization.
Apple Intelligence has faced roadblocks in delivering accurate summaries, and at an internal meeting last month, senior leaders said that falling behind on fundamental updates to Siri, the company’s AI-driven voice assistant, has been a disappointment and an embarrassment.
Apple had announced AI updates for Siri due by the end of this year; however, it later told Reuters that these updates would not arrive until 2026.
Once a leader in the virtual assistant space, Siri now trails rivals such as Amazon’s Alexa, Google’s Gemini for Android and Samsung’s Galaxy AI, which are well ahead in offering sophisticated AI functionality, as PYMNTS noted last month.
Going deeper into LLMs ultimately comes down to the kind of work you intend to use them for; straightforward comparisons can help determine which one is right for you and how LLMs could shape the future of AI.