Apple has long used AI on its devices, but it rarely chases trends.
While tech titans raced to develop generative AI products after OpenAI kicked off the boom with the launch of ChatGPT, Apple lagged behind.
As others leapt ahead with Large Language Model products, Apple was still leaning on its voice assistant, Siri.
Then, things changed when Apple finally unveiled its long-awaited AI strategy.
During its 2024 Worldwide Developers Conference (WWDC), Apple announced Apple Intelligence, a technology woven across its suite of apps that promised to significantly improve not only Siri but much of the rest of the system.
Among the many features is the ability to summarize news stories and display the summaries as notifications.
The feature was meant to help users stay on top of real-time information, until people began to realize that it could produce convincing fabricated stories.

The AI-powered news summarization feature was rolled out to iPhones, iPads, and Mac computers as part of the iOS 18 software cycle, aiming to provide concise, automated news summaries for users on the go.
However, reports of false information—often referred to as "hallucinations" in the AI community—have prompted the tech giant to put the feature on hold while it works on improvements.
One of the most notable issues surfaced when the AI-generated alerts incorrectly attributed fabricated stories to trusted news outlets, such as the BBC.
At the time, news summaries carrying the BBC logo falsely claimed that Luke Littler had won the PDC World Darts final before he had even played in it, and that the tennis player Rafael Nadal had "come out" as gay.
Another notable example was when Apple's AI falsely said that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.
Other news organizations were also affected by the errors, with a summary of New York Times alerts wrongly claiming that Israel’s prime minister, Benjamin Netanyahu, had been arrested.

This caused significant backlash, with media organizations criticizing the inaccuracies and demanding corrective action.
The errors, which included misleading summaries and completely false claims, highlighted the potential risks of deploying generative AI systems for critical information dissemination without adequate safeguards.
Apple initially said it would release an update to fix the issue. But as criticism mounted, the company was forced to suspend the feature entirely.
"Notification summaries for the news and entertainment category will be temporarily unavailable," Apple said. "We are working on improvements and will make them available in a future software update."
Apple's decision underscores the challenges of integrating AI into services where accuracy is paramount.
The move to pause and address the issue reflects a growing emphasis on responsibility and accountability in the deployment of AI technologies. As the company works to refine its algorithms, this incident serves as a valuable case study in the complexities of integrating AI into everyday applications.

Generative AI, while powerful and versatile, is still prone to occasional errors that can undermine user trust. Apple has acknowledged the issue and assured users that it is working to resolve these problems. The company plans to reintroduce the feature in a future update once the necessary improvements have been made.
This development comes at a time when major tech companies are racing to incorporate AI into their products, often with varying levels of success.
While tools like ChatGPT and Bard have demonstrated the immense potential of generative AI, incidents like these serve as a reminder that the technology is far from perfect. Balancing innovation with reliability remains a significant challenge for companies looking to harness the power of AI.