Apple Says It Is Fixing AI Notification Summaries. It Missed the Most Important Part
In iOS 18, for example, your iPhone uses Apple Intelligence to summarize long threads of notifications from Messages, making it easier to catch up on conversations you missed. It will also group together notifications from third-party apps and summarize their contents. All of that could be genuinely useful, especially if you get a lot of notifications throughout the day.
The thing is, companies don’t ship ideas; they ship features, and, as a feature, notification summaries are pretty bad. More importantly, they are bad in ways that hurt both the user experience and Apple’s reputation.
Part of the problem is the grouping of notifications. Just because a bunch of notifications came from the same app doesn’t mean they describe one continuous event. There are plenty of examples of people sharing notification summaries from their smart doorbells that say things like “multiple people at front and back door.” Obviously, that isn’t what happened, but because of the way notification summaries work, if you get three notifications that someone is at your door, Apple Intelligence will aggregate those visits together and summarize them in an ominous way.
The other problem is that, because notification summaries are generated by Apple Intelligence, which is built on LLMs, they just get things wrong. The BBC has complained several times that Apple Intelligence summarized its news headlines in ways that were completely untrue. It pointed to a summary claiming that the suspect in the murder of the UnitedHealthcare CEO had shot himself. That wasn’t true, and you can understand why a news organization would be concerned about false summaries appearing next to its name.
Apple hasn’t said much but has confirmed that it will add a label to notification summaries to let people know they were created using Apple Intelligence. I guess the idea is to make it clearer to users that what they are reading is a summary, not the actual headline from the news organization.
Adding the label isn’t necessarily a bad idea, but it doesn’t actually fix the real problem, which is that Apple Intelligence is just bad at this. It’s hard to imagine that no one thought about the possibility of this happening, but Apple shipped the feature anyway, without any kind of guardrails or safeguards to keep the summaries from getting it wrong.
Imagine if a notification summary accidentally conflated a headline about the President visiting a foreign country with one about a celebrity who had passed away: “President Biden passes away while visiting…” That is the kind of thing that can affect markets, not to mention national security.
Ultimately, the lesson here is that there is a big difference between ideas and features. I don’t think Apple should have shipped notification summaries yet. This was not an unforeseeable problem; in fact, confidently making things up is exactly how LLMs work.
Even worse, Apple’s solution doesn’t actually fix anything. The problem isn’t that people might not realize the summaries are generated by Apple Intelligence. You can add all the labels you want; the problem is that Apple Intelligence is making up false information, sometimes in direct contradiction to the original notification. That the company doesn’t seem to see this is as big a problem as the summaries themselves.