AIJB

Why Your iPhone Is Getting Smarter Without Apple Seeing Your Data

2025-11-11 journalism

San Francisco, Tuesday, 11 November 2025.
Imagine this: your phone understands exactly what you mean, without a single server ever hearing a word of your conversation. That’s the heart of Apple Intelligence, active on iPhone, iPad, and Mac since November 2025. The most striking feature? Most AI processing happens entirely locally on your device – your words, photos, and messages stay on your own screen. No data goes to the cloud. And the technology that has always protected your privacy is now better than ever: from automatic summarisation to removing unwanted people from photos with a simple text command. Best of all, it already works in Dutch, with an expanded feature set finally fully available. The question is no longer whether AI is safe, but how quickly you can start using it.

The Technology Behind Apple Intelligence: Local, Secure, and Smart

Apple Intelligence is not a traditional cloud-based AI, but a system that primarily operates directly on your device. This means nearly all processing of text, speech, and images takes place on your iPhone, iPad, or Mac – without personal data ever being sent to a server [1]. The technology leverages the Neural Engine in Apple Silicon chips, such as the M1 and newer, and Apple’s custom on-device AI models. These models are designed to function within the constraints of a mobile device, yet deliver performance that, in internal Apple evaluations, surpasses the smaller models of Mistral AI, Microsoft, and Google [2]. Most AI tasks – such as rewriting an email, summarising a message, or removing unwanted objects from a photo – are executed without any connection to the cloud. Only for complex tasks requiring significant computational power, such as answering intricate questions, does Apple use a secure, end-to-end encrypted cloud service: Apple’s Private Cloud Compute, which runs exclusively on Apple Silicon servers and never stores personal data [1][3]. This underscores Apple’s commitment to user privacy as a core principle of its AI strategy [2][3].

What Journalists Can Do Now: From Summarisation to Image Analysis

For journalists, Apple Intelligence is a powerful tool in daily workflows. The ‘Writing Tools’ feature in the Notes and Messages apps allows users to rewrite, simplify, or summarise text with a single prompt [1]. For example, a raw interview transcript can be pasted in, and the system generates a concise summary of the key points within seconds [4]. In practice, this saves time when drafting articles or processing large volumes of material. Similarly, the ‘Reduce Interruptions’ feature filters notifications based on content and urgency, enabling journalists to focus without being distracted by less relevant messages [1]. For visual journalism, Visual Intelligence is a game-changer: by scanning an image with the iPhone or iPad camera, the system can identify objects, plants, animals, or even an address on a poster, and automatically create a calendar event or launch a Google search [1][4]. The ‘Clean Up’ feature allows users to remove unwanted people from a photo with a simple command such as: ‘Remove that person in the red jacket’ [1]. All these functions run locally, meaning no photo or text ever leaves your device [3].

The Role of ChatGPT and the Expansion of Creative Possibilities

Apple Intelligence integrates not only its own AI but also external capabilities. Since iOS 18.2, there has been an integration with ChatGPT, which handles more complex queries that cannot be processed locally, such as summarising long texts or generating scripts [1][4]. This integration is free to use, but users with a ChatGPT Plus subscription can link their existing account to access additional features, such as generating content in multiple languages [1]. The ‘Image Playground’ feature allows users to generate images from text descriptions, for example: ‘a journalist in a silver helmet during an urban war situation’ [1][4]. These images are generated entirely on-device and can be used directly in articles or social media posts. Apple has also prioritised preventing deepfakes: all AI-generated content in Image Playground is automatically labelled with an ‘AI-generated’ badge, so users know what they are viewing [1][4]. This is crucial for transparency and trust in creative industries. The Genmoji feature enables users to create custom emoji via text, such as: ‘a cow holding a microphone while it’s raining’ [1][4]. These can be used immediately in messages and give journalists’ social media posts a distinctive voice.

The Price of Privacy: Accuracy, Limitations, and Ethical Dilemmas

While Apple Intelligence protects user privacy in unprecedented ways, challenges around accuracy and accountability remain. In December 2024, the BBC reported that Apple Intelligence had misleadingly summarised a news article, claiming that Luigi Mangione had killed himself – which was false [2]. Similarly, incorrect information was provided about the winner of the PDC World Darts Championship, and a personal story was wrongly attributed to Rafael Nadal [2]. Apple responded by temporarily disabling AI-based notification summaries in iOS 18.3, and only re-enabled them in iOS 26 beta 4 with a clear label: ‘Summarised by Apple Intelligence’ [2]. This highlights that even the most advanced AI systems can make mistakes – especially when processing fast-changing news content. Additionally, a legal complaint has been filed in the US alleging that Apple misled ‘millions of consumers’ by claiming features were available when they were not [2]. The gap between marketing and reality continues to pose a risk. Furthermore, there are technical limitations: as of November 2025, Apple Intelligence still does not support Dutch for all features, such as live translation in Phone or FaceTime, and the advanced Siri with contextual knowledge is scheduled for release in spring 2026 [1]. Also, the ‘Reduce Interruptions’ feature was not fully available in Dutch in 2025 [1].

The Future of AI in Journalism: How Far Does Responsibility Go?

Apple Intelligence is not only a technological leap but also a moral question. The focus on local processing represents a deliberate break with the cloud-based models used by others, such as ChatGPT, where data is often stored and used to train models [1][3]. Apple states that its AI is ‘designed to protect privacy by processing data locally, without needing to send it to the cloud’ [4]. But what happens when a journalist uses AI-generated content to write an article? Is the article still trustworthy? And how can a reader determine whether a text, photo, or audio file was AI-generated? This is where innovation meets responsibility. Apple already has a plan to label AI-generated content in the future, but its effectiveness will depend on consistency and transparency [1]. The company is also actively expanding its AI capabilities: on 10 and 11 November 2025, multiple job postings were published for AI engineers, machine learning engineers, and UX designers, with a focus on Siri, creative apps, and agent systems across multiple languages, including Dutch [5]. This indicates that Apple Intelligence is not standing still but evolving rapidly – even within the realities of the news industry [5].

Sources