Apple’s latest developer beta, released last week, includes some of the generative AI features announced at WWDC that will be coming to iPhones, iPads, and Macs in the coming months. It turns out Macs can also reveal the instructions programmed into the models behind some of those Apple Intelligence features.
These work like pre-prompts: instructions handed to a model before you ever type anything, and they have been uncovered before in AI tools like Microsoft Bing and DALL-E. Now, a member of the macOS 15.1 beta subreddit has posted that they discovered files containing these backend prompts. The files can’t be modified, but they do give us some early hints about how the sausage is made.
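If you haven’t run into the term before, here’s a minimal sketch of what a pre-prompt does: the hidden instructions get prepended to whatever the user types before the combined text ever reaches the model. The strings and helper function below are illustrative assumptions, not Apple’s actual code; the instruction text is borrowed from the prompts quoted later in this story.

```python
# Illustrative only: not Apple's code, just the general pre-prompt pattern.
PRE_PROMPT = (
    "You are a helpful mail assistant. "
    "Don't hallucinate. Don't fabricate factual information."
)

def build_model_input(user_text: str) -> str:
    """Attach the hidden instructions ahead of the user's request."""
    return f"{PRE_PROMPT}\n\nUser request: {user_text}"

# The model never sees the user's text alone; it sees this combined string.
print(build_model_input("Summarize this email for me."))
```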
In one example from the files, an AI bot cast as a “helpful mail assistant” is instructed to ask a series of questions based on the content of an email, which could be part of Apple’s Smart Reply feature that suggests possible replies.
Screenshot: Wes Davis/The Verge
This one appears tied to Apple’s “Rewrite” feature, one of the writing tools you can access by highlighting text and right-clicking (or long-pressing on iOS). Its instructions include the lines, “Keep your answers under 50 words. Don’t hallucinate. Don’t fabricate factual information.”
Screenshot: Wes Davis/The Verge
This short prompt tells the model to summarize an email, with careful instructions not to answer any question the email contains.
Screenshot: Wes Davis/The Verge
I’m pretty sure this is the set of instructions for creating a “Memories” video in Apple Photos: “Do not write religious, political, harmful, violent, sexual, obscene, or in any way negative, sad, or provocative stories,” a line that might explain why the feature rejected my prompt for “images of sadness.”
That’s a shame, but it’s not hard to work around: the feature did generate a video for me in response to the prompt “provide a video of people who are mourning.” I won’t share the resulting video because it includes photos of people other than me, but I will share the best photos it included in the slideshow.
The files contain many more prompts, all pointing to hidden instructions given to Apple’s AI tools before your own prompt is passed along, but there’s one final instruction worth highlighting:
In the files I viewed, the model was called “ajax,” which some Verge readers may remember as the rumored internal name for Apple’s own LLM from last year.
The person who discovered the prompts also posted directions for finding the files within the macOS Sequoia 15.1 developer beta.
Expanding the “purpose_auto” folder reveals a list of other folders with long alphanumeric names, most of which contain an AssetData folder with a “metadata.json” file inside. Opening those files reveals some code and, in some cases, at the bottom, the instructions that get passed to your machine’s local copy of Apple’s LLM. But remember: these live in a part of macOS that holds your system’s most sensitive files, so proceed with caution.
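If you’d rather not click through dozens of alphanumeric folders by hand, here’s a minimal Python sketch that walks the “purpose_auto” tree and prints any metadata.json that mentions a prompt. The parent path below is the location reported for the 15.1 developer beta and could change between builds, so treat it as an assumption rather than a stable API.

```python
import json
from pathlib import Path

# Assumed location of the generative-model assets in the macOS Sequoia
# 15.1 developer beta; Apple could move or rename this between builds.
ROOT = Path(
    "/System/Library/AssetsV2/"
    "com_apple_MobileAsset_UAF_FM_GenerativeModels/purpose_auto"
)

for metadata in sorted(ROOT.glob("*/AssetData/metadata.json")):
    try:
        data = json.loads(metadata.read_text())
    except (OSError, json.JSONDecodeError):
        continue  # unreadable or malformed file; skip it
    # The prompt text sits somewhere inside the JSON, so dump the whole
    # document and keep only the files that mention a prompt at all.
    text = json.dumps(data, indent=2)
    if "prompt" in text.lower():
        print(f"== {metadata} ==")
        print(text[:2000])  # first chunk only, to keep output readable
```

Running it with no matches just prints nothing, which likely means the beta on your machine stores the assets elsewhere or you lack read permission for that directory.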