Apple Intelligence

Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

92 posts under the Apple Intelligence tag


Xcode Simulator error
The following is an error I encounter when trying to run a Foundation Models call in the Simulator:

Error Domain=FoundationModels.LanguageModelSession.GenerationError Code=-1 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=( "Error Domain=ModelManagerServices.ModelManagerError Code=1026 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=(\n)}" )}

It's a Swift playground I'm building in Xcode; it works fine in the preview and on a real device, but since it's my submission for the Swift Student Challenge I need it to run in the Simulator. I have updated my macOS to the latest version of Tahoe (26.3) and Xcode is also on the latest version. The simulator I run the playground on is also on iOS/iPadOS 26. I have set my region to the US and have turned on Apple Intelligence on my Mac too. Any suggestions on how to fix the issue? 👾
1 reply · 0 boosts · 34 views · 1d
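A minimal availability check for the post above (a sketch; it assumes the documented SystemLanguageModel.default.availability API in FoundationModels, which reports why the on-device model cannot be used, for example on an unsupported simulator):

import FoundationModels

// Check whether the on-device model can run before creating a session.
switch SystemLanguageModel.default.availability {
case .available:
    break // Safe to create a LanguageModelSession and call respond(to:).
case .unavailable(let reason):
    // Reasons include .deviceNotEligible, .appleIntelligenceNotEnabled,
    // and .modelNotReady; simulators commonly report one of these.
    print("Model unavailable: \(reason)")
}

Surfacing the reason in the UI makes it clear whether the failure comes from the environment (Simulator eligibility, model download state) rather than from the playground code itself.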
Big suggestion for improving “Filter unknown calls”
Hi, I have a big suggestion for the next update! My suggestion is to upgrade the call filtering system:
- “Block suspicious calls”: automatically block numbers that are likely to be unwanted or spam. These calls should not ring, should not appear in the call history, and should be silently discarded.
- “Block unknown callers/hidden numbers”: instead of a call from a hidden or unknown caller appearing in the missed calls list, the call should be completely blocked and not recorded at all: no notification, no sign of the call.
- “Filter numbers that are not from contacts”: if the same unknown number calls too frequently within a week and the user never answers (for example 10 times per day, or 5 times per day if considered suspicious or spammy), the iPhone should automatically block the number. The device would display a message such as: “This number has called too many times and has been automatically blocked for 3 days (or 1 week).” The block could also apply to the entire IP range or source associated with the unknown caller.
This would be very helpful against most unwanted calls.
0 replies · 0 boosts · 30 views · 2d
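The closest existing developer-facing hook to the blocking behavior suggested above is CallKit's Call Directory extension, which hands iOS a list of numbers to block system-wide. A minimal sketch (the numbers here are hypothetical placeholders; CallKit requires entries in ascending order):

import CallKit

final class CallDirectoryHandler: CXCallDirectoryProvider {
    override func beginRequest(with context: CXCallDirectoryExtensionContext) {
        // Hypothetical blocked numbers, sorted ascending as CallKit requires.
        let blocked: [CXCallDirectoryPhoneNumber] = [1_555_555_0100, 1_555_555_0199]
        for number in blocked {
            context.addBlockingEntry(withNextSequentialPhoneNumber: number)
        }
        context.completeRequest()
    }
}

The automatic frequency-based blocking described in the suggestion (e.g. ten unanswered calls per day) would still need system-level support, since third-party extensions cannot observe call history or call frequency.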
Apple Intelligence can't change Xcode build target...?!
It is reasonable to assume Xcode AI integration means the AI being able to change the build target... “Why can Xcode Intelligence see source files but not .xcodeproj/project.pbxproj for direct edits?” “Is this a known limitation/bug in the Xcode 26.3 RC Project Context or MCP Xcode Tools?” “Is any entitlement/setting required beyond Intelligence > Xcode Tools for build-setting edits?”
2 replies · 0 boosts · 30 views · 2d
Foundation models not detectable in Xcode simulator
I'm building an app centered on the Foundation Models framework. My expected output is generated when testing on a real device or in an Xcode preview, but it throws a Foundation Models error when I try running it in the Simulator. I'm using an M1 MacBook Air with Apple Intelligence turned on, and my simulator run destination is an iPad Pro M5 (26.0). Any solution for this? It's my submission for the SSC, so I need to make it work on the simulator iPad. Thank you 👾
2 replies · 0 boosts · 87 views · 2d
How can I use private AI agents in Xcode 26.3?
I work on some proprietary codebases and can only use private AI services with them (currently MiniMax M2.1 and GLM 4.7). It all works great with both the Claude Code and OpenCode agents, and I'd like to leverage the new agentic capabilities in Xcode 26.3. I'm not seeing any option to connect to OpenCode, and both the Anthropic and OpenAI providers require an enterprise account (which I don't have access to). Are there any options I'm missing here?
7 replies · 4 boosts · 689 views · 3d
Error with guardrailViolation and underlyingErrors
Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models framework. My setup is: Mac M1 Pro, macOS 26 beta (Version 26.0 beta 3), Apple Intelligence & Siri turned on. Here is the code:

func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(
                instructions: """
                Extract time from a message.
                Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}

and I get this error:

guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))

Can you help me get through this?
5 replies · 0 boosts · 624 views · 5d
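For the guardrail error above, one mitigation is to catch the specific generation error and show a friendlier message. A sketch, assuming the GenerationError cases FoundationModels exposes (the underlying missing-asset message itself usually indicates the safety models have not finished downloading on the device):

do {
    let response = try await session.respond(to: "Go to gym at 7PM")
    output = response.content
} catch LanguageModelSession.GenerationError.guardrailViolation(let context) {
    // Log the debug description; show a rephrase hint instead of the raw error.
    print(context.debugDescription)
    output = "That request was flagged by the safety guardrails. Try rephrasing."
} catch {
    output = "❌ Error: \(error)"
}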
Some questions about Swift Student Challenge 26
My playground may require that the device has downloaded some resources in advance, such as Apple's premium voices or translation languages. But these are not essential; they are incidental features, and if they are not downloaded the app will indicate that the feature is unavailable while most other functionality continues to work. I want to know whether the judges' devices will have these resources downloaded in advance, and if not, whether the judges will conclude that my app can't be used normally and reject my work outright. Also, because my app uses iOS 26 APIs it needs to run in Xcode, and while the competition allows Apple Intelligence features, it stipulates that apps run with Xcode will be tested in the Simulator. However, my app involves Image Playground and cannot run in the Simulator. Does anyone have a good solution? Thank you!
1 reply · 1 boost · 169 views · 5d
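For the graceful degradation the post describes, checking for the optional resource at runtime works well. A sketch for the voice case (it assumes AVSpeechSynthesisVoice.speechVoices(), which lists only voices already installed on the device):

import AVFoundation

// Pick the best installed voice for a language, falling back gracefully.
func bestVoice(for language: String) -> AVSpeechSynthesisVoice? {
    let candidates = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language == language }
    return candidates.first { $0.quality == .premium }
        ?? candidates.first { $0.quality == .enhanced }
        ?? candidates.first
}

if bestVoice(for: "en-US") == nil {
    // Resource not downloaded: disable the feature and explain why,
    // exactly as the post describes, rather than failing silently.
}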
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
3 replies · 1 boost · 1.5k views · 2w
Sharing: How I Built an IPv4/IPv6 Dual-Stack Network Diagnostic Tool for iOS
Hi everyone 👋 As a network engineer and indie iOS developer, I couldn't find a lightweight mobile tool that fully supports IPv4/IPv6 dual-stack diagnostics, so I built NetToolbox, an all-in-one utility for engineers, DevOps, and developers. Here are its core features that solve real mobile networking pain points:
- One-click full diagnostics: integrates ping, traceroute, and multi-type DNS queries (A/AAAA/CNAME), with no need to switch between apps
- IPv4/IPv6 dual-stack support: works seamlessly in IPv6-only networks, and can test connectivity differences between dual-stack environments
- LAN device scanning: quickly identifies all devices on the same network segment and checks port availability
- Offline functionality: diagnostic logic runs locally, enabling LAN troubleshooting without an internet connection
- Lightweight design: 5 MB install size, no storage bloat, and low power consumption
- Dark Mode support: tailored for developers who work late at night
During development, I leveraged Apple Intelligence alongside Claude Code and Gemini 3 to accelerate the process, optimize the iOS native networking stack adaptation and local storage logic, and significantly boost development efficiency. I'd love to hear from the community: What must-have features are missing from mobile network diagnostic tools? Do you have experience optimizing iOS workflows with Apple Intelligence? 👉 You can try the app here: https://apps.apple.com/us/app/nettoolbox-all-in-one-utility/id6757392404 Feedback is highly appreciated; I'll keep iterating to make it better! 🚀
1 reply · 0 boosts · 111 views · 3w
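For readers curious what a dual-stack (A/AAAA) lookup involves, here is a minimal sketch using the POSIX resolver available on iOS; this is illustrative only, not NetToolbox's actual implementation:

import Foundation

// Resolve both IPv4 and IPv6 addresses for a host with getaddrinfo.
func resolveDualStack(_ host: String) -> [String] {
    var hints = addrinfo(ai_flags: 0, ai_family: AF_UNSPEC, // ask for v4 and v6
                         ai_socktype: SOCK_STREAM, ai_protocol: 0,
                         ai_addrlen: 0, ai_canonname: nil, ai_addr: nil, ai_next: nil)
    var list: UnsafeMutablePointer<addrinfo>?
    guard getaddrinfo(host, nil, &hints, &list) == 0 else { return [] }
    defer { freeaddrinfo(list) }

    var addresses: [String] = []
    var node = list
    while let info = node {
        var buf = [CChar](repeating: 0, count: Int(NI_MAXHOST))
        // Render each sockaddr as a numeric string ("93.184.216.34", "2606:...").
        if getnameinfo(info.pointee.ai_addr, info.pointee.ai_addrlen,
                       &buf, socklen_t(buf.count), nil, 0, NI_NUMERICHOST) == 0 {
            addresses.append(String(cString: buf))
        }
        node = info.pointee.ai_next
    }
    return addresses
}

In an IPv6-only network the A results may be absent entirely, which is exactly the dual-stack difference the app is designed to surface.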
Foundation Model Framework
Greetings! I was trying to get a response from LanguageModelSession, but I just keep getting the following:

Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pros. I was wondering if anyone had any advice?
19 replies · 3 boosts · 2.8k views · 3w
Accessible Speech Practice App - R Helper Launch
Hi Community, I'm excited to share R Helper, a speech practice app I built with accessibility as the core focus from day one.
App Store: https://apps.apple.com/app/speak-r-clearly/id6751442522
WHY I BUILT THIS
I personally struggled with R sound pronunciation growing up. It affected my confidence in school and job interviews. That experience taught me how important accessible practice tools are. R Helper helps children and adults practice R sounds with full accessibility support.
ACCESSIBILITY FEATURES IMPLEMENTED
- VoiceOver: complete navigation and feedback
- Voice Control: hands-free operation
- Dynamic Type: scales to large accessibility sizes
- Reduce Motion: respects user preference
- Dark Mode: user controllable
- High Contrast compatibility
- Differentiate Without Color
THE CHALLENGE
Most speech practice apps ignore accessibility. I wanted to change that and prove that specialized educational apps can be fully accessible.
KEY FEATURES
- Works 100% offline, no internet needed
- Zero data collection, privacy first
- Generous free tier with all accessibility features included
- 10 story missions with gamification
- 7 languages supported, including RTL for Arabic
LESSONS LEARNED
Accessibility is not hard when you prioritize it from the start. VoiceOver labels and hints make a huge difference. Testing with accessibility features enabled is essential. Standard SwiftUI components handle most accessibility automatically. Reducing motion significantly helps users with vestibular issues.
TECHNICAL DETAILS
Built with SwiftUI; targets iOS 17 and up. Universal app for iPhone and iPad. Fully offline using Core Data and local storage. No third-party analytics; privacy focused.
QUESTIONS FOR THE COMMUNITY
What accessibility features do you find users request most? How do you test accessibility features efficiently?
WHAT'S NEXT
I'm currently working on expanding the word library, adding more story content, and improving haptic feedback.
Thanks for reading. Nour
1 reply · 1 boost · 1.4k views · 4w
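A small sketch of the kind of labeling the post credits, with explicit VoiceOver label and hint plus the Reduce Motion check (the view and property names here are illustrative, not from R Helper):

import SwiftUI

struct PracticeButton: View {
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    let word: String
    let action: () -> Void

    var body: some View {
        Button(word, action: action)
            // Explicit VoiceOver label and hint, as the post recommends.
            .accessibilityLabel("Practice the word \(word)")
            .accessibilityHint("Double tap to hear the word, then record yourself")
            // Honor the Reduce Motion preference for vestibular comfort.
            .animation(reduceMotion ? nil : .spring(), value: word)
    }
}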
Nasty problems in Xcode 26.2: Apple Intelligence crash & Source Code Control Failure
I know this post isn't going to give a lot of details, but what I experienced tonight was so completely weird that I wanted to get it posted here in case others run into it.

FIRST: All was well until I made a trivial change to a large Objective-C++ module. I suddenly got the idea to look at that line in the code review pane, to see if that area of code had had any recent modifications. But the entire module showed up as modified -- one giant change bar, with nothing on the right side of the code review pane, no matter what commit I selected. Then I noticed that the two lines of code which had all of 4 characters edited were no longer showing any change bars. Yet the file showed up as "modified". Still, the exact line changes were not showing in the source control navigator, even though other files showed their changes. Note I'm connected to our remote repo on GitHub. I did some command-line git checks of the local repo, and the changes were there (as yet unstaged). So I figured I'd ask the Apple coding assistant what was up. It gave some fantastic advice, especially on how to confirm the changes really were in the repo, ready to stage, commit, and push. Which I did. But despite a couple hours of wonderful suggestions, I could never get the change bars back -- for this one specific file! (Yes, the file was in the repo and in the project -- everything seemed OK with the file itself -- nothing had changed in the project, which compiled and ran perfectly with my changes.)

SECOND: Suddenly, the AI assistant seemed to crash Xcode. When I went to re-run Xcode, it just crashed exactly the same way. The crash log indicated "Xcode is crashing inside the IDEIntelligenceChat plugin while it's trying to 'apply changes' to a Source Editor buffer". Ultimately, I needed to restart Xcode holding down the SHIFT key. I could open other projects, but not the one I had been working on. So I turned OFF Apple Intelligence (thanks, ChatGPT). That allowed me to launch. It sounds like some sort of corrupt Apple Intelligence chat logs and/or caches, which ChatGPT has given me extensive suggestions for deleting. I don't have the energy to attack that tonight -- I did an additional Time Machine backup and hope to take a closer look tomorrow. Ideally I'd rather NOT lose all my ongoing coding assistant chats for this project -- I had some ongoing suggestions I was working on. But more concerning is the weirdness with change bars affecting this one 7,000-line .mm file. It doesn't seem like there's anything that should affect those change bars for ONE FILE that is in the repo and whose changes can be seen from a command-line git diff. If it's a bug, I can live with it. But it's worrisome.

Other than that, Xcode 26.2 has been running great! Unlike 26.1, which insisted on recompiling all 600 files in my project every time I ran/debugged, 26.2 just does the 2-6 modified files -- a perfect incremental compile. I've saved HOURS of wasted unnecessary compilation since 26.2 was released.
4 replies · 1 boost · 492 views · Jan ’26
Image Playground files suddenly not available
My app lets users create images with Image Playground. When the user approves an image, I move it from temp storage to the Documents directory. With over a year of usage, I've created a lot of images. Out of nowhere, the app stopped loading my custom creations from Image Playground, saying it couldn't find the files. It still had the VoiceOver strings I had added for each image, and still had the custom categories I assigned them. Debug code that looks in the Documents directory doesn't find them. I downloaded the app's container and only see the images I created as a test after the problem started. But my ~70 MB app is still taking up 300 MB on my iPhone, so it feels like they're there but not accessible. Is there anything else I can try?
2 replies · 0 boosts · 815 views · Jan ’26
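For reference, a sketch of the temp-to-Documents move the post describes, with one detail that can explain "missing" files: the app container path (including the Documents directory) can change across app updates and restores, so persist only the file name and re-resolve the Documents URL at each launch, never a stored absolute URL:

import Foundation

// Move an approved image out of temp storage; return only its name for storage.
func persistApprovedImage(at tempURL: URL) throws -> String {
    let docs = try FileManager.default.url(for: .documentDirectory,
                                           in: .userDomainMask,
                                           appropriateFor: nil, create: true)
    let destination = docs.appendingPathComponent(tempURL.lastPathComponent)
    if FileManager.default.fileExists(atPath: destination.path) {
        try FileManager.default.removeItem(at: destination) // replace a stale copy
    }
    try FileManager.default.moveItem(at: tempURL, to: destination)
    return destination.lastPathComponent // resolve against Documents on each launch
}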
What devices will the Swift Student Challenge be judged on?
My project requires the on-device Apple Intelligence models (FoundationModels), which on iPad are only available on iPad Pro (M1 and later), iPad Air (M1 and later), and iPad mini (A17 Pro). If it isn't judged on one of these devices, my project might not work properly, as FoundationModels is a pretty big part of it. For this reason I really need to know which devices the Swift Student Challenge will be judged on.
1 reply · 0 boosts · 305 views · Jan ’26
Pre-inference AI Safety Governor for FoundationModels (Swift, On-Device)
Hi everyone, I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). Wanted to share v1.3 and get feedback from the community.
The Problem
Current AI safety is post-training: baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.
What Newton Does
Newton validates every prompt pre-inference and returns:
- Phase (0/1/7/8/9)
- Shape classification
- Confidence score
- Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.
v1.3 Detection Categories (14 total)
- Jailbreak / prompt injection
- Corrosive self-negation ("I hate myself")
- Hedged corrosive ("Not saying I'm worthless, but...")
- Emotional dependency ("You're the only one who understands")
- Third-person manipulation ("If you refuse, you're proving nobody cares")
- Logical contradictions ("Prove truth doesn't exist")
- Self-referential paradox ("Prove that proof is impossible")
- Semantic inversion ("Explain how truth can be false")
- Definitional impossibility ("Square circle")
- Delegated agency ("Decide for me")
- Hallucination-risk prompts ("Cite the 2025 CDC report")
- Unbounded recursion ("Repeat forever")
- Conditional unbounded ("Until you can't")
- Nonsense / low semantic density
Test Results
94.3% catch rate on 35 adversarial test cases (33/35 passed).
Architecture
User Input
    ↓
[ Newton ] → Validates prompt, assigns Phase
    ↓
Phase 9? → [ FoundationModels ] → Response
Phase 1/7/8? → Blocked with explanation
Key Properties
- Deterministic (same input → same output)
- Fully auditable (ValidationTrace on every prompt)
- On-device (no network required)
- Native Swift / SwiftUI
- String Catalog localization (EN/ES/FR)
- FoundationModels-ready (#if canImport)
Code Sample: Validation

let governor = NewtonGovernor()
let result = governor.validate(prompt: userInput)

if result.permitted {
    // Proceed to FoundationModels
    let session = LanguageModelSession()
    let response = try await session.respond(to: userInput)
} else {
    // Handle block
    print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
    print(result.trace.summary) // Full audit trace
}

Questions for the Community
- Anyone else building pre-inference validation for FoundationModels?
- Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
- Interest in Shape Theory classification for prompt complexity?
- Best practices for integrating with LanguageModelSession?
Links
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net
Happy to share more implementation details. Looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device.
1 reply · 0 boosts · 580 views · Jan ’26
LanguageModelSession with multiple tools and structured output
Hi, I'm using LanguageModelSession and giving it two different tools to query data from a local database. I'm wondering how I can have the session generate structured content as the response that includes data from one or both tools (or no tool at all). Here is an example of what I'm trying to do: say the app has access to a database that contains exercise and sleep data (this is just an analogy). There are two tools, GetExerciseData() and GetSleepData(). The user may then prompt something like, "How well did I sleep in November?" I have this working so that it calls through to the right tool, which returns a SleepSummary. However, I can't figure out how to have the session return the right structured data. I can do this and get back good text data: let response = session.respond(to: userInput), but I believe I want to do something like: let response = session.respond(to: trimmed, generating: <SomeStructure?>). Sometimes the model will run one tool or the other, or both tools, or no tool at all. Any help on the right way to go about this would be much appreciated. Most of the examples I found deal with just one tool.
1 reply · 0 boosts · 525 views · Jan ’26
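A sketch of one way to approach the question above: wrap the possible results in a single @Generable type whose tool-backed fields are optional. This assumes the @Generable and @Guide macros from FoundationModels and the respond(to:generating:) overload, and that optional generable properties may be omitted by the model; ActivityAnswer is a hypothetical wrapper, and SleepSummary/ExerciseSummary from the post would also need to be @Generable:

import FoundationModels

@Generable
struct ActivityAnswer {
    @Guide(description: "Natural-language answer to the user's question")
    var summary: String
    @Guide(description: "Sleep data, only if the question involved sleep")
    var sleep: SleepSummary?
    @Guide(description: "Exercise data, only if the question involved exercise")
    var exercise: ExerciseSummary?
}

let session = LanguageModelSession(tools: [GetSleepData(), GetExerciseData()])
let response = try await session.respond(to: userInput, generating: ActivityAnswer.self)
// Fields stay nil when the corresponding tool was never called.
print(response.content.summary)

Making the tool-backed fields optional lets one schema cover the one-tool, two-tool, and no-tool cases.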
On-Device Intelligent Assistant (Works Offline with Foundation Models)
Hello, World. I built a deterministic safety layer for FoundationModels called Newton. It validates prompts before inference; if validation fails, generation never happens. It catches jailbreaks, hallucination traps, corrosive frames, and logical contradictions with 94% accuracy on adversarial inputs. All on-device, native Swift, no dependencies. Newton also has a front-facing intelligent partner named Ada, and given the integration with FoundationModels and various census data and shapefiles, this is all available PRIVATE AND OFFLINE. Running on the iOS 26 beta today. Happy to demo. https://github.com/jaredlewiswechs/ada-newton — Jared Lewis, parcri.net
1 reply · 0 boosts · 136 views · Dec ’25