All AI models run locally on your device. No cloud services, no subscription required for core features, and no internet connection needed after initial setup.
Optimized for Apple Silicon with Metal GPU support for fast inference.
Built natively for macOS with efficient memory management.
Supports 8 AI models across 4 families: Llama 3.2, Phi-3, Qwen 2.5, and Gemma 2.
Writing assistance includes autocomplete, correction, rephrasing, summarization, and translation.
Conversational AI assistant with context awareness. Available in Pro.
Create custom AI tools with your own prompts and parameters.
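As a rough illustration, a custom tool pairs a prompt template with generation parameters. The sketch below is hypothetical: the field names and model identifier are illustrative assumptions, not the app's actual schema.

```json
{
  "name": "Formalize",
  "prompt": "Rewrite the following text in a formal, professional tone:\n\n{input}",
  "model": "llama-3.2-3b",
  "parameters": {
    "temperature": 0.3,
    "max_tokens": 512
  }
}
```

In a setup like this, `{input}` would be replaced with the selected text at run time, and lower temperature values keep rewrites closer to the original wording.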
Markdown syntax with live preview and export options.
AI-powered search across all notes and conversations.
The free version includes all core features. Pro unlocks additional models and the chat interface.