Mine StableDiffusion is a native, offline-first AI art generation app that brings the power of Stable Diffusion models to your fingertips. Built with modern Kotlin Multiplatform technology and powered by the blazing-fast stable-diffusion.cpp engine, it delivers desktop-class performance across Android, iOS, and Desktop platforms.
- 🚀 Native Performance - C++ backend with JNI bindings for maximum speed
- 🔒 Privacy First - 100% offline, all processing happens on your device
- 🎨 Modern UI - Beautiful Compose Multiplatform interface
- 📱 True Multiplatform - Shared codebase for Android, iOS & Desktop
- 🔧 Model Flexibility - Support for FLUX, SDXL, SD3, and many more
- ⚡ Hardware Accelerated - Vulkan 1.2+ (Android/Linux/Windows) & Metal (macOS)
Mine StableDiffusion supports a wide range of models. To help you choose the best model for your device, we've organized them by performance requirements:
Make sure the model file is smaller than your device's available VRAM.
> [!TIP]
> Start Small: We recommend starting with smaller models (e.g., SD-Turbo, SD 1.5) and gradually trying larger ones. This allows you to gauge your device's capabilities and identify performance bottlenecks effectively.
Ideal for older phones or integrated graphics. High speed, low memory usage.
- ✅ SD-Turbo - Extremely fast 1-step generation
- ✅ SD 1.x / SD 2.x / Illustrious
- ✅ SDXL-Turbo - Fast, high-quality 512x512 generation
Good balance between quality and speed. Works well on most modern devices.
- ✅ 🎨 Chroma / Chroma1-Radiance - Vibrant color generation
- ✅ 🖼️ Z-Image - Advanced image synthesis
Best for high-detail 1024x1024+ generation. Requires more VRAM and time.
- ✅ SDXL - Standard high-quality base model
- ✅ SD3 / SD3.5 - Stability AI's latest high-fidelity architecture
- ✅ 👁️ Ovis-Image - Vision-language model
State-of-the-art models with massive parameter counts. Best for flagship phones or dedicated GPUs.
- ✅ FLUX.1-schnell / dev - Next-gen image quality
- ✅ FLUX.2-dev - Latest and most capable iteration
Generate stunning images from text descriptions with various models
Input: "A serene mountain landscape at sunset, digital art"
Output: High-quality AI-generated image
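For a concrete picture of the knobs involved in a request like this, here is a minimal Kotlin sketch of what a text-to-image call might carry; the type and field names below are illustrative assumptions, not the app's actual API:

```kotlin
// Hypothetical request shape: field names and defaults are assumptions,
// not the app's real API surface.
data class GenerationRequest(
    val prompt: String,
    val negativePrompt: String = "",
    val width: Int = 512,
    val height: Int = 512,
    val steps: Int = 20,        // turbo-class models can get by with 1 step
    val cfgScale: Float = 7.0f, // classifier-free guidance strength
    val seed: Long = -1L        // -1 = pick a random seed
)

val request = GenerationRequest(
    prompt = "A serene mountain landscape at sunset, digital art"
)
```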
The Advanced Settings page provides fine-grained control over the inference engine. Below is a summary of each toggle and its impact:
| Setting | Description | Effect When ON | Effect When OFF | Recommendation |
|---|---|---|---|---|
| Offload to CPU | Offloads model computations from GPU to CPU | Saves GPU/VRAM at the cost of slower generation speed. | All computation stays on GPU (faster but needs more VRAM). | Enable on low-VRAM devices. |
| Keep CLIP on CPU | Forces the CLIP text encoder to stay on CPU | Frees GPU memory for image generation; slightly slower prompt encoding. | CLIP runs on GPU (faster but uses more VRAM). | ✅ Enabled by default on macOS to prevent potential crashes. |
| Keep VAE on CPU | Forces the VAE decoder to stay on CPU | Frees GPU memory; decoding step is slower. | VAE runs on GPU (faster final decode). | Enable if you encounter OOM errors during decode. |
| Enable MMAP | Memory-maps model weights from disk instead of loading them entirely into RAM | Lower initial RAM spike; the OS pages weights in on demand (more disk I/O). | Entire model is loaded into RAM upfront (higher peak RAM, lower disk I/O). | Disable if you experience slow generation on devices with slow storage. |
| Direct Convolution | Uses a direct convolution implementation in the diffusion model | Experimental performance boost on some hardware. | Standard im2col-based convolution is used. | Try enabling to see if it improves speed on your device; disable if quality degrades. |
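To make the table concrete, the following is a hedged Kotlin sketch of how these toggles might be bundled before being handed to the native layer; `EngineSettings` and `defaultsFor` are hypothetical names, not the project's confirmed interface:

```kotlin
// Hypothetical settings bundle; names mirror the table above but are not
// the app's confirmed API.
data class EngineSettings(
    val offloadToCpu: Boolean = false, // trade generation speed for lower VRAM use
    val clipOnCpu: Boolean = false,    // keep the CLIP text encoder on the CPU
    val vaeOnCpu: Boolean = false,     // keep the VAE decoder on the CPU
    val enableMmap: Boolean = true,    // memory-map weights instead of loading into RAM
    val directConv: Boolean = false    // experimental direct convolution path
)

// Platform defaults as described in this README (assumed helper).
fun defaultsFor(platform: String): EngineSettings = when (platform.lowercase()) {
    "macos"   -> EngineSettings(clipOnCpu = true) // avoids the crashes noted above
    "android" -> EngineSettings(enableMmap = true)
    else      -> EngineSettings()
}
```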
Model Weight Type (wtype): Controls how model weights are stored in memory. Lower bit-depth reduces RAM usage but may degrade image quality.
> [!TIP]
> K-variants (Q6_K, Q5_K, Q4_K, Q3_K, Q2_K) offer better quality at the same bit-depth compared to their legacy counterparts. Most users should keep Auto and only change this if they have specific memory constraints.
> [!WARNING]
> Changing the weight type requires reloading the model, which can take a long time. Only change this setting if you understand the trade-offs.
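As a rough illustration of the memory trade-off, here is a hedged Kotlin sketch using approximate effective bits-per-weight figures for GGML-style quantization; the exact figures and the set of types the app exposes are assumptions:

```kotlin
// Hedged sketch: approximate effective bits-per-weight for GGML-style
// quantization types. Exact figures vary by tensor mix; treat as estimates.
enum class WeightType(val bitsPerWeight: Double) {
    AUTO(Double.NaN), // let the engine infer the type from the model file
    F16(16.0),
    Q8_0(8.5),
    Q6_K(6.6),  // K-variants: better quality than legacy quants at similar size
    Q5_K(5.5),
    Q4_K(4.5),
    Q3_K(3.4),
    Q2_K(2.6)
}

// Rough RAM estimate for a model's weights at a given type (illustrative only).
fun estimateWeightGiB(paramsBillions: Double, type: WeightType): Double =
    paramsBillions * 1e9 * type.bitsPerWeight / 8.0 / (1 shl 30)
```

For example, a 2.6B-parameter diffusion model at Q4_K works out to roughly 2.6e9 × 4.5 / 8 bytes, about 1.4 GiB, which is why lower bit-depths matter on VRAM-constrained devices.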
| Platform | Status | Requirements |
|---|---|---|
| 🤖 Android | ✅ Supported | Android 11+ (API 30+) with Vulkan 1.2 |
| 🪟 Windows | ✅ Supported | Windows 10+ with Vulkan 1.2 |
| 🐧 Linux | ✅ Supported | Vulkan 1.2+ drivers |
| 🍎 macOS | ✅ Supported | Metal support required |
| 📱 iOS | ✅ Supported | Metal support required |
> [!TIP]
> Memory Optimization:
> - Android: Mmap is enabled by default. You can manually disable it in Settings if you encounter any issues.
> - macOS: CLIP on CPU is enabled by default to prevent potential crashes during generation.
> [!NOTE]
> Vulkan Performance: Vulkan is currently used as a general-purpose acceleration backend. While it ensures broad compatibility, generation speeds may not match fully optimized native implementations.
Created something amazing? We'd love to see it! Share your generation details (prompt, seed, model, etc.) to help others learn and create better art.
👉 Submit your creation here
```mermaid
graph TB
    A[Compose Multiplatform UI] --> B[Kotlin ViewModels]
    B --> C[Koin DI]
    C --> D[JNI Bridge]
    D --> E[C++ Native Layer]
    E --> F[stable-diffusion.cpp]
    F --> G[Vulkan/Metal Backend]
```
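The JNI Bridge box above is where Kotlin hands off to C++. A minimal sketch of what that boundary could look like, assuming a loadable native library; the library name and function signatures are illustrative, not the project's confirmed interface:

```kotlin
// Hypothetical JNI bindings; library name and function signatures are
// illustrative assumptions, not the project's confirmed interface.
object NativeBridge {
    init {
        System.loadLibrary("stable-diffusion") // assumed native library name
    }

    // Returns an opaque handle to the native context, or 0 on failure.
    external fun loadModel(modelPath: String, weightType: Int, clipOnCpu: Boolean): Long

    // Runs txt2img and returns raw RGB pixels; blocking, so call it off the main thread.
    external fun generate(
        ctx: Long, prompt: String,
        width: Int, height: Int, steps: Int, seed: Long
    ): ByteArray

    // Frees the native context.
    external fun freeModel(ctx: Long)
}
```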
- Language: Kotlin Multiplatform
- UI Framework: Compose Multiplatform
- Dependency Injection: Koin v4.1.1
- Navigation: Jetpack Navigation Compose
- Networking: Ktor 3.2.3 + Sandwich 2.1.2
- Image Loading: Coil3 v3.3.0
- Concurrency: Kotlin Coroutines
- Native Engine: stable-diffusion.cpp
- Native Engine (LLM): llama.cpp
- Android: Android 11+ device with Vulkan 1.2 support
- Desktop: Windows/Linux/macOS with compatible graphics drivers
- Development: Android Studio Ladybug or later / IntelliJ IDEA
- Visit Releases
- Download the appropriate package for your platform
- Install and launch
```bash
# Clone the repository
git clone https://github.com/Onion99/KMP-MineStableDiffusion.git
cd KMP-MineStableDiffusion

# Run on Desktop
./gradlew :composeApp:run

# Build for Android
./gradlew :composeApp:assembleDebug
```

- Launch the app
- Load your Stable Diffusion model (GGUF format)
- Enter your text prompt
- Click generate and watch the magic happen! ✨
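Under the hood, that generate click roughly corresponds to a blocking native call dispatched off the main thread. A hedged sketch reusing the hypothetical `NativeBridge` from the architecture section above:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Illustrative only: ties the hypothetical NativeBridge sketch together.
suspend fun generateImage(prompt: String): ByteArray = withContext(Dispatchers.IO) {
    val ctx = NativeBridge.loadModel(
        modelPath = "/models/sd-turbo.gguf", // hypothetical path
        weightType = 0,                      // 0 = Auto in this sketch
        clipOnCpu = false
    )
    require(ctx != 0L) { "Failed to load model" }
    try {
        NativeBridge.generate(ctx, prompt, width = 512, height = 512, steps = 1, seed = -1L)
    } finally {
        NativeBridge.freeModel(ctx)
    }
}
```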
- 📝 Changelog - Version history
Contributions are welcome! Whether it's:
- 🐛 Bug reports
- 💡 Feature requests
- 📖 Documentation improvements
- 🔧 Code contributions
Please read our Contributing Guidelines before submitting PRs.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Special thanks to:
- leejet/stable-diffusion.cpp - Awesome C++ SD implementation
- ggerganov/llama.cpp - LLM inference framework
- JetBrains Compose Multiplatform - UI framework
- The entire Stable Diffusion community 🙏
If you find this project useful:
- ⭐ Star this repository
- 🐛 Report bugs and suggest features
- 🔀 Fork and contribute
- 📢 Share with others
- Issues: GitHub Issues
- Discussions: GitHub Discussions