
🎨 Multiplatform AI image generation app powered by Stable Diffusion • Built with Kotlin Multiplatform & Compose • Supports SDXL, FLUX, SD3 & more • Native performance via C++/JNI • Android/iOS & Desktop ready


Mine StableDiffusion logo

Mine StableDiffusion 🎨

The Kotlin Multiplatform Stable Diffusion client
Generate stunning AI art locally on your devices


App preview

✨ What is Mine StableDiffusion?

Mine StableDiffusion is a native, offline-first AI art generation app that brings the power of Stable Diffusion models to your fingertips. Built with modern Kotlin Multiplatform technology and powered by the blazing-fast stable-diffusion.cpp engine, it delivers desktop-class performance on Android, iOS, and Desktop platforms.

🎯 Why Choose This App?

  • 🚀 Native Performance - C++ backend with JNI bindings for maximum speed
  • 🔒 Privacy First - 100% offline, all processing happens on your device
  • 🎨 Modern UI - Beautiful Compose Multiplatform interface
  • 📱 True Multiplatform - Shared codebase for Android, iOS & Desktop
  • 🔧 Model Flexibility - Support for FLUX, SDXL, SD3, and many more
  • ⚡ Hardware Accelerated - Vulkan 1.2+ (Android/Linux/Windows) & Metal (macOS/iOS)

📸 Screenshots

🤖 Android: Demo 1 · Demo 2 · Demo 3
💻 Desktop: Demo 1 · Demo 2 · Demo 3 (Windows & macOS)

🎲 Supported Models & Performance Tiers

Mine StableDiffusion supports a wide range of models. To help you choose the best model for your device, we've organized them by performance requirements:

Make sure the model file is smaller than your device's available VRAM.

Tip

Start Small: We recommend starting with smaller models (e.g., SD-Turbo, SD 1.5) and gradually trying larger ones. This lets you gauge your device's capabilities and identify performance bottlenecks effectively.
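As a quick rule of thumb, the VRAM fit check can be sketched in a few lines of Kotlin. The function name and the 20% headroom factor below are illustrative assumptions, not part of the app's actual code:

```kotlin
// Rough check that a model file fits in VRAM with some headroom.
// The 0.8 headroom factor is an illustrative assumption: activations
// and intermediate buffers also need VRAM beyond the weights.
fun fitsInVram(modelFileBytes: Long, vramBytes: Long, headroom: Double = 0.8): Boolean =
    modelFileBytes <= (vramBytes * headroom).toLong()

fun main() {
    val gib = 1024L * 1024 * 1024
    // A ~2 GiB SD 1.5 checkpoint on a 4 GiB GPU: fits comfortably.
    println(fitsInVram(2 * gib, 4 * gib))  // true
    // A ~7 GiB checkpoint on the same 4 GiB GPU: does not fit.
    println(fitsInVram(7 * gib, 4 * gib))  // false
}
```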

⚡ Entry & Speed (Fastest, Minimal VRAM)

Ideal for older phones or integrated graphics. High speed, low memory usage.

βš–οΈ Balanced Performance (Standard)

Good balance between quality and speed. Works well on most modern devices.

💎 Professional Quality (High Requirements)

Best for high-detail 1024x1024+ generation. Requires more VRAM and time.

  • ✅ SDXL - Standard high-quality base model
  • ✅ SD3 / SD3.5 - Stability AI's latest high-fidelity architecture
  • ✅ 👁️ Ovis-Image - Vision-language model

🌌 Next-Gen Large Models (Flagship & High-End PC)

State-of-the-art models with massive parameter counts. Best for flagship phones or dedicated GPUs.


🌟 Key Features

Text-to-Image Generation

Generate stunning images from text descriptions with various models

Input: "A serene mountain landscape at sunset, digital art"
Output: High-quality AI-generated image
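Conceptually, text-to-image generation boils down to a request carrying a prompt plus sampling parameters, handed to the engine. The sketch below illustrates that shape only; `DiffusionEngine`, `GenerationRequest`, and all defaults are hypothetical names, not the app's actual API:

```kotlin
// Hypothetical request/engine shapes illustrating the text-to-image flow.
// Names and defaults are illustrative, not Mine StableDiffusion's real API.
data class GenerationRequest(
    val prompt: String,
    val negativePrompt: String = "",
    val width: Int = 512,
    val height: Int = 512,
    val steps: Int = 20,
    val seed: Long = -1,  // -1 = pick a random seed
)

interface DiffusionEngine {
    // Returns encoded image bytes (e.g., PNG) for the given request.
    fun generate(request: GenerationRequest): ByteArray
}

fun main() {
    val request = GenerationRequest(
        prompt = "A serene mountain landscape at sunset, digital art",
        steps = 25,
    )
    println(request.prompt)
}
```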

βš™οΈ Advanced Settings Guide

The Advanced Settings page provides fine-grained control over the inference engine. Below is a summary of each toggle and its impact:

| Setting | Description | Effect When ON | Effect When OFF | Recommendation |
| --- | --- | --- | --- | --- |
| Offload to CPU | Offloads model computations from GPU to CPU | Saves GPU VRAM at the cost of slower generation speed | All computation stays on GPU (faster but needs more VRAM) | Enable on low-VRAM devices |
| Keep CLIP on CPU | Forces the CLIP text encoder to stay on CPU | Frees GPU memory for image generation; slightly slower prompt encoding | CLIP runs on GPU (faster but uses more VRAM) | ✅ Enabled by default on macOS to prevent potential crashes |
| Keep VAE on CPU | Forces the VAE decoder to stay on CPU | Frees GPU memory; decoding step is slower | VAE runs on GPU (faster final decode) | Enable if you encounter OOM errors during decode |
| Enable MMAP | Memory-maps model weights from disk instead of loading them entirely into RAM | Lower initial RAM spike; the OS pages weights in on demand (more disk I/O) | Entire model is loaded into RAM upfront (higher peak RAM, lower disk I/O) | Disable if generation is slow on devices with slow storage |
| Direct Convolution | Uses a direct convolution implementation in the diffusion model | Experimental performance boost on some hardware | Standard im2col-based convolution is used | Try enabling to see if it improves speed; disable if quality degrades |
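The toggles above could be modeled as a plain settings class. This is a hypothetical sketch (the class and property names are assumptions, not the app's actual code), with defaults matching the behavior described in the table:

```kotlin
// Hypothetical model of the Advanced Settings toggles described above.
// Defaults follow the table: mmap on; CLIP-on-CPU enabled only on macOS.
data class InferenceSettings(
    val offloadToCpu: Boolean = false,      // save VRAM, slower generation
    val keepClipOnCpu: Boolean = false,     // frees VRAM; slower prompt encoding
    val keepVaeOnCpu: Boolean = false,      // frees VRAM; slower final decode
    val enableMmap: Boolean = true,         // page weights in from disk on demand
    val directConvolution: Boolean = false, // experimental speed-up on some hardware
)

fun defaultsFor(isMacOs: Boolean): InferenceSettings =
    InferenceSettings(keepClipOnCpu = isMacOs)

fun main() {
    println(defaultsFor(isMacOs = true))
}
```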

Model Weight Type (wtype) β€” Controls how model weights are stored in memory. Lower bit-depth reduces RAM usage but may degrade image quality.

Tip

K-variants (Q6_K, Q5_K, Q4_K, Q3_K, Q2_K) offer better quality at the same bit-depth compared to their legacy counterparts. Most users should keep Auto and only change this if they have specific memory constraints.

Warning

Changing the weight type requires re-loading the model, which can take a long time. Only change this setting if you understand the trade-offs.
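Lower bit-depths shrink the weight footprint roughly linearly: memory ≈ parameters × bits per weight ÷ 8. A back-of-the-envelope helper (the parameter count and effective bit-depths here are rough illustrative figures, not measured values):

```kotlin
// Approximate weight memory in GiB for a given parameter count and
// quantization bit-depth (ignores quantization block overhead).
fun approxWeightGib(params: Long, bitsPerWeight: Double): Double =
    params * bitsPerWeight / 8.0 / (1L shl 30)

fun main() {
    // Rough illustrative figure for an SDXL-class model (~3.5B parameters).
    val sdxlParams = 3_500_000_000L
    println("F16:  %.1f GiB".format(approxWeightGib(sdxlParams, 16.0)))
    // Q4_K stores roughly 4.5 effective bits per weight.
    println("Q4_K: %.1f GiB".format(approxWeightGib(sdxlParams, 4.5)))
}
```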


📱 Platform Support

| Platform | Status | Requirements |
| --- | --- | --- |
| 🤖 Android | ✅ Supported | Android 11+ (API 30+) with Vulkan 1.2 |
| 🪟 Windows | ✅ Supported | Windows 10+ with Vulkan 1.2 |
| 🐧 Linux | ✅ Supported | Vulkan 1.2+ drivers |
| 🍎 macOS | ✅ Supported | Metal support required |
| 📱 iOS | ✅ Supported | Metal support required |

Tip

Memory Optimization:

  • Android: Mmap is enabled by default. You can manually disable it in Settings if you encounter any issues.
  • macOS: CLIP on CPU is enabled by default to prevent potential crashes during generation.

Note

Vulkan Performance: Vulkan is currently used as a general-purpose acceleration backend. While it ensures broad compatibility, generation speeds may not be fully optimized compared to native implementations.

Settings screenshot


🎨 Community Showcase

Created something amazing? We'd love to see it! Share your generation details (prompt, seed, model, etc.) to help others learn and create better art.

👉 Submit your creation here


πŸ—οΈ Architecture & Tech Stack

Core Technologies

graph TB
    A[Compose Multiplatform UI] --> B[Kotlin ViewModels]
    B --> C[Koin DI]
    C --> D[JNI Bridge]
    D --> E[C++ Native Layer]
    E --> F[stable-diffusion.cpp]
    F --> G[Vulkan/Metal Backend]

Technology Stack

  • Language: Kotlin Multiplatform
  • UI Framework: Compose Multiplatform
  • Dependency Injection: Koin v4.1.1
  • Navigation: Jetpack Navigation Compose
  • Networking: Ktor 3.2.3 + Sandwich 2.1.2
  • Image Loading: Coil3 v3.3.0
  • Concurrency: Kotlin Coroutines
  • Native Engine: stable-diffusion.cpp
  • Additional Native Engine: llama.cpp

🚀 Getting Started

Prerequisites

  • Android: Android 11+ device with Vulkan 1.2 support
  • Desktop: Windows/Linux/macOS with compatible graphics drivers
  • Development: Android Studio Ladybug or later / IntelliJ IDEA

Installation

Option 1: Download Pre-built Release

  1. Visit Releases
  2. Download the appropriate package for your platform
  3. Install and launch

Option 2: Build from Source

# Clone the repository
git clone https://github.com/Onion99/KMP-MineStableDiffusion.git
cd KMP-MineStableDiffusion

# Run the Desktop app
./gradlew :composeApp:run

# Build the Android debug APK
./gradlew :composeApp:assembleDebug

First Run

  1. Launch the app
  2. Load your Stable Diffusion model (GGUF format)
  3. Enter your text prompt
  4. Click generate and watch the magic happen! ✨

📚 Documentation


🤝 Contributing

Contributions are welcome! Whether it's:

  • πŸ› Bug reports
  • πŸ’‘ Feature requests
  • πŸ“ Documentation improvements
  • πŸ”§ Code contributions

Please read our Contributing Guidelines before submitting PRs.


📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


πŸ™ Acknowledgments

Special thanks to:


💙 Support This Project

If you find this project useful:

  • ⭐ Star this repository
  • πŸ› Report bugs and suggest features
  • πŸ”€ Fork and contribute
  • πŸ“’ Share with others

📬 Contact

