🎉 First YAE-AI Release! 2025
# 🦊 YAE-AI v1.0.0 - Initial Release
Your AI VTuber companion is finally here! ✨
## 🎉 What's New
This is the first official release of YAE-AI — a self-hosted AI VTuber framework that lets you run powerful language models locally with Docker and the UV package manager.
## ✨ Core Features
- 🐳 Docker-powered vLLM hosting — Run your models in isolated containers with ease
- ⚡ UV package manager integration — Lightning-fast Python dependency management
- 🎯 Character roleplay system — Built-in support for immersive VTuber personalities
- 🔄 Multi-provider support — Easily switch between a local vLLM server, ChatGPT, Claude, and Gemini
- 🎨 Fully customizable — Adapt system prompts and configurations to match your vision
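One way to picture the multi-provider switching described above is a small registry of OpenAI-compatible endpoints. This is only an illustrative sketch — the names `Provider`, `PROVIDERS`, and `resolve` are hypothetical and not YAE-AI's actual configuration API:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    base_url: str
    model: str

# Hypothetical registry -- YAE-AI's real config keys may differ. The point is
# that every provider speaks the same OpenAI-compatible chat API, so switching
# is just a matter of changing base_url and model name.
PROVIDERS = {
    "local": Provider("http://localhost:8000/v1", "/model"),
    "openai": Provider("https://api.openai.com/v1", "gpt-4o-mini"),
}

def resolve(name: str) -> Provider:
    """Look up a provider by name, falling back to the local vLLM server."""
    return PROVIDERS.get(name, PROVIDERS["local"])

print(resolve("openai").base_url)
```

Because the client interface is shared, the usage example further below works against any of these endpoints unchanged.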
## 📦 What's Included
```
yae-ai/
├── src/
│   ├── utils/save.py      # Hugging Face model downloader
│   └── main.py            # Main application entry point
├── docker-compose.yml     # vLLM container configuration
├── pyproject.toml         # UV dependencies and project metadata
└── README.md              # Comprehensive documentation
```
## 🚀 Quick Start
```bash
# Clone the repository
git clone https://github.com/OuOSama/YAE-AI.git yae-ai
cd yae-ai

# Install dependencies
uv sync

# Download the model
uv run src/utils/save.py

# Start vLLM containers
docker compose up -d
```
That's it! Your AI is now running on http://localhost:8000 🎯
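If you want to confirm the container is actually serving before wiring up a client, vLLM exposes the standard OpenAI-compatible `/v1/models` endpoint. A minimal stdlib-only check (the function name and fallback behavior here are illustrative, not part of YAE-AI):

```python
import json
import urllib.error
import urllib.request

def list_models(base_url: str = "http://localhost:8000"):
    """Return the model IDs served at base_url, or None if it is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data["data"]]
    except (urllib.error.URLError, OSError, KeyError):
        return None

models = list_models()
print(models if models else "vLLM not reachable yet -- give the container a moment to load the model")
```

Model loading can take a while after `docker compose up -d`, so a `None` result right after startup is normal.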
## 💡 Usage Example
```python
from openai import OpenAI

client = OpenAI(
    api_key="mykey",
    base_url="http://localhost:8000/v1"
)

response = client.chat.completions.create(
    model="/model",
    messages=[
        {
            "role": "system",
            "content": "You are a charming VTuber AI assistant..."
        },
        {
            "role": "user",
            "content": "Tell me something interesting!"
        }
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
```
## 🛠️ Requirements
| Tool | Version | Purpose |
|---|---|---|
| Python | 3.10+ | Core runtime |
| UV | Latest | Package management |
| Docker | Latest | Container orchestration |
| Git | Latest | Repository cloning |
## 📖 Documentation
Full documentation is available in the [README.md](https://github.com/OuOSama/YAE-AI/blob/main/README.md), including:
- Detailed installation guide
- Advanced configuration options
- Character prompt examples
- Troubleshooting tips
- Contributing guidelines
## 🐛 Known Issues
- Model downloads can take significant time depending on internet speed
- Docker requires at least 8GB RAM for optimal performance
- Some models may require GPU support (configure in `docker-compose.yml`)
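For the GPU point above, the standard Docker Compose way to hand an NVIDIA GPU to a container is a `deploy.resources.reservations` block (this requires the NVIDIA Container Toolkit on the host). The service name `vllm` here is hypothetical — match it to the service actually defined in the project's `docker-compose.yml`:

```yaml
services:
  vllm:  # hypothetical name; use the vLLM service name from this repo's compose file
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```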
## 🌟 What's Next?
Check out our [roadmap](https://github.com/OuOSama/YAE-AI#-roadmap) for upcoming features:
- Voice synthesis integration (TTS)
- WebUI for easier configuration
- Multi-language support
- Fine-tuning scripts
- Live2D integration
- Streaming platform integrations
## 💜 Community & Support
- 🐛 Found a bug? [Report it here](https://github.com/OuOSama/YAE-AI/issues)
- 💡 Have a feature idea? [Start a discussion](https://github.com/OuOSama/YAE-AI/discussions)
- ⭐ Love the project? Give us a star and share with friends!
- ☕ Want to support? [Buy me a coffee](https://ko-fi.com/ouosama)
## 🙏 Acknowledgments
Special thanks to:
- [vLLM Team](https://github.com/vllm-project/vllm) for the incredible inference engine
- [Astral](https://astral.sh/) for creating UV
- The VTuber community for endless inspiration
- All early testers and contributors ✨
## 📄 License
This project is licensed under the MIT License — see [LICENSE](LICENSE) for details.
<div align="center">
Built with 💜 by OuOSama
Stay based, stay creative ✨
Download Release | View Documentation | Report Issues
</div>

## 📝 Release Notes
### Added
- Initial project structure with UV package manager
- Docker Compose configuration for vLLM hosting
- Model download utility from Hugging Face
- OpenAI-compatible API client example
- Comprehensive README documentation
- MIT License
### Security
- Local-first architecture keeps your data private
- No external API calls required for model inference
- Self-hosted deployment gives you full control
**Full Changelog**: https://github.com/OuOSama/YAE-AI/commits/v1.0.0