Feature Request
How do I get memory for my AI agent on Postgres?
Contact Information
- Name: Taariq Lewis
- Company: Volume
- Email: taariq@volume.finance
- Current Database Provider: Supabase
- Segment: saas
Conversation Transcript
User: What about memory?
SerenAI: Great question! SerenDB's compute resources include memory allocation that scales automatically with your CPU allocation.
Here's how it works:
Memory-to-CPU Ratio:
- Each compute gets 4GB of RAM per vCPU
- So if you're using 0.25 vCPU, you get 1GB RAM
- At 4 vCPU, you get 16GB RAM
- This scales up to 64GB+ for larger computes
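The fixed ratio described above can be sketched as a small helper (the function name and defaults are illustrative, assuming the stated 4 GB per vCPU):

```python
def ram_gb(vcpu: float, gb_per_vcpu: float = 4.0) -> float:
    """RAM implied by a fixed GB-per-vCPU ratio (4:1 per the transcript)."""
    return vcpu * gb_per_vcpu

# Examples from the transcript:
print(ram_gb(0.25))  # 1.0 GB at 0.25 vCPU
print(ram_gb(4))     # 16.0 GB at 4 vCPU
```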
Why this matters for agents:
- Agents often need memory for caching embeddings, session state, and context
- The autoscaling ensures memory grows with your workload
- Combined with pgvector, you can keep hot embeddings in memory for fast retrieval
Memory is included in compute costs: you're not billed separately for RAM; it's bundled with the vCPU pricing.
Need more memory than the standard 4:1 ratio allows? That would be a good feature request for specialized workloads. Want me to capture that?
Submitted: 2025-10-14T03:56:07.988Z
Source: serendb.com/landing