Will AI Logic context caching be supported in the iOS SDK? #15861
Replies: 2 comments 2 replies
-
Are you asking about explicit context caching support in addition to implicit context caching? https://ai.google.dev/gemini-api/docs/caching What SDK APIs would you like to have?
-
Hi there! We have added the ability to use an existing explicit cache in Firebase AI Logic. You should create the explicit cache using the Gemini API directly or via the Google Gen AI SDKs; for example, this is how I created an explicit cache in a Colab. Once the cache is created, you can reference it within your server prompt template using the cachedContent property.

If the specified cache ID is invalid or has expired (past its TTL), the request will fail with an "Explicit content cache doesn't exist" error. Note that explicit caching is not supported via direct prompts, only via server prompt templates.
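To make the flow above concrete, here is a minimal sketch of the two pieces involved: building a request payload for the Gemini API's cachedContents endpoint, then referencing the returned cache name from a server prompt template via cachedContent. The field names follow the public caching docs linked above; `build_cache_request`, the template dict shape, and the cache ID `cachedContents/abc123` are illustrative assumptions, not the exact Firebase AI Logic schema.

```python
import json


def build_cache_request(model: str, system_text: str, doc_text: str, ttl_seconds: int) -> dict:
    """Illustrative payload for POST .../v1beta/cachedContents (Gemini API).

    Caches a system instruction plus a large document so later requests
    can reuse them without resending the tokens.
    """
    return {
        "model": f"models/{model}",
        "systemInstruction": {"parts": [{"text": system_text}]},
        "contents": [{"role": "user", "parts": [{"text": doc_text}]}],
        "ttl": f"{ttl_seconds}s",  # cache expires after this TTL
    }


payload = build_cache_request(
    "gemini-2.0-flash-001",
    "You are a helpful assistant.",
    "<large document to cache>",
    3600,
)
print(json.dumps(payload, indent=2))

# The create call returns a resource name such as "cachedContents/abc123"
# (hypothetical ID). A server prompt template would then reference it via
# the cachedContent property -- illustrative shape, not the exact schema:
template = {
    "model": "gemini-2.0-flash-001",
    "cachedContent": "cachedContents/abc123",  # hypothetical cache ID
}
print(template["cachedContent"])
```

If the cache referenced here has expired past its TTL, the request fails with the "Explicit content cache doesn't exist" error described above, so long-lived templates should refresh or recreate the cache as needed.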
-
Hi! I see that the latest Firebase release, 12.9.0, has introduced response metadata for context caching when using Firebase AI Logic. Is this a sign that full context caching support will be rolled into the iOS SDK soon? This is important to us in deciding whether to use the native SDK for making AI requests versus a more convoluted backend solution, as we have found context caching to be key to keeping costs down. Thanks for any insight you can provide.