Commit f998b39

Release Support Changes (#1548)
1 parent 983a4ce commit f998b39

File tree

3 files changed: +208 −3 lines changed


docs/agents/models/google-gemma.md

Lines changed: 197 additions & 0 deletions
@@ -0,0 +1,197 @@
# Google Gemma models for ADK agents

<div class="language-support-tag">
<span class="lst-supported">Supported in ADK</span><span class="lst-python">Python v0.1.0</span>
</div>

ADK agents can use the [Google Gemma](https://ai.google.dev/gemma/docs) family of generative AI models, which offer a wide range of capabilities. ADK supports many Gemma features, including [Tool Calling](/adk-docs/tools-custom/) and [Structured Output](/adk-docs/agents/llm-agents/#structuring-data-input_schema-output_schema-output_key).

You can use Gemma 4 through the [Gemini API](https://ai.google.dev/gemini-api/docs), or by using one of several self-hosting options on Google Cloud:
[Vertex AI](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma4),
[Google Kubernetes Engine](https://docs.cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-vllm),
or [Cloud Run](https://docs.cloud.google.com/run/docs/run-gemma-on-cloud-run).

## Gemini API Example

Create an API key in [Google AI Studio](https://aistudio.google.com/app/apikey).

```python
# Set the GEMINI_API_KEY environment variable to your API key:
# export GEMINI_API_KEY="YOUR_API_KEY"

from google.adk.agents import LlmAgent

# Simple tool to try
def get_weather(location: str) -> str:
    return f"Location: {location}. Weather: sunny, 76 degrees Fahrenheit, 8 mph wind."

root_agent = LlmAgent(
    model="gemma-4-31b-it",
    name="weather_agent",
    instruction="You are a helpful assistant that can provide current weather.",
    tools=[get_weather],  # Tools!
)
```
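
Conceptually, tool calling works by the model emitting a function name plus JSON arguments, which the framework routes to your Python function. ADK handles this for you, but a framework-free sketch of the dispatch step can make the mechanism concrete (the `tool_call` dict below is a hypothetical model output, not ADK's actual wire format):

```python
import json

def get_weather(location: str) -> str:
    return f"Location: {location}. Weather: sunny, 76 degrees Fahrenheit, 8 mph wind."

# Registry mapping tool names to callables, as a framework would build from tools=[...]
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching Python function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Example: what happens when the model asks for the weather in Toronto
result = dispatch({"name": "get_weather", "arguments": '{"location": "Toronto"}'})
print(result)
```

The agent loop then feeds `result` back to the model as a tool response so it can compose the final answer.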

## Self-hosted vLLM Example

To access Gemma 4 endpoints in these services, you can use vLLM models through the [LiteLLM](/adk-docs/agents/models/litellm/) library for Python.

The following example shows how to use a Gemma 4 vLLM endpoint with ADK agents.

### Setup

1. **Deploy Model:** Deploy your chosen model using
   [Vertex AI](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma4),
   [Google Kubernetes Engine](https://docs.cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-vllm),
   or [Cloud Run](https://docs.cloud.google.com/run/docs/run-gemma-on-cloud-run),
   and use its OpenAI-compatible API endpoint.
   Note that the API base URL includes `/v1` (e.g., `https://your-vllm-endpoint.run.app/v1`).
    * *Important for ADK Tools:* When deploying, ensure the serving tool supports and enables compatible tool/function calling and reasoning parsers.
2. **Authentication:** Determine how your endpoint handles authentication (e.g., an API key or a bearer token).
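
Before wiring the endpoint into ADK, a quick way to verify the deployment and authentication is to call the OpenAI-compatible chat completions route directly. The URL and model name below are the placeholders from step 1, and the gcloud identity token applies to a Cloud Run deployment; substitute your endpoint's actual values and auth scheme:

```shell
curl "https://your-vllm-endpoint.run.app/v1/chat/completions" \
  -H "Authorization: Bearer $(gcloud auth print-identity-token -q)" \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemma-4-31B-it", "messages": [{"role": "user", "content": "Hello"}]}'
```

A JSON response with a `choices` array indicates the endpoint and credentials are working.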
### Code

```python
import subprocess

from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm

# --- Example Agent using a model hosted on a vLLM endpoint ---

# Endpoint URL provided by your model deployment
api_base_url = "https://your-vllm-endpoint.run.app/v1"

# Model name as recognized by *your* vLLM endpoint configuration
model_name_at_endpoint = "openai/google/gemma-4-31B-it"

# Simple tool to try
def get_weather(location: str) -> str:
    return f"Location: {location}. Weather: sunny, 76 degrees Fahrenheit, 8 mph wind."

# Authentication (Example: using a gcloud identity token for a Cloud Run deployment)
# Adapt this based on your endpoint's security
try:
    gcloud_token = subprocess.check_output(
        ["gcloud", "auth", "print-identity-token", "-q"]
    ).decode().strip()
    auth_headers = {"Authorization": f"Bearer {gcloud_token}"}
except Exception as e:
    print(f"Warning: Could not get gcloud token - {e}.")
    auth_headers = None  # Or handle the error appropriately

root_agent = LlmAgent(
    model=LiteLlm(
        model=model_name_at_endpoint,
        api_base=api_base_url,
        # Pass authentication headers if needed
        extra_headers=auth_headers,
        # Alternatively, if the endpoint uses an API key:
        # api_key="YOUR_ENDPOINT_API_KEY",
        # These extra_body values are specific to Gemma 4.
        extra_body={
            "chat_template_kwargs": {
                "enable_thinking": True  # Enable thinking
            },
            "skip_special_tokens": False  # Should be set to False
        },
    ),
    name="weather_agent",
    instruction="You are a helpful assistant that can provide current weather.",
    tools=[get_weather],  # Tools!
)
```
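
With `skip_special_tokens` set to `False`, the raw model text can include the model's thinking segment. A minimal post-processing sketch follows; the `<think>…</think>` marker format is an assumption here, so check your deployment's chat template for the exact tokens your endpoint emits:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> blocks, assuming that tag format."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>The user asked about the weather.</think>Sunny, 76 degrees."
print(strip_thinking(raw))  # Sunny, 76 degrees.
```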

## Build a food tour agent with Gemma 4, ADK, and Google Maps MCP

This sample shows how to build a personalized food tour agent using Gemma 4, ADK, and the Google Maps MCP server. The agent takes a user's dish photo or text description, a location, and an optional budget, then recommends places to eat and organizes them into a walking route.

### Prerequisites

- Enable the [Google Maps API](https://console.cloud.google.com/maps-api/) in the Google Cloud Console.
- Create a [Google Maps Platform API key](https://console.cloud.google.com/maps-api/credentials).
- Create a Gemini API key in [Google AI Studio](https://aistudio.google.com/app/apikey).
- ADK installed and configured in your Python environment.

### Project structure

food_tour_app/
├── __init__.py
└── agent.py
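
ADK discovers the agent through the package's `__init__.py`. A minimal version, following the layout above, just re-exports the agent module (this stub is a common ADK convention; adjust if your project exposes the agent differently):

```python
# food_tour_app/__init__.py
from . import agent
```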

`agent.py`

```python
import os

import dotenv
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPConnectionParams

dotenv.load_dotenv()

system_instruction = """
You are an expert personalized food tour guide.
Your goal is to build a culinary tour based on the user's inputs: a photo of a dish (or a text description), a location, and a budget.

Follow these 4 rigorous steps:
1. **Identify the Cuisine/Dish:** Analyze the user's provided description or image URL to determine the primary cuisine or specific dish.
2. **Find the Best Spots:** Use the `search_places` tool to find highly rated restaurants, stalls, or cafes serving that cuisine/dish in the user's specified location.
   **CRITICAL RULE FOR PLACES:** `search_places` returns AI-generated place summaries along with a `place_id`, latitude/longitude coordinates, and a map link for each place, but may lack a direct, explicit name field. You must carefully associate each described place with its provided `place_id` or `lat_lng`.
3. **Build the Route:** Use the `compute_routes` tool to structure a walking-optimized route between the selected spots.
   **CRITICAL ROUTING RULE:** To avoid hallucinating, you MUST provide the `origin` and `destination` using the exact `place_id` string OR `lat_lng` object returned by `search_places`. Do NOT guess or hallucinate an `address` or `place_id` if you do not know the exact name.
4. **Insider Tips:** Provide specific "order this, skip that" insider tips for each location on the tour.

Structure your response clearly and concisely. If the user provides a budget, ensure your suggestions align with it.
"""

MAPS_MCP_URL = "https://mapstools.googleapis.com/mcp"

def get_maps_mcp_toolset():
    maps_api_key = os.getenv("MAPS_API_KEY")
    if not maps_api_key:
        print("Warning: MAPS_API_KEY environment variable not found.")
        maps_api_key = "no_api_found"

    tools = MCPToolset(
        connection_params=StreamableHTTPConnectionParams(
            url=MAPS_MCP_URL,
            headers={"X-Goog-Api-Key": maps_api_key},
        )
    )
    print("Google Maps MCP Toolset configured.")
    return tools

maps_toolset = get_maps_mcp_toolset()

root_agent = LlmAgent(
    model="gemma-4-31b-it",
    name="food_tour_agent",
    instruction=system_instruction,
    tools=[maps_toolset],
)
```
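
Once the files are in place, you can start the agent from the directory containing `food_tour_app/` using the ADK CLI (assuming the standard `adk` commands from an installed ADK):

```shell
# Interactive terminal session
adk run food_tour_app

# Or the browser-based dev UI
adk web
```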

### Environment variables

Set the required environment variables before running the agent.

```shell
export MAPS_API_KEY="YOUR_GOOGLE_MAPS_API_KEY"
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
```
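
Alternatively, since `agent.py` calls `dotenv.load_dotenv()`, you can put the same values in a `.env` file next to the package instead of exporting them:

```
MAPS_API_KEY="YOUR_GOOGLE_MAPS_API_KEY"
GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
```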

### Example usage

To test the capabilities of the food tour agent, try pasting one of these prompts into the chat:

- *"I want to do a ramen tour in Toronto. My budget is $60 for the day. Give me a walking route for the top 3 spots and tell me what I should order at each."*
- *"I have this photo of a deep dish pizza [insert image URL]. I want to find the best places for this around Navy Pier in Chicago. Structure a walking tour and tell me what the must-have slice is at each stop."*
- *"I'm in Downtown Austin looking for an authentic BBQ tour. Let's keep the budget under $100. Build a walking route between 3 highly-rated spots and give me insider tips on the best cuts of meat to get."*

The agent will:

1. Infer the likely cuisine or dish style
2. Search for relevant places using the Google Maps MCP tools
3. Compute a walking route between the selected stops
4. Return a structured food tour with recommendations and insider tips
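
The critical routing rule in the system instruction (route endpoints must come verbatim from `search_places` results) can also be checked mechanically. A hedged sketch of such a validator follows; the example `place_id` value is hypothetical, and the `latitude`/`longitude` key names are an assumption about the `lat_lng` shape:

```python
def valid_route_endpoint(value) -> bool:
    """Accept only a non-empty place_id string or a lat/lng dict."""
    if isinstance(value, str):
        return len(value) > 0
    if isinstance(value, dict):
        return set(value) == {"latitude", "longitude"} and all(
            isinstance(v, (int, float)) for v in value.values()
        )
    return False

print(valid_route_endpoint("ChIJpTvG15DL1IkRd8S0KlBVNTI"))            # a place_id string passes
print(valid_route_endpoint({"latitude": 43.64, "longitude": -79.39}))  # a lat/lng object passes
print(valid_route_endpoint(None))                                      # anything else is rejected
```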

docs/agents/models/vllm.md

Lines changed: 10 additions & 3 deletions

```diff
@@ -30,13 +30,13 @@ import subprocess
 from google.adk.agents import LlmAgent
 from google.adk.models.lite_llm import LiteLlm

-# --- Example Agent using a model hosted on a vLLM endpoint ---
+# --- Example Agent using a Gemma 4 model hosted on a vLLM endpoint ---

 # Endpoint URL provided by your vLLM deployment
 api_base_url = "https://your-vllm-endpoint.run.app/v1"

 # Model name as recognized by *your* vLLM endpoint configuration
-model_name_at_endpoint = "hosted_vllm/google/gemma-3-4b-it"  # Example from vllm_test.py
+model_name_at_endpoint = "hosted_vllm/google/gemma-4-E4B-it"  # Example from vllm_test.py

 # Authentication (Example: using gcloud identity token for a Cloud Run deployment)
 # Adapt this based on your endpoint's security
@@ -53,8 +53,15 @@ agent_vllm = LlmAgent(
     model=LiteLlm(
         model=model_name_at_endpoint,
         api_base=api_base_url,
+        # These extra_body values are specific to Gemma 4.
+        extra_body={
+            "chat_template_kwargs": {
+                "enable_thinking": True  # Enable thinking
+            },
+            "skip_special_tokens": False  # Should be set to False
+        },
         # Pass authentication headers if needed
-        extra_headers=auth_headers
+        extra_headers=auth_headers,
         # Alternatively, if endpoint uses an API key:
         # api_key="YOUR_ENDPOINT_API_KEY"
     ),
```

mkdocs.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -289,6 +289,7 @@ nav:
   - Models for Agents:
     - agents/models/index.md
     - Gemini: agents/models/google-gemini.md
+    - Gemma: agents/models/google-gemma.md
     - Claude: agents/models/anthropic.md
     - Vertex AI hosted: agents/models/vertex.md
     - Apigee AI Gateway: agents/models/apigee.md
```
